topic: ai_researcher
relevance score: 5
paper name: Autonomous_Discovery_of_Robot_Structure_and_Motion_Control_Through_Large_Vision_Models.pdf
Reconfigurable Robot Identification from Motion Data

Yuhang Hu*, Yunzhe Wang, Ruibo Liu, Zhou Shen, Hod Lipson
Columbia University

arXiv:2403.10496v1 [cs.RO] 15 Mar 2024

Abstract— Integrating Large Language Models (LLMs) and Vision-Language Models (VLMs) with robotic systems enables robots to process and understand complex natural language instructions and visual information. However, a fundamental challenge remains: for robots to fully capitalize on these advancements, they must have a deep understanding of their physical embodiment. The gap between AI models' cognitive capabilities and the understanding of physical embodiment leads to the following question: can a robot autonomously understand and adapt to its physical form and functionalities through interaction with its environment? This question underscores the transition towards developing self-modeling robots without reliance on external sensors or pre-programmed knowledge about their structure. Here, we propose a meta-self-modeling approach that can deduce robot morphology through proprioception—the robot's internal sense of its body's position and movement. Our study introduces a 12-DoF reconfigurable legged robot, accompanied by a diverse dataset of 200k unique configurations, to systematically investigate the relationship between robotic motion and robot morphology. Utilizing a deep neural network model comprising a robot signature encoder and a configuration decoder, we demonstrate the capability of our system to accurately predict robot configurations from proprioceptive signals. This research contributes to the field of robotic self-modeling, aiming to enhance robots' understanding of their physical embodiment and adaptability in real-world scenarios.

I. INTRODUCTION

The development of artificial general intelligence (AGI) capable of controlling robots in the real world necessitates a deep understanding of the robot's physical embodiment. Recent research has focused on integrating Large Language Models (LLMs) or Vision-Language Models (VLMs) with robots to enhance their capabilities [1]–[4]. However, most approaches rely on human prompts to guide robots in completing specific tasks. LLMs and VLMs can extract information from task scenarios for robots, allowing high-level controllers to make decisions. However, these methods still require pre-programming of some behaviors, which limits the robot's capabilities. For example, a robot can derive the position of a mug on a table from visual information, but how to grasp the mug still relies on an underlying human-written program. Therefore, in order to create a robot that effectively incorporates the capabilities of LLMs or VLMs into its control strategy as human common sense, the model must understand the robot's body in the physical world.

This work was supported in part by the US National Science Foundation (NSF) AI Institute for Dynamical Systems (DynamicsAI.org), grant 2112085. [email protected]. For more information: https://github.com/H-Y-H-Y-H/meta_self_modeling_id

Fig. 1: Configuration prediction from motion data. To what degree is it possible to reconstruct the topology of a robot from its motion dynamics alone? (concept illustration only) For more insights into the motivation behind our research, we invite readers to view the supplementary videos.

Traditionally, robots have been programmed with predefined models that describe their kinematics, dynamics, and
physical structure (often represented as URDF, Universal Robot Description Format, files). Relying solely on predefined models can limit a robot's adaptability and resilience. These models are typically static and do not change. Therefore, they might not account for wear and tear, modifications to the robot's body, or entirely new environments. To overcome these limitations, there is a growing interest in developing self-modeling robots [5]–[7]. These robots can understand and update their models through their own experiences. This capability is crucial for life-long learning, enabling robots to adapt to changes in their physical structure or environment.

To enable robots to comprehend their own physical form, relying on external sensors might not be universally effective. This is due to the variability in robots' operational environments and the diversity in the calibration and configuration of external sensors. For example, a robot equipped with two cameras with different configurations would capture disparate images, leading to variations in perception. In contrast, focusing on proprioception could offer a more consistent and effective means for these models to understand a robot's kinematics, dynamics, and morphology. Proprioception provides direct insights into a robot's internal state and dynamics, which are less affected by external environmental factors. This approach might streamline the process of integrating robots with advanced cognitive models by offering a more standardized basis for understanding robotic systems.

In robotics, the intricate relationship between morphology and motion dynamics plays a pivotal role. For example, is it possible to tell whether a robot has two legs or four legs simply by observing time-series data of its body accelerations and joint angles? While it is clear that the dynamics of a slithering snake are different than those of a bipedal walker, the question is to what degree this information alone can be used to recover the complete morphology of a robot (Fig. 1). This paper delves into the possibility of reconstructing a robot's topological structure based solely on its motion dynamics data from proprioception.

The ability to create a long-term body self-image through movement is natural for humans [8]–[10]: we can perceive the position, orientation, and motion of our body parts without seeing and thinking, thanks to proprioceptors located within muscles, tendons, skin, and joints [11]. Such spatial self-awareness is also essential for robots to anticipate the outcomes of motor actions without trying them out in physical reality. Past research has tried to replicate such capability through computational robot self-models [12], [13].

The strategy employed in this study also bears a resemblance to biometric motion identification in human studies, where individuals can be distinguished based on motion capture data obtained from kinetic videos and 3D skeletal sequences [14], [15]. By mirroring this approach, we aim to imbue our robotic self-model with the capability to learn and predict new and unseen configurations. This could aid in detecting changes to the body of existing robots (e.g., due to damage or failure), or speed up the modeling of new robots by automatically deciphering their topology.

In this work, we propose a learning framework to identify individual robots from their proprioceptive data.
We designed a 12-DoF reconfigurable legged robot, creating a diverse dataset of 200k unique robotic configurations, each distinctly represented via a signature code as shown in Fig. 2. By using a single reconfigurable legged robot platform, we create a controlled environment in which to systematically explore a wide range of configurations. We introduce a meta-self-model, a multiclass-multioutput robot morphology classifier, which allows a robot to form an understanding of its body morphology from a limited set of self-movement data. Once trained, the meta-self-model can predict unseen robot configurations just by observing proprioceptive signals. Through this methodology, our meta-self-model has the capacity to comprehend and interpret robot dynamics, marking a stride forward in the realm of robotic self-modeling.

The motivation of our work is not limited to the identification and prediction of robot configurations but extends into the realm of applying our model's latent space for broader applications. If our framework, based on proprioception data, can accurately classify robot configurations, this implies that the model's latent space possesses the ability to differentiate between different robots based on proprioceptive information alone. Such a richly informed latent space opens up new avenues for applying these insights to other tasks, such as designing adaptive controllers, predicting dynamics with precision, or even visualizing the morphologies of robots in ways previously unattainable. In essence, the foundational understanding of robot morphologies and dynamics, facilitated by our model's insights into proprioception data, can serve as a critical bridge for LLMs and VLMs to perform general real-world physical interactions.

The main contributions of this work are as follows:

1. We present a meta-self-modeling approach that aims to learn shared dynamics across diverse reconfigurable robot morphologies. This approach allows the system to understand multiple robot dynamics and predict specific configurations through proprioceptive data, marking progress in robotic self-modeling.

2. The development of a 12-DoF reconfigurable legged robot. This design allows for a high degree of versatility in physical configurations, enabling the study of the relationship between robotic motions and robot configurations within a single platform.

3. We open-source a diverse dataset of 200k unique configurations from a 12-DoF reconfigurable legged robot with the same icosahedron body. This dataset includes robot URDF files, initial joint positions, and the hardware CAD design. The entire robot can be fabricated by FDM 3D printing for easy reproducibility and further research, enabling other researchers to print and assemble the robots.

II. RELATED WORK

Self-modeling Robots. The concept of self-modeling permeates various disciplines, from human cognition and animal behavior to robotic systems [5], [16]–[18]. Essentially, it involves an agent—be it biological or artificial—constructing an internal representation of its physical properties. In the robotic field, self-modeling is a data-driven control approach providing a computational representation of the different aspects of robots, including morphology, kinematics, and dynamics [19], as well as other facets like sensing, actuation, control, and planning [20]. A well-trained self-model can be implemented as a predictive model for Model Predictive Control [21].
It segregates the model of the robot from a model of its environment and task. The robot itself is relatively consistent across different tasks and environments, so isolating the self-model for reuse simplifies adaptation in varying scenarios, and even supports resilience after body damage, thus facilitating transfer learning between robots [7], [18], [22].

Robotic System Identification. Bongard and Lipson [23] introduced a coevolutionary algorithm for inferring hidden nonlinear systems. Wu and Movellan [24] proposed Semi-Parametric Gaussian Processes (SGP), merging the advantages of parametric and non-parametric system identification approaches for underactuated robotics. Yu et al. [25] developed a Universal Policy (UP) and an Online System Identification (OSI) function to adapt to unknown dynamic models. Bruder et al. [26] introduced a system identification technique using Koopman operator theory in the domain of soft robotics. The landscape of learning-based controllers for robotic systems is also vast and rapidly evolving. Researchers have presented universal policies that capture the relationship between morphology and control, employing meta-learning to improve learning efficiency [27]–[29]. Gupta et al.'s "MetaMorph" [30] and Trabucco et al.'s "AnyMorph" [31] both focus on universal controllers, with the former utilizing Transformers and the latter emphasizing the learning of transferable policies by inferring agent morphology. Our work extends these foundations by harnessing shared dynamics among diverse robot morphologies for system identification.

Fig. 2: A comprehensive view of the reconfigurable robots used in our work. (a) Overview of reconfigurable robots exhibiting a variety of configurations in the simulation environment. (b) Detailed view of the robot's main body, a geometrically precise icosahedron with 20 uniform faces designed for versatile connection to joint modules. (c) Close-up of a single joint module, equipped with an individual motor, demonstrating its potential for connection at 12 distinct angles to allow for a broad range of movement and reconfiguration. (d) Fully assembled physical robot in the real world.

Reconfigurable Robots. Because reconfigurable robots can reconstruct their morphology, they have found significant application in space exploration, where compactness is a crucial factor, and in other fields requiring versatile adaptability [32], [33]. Optimizing these reconfigurable robots often involves searching for optimal configurations for specific tasks. Prior studies have designed and explored reconfigurable manipulators, where the robot's structure can be adjusted to suit different tasks [34], [35]. Furthermore, the complexity of reconfigurable systems has been pushed even further with underactuated robots, offering greater adaptability and control in complex dynamic environments [36]–[38]. Although these systems provide diverse morphology, their usage in learning shared dynamics across different robot configurations has been limited. The complex, high-dimensional configuration space presented by reconfigurable robots offers an ideal opportunity for training a meta-self-model that learns the shareable essence of robot dynamics.

III. METHOD

A. Robot Configuration Name

To study a robot family with similar configurations, we designed a 12-DoF quadruped robot with URDF descriptions that can be loaded in physics simulation engines and assembled in the real world.
Its legs can be attached to any four faces of an icosahedron body, and each leg consists of three links, where the connection point between links can be rotated to 12 different angles with 30-degree separation counterclockwise. This gives a family of robots with a total of C(20,4) · 12^12 ≈ 4.32 × 10^16 possible configurations, depending on how we assemble the robot. To uniquely describe each symmetric-leg robot, we designed an integer vector coding y ∈ Z^8 such that 0 ≤ y_f ≤ 19 for f ∈ {0, 4}, indicating the two faces on one side with legs, and 0 ≤ y_l ≤ 11 for l ∈ {1, 2, 3, 5, 6, 7}, indicating the angles of the six links (Fig. 3). When calculating errors using the L1 distance, this encoding ensures that adjacent configurations (like "0" and "11") have a difference of 1. Therefore, the maximum distance achievable is 6, for example when the true label is "0" but the predicted label is "6". We implemented a script that, given a configuration, generates the corresponding URDF file for simulation.

C. Model Architecture

We propose a deep neural network model that consists of two components: a robot signature encoder and a configuration decoder. The encoder handles both channel-wise and temporal dependencies of the collected state sequences, extracting a latent robot morphological representation. The decoder has seven classification heads; each is a single fully connected layer and decodes the latent representation to the leg-face pattern and joint orientations.

a) Robot Signature Encoder: Given the collected state sequences of a robot, we aim to predict its configuration code to differentiate individual robots by observing their movement. We name this process "robot signature encoding," similar to biometric motion identification, where a person can be recognized based on his or her motion capture sequence. Fig. 4(a) illustrates the process where dynamic data obtained from sampled robot babbling is introduced into the robot signature encoder. We leverage 1D convolution layers with squeeze-and-excitation blocks [39] for capturing channel-wise dependencies, as illustrated in Fig. 4(b). The encoded features are concatenated and fed through multi-layer perceptron (MLP) layers into a latent vector z = Φ_ENC(x) ∈ R^d before decoding the configuration code.

b) Configuration Decoder: To predict the configuration code vector y, we leverage a multi-output decoder Φ_DEC with seven prediction heads, as presented in Fig. 4(c), that decodes y from the latent space z. One head classifies the leg position pattern with 30 choices, and the other six heads classify the orientation pattern for the six links on one side of the robot; each has 12 choices. Due to the symmetry constraint, the other half of the robot legs can be completely determined. Therefore, a 7-head classifier is sufficient for our setup: Φ_DEC := (Φ_LEG, Φ¹_JNT, Φ²_JNT, Φ³_JNT, Φ⁴_JNT, Φ⁵_JNT, Φ⁶_JNT).

c) Objective Function: We apply the categorical cross-entropy loss function for each classification head:

L_head = −y log( exp(ŷ) / Σ_{c=1}^{C} exp(ŷ_c) )    (1)

where ŷ denotes the network output logits, y is the ground-truth label, and C is the number of classes, with C = 30 for the leg head and C = 12 for each joint head.
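To make the encoder-decoder structure concrete, the following PyTorch sketch mirrors the description above. It is an illustration rather than the authors' released code: the layer widths, latent dimension d, kernel size, and SE reduction ratio are assumptions, and the input is laid out as (batch, 30 channels, time).

```python
import torch
import torch.nn as nn

class SEConvBlock(nn.Module):
    """1D convolution block with a squeeze-and-excitation operation [39]."""
    def __init__(self, c_in, c_out, reduction=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
            nn.BatchNorm1d(c_out), nn.ReLU(), nn.Dropout(0.3),
        )
        self.se = nn.Sequential(
            nn.Linear(c_out, c_out // reduction), nn.ReLU(),
            nn.Linear(c_out // reduction, c_out), nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, channels, time)
        x = self.conv(x)
        w = self.se(x.mean(dim=2))        # squeeze: global average over time
        return x * w.unsqueeze(2)         # excite: re-weight the channels

class MetaSelfModel(nn.Module):
    """Robot signature encoder Phi_ENC plus the 7-head decoder Phi_DEC."""
    def __init__(self, d=256):
        super().__init__()
        self.encoder = nn.Sequential(     # three conv blocks, then a 2-layer MLP
            SEConvBlock(30, 64), SEConvBlock(64, 128), SEConvBlock(128, 128),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, d), nn.ReLU(), nn.Dropout(0.3), nn.Linear(d, d),
        )
        self.leg_head = nn.Linear(d, 30)  # leg-position pattern, 30 classes
        self.jnt_heads = nn.ModuleList(
            [nn.Linear(d, 12) for _ in range(6)])  # six joint heads, 12 classes

    def forward(self, x):                 # x: (batch, 30, T) state+action channels
        z = self.encoder(x)               # latent robot signature z
        return self.leg_head(z), [h(z) for h in self.jnt_heads]
```

Predicted indices are then obtained by taking the argmax over each head's logits, as described for Fig. 4.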
The total loss across all seven heads is aggregated by taking a weighted sum controlled by a ratio hyper-parameter λ, owing to the varying difficulty of classifying legs versus joints:

L_total = λ·L_leg + (1 − λ)·(1/6)·Σ_{i=1}^{6} L_joint^{(i)}    (2)

Fig. 3: Coding method of the icosahedron body and the angle of each link. a) Integer vector coding for the icosahedron body. b) Integer vector coding for the angle of the twelve links. Each face of the icosahedron body is sequentially numbered in a counterclockwise direction from top to bottom. The connected joints allow rotation angles, divided into 12 segments with a 30-degree separation, also numbered in a counterclockwise manner.

B. Data Collection

Our data collection schema consists of two phases: robot generation and dynamic data collection. We randomly generated 200K robots with different configuration codings, where each code maps to a unique URDF file describing the robot's morphology. All generated robots are symmetric and can stand when loaded into the simulation.

Algorithm 1: Robot Generation
Data: left face indices I_f, number of robots N, configuration mirror function m
Result: robot dataset D_r as a mapping from config code to URDF
D_r ← ∅
while |D_r| < N do
    C_J ← RandInt(range=12, n=6)
    C_L ← Rand({C ⊆ I_f | card(C) = 2})
    C ← [C_L; m(C_L); C_J; m(C_J)]
    R ← URDF(C)
    if collide(R) or slip(R) then continue
    else D_r ← D_r ∪ {C : R}
end

We randomly generated 163k robots for the reconfigurable legged robot dataset with different configuration codes, as described in Algorithm 1. We generated the corresponding URDF description file for each configuration coding and loaded them in the PyBullet physics engine for validation. We filtered out robots with self-collisions and those that would slip over when loaded into the simulation. Slipping over is triggered when the robot's body roll or pitch value is greater than π/2 while it moves. For simplicity, we constrained the robot structure in two ways: 1) only 12 of the 20 faces can be chosen to attach legs, all of them located in the middle and bottom layers of the icosahedron; 2) all robots are symmetric, meaning that given half of the configuration code (8 values) on one side of the robot, we are guaranteed to determine the other half through a mirror function m. This simplification drastically reduces the total number of possible configurations to 2 · C(6,2) · 12^6 ≈ 8.96 × 10^7. Despite the simplification, it still remains challenging to predict the structure of these robots due to the vast array of potential configurations.

Fig. 4: The model architecture of the classifier. The robot signature encoder (a) employs three 1D convolution blocks for channel-wise dependency. Each convolution block (b) employs the squeeze-and-excitation operation. The spatial features are then encoded through a 2-layer MLP network into a latent vector. Lastly, the latent vector is decoded by seven prediction heads for the leg positions and the six joints on one side of the robot, where each head is a single fully connected layer. The predicted indices for leg positions and joint angles are selected by taking the argmax over each head's output.

d) Input Data Sampling: We adopted a sampling approach for the input state sequences during training. During data collection, we collected the motion dynamics of each robot with a size of 16 × 100 × 30. Rather than taking all collected trajectories as input, we sampled 10 contiguous trajectories as input x. Each trajectory is a vector of 16 × 30, representing the 30 robot state parameters over 16 steps. For each robot configuration training instance, we forward the input data with a size of 16 × 10 × 30 through the model to get the predicted logits ŷ and update the gradients based on the configuration label y and the loss function L_total with a ratio hyper-parameter λ = 0.75, determined empirically.
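Equations (1) and (2) translate directly into code. A minimal sketch, assuming the two-output model interface from the sketch above (PyTorch's cross_entropy combines the softmax and log of Eq. (1)):

```python
import torch.nn.functional as F

def total_loss(leg_logits, jnt_logits, leg_label, jnt_labels, lam=0.75):
    """Eq. (2): weighted sum of the leg-head loss and the mean of the six
    joint-head losses, each a categorical cross-entropy as in Eq. (1)."""
    l_leg = F.cross_entropy(leg_logits, leg_label)
    l_jnt = sum(F.cross_entropy(lg, lb)
                for lg, lb in zip(jnt_logits, jnt_labels)) / 6.0
    return lam * l_leg + (1.0 - lam) * l_jnt
```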
We split the data with an 8:2 ratio for training and validation. Our model parameters are optimized using the PyTorch [40] deep-learning framework, and the model hyperparameters are tuned adaptively using the HYPERBAND [41] algorithm through the Weights & Biases [42] package. We employed the Adam optimizer [43] with learning rate 3e-4 and weight decay 1e-5. We used the ReLU activation function, Batch Normalization [44], and a Dropout [45] probability of 0.3 for all convolution layers and the first MLP layer. We also applied gradient clipping with value 1 for the LSTM module to avoid gradient explosion. The training was done on one Nvidia RTX 2080 GPU for 800 epochs with batch size 128, taking 38 hours in total.

IV. EXPERIMENTS

A. Dynamic Data Collection

For each valid robot configuration, we collected its state sequence for ten trials, each containing ten steps of motor babbling cycles. During random movement, the robot might fall over before finishing 6 steps. If that happens, the trial is aborted and rerun, as shown in Algorithm 2.

Algorithm 2: Dynamic Data Collection
Data: robot dataset D_r, number of steps N, number of trajectories T, sine-gait action function a
Result: dynamic dataset D_k
foreach {C : R} ∈ D_r do
    D_k^(C) ← ∅
    while |D_k^(C)| < T do
        D_t ← ∅
        for n = 1, ..., N do
            θ ← Rand(n=10)
            R.next_step(a(θ))
            if slip(R) then break
            D_t ← D_t ∪ {state(R)}
        end
        D_k^(C) ← D_k^(C) ∪ D_t
    end
end

For each robot, we collected its dynamic state (position, orientation, joint angles) sequences while it performed random motor babbling actions defined by the parametric sine-gait function in equation (3). A is the amplitude parameter and ϕ is the phase-shift parameter; i indexes the three joints of the same leg (ordered as inner, middle, and outer joints), and j indexes the four legs (ordered as right-hind, right-front, left-front, and left-hind legs); t is the current timestep, and τ = 16 is a predetermined period constant indicating the number of sub-steps within a cycle. The action a_ij indicates the target angle of joint i in leg j at timestep t. During motor babbling, A, ϕ, and the robot's initial joint positions θ_ij are chosen uniformly at random with values normalized to the motor action space. Meanwhile, robot state sequences are collected at each timestep.

a_ij = A_i · sin( ((t mod τ)/τ) · 2π + ϕ_j ) + θ_ij    (3)

At timestep t, the robot's state S_t ∈ R^18 consists of the position and Euler angles of the icosahedron body at the center of mass [x, y, z, ψ, θ, ϕ], as well as the angles of the 12 joints. The dynamic state data over an entire babbling cycle can thus be seen as a multivariate time series of 18 channels with sequence length τ. In terms of actions, the robot's subsequent action at timestep t + 1 is represented as A_{t+1} ∈ R^12. Each dynamic motion datum incorporates 30 parameters: 18 from the state data S_t and 12 from the next actions A_{t+1}.

B. Baselines

To evaluate the effectiveness of our approach, we designed three baselines. The BL-LSTM baseline is a variation of our proposed method (OM-Conv1). The motivation behind this baseline is to evaluate the temporal learning capacity of LSTM cells when employed within our framework [46]. LSTMs, with their inherent capability of learning long-term dependencies, could be a promising alternative for handling sequences. In this variation, we replace the Conv1D block (Fig. 4) with two layers of Long Short-Term Memory (LSTM) cells.
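As a rough sketch of that substitution (the hidden size and layer widths are assumptions, not the paper's reported settings):

```python
import torch.nn as nn

class LSTMSignatureEncoder(nn.Module):
    """BL-LSTM baseline: two LSTM layers in place of the Conv1D blocks."""
    def __init__(self, d=256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=30, hidden_size=128,
                            num_layers=2, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(128, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, x):                 # x: (batch, time, 30)
        out, _ = self.lstm(x)
        return self.mlp(out[:, -1])       # last hidden state as the signature
```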
This variation lets us discern whether a dedicated temporal model can perform comparably to or better than our Conv1D block.

With the BL-MLP baseline, we aim to explore the efficiency of simple feed-forward neural networks on our task. By replacing the Conv1D blocks with fully connected (dense) layers, the architecture essentially turns into a multi-layer perceptron (MLP) model. This variation is critical as it measures how well non-sequential models perform side-by-side with our proposed method and other sequential methods.

Recognizing that a conventional Inertial Measurement Unit (IMU) struggles to accurately capture the center-of-mass (CoM) positional data (x, y, z), we incorporated a baseline to simulate such a real-world scenario. By training the model without CoM location data as input, we seek to understand how important this data is for effective modeling and prediction within our framework.

C. Evaluation Metrics

During the training and evaluation phases, we observed three key metrics. Leg-Acc: prediction accuracy for the leg position configuration, in percent. Jnt-Acc-Avg: average prediction accuracy over the 6 joints. Tot-Acc: prediction accuracy over the legs and all 6 joints. We also used an L1 distance error function during evaluation between the prediction and the ground truth for a more intuitive view. When calculating these errors, the L1 distance between configurations "0" and "11" is 1, so the largest error is 6.

Fig. 5: Performance Comparisons. This figure presents bar plots comparing the leg accuracy, average joint accuracy, and total accuracy of our proposed method against the three baselines. The results show that our method outperforms the baselines across all metrics.

D. Quantitative Evaluations

During the model training phase, each model variation was trained ten times, each time with different hyperparameters. After these runs, we selected the best model, i.e., the one with the minimum loss, for evaluation. Our evaluation utilized a test dataset comprising 40k robots. The process involved inputting the state sequence and predicting the robot configuration name. The L1 distance error of each prediction head is detailed in Table I.

TABLE I: Accuracy of predicting joint configuration

                             Jnt 1   Jnt 2   Jnt 3   Jnt 4   Jnt 5   Jnt 6
Acc            OM            0.749   0.743   0.746   0.746   0.752   0.744
               OM-rm xyz     0.592   0.586   0.588   0.590   0.590   0.587
               BL-LSTM       0.629   0.629   0.625   0.623   0.631   0.633
               BL-MLP        0.267   0.268   0.265   0.263   0.267   0.269
Mean Err-Dist  OM            0.057   0.059   0.059   0.058   0.057   0.058
               OM-rm xyz     0.104   0.107   0.107   0.105   0.106   0.105
               BL-LSTM       0.087   0.088   0.090   0.089   0.087   0.087
               BL-MLP        0.271   0.273   0.272   0.274   0.274   0.273
Err-Dist Std   OM            0.129   0.131   0.134   0.128   0.128   0.129
               OM-rm xyz     0.172   0.177   0.177   0.172   0.176   0.174
               BL-LSTM       0.152   0.154   0.158   0.155   0.153   0.153
               BL-MLP        0.268   0.268   0.268   0.268   0.270   0.269

In the table, Acc is the prediction accuracy for the joint angle configuration, in percent. Jnt 1 and Jnt 4 are inner joints, Jnt 2 and Jnt 5 are middle joints, and Jnt 3 and Jnt 6 are outer joints. Err-Dist is the L1 distance between the predicted and true configuration integer vectors. We altered our model's architecture using various modules to assess the encoder module's performance within the meta-self-model under different input data. Our results in the accompanying Fig. 5 and Table I show that our method with a convolutional module in the encoder provides satisfactory performance, predicting leg configurations with 96% accuracy.
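The circular L1 error used in Table I (and defined with the coding in Section III-A) can be written in a few lines; a minimal sketch:

```python
def joint_error(pred, true, n=12):
    """Circular L1 distance between joint-angle indices: adjacent codes such
    as 0 and 11 differ by 1, so the largest possible error is n // 2 = 6."""
    d = abs(pred - true) % n
    return min(d, n - d)

assert joint_error(11, 0) == 1   # wrap-around neighbors
assert joint_error(6, 0) == 6    # maximal error, e.g. true "0", predicted "6"
```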
In Fig. 6, we show robots for which the meta-self-model prediction achieved a 100% hit rate for both the joint and leg positions. The displayed configurations further emphasize that the model can consistently identify a wide spectrum of robot morphologies.

Fig. 6: Test Dataset Samples Visualization. A selection of nine robots was randomly chosen from the test evaluation dataset. Our model successfully identified both the leg configurations and all 6 joint positions accurately, yielding a 100% accuracy rate for these samples. The robot configuration name is denoted under each image.

E. Real-world Experiment Results

In the real-world experiments, we tested two reconfigurable robots, as shown in Fig. 7. To validate whether our model, trained solely in a simulated environment, could effectively predict configurations in the real world, neither robot was seen during the training of the meta-self-model. Each robot collected 10 trajectories in the real world over about 10 seconds. The state data can all be obtained through the Intel RealSense tracking camera T265. Every step involved an action and the subsequent change in the robot's state, resulting in an input size of 10 × 16 × 30. Both robot leg configurations were predicted with 100% accuracy, demonstrating that the meta-self-model trained in simulation can also predict robot leg configurations from dynamic data in the real-world environment.

Fig. 7: Real-World Testing of Reconfigurable Robots with Their Corresponding Configuration Name. a) Image of two tested robots, showing their unique configurations as utilized in the real-world experiment. The configuration names for both robots are presented below the images.

The joint prediction accuracy varied between the two robots. For the robot shown in Fig. 7a (left), we achieved a prediction accuracy rate of 50.0%. Meanwhile, for the robot in Fig. 7b (right), the joint prediction accuracy was slightly higher, standing at 66.7%. Given the limited number of interaction steps, the overall prediction accuracy validates the effectiveness of our method in a real-world scenario.

V. CONCLUSIONS

This paper presents a meta-self-modeling approach for robots to understand their own morphology from proprioceptive data. A 12-DoF reconfigurable legged robot was designed, enabling the creation of a diverse dataset of 200,000 unique robot configurations. A multiclass-multioutput robot morphology classifier was developed to predict unseen robot configurations by observing limited self-movement data. The model architecture consists of a robot signature encoder to extract latent morphological representations from state sequences, and a configuration decoder with multiple classification heads to predict the robot's leg positions and joint angles. Our experimental evaluations, conducted in both simulated environments and real-world settings, affirm the robustness and reliability of our meta-self-model. In simulation, the model showcased its ability to predict leg configurations with exceptional accuracy, illustrating its deep understanding of the intricate relationship between motion dynamics and morphology. Real-world experiments further validated the model's applicability, demonstrating that it can successfully translate its predictions from simulation to physical robots.

However, it is noteworthy that the model's performance in predicting joint positions encountered limitations, particularly in real-world tests. This discrepancy underscores the challenges in accurately predicting robot morphology across different environments. This work makes progress in robotic self-modeling by learning shared dynamics across diverse morphologies. The latent space of the model captures an understanding of how morphology relates to dynamic motion, which could enable future applications such as adaptive control, precise dynamics prediction, and visualization of robot structures. The 12-DoF reconfigurable robot design and the dataset of 200k configurations are open-sourced to enable further research. Overall, this meta-self-modeling approach offers a path toward autonomous robot identification through proprioception, enabling robots to understand their body structure and dynamics.

REFERENCES

[1] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakrishnan, K. Hausman, et al., "Do as I can, not as I say: Grounding language in robotic affordances," arXiv preprint arXiv:2204.01691, 2022.
[2] Y. Cao and C. G. Lee, "Ground manipulator primitive tasks to executable actions using large language models," in Proceedings of the AAAI Symposium Series, vol. 2, no. 1, 2023, pp. 502–507.
[3] B. Yu, H. Kasaei, and M. Cao, "L3MVN: Leveraging large language models for visual target navigation," in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023, pp. 3554–3560.
[4] Y. Ding, X. Zhang, C. Paxton, and S. Zhang, "Task and motion planning with large language models for object rearrangement," in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023, pp. 2086–2092.
[5] J. Bongard, V. Zykov, and H. Lipson, "Resilient machines through continuous self-modeling," Science, vol. 314, no. 5802, pp. 1118–1121, 2006.
[6] G.-Z. Yang, J. Bellingham, P. E. Dupont, P. Fischer, L. Floridi, R. Full, N. Jacobstein, V. Kumar, M. McNutt, R. Merrifield, et al., "The grand challenges of Science Robotics," Science Robotics, vol. 3, no. 14, p. eaar7650, 2018.
[7] A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret, "Robots that can adapt like animals," Nature, vol. 521, no. 7553, pp. 503–507, 2015.
[8] U. Proske and S. C. Gandevia, "The proprioceptive senses: their roles in signaling body shape, body position and movement, and muscle force," Physiological Reviews, 2012.
[9] B. O'Shaughnessy, "Proprioception and the body image," The Body and the Self, pp. 175–203, 1995.
[10] J. W. Hart, Robot Self-Modeling. Yale University, 2014.
[11] E. Jankowska, "Interneuronal relay in spinal pathways from proprioceptors," Progress in Neurobiology, vol. 38, no. 4, pp. 335–378, 1992.
[12] R. Kwiatkowski, Y. Hu, B. Chen, and H. Lipson, "On the origins of self-modeling," arXiv preprint arXiv:2209.02010, 2022.
[13] N. V. Boulgouris, D. Hatzinakos, and K. N. Plataniotis, "Gait recognition: a challenging signal processing technology for biometric identification," IEEE Signal Processing Magazine, vol. 22, no. 6, pp. 78–90, 2005.
[14] F. Han, B. Reily, W. Hoff, and H. Zhang, "Space-time representation of people based on 3D skeletal data: A review," Computer Vision and Image Understanding, vol. 158, pp. 85–105, 2017.
[15] B. C. Munsell, A. Temlyakov, C. Qu, and S. Wang, "Person identification using full-body motion and anthropometric biometrics from Kinect videos," in European Conference on Computer Vision. Springer, 2012, pp. 91–100.
[16] G. G. Gallup Jr, "Self-awareness and the emergence of mind in primates," American Journal of Primatology, vol. 2, no. 3, pp. 237–248, 1982.
[17] P. Rochat, "Five levels of self-awareness as they unfold early in life," Consciousness and Cognition, vol. 12, no. 4, pp. 717–731, 2003.
[18] B. Chen, R. Kwiatkowski, C. Vondrick, and H. Lipson, "Fully body visual self-modeling of robot morphologies," Science Robotics, vol. 7, no. 68, p. eabn1944, 2022.
[19] A. Dearden and Y. Demiris, "Learning forward models for robots," in IJCAI, vol. 5, 2005, p. 1440.
[20] D. M. Wolpert, R. C. Miall, and M. Kawato, "Internal models in the cerebellum," Trends in Cognitive Sciences, vol. 2, no. 9, pp. 338–347, 1998.
[21] Y. Hu, B. Chen, and H. Lipson, "Egocentric visual self-modeling for legged robot locomotion," arXiv preprint arXiv:2207.03386, 2022.
[22] R. Kwiatkowski and H. Lipson, "Task-agnostic self-modeling machines," Science Robotics, vol. 4, no. 26, p. eaau9354, 2019.
[23] J. Bongard and H. Lipson, "Nonlinear system identification using coevolution of models and tests," IEEE Transactions on Evolutionary Computation, vol. 9, no. 4, pp. 361–384, 2005.
[24] T. Wu and J. Movellan, "Semi-parametric Gaussian process for robot system identification," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 725–731.
[25] W. Yu, J. Tan, C. K. Liu, and G. Turk, "Preparing for the unknown: Learning a universal policy with online system identification," 2017.
[26] D. Bruder, C. D. Remy, and R. Vasudevan, "Nonlinear system identification of soft robot dynamics using Koopman operator theory," in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 6244–6250.
[27] K. Rakelly, A. Zhou, C. Finn, S. Levine, and D. Quillen, "Efficient off-policy meta-reinforcement learning via probabilistic context variables," in International Conference on Machine Learning. PMLR, 2019, pp. 5331–5340.
[28] V. Kurin, M. Igl, T. Rocktäschel, W. Boehmer, and S. Whiteson, "My body is a cage: the role of morphology in graph-based incompatible control," arXiv preprint arXiv:2010.01856, 2020.
[29] W. Huang, I. Mordatch, and D. Pathak, "One policy to control them all: Shared modular policies for agent-agnostic control," in International Conference on Machine Learning. PMLR, 2020, pp. 4455–4464.
[30] A. Gupta, L. Fan, S. Ganguli, and L. Fei-Fei, "MetaMorph: Learning universal controllers with transformers," arXiv preprint arXiv:2203.11931, 2022.
[31] B. Trabucco, M. Phielipp, and G. Berseth, "AnyMorph: Learning transferable policies by inferring agent morphology," in International Conference on Machine Learning. PMLR, 2022, pp. 21677–21691.
[32] M. Yim, W.-M. Shen, B. Salemi, D. Rus, M. Moll, H. Lipson, E. Klavins, and G. S. Chirikjian, "Modular self-reconfigurable robot systems [grand challenges of robotics]," IEEE Robotics & Automation Magazine, vol. 14, no. 1, pp. 43–52, 2007.
[33] A. Castano and P. Will, "Representing and discovering the configuration of CONRO robots," in Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No. 01CH37164), vol. 4. IEEE, 2001, pp. 3503–3509.
[34] M. Ceccarelli and C. Lanni, "A multi-objective optimum design of general 3R manipulators for prescribed workspace limits," Mechanism and Machine Theory, vol. 39, no. 2, pp. 119–132, 2004.
[35] A. Yun, D. Moon, J. Ha, S. Kang, and W. Lee, "MODMAN: an advanced reconfigurable manipulator system with genderless connector and automatic kinematic modeling algorithm," IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 4225–4232, 2020.
[36] S. Ha, S. Coros, A. Alspach, J. Kim, and K. Yamane, "Computational co-optimization of design parameters and motion trajectories for robotic systems," The International Journal of Robotics Research, vol. 37, no. 13-14, pp. 1521–1536, 2018.
[37] J. Kim, A. Alspach, and K. Yamane, "Snapbot: A reconfigurable legged robot," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 5861–5867.
[38] S. Ha, S. Coros, A. Alspach, J. Kim, and K. Yamane, "Task-based limb optimization for legged robots," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 2062–2068.
[39] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132–7141.
[40] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al., "PyTorch: An imperative style, high-performance deep learning library," Advances in Neural Information Processing Systems, vol. 32, 2019.
[41] L. Li, K. Jamieson, G. DeSalvo, A. Rostamizadeh, and A. Talwalkar, "Hyperband: A novel bandit-based approach to hyperparameter optimization," The Journal of Machine Learning Research, vol. 18, no. 1, pp. 6765–6816, 2017.
[42] L. Biewald, "Experiment tracking with Weights and Biases," 2020, software available from wandb.com. [Online]. Available: https://www.wandb.com/
[43] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[44] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in International Conference on Machine Learning. PMLR, 2015, pp. 448–456.
[45] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
[46] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
topic: ai_researcher
relevance score: 1
paper name: A_cross-sectional_study_on_risk_factors_and_their_interactions_with_suicidal_ideation_among_the_elderly_in_rural_communities_of_Hunan_China.pdf
On Fan-Crossing Graphs

Franz J. Brandenburg
94030 Passau, Germany
[email protected]

arXiv:1712.06840v1 [cs.DM] 19 Dec 2017

Abstract. A fan is a set of edges with a single common endpoint. A graph is fan-crossing if it admits a drawing in the plane so that each edge is crossed by edges of a fan. It is fan-planar if, in addition, the common endpoint is on the same side of the crossed edge. A graph is adjacency-crossing if it admits a drawing so that crossing edges are adjacent. Then it excludes independent crossings, which are crossings by edges with no common endpoint. Adjacency-crossing allows triangle-crossings, in which an edge crosses the edges of a triangle; these are excluded in fan-crossing graphs. We show that every adjacency-crossing graph is fan-crossing. Thus triangle-crossings can be avoided. On the other hand, there are fan-crossing graphs that are not fan-planar, whereas for every fan-crossing graph there is a fan-planar graph on the same set of vertices and with the same number of edges. Hence, fan-crossing and fan-planar graphs are different, but they do not differ in their density, with at most 5n − 10 edges for graphs of size n.

1 Introduction

Graphs with or without special patterns for edge crossings are an important topic in Topological Graph Theory, Graph Drawing, and Computational Geometry. Particular patterns are no crossings, single crossings, fans, independent edges, or no three pairwise crossing edges. A fan is a set of edges with a single common endpoint. In complement, edges are independent if they do not share a common endpoint. Important graph classes have been defined in this way, including the planar, 1-planar [12,13], fan-planar [4,5,11], fan-crossing free [9], and quasi-planar graphs [3]. A first-order logic definition of these and other graph classes is given in [6]. These definitions are motivated by the need for classes of non-planar graphs from real-world applications, and by a negative correlation between edge crossings and the readability of graph drawings by human users. The aforementioned graph classes aim to meet both requirements.

We consider undirected graphs G = (V, E) with finite sets of vertices V and edges E that are simple both in a graph-theoretic and in a topological sense. Thus we do not admit multiple edges and self-loops, and we exclude multiple crossings of two edges and crossings among adjacent edges.

A drawing of a graph G is a mapping of G into the plane so that the vertices are mapped to distinct points and each edge is mapped to a Jordan arc between the endpoints. Two edges cross if their Jordan arcs intersect in a point other than an endpoint. Crossings subdivide an edge into uncrossed pieces, called edge segments, whose endpoints are vertices or crossing points. An edge is uncrossed if and only if it consists of a single edge segment. A drawn graph is called a topological graph. In other works, a topological graph is called an embedding, which is the class of topologically equivalent drawings. An embedding defines a rotation system, which is the cyclic sequence of edges incident to each vertex. A drawn graph partitions the plane into topologically connected regions, called faces. The unbounded region is called the outer face. The boundary of each face consists of a cyclic sequence of edge segments. It is commonly specified by the sequence of vertices and crossing points of the edge segments. The subgraph of a graph G induced by a subset U of vertices is denoted by G[U].
It inherits its embedding from an embedding of G, from which all vertices not in U and all edges with at most one endpoint in U are removed.

Fig. 1. (a) A fan-crossing and (b) an independent crossing or fan-crossing free.

An edge e has a fan-crossing if the crossing edges form a fan, as in Fig. 1(a), and an independent crossing if the crossing edges are independent, see Fig. 1(b). Fan-crossings are also known as radial (k, 1) grid crossings and independent crossings as grid crossings [1]. Independent crossings are excluded if and only if adjacency-crossings are allowed, in which two edges are adjacent if they both cross an edge [6].

Fan-planar graphs were introduced by Kaufmann and Ueckerdt [11], who imposed a special restriction, called configuration II. It is shown in Fig. 2(a). Let e, f and g be three edges in a drawing so that e is crossed by f and g, and f and g share a common vertex t. Then they form configuration II if one endpoint of e is inside a cycle through t with segments of e, f and g, and the other endpoint of e is outside this cycle. If e = {u, v} is oriented from u (left) to v (right) and f and g are oriented away from t, then f and g cross e from different directions. Configuration II admits triangle-crossings, in which an edge crosses the edges of a triangle, see Fig. 2(b). Observe that a triangle-crossing is the only configuration in which an edge is crossed by edges that do not form a fan and that are not independent.

A graph is fan-crossing free if it admits a drawing without fan-crossings [9]. Then there are only independent crossings.

Fig. 2. (a) Configuration II, in which edge e = {u, v} is crossed by edges {t, x} and {t, y} and x and y are on opposite sides of e, and (b) edge e = {u, v} crosses a triangle. The shaded regions represent subgraphs which shall prohibit another routing of e. Similar regions could be added to (a), as in Fig. 12.

A graph is fan-crossing if it admits a drawing in which each crossing is a fan-crossing, and adjacency-crossing if it can be drawn so that each edge is crossed by edges that are adjacent. Then independent crossings are excluded. As stated in [6], adjacency-crossing is complementary to independent crossing, but the graph classes are not complementary, and both properly include the 1-planar graphs. A graph is fan-planar if it avoids independent crossings and configuration II [11].

Observe the subtle differences between adjacency-crossing, fan-crossing, and fan-planar graphs, which each exclude independent crossings, and in addition exclude triangle-crossings and configuration II, respectively. Kaufmann and Ueckerdt [11] observed that configuration II cannot occur in straight-line drawings, so that every straight-line adjacency-crossing drawing is fan-planar. They proved that fan-planar graphs of size n have at most 5n − 10 edges and posed the density of adjacency-crossing graphs as an open problem. The density defines an upper bound on the number of edges in graphs of size n. We show that triangle-crossings can be avoided by an edge rerouting, and that configuration II can be restricted to a special case. Moreover, the allowance or exclusion of configuration II has no impact on the density, which answers the above question. In particular, we prove the following:

1. Every adjacency-crossing graph is fan-crossing. Thus triangle-crossings can be avoided.
2. There are fan-crossing graphs that are not fan-planar. Thus configuration II is essential.
3.
For every fan-crossing graph G there is a fan-planar graph G′ on the same set of vertices and with (at least) the same number of edges. Thus fan-crossing graphs of size n have at most 5n − 10 edges.

We prove that triangle-crossings can be avoided by an edge rerouting in Section 2 and study configuration II in Section 3. We conclude in Section 4 with some open problems on fan-crossing graphs.

2 Triangle-Crossings

In this section, all embeddings E(G) are adjacency-crossing, or equivalently, they exclude independent crossings. We consider triangle-crossings and show that they can be avoided by an edge rerouting. A rerouted edge is denoted by ẽ if e is the original one. More formally, we transform an adjacency-crossing embedding E(G) into an adjacency-crossing embedding Ẽ(G) which differs from E(G) in the embedding of the rerouted edges, such that ẽ does not cross a particular triangle if e crosses that triangle.

For convenience, we assume that triangle-crossings are in a standard configuration, in which a triangle ∆ = (a, b, c) is crossed by edges e1, . . . , ek for some k ≥ 1 that cross each edge of ∆. We call each ei a triangle-crossing edge of ∆. These edges are incident to a common vertex u if k ≥ 2. We assume that a triangle-crossing edge e = {u, v} crosses {a, c}, {b, c} and {a, b} in this order and that u is outside ∆. Then v must be inside ∆. All other cases are similar, exchanging inside and outside and the order in which the edges of ∆ are crossed.

We need some further notation. Let fan(v) denote a subset of edges incident to vertex v that cross a particular edge. This is a generic definition. If the crossed edge is given, then fan(v) can be retrieved from the embedding E(G). In general, fan(v) does not contain all edges incident to v. A sector is a subsequence of edges of fan(v) properly between two edges {v, s} and {v, t} in clockwise order. An edge e is covered by a vertex v if e is crossed by at least two edges incident to v, so that fan(v) has at least two elements. Let cover(v) denote the set of edges covered by v. Note that uncrossed edges and edges that are crossed only once are not covered. If an edge e is crossed by an edge g = {u, v}, then e is a candidate for cover(u) or cover(v), and e ∉ cover(w) for any other vertex w ≠ u, v, except if e crosses a triangle. In fact, an edge e = {u, v} is triangle-crossing if and only if {e} = cover(x) ∩ cover(y) for vertices x ≠ y. To see this, observe that e ∈ cover(x) for x = a, b, c if e crosses a triangle ∆ = (a, b, c). Conversely, if e is crossed by edges {a, w1}, {a, w2} and {b, w3} with a ≠ b and w1 ≠ w2, then w1 = w3 and w2 = b (up to renaming) if there are no independent crossings.

Triangle-crossings are special. If an edge e crosses a triangle ∆, then e cannot be crossed by any edge other than the edges of ∆. In particular, e cannot cross another triangle or another triangle-crossing edge. But an edge may be part of two triangle-crossings, as a common edge of two crossed triangles, as shown in Fig. 3(a), or as a triangle-crossing edge of one triangle and an edge of another triangle, as shown in Fig. 3(b), and both configurations can be combined. A particular example is K5, which has five embeddings [10], see Fig. 4. The one of Fig. 4(e) has a triangle-crossing. If it is part of an adjacency-crossing embedding, then we show that it can be transformed into the embedding of Fig. 4(c) by rerouting an edge of the crossed triangle.
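The notions of fan(v), cover(v), and the triangle-crossing test above lend themselves to a direct combinatorial check. The following Python sketch is illustrative only (it assumes crossings are given as pairs and is not part of the paper): it tests whether the edges crossing a given edge form a fan, computes cover sets, and flags triangle-crossing edges via the characterization {e} = cover(x) ∩ cover(y).

```python
from collections import defaultdict
from itertools import combinations

def is_fan(crossers):
    """True if all edges crossing a given edge share a common endpoint."""
    common = set(crossers[0])
    for edge in crossers[1:]:
        common &= set(edge)
    return bool(common)

def cover_sets(crossings):
    """crossings: iterable of pairs (e, g), each meaning edge e is crossed by g.
    cover(v) = edges crossed by at least two edges incident to v."""
    hits = defaultdict(lambda: defaultdict(int))
    for e, g in crossings:
        for v in g:                       # g = {u, v}; e is a candidate for
            hits[v][e] += 1               # cover(u) and cover(v)
    return {v: {e for e, k in d.items() if k >= 2} for v, d in hits.items()}

def triangle_crossing_edges(crossings):
    """Edges e with {e} = cover(x) ∩ cover(y) for two distinct vertices x, y."""
    cov = cover_sets(crossings)
    result = set()
    for x, y in combinations(cov, 2):
        common = cov[x] & cov[y]
        if len(common) == 1:
            result |= common
    return result
```

For example, recording that e = (u, v) is crossed by (a, b), (b, c), and (a, c) puts e into cover(a), cover(b), and cover(c), and triangle_crossing_edges reports {e}, in line with the characterization above.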
Fig. 3: Two crossed triangles sharing (a) an edge or (b) an edge and a triangle-crossing edge.

Fig. 4: All non-isomorphic embeddings of K5 [10] with two drawings. Only (a) is 1-planar and fan-crossing free, (b), (c), and (d) are fan-planar, and (e) is adjacency-crossing and has a triangle-crossing with the triangle-crossing edge drawn red. Our rerouting transforms (e) into (c) and reroutes and straightens the curved edge.

In return, the edges of ∆ can only be crossed by edges of fan(u) or fan(v) if e = {u, v} is a triangle-crossing edge of ∆. They are covered by u if there are at least two triangle-crossing edges incident to u. In addition, there may be edges that cross only one or two edges of ∆. These are incident to u or v, and they are incident to u if there are at least two triangle-crossing edges incident to u. We assume a standard configuration and classify crossing edges by the sequence of crossed edges, as stated in Table 1.

Table 1. Classification of edges crossing the edges of a triangle ∆ = (a, b, c)

name                                 set         sequence of crossed edges
needle                               N1, N2, N3  {a, c}
a-hook                               Ha          {a, b}
c-hook                               Hc          {b, c}
a-arrow                              Aa          {a, c}, {a, b}
c-arrow                              Ac          {a, c}, {b, c}
a-sickle                             Sa          {a, b}, {a, c}
c-sickle                             Sc          {b, c}, {a, c}
clockwise triangle-crossing          C           {a, c}, {b, c}, {a, b}
counterclockwise triangle-crossing   CC          {a, c}, {a, b}, {b, c}

Suppose that u is outside ∆. Then the other endpoint of g = {u, w} is inside ∆ if g is a needle, a hook, or a triangle-crossing edge, and w is outside ∆ if g is an arrow or a sickle, see Fig. 5(a). An a-arrow and an a-sickle are covered by a, since they are crossed by at least two edges of fan(a). Similarly, a c-arrow and a c-sickle are covered by c. A needle g may be covered by a or by c, and there is a preference for a (c) if g is before (after) any triangle-crossing edge according to the order of crossing points on {a, c} from a to c. Otherwise, there is an instance of configuration II, as shown in Fig. 9(a). Accordingly, an a-hook may be covered by a or by b, and the crossing edges are on or inside ∆ if it is covered by b, since the triangle-crossing edges prevent edges from b outside ∆ that cross a-hooks. By symmetry, we consider needles, hooks, arrows, and sickles from the viewpoint of vertex v inside ∆. Then a needle first crosses {a, b}, and an a-hook first crosses {a, c} and has its other endpoint outside ∆.

Fig. 5: Triangle-crossings (a) with clockwise triangle-crossing edges, c-hooks, c-sickles, and c-arrows crossing {b, c} drawn red, and counterclockwise triangle-crossing edges, a-arrows, a-hooks, and a-sickles crossing {a, b} drawn blue, and (b) rerouting the edges along ei and ej.

A triangle ∆ = (a, b, c) can be crossed by several triangle-crossing edges, even in opposite directions, see Fig. 5(a). We say that a triangle-crossing edge crosses clockwise if it crosses {a, c}, {b, c}, {a, b} cyclically in this order, and counterclockwise if it crosses the edges in the cyclic order {a, c}, {a, b}, {b, c}.

Lemma 1. Let E(G) be an adjacency-crossing embedding of a graph G such that a triangle ∆ is crossed by triangle-crossing edges in clockwise and in counterclockwise order. Then there is an adjacency-crossing embedding in which each triangle-crossing edge is rerouted so that it crosses only one edge of ∆, and no new triangle-crossings are introduced.

Proof. Suppose that the edges of ∆ = (a, b, c) are crossed by the edges of a set X.
If there are at least two triangle-crossing edges, then there is a vertex u so that X = fan(u). By our assumption, u is outside ∆ and {a, c} is crossed first. All other cases are similar. Classify the edges according to Table 1. Choose a clockwise triangle-crossing edge ei and a counterclockwise triangle-crossing edge ej, and assume that ei precedes ej in clockwise order at u. The other case is similar. Partition the set of needles so that N1, N2 and N3 are the sets of needles before ei, between ei and ej, and after ej in clockwise order at u. Then N3 < ej < N2 < ei < N1 according to the order (of the crossing points) on {a, c}. Accordingly, partition the set of counterclockwise triangle-crossing edges into CCl and CCr, where CCl comprises the edges before ei and CCr = CC − CCl is the set of edges after ei, and partition the set C into the sets Cl and Cr of edges to the left and right of ej.

Then the edges of ∆ are crossed by the edges of X = N1 ∪ N2 ∪ N3 ∪ Ha ∪ Hc ∪ Aa ∪ Ac ∪ Sa ∪ Sc ∪ C ∪ CC. Some of these sets may be empty. The edges from these sets are unordered at u. In particular, edges of C and CC may alternate, needles may appear anywhere, whereas c-hooks and c-sickles precede triangle-crossing edges, which precede a-hooks and a-sickles. We sort the edges of X in clockwise order at u and reroute them along ei and ej in the following order: Sc < N1 < CCl < Hc < Ac < CCr < N2 < Cr < Aa < Ha < Cl < N3 < Sa. Two edges in a set are ordered by the crossing points with the edges of ∆, so that adjacent edges do not cross one another.

The edges of Sc and N1 are routed along ei from u to the crossing point of ei and {a, c}, where they make a left turn and follow {a, c}. Then the rerouted edge g̃ follows the original g, so that g̃ crosses {a, c} if g is a needle. An edge g̃ first follows ei to the crossing point with {b, c} if g ∈ Hc ∪ CCl ∪ Ac ∪ CCr, then it follows {b, c} and finally g. If g ∈ Hc ∪ CCl, then g̃ makes a left turn, and a right turn for edges in CCr. Accordingly, edges g̃ make a left or right turn and cross {b, c} if g is an arrow. An edge g̃ may follow ei or ej from u to {a, c}, or adopts the route of g, if g ∈ N2 is a needle between the chosen triangle-crossing edges ei and ej. Similarly, edges of Cr, Aa, Cl, N3 and Sa are routed along ej from u to the crossing point with {a, b} and {a, c}, respectively, then along one of these edges, and finally along the original edge. For an illustration, see Fig. 5.

The rerouting saves many crossings. Only arrows cross two edges of ∆, and needles, hooks and triangle-crossing edges cross {a, c}. In fact, each rerouted edge is crossed by a subset of the edges crossing the original one, except if the edge is a hook. This is due to the fact that triangle-crossing edges are only crossed by the edges of the triangle. Hence, there are (uncrossed) segments from u to {a, c} and from {a, c} to {b, c} and {a, b}, respectively. In the final part, g̃ coincides with g and adopts the edge crossings from g. In consequence, g̃ crosses only {a, c} if g is a triangle-crossing edge. If g is a c-hook, then the crossing with edge {b, c} is replaced by a crossing with {a, c}, and crossings with edges of fan(c) outside ∆ are avoided. The replacement is feasible: a c-hook cannot be covered by b, since a further crossing edge {b, d} must cross a clockwise triangle-crossing edge, which is excluded. Hence, g̃ is crossed by edges of fan(c), and each edge h crossing g̃ is in fan(u). Similarly, edge {a, b} can be replaced by {a, c} at a-hooks.
Similarly, edge {a, b} can be replaced by {a, c} at a-hooks. The other rerouted edges adopt the crossings from the final part, so that new triangle-crossings cannot be introduced. Topological simplicity is preserved, since the bundle of edges is well-ordered, and two edges cross at most once, since there are segments from u to {a, c} and between {a, c} and {b, c} and {a, b}, respectively. In consequence, triangle-crossings of ∆ are avoided, there are no new triangle-crossings, and the obtained embedding is adjacency-crossing. ⊓⊔

The rerouting technique of Lemma 1 widely changes the order of the edges of fan(u), and it avoids many crossings. It is possible to restrict the rerouting to triangle-crossing edges so that they cross only a single edge of the triangle. Therefore, consider two consecutive crossing points of clockwise triangle-crossing edges or c-arrows and {b, c}, and reroute the counterclockwise crossing edges crossing {b, c} in the sector along one of the bounding edges. Accordingly, proceed with clockwise triangle-crossing edges and sectors of {a, b}. Thereby hooks, sickles, and arrows remain unchanged.

From now on, we assume that all triangle-crossing edges cross clockwise. We wish to reroute them along an a-arrow, a-hook, or a-sickle if such an edge exists. This is doable, but we must take a detour if the edge is covered by b or c.

Lemma 2. Suppose there is an adjacency-crossing embedding E(G) and a triangle ∆ is crossed by clockwise triangle-crossing edges. If there is an a-hook, an a-arrow, or an a-sickle, then some edges are rerouted so that ˜g crosses only one edge of ∆ if g is a triangle-crossing edge of ∆, and there are no new triangle-crossings.

Proof. Our target is edge {a, b} of ∆ = (a, b, c), where the crossing edges are ordered from a to the left to b. Then a-hooks and a-sickles are to the left of all triangle-crossing edges, whereas a-arrows are interspersed. Edge {a, b} is covered by u. Let f = {u, w} be the rightmost edge among all a-hooks, a-arrows, and a-sickles.

First, if f is an a-hook, then reroute all edges g crossing {a, b} to the right of f in a bundle from u to {a, b} along the outside of f, see Fig. 6(a). Since f is rightmost, edge g is triangle-crossing. Then ˜g makes a right turn, follows {a, b}, and finally follows g. Thereby, ˜g crosses {a, b}. Let F be the set of edges in the sector between {a, b} and {a, c} that cross f, i.e., outside ∆. Then ˜g is crossed by the edges of F and also by {a, b}. Each crossing edge is in fan(a) and is uncovered or covered by u. It cannot be covered by the other endpoint w of f, since w is inside ∆ and any edge {w, w′} crossing an edge {a, d} ∈ F must cross {a, b}, {a, c}, or a triangle-crossing edge, which is excluded, since it enforces an independent crossing. Thus ˜g is only crossed by edges of fan(a), and ˜g can be added to the fan of edges of fan(u) that cross such edges. Hence, all introduced crossings are fan-crossings, as Fig. 6(b) shows.

We would like to proceed accordingly if f is an a-sickle and reroute triangle-crossing edges along the outside of f from u to {a, b}. However, f may be crossed by edges {a, d} that are covered by w, as shown in Fig. 7(a). Then a rerouted edge along f introduces an independent crossing. We take another path. Let the a-sickle f = {u, w} cross {a, b} in p1 and {a, c} in p2, see Fig. 7(a). Let H be the set of edges that cross {a, c} between the first triangle-crossing edge e1 and f, including f.
Now we reroute all edges h ∈ H and all triangle-crossing edges g so that they first follow e1 from u to {a, c} and then {a, c}, where the edges ˜h branch off and follow h. If g is a triangle-crossing edge, then ˜g crosses {a, c} at p2 and then follows f, {a, b}, and finally g, see Fig. 7(b). The rerouted edges are uncrossed from u to their crossing point with {a, c}. Hence, each edge ˜h is crossed by a subset of the edges that cross h for h ∈ H. Let F be the set of edges crossing f in the sector between p1 and p2. Since f is covered by a, these edges are incident to a. Now ˜g is crossed by {a, c} and by the edges of F if g is triangle-crossing, so that ˜g is crossed by edges of fan(a). Each edge h ∈ F is in fan(u), since it crosses f = {u, w}, and it cannot be covered by w. Otherwise, it must be crossed by another edge {w, w′}. However, w is outside ∆, and {w, w′} must cross {a, c} or {a, b} or a triangle-crossing edge, which introduces an independent crossing. Hence, ˜g can be added to the fan of edges at u that cross h, so that there is a fan-crossing.

We proceed similarly if f = {u, w} is an a-arrow, see Fig. 8. Reroute all edges g that cross {a, c} to the right of the leftmost triangle-crossing edge e1, including e1. Then g is triangle-crossing or an a-arrow. Route ˜g from u to {a, c} along the first edge that crosses {a, c} and is covered by c, then along {a, c} to the crossing point with f, then along f, and finally along g. Then there is a segment from u to the crossing with {a, c}. In the sector between {a, c} and {a, b}, ˜g is crossed by the edges of fan(a) that cross f in this sector. If g is a triangle-crossing edge, then ˜g is not crossed by further edges, whereas ˜g adopts the crossings with further edges incident to a outside ∆ if g is an a-arrow. Now, ˜g is crossed by a subset of the edges that cross g if g is an a-arrow, since f is the rightmost a-arrow. If g is a triangle-crossing edge, then the edges crossing ˜g are incident to a, and each crossing edge is incident to u. It cannot be incident to or covered by the other endpoint w of f, since w is outside ∆ and the edges crossing ˜g are inside, and no further edge {w, w′} with w′ ≠ u can cross {a, b}, {a, c}, or a triangle-crossing edge. Hence, there is a fan-crossing, ˜g crosses only one edge of ∆ if g is triangle-crossing, and there are no new triangle-crossings. ⊓⊔

The existence of an a-hook, a-sickle, or a-arrow implies that edge {a, b} is covered by u. By symmetry, we can reroute all triangle-crossing edges if there are a-hooks, a-sickles, or a-arrows from the viewpoint of vertex v inside ∆. Then {a, c} is covered by v. For example, an arrow from v first crosses {a, b} and then {b, c}, so that vertex b is enclosed and triangle-crossing edges are rerouted along the outer side of the arrow. It remains to consider the case without such edges. Then there are only triangle-crossing edges, needles (from u and from v), c-hooks, c-arrows, and c-sickles.

Fig. 6. (a) An a-hook (drawn blue and dashed) and triangle-crossing edges, which (b) are rerouted along the a-hook.

Fig. 7. An a-sickle and triangle-crossing edges (a) before and (b) after the edge rerouting.

Fig. 8. An a-arrow and triangle-crossing edges (a) before and (b) after the edge rerouting.

Lemma 3. Suppose there is an adjacency-crossing embedding E(G) and a triangle ∆ = (a, b, c) is crossed by clockwise triangle-crossing edges. If there are no
a-hooks, a-arrows, and a-sickles, and edges {a, c} and {b, c} are not covered by v, then edge ℓ = {a, b} can be rerouted so that no triangle-crossing edge crosses the rerouted edge ˜ℓ, and there are no new triangle-crossings. Similarly, reroute {a, c} if {b, c} is not covered by u and there are no a-hooks, a-arrows, and a-sickles from the viewpoint of v.

Proof. Besides one or more clockwise triangle-crossing edges, there are only needles, c-hooks, c-arrows, and c-sickles. We cannot route the triangle-crossing edges along the edges of ∆, since vertices a and b may be incident to "fat edges", which are explained in Section 3 and prevent a bypass. Therefore, we reroute {a, b}. Similarly, we reroute {a, c} if {a, b} and {b, c} are not covered by u, and both ways may be possible.

If {u, b} is an edge of G, then it crosses {a, c} and we take f = {u, b}; otherwise, let f be the first edge crossing both {a, c} in p1 and {b, c} in p2. Then f is covered by c and is a triangle-crossing edge or a c-arrow. There is a segment from u to p1, from p1 to p2, and from p2 to b. Other edges incident to c cannot cross f, since f is triangle-crossing or is protected from c by a triangle-crossing edge, and the final part along {b, c} is uncrossed, because f is the first edge crossing {b, c} from b.

Reroute ℓ = {a, b} so that ˜ℓ first follows {a, c} from a to p1, then f to p2, and finally {b, c} to b. If f = {u, b}, then p2 and b coincide. Let N be the set of edges crossing {a, c} in the segment from a to p1. Then N consists of needles, so that N = Nc ∪ Na, where a needle n ∈ Nc is covered by c and a needle n ∈ Na is uncovered or covered by a. The needles in Nc cross {a, c} before the needles of Na. In fact, if an edge {x, y} other than {a, c} crosses a needle n ∈ N, then {x, y} is outside ∆ if n ∈ Nc. If {x, y} crosses n inside ∆, then n ∈ Na, since further edges incident to c cannot enter the interior of ∆ below the triangle-crossing edges.

Now ˜ℓ is crossed by the edges of N. Note that there are no crossings of ˜ℓ in the second part along f and in the third part along {b, c}. Since the edges of N are incident to a, ˜ℓ is crossed by edges of fan(a). In return, consider an edge h crossing some needle n = {u, w} ∈ N. Then n may be covered by a or by c, so that h = {a, d} or h = {c, d}. If n is not covered by c, we are done, since we can add ˜ℓ = {a, b} to the fan of edges of fan(a) crossing n. However, there is a conflict if n is covered by c, as shown in Fig. 9(a). Then there are needles {u, w1}, . . . , {u, ws} and edges {c, z1}, . . . , {c, zt} for some s, t ≥ 1, so that each {u, wi} is crossed by some {c, zj}. We resolve the conflict by rerouting the needles in advance, so that needles of Nc are no longer covered by c, see Fig. 9(b). Reroute each needle ˜n from u to p1 along f, then along {a, c}, and finally along n. Then there is a segment from u to the crossing point with {a, c}, so that ˜n is only crossed by a subset of the edges that cross n. Thereafter, there are no needles covered by c, and we are done. ⊓⊔

We can now show that triangle-crossings can be avoided.

Theorem 1. Every adjacency-crossing graph is fan-crossing.

Fig. 9. A triangle-crossing (a) with a needle covered by vertex c that introduces configuration II and (b) an edge rerouting that avoids triangle-crossing edges.

Proof.
Let E(G) be an adjacency-crossing embedding of a graph G and suppose that there are triangle-crossings. We remove them one after another: first we consider all triangles with triangle-crossing edges in both directions (Lemma 1), then the triangles with a-hooks, a-arrows, or a-sickles (Lemma 2), and finally those without such edges (Lemma 3). Each step removes a crossed triangle and does not introduce new ones. Hence, the resulting embedding is fan-crossing. ⊓⊔

3 Fan-Crossing and Fan-Planar Graphs

In this section we assume that embeddings are fan-crossing, so that independent crossings and triangle-crossings are excluded. Fan-planar embeddings also exclude configuration II [11]. An instance of configuration II consists of the fan-crossing embedding of a subgraph C induced by the vertices of an edge e = {u, v} and of all edges {t, w} crossing e, where e is crossed from both sides, as shown in Fig. 2(a). We call e the base and its crossing edges the fan of C, denoted fan(C). Since e is crossed from both sides, it is crossed at least twice, and therefore it is covered by t. It may be crossed by more than two edges. Hence, an edge is the base of at most one configuration, but a base may be in the fan of another configuration. Each edge g of fan(C) is uncovered or is covered by exactly one of u and v. It may cross several base edges, so that it is part of several configurations. An edge of fan(C) is said to be straight if it crosses e from the left and curved if it crosses e from the right. Then an instance of configuration II has at least one straight and one curved edge. Moreover, exactly one of u and v is inside a cycle with edge segments of a curved edge, the base, and a straight edge. For convenience, we assume that u is inside the cycle and curved edges are left curves. Right curves enclose v, and both left and right curves are possible. However, if there are left and right curves, then curves in one direction can be rerouted.
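As a combinatorial companion to this definition (an assumption-laden sketch, not the paper's machinery): suppose each crossing of a base edge is reported as a triple (base, crossing edge, side), with side +1 for a straight crossing and -1 for a curved one. A base crossed from both sides, by edges sharing a common endpoint t, is then an instance of configuration II.

```python
# Hypothetical sketch: flag base edges that are crossed from both sides
# by edges with a common endpoint t, the combinatorial core of
# configuration II. The (base, other, side) input format is assumed.
from collections import defaultdict

def configuration_ii_bases(crossings):
    by_base = defaultdict(list)
    for base, other, side in crossings:
        by_base[frozenset(base)].append((set(other), side))
    hits = []
    for base, crossers in by_base.items():
        sides = {side for _, side in crossers}
        common = set.intersection(*[e for e, _ in crossers])
        if sides == {+1, -1} and common:   # both sides, and a shared vertex t
            hits.append(base)
    return hits

# The base {u, v} crossed by {t, x} from one side and {t, y} from the other:
print(configuration_ii_bases([("uv", "tx", +1), ("uv", "ty", -1)]))
# -> [frozenset({'u', 'v'})]
```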
For convenience, we augment the embedding and assume that for every instance C of configuration II there are edges {t, u} and {t, v}. If these edges do not exist, they can be added. Therefore, route {u, t} along the first left curve f from u to the first crossing point with an edge g of fan(u) and then along g. Then f is uncovered or covered by u and {t, u} is uncrossed, or f is covered by v and {t, u} is covered by v or is uncovered. Accordingly, {t, v} follows the rightmost edge crossing e and the first crossed edge of fan(v). The case with right curves is similar. Hence, we can assume that there is a triangle ∆ = (t, u, v) associated with C.

There are some cases in which configuration II can be avoided by an edge rerouting. A special one has been used in Lemma 3, in which the straight edge is crossed by a triangle-crossing edge. However, there is a case in which configuration II is unavoidable.

Lemma 4. If a straight edge s of an instance C of configuration II is uncovered or is covered by u, then the left curves g to the left of s can be rerouted so that ˜g does not cross the base. The edge rerouting does not introduce new instances of configuration II.

Proof. We reroute each edge g to the left of s so that ˜g first follows s from t to the crossing point with the first edge f of fan(u) that crosses both g and s. Then ˜g follows f and finally g. If g is a straight edge, then f = {u, v}, which is crossed. See Fig. 10 for an illustration.

If g is a left curve, then ˜g is only crossed by the edges of fan(u) that cross s in the sector between {u, t} and f, and by the edges that cross g in the sector from f to the endpoint. All edges are in fan(u), and {u, v} is not crossed by ˜g. Each edge h that is crossed by ˜g is crossed only once, since f is the first edge crossing g and s. If h ∈ fan(u) is crossed by ˜g and g and h do not cross, then h crosses s, and h is a straight edge for ˜g. If there is a curved edge {u, w} crossing ˜g, then {u, w} is also a curved edge for s. Hence, ˜g can be added to that instance of configuration II. If g is a straight edge, then ˜g is crossed by a subset of the edges that cross g, since each edge of fan(u) crossing s in the sector between {u, t} and {u, v} must cross g. Hence there are no more edge crossings and instances of configuration II. ⊓⊔

In consequence, we can remove instances of configuration II in which there are left curves, right curves, and straight edges, since Lemma 4 applies either to the left or to the right curves. Lemma 4 cannot be used if left curves are to the right of straight edges, since the left curves may be covered by v and the straight edges by u. Then configuration II may be unavoidable, using a construction similar to the one of Theorem 2.

A left curve g = {t, x} is semi-covered by u if it is only crossed by an edge {u, w} in the sector between {u, t} and {u, v}. Thus the crossing edge is inside the triangle ∆ = (t, u, v). Accordingly, a straight edge h = {t, y} is semi-covered by v if each edge {v, w} with w ≠ u crosses h in the sector between {v, t} and {v, u}, i.e., outside ∆. A semi-covered edge is covered, but not conversely. A covered left curve that is not semi-covered is crossed by edges of fan(u) in the sector between {t, v} and {t, u} in clockwise order, i.e., outside the triangle (t, u, v). Similarly, a semi-covered straight edge may be crossed by edges of fan(v) inside the triangle. Thus a semi-covered left curve consists of a segment from u to the crossing with {u, v}, and a semi-covered straight edge is uncrossed inside ∆. These segments are good for routing other edges.

Fig. 10. An instance of configuration II with (a) a straight edge s covered by u and left curves to its left and (b) rerouting the edges crossing {u, v} to the left of s.

Lemma 5. If there is a semi-covered straight (curved) edge, then all curved (straight) edges can be rerouted such that they do not cross the base, so that configuration II is avoided.

Proof. We proceed as in Lemmas 1 and 2 and reroute all straight and curved edges in a bundle along the semi-covered edge f from t to the base {u, v}, where they make a left or right turn, follow the base, and finally their original. If f is straight (curved), then the curved (straight) edges do not cross the base. Each rerouted edge ˜g is only crossed by a subset of the edges that cross g, since the part of ˜g is uncrossed until it meets g. ⊓⊔

Next, we construct a graph M in which configuration II is unavoidable. Graph M has fat and ordinary edges. A fat edge consists of a K7. In fan-crossing graphs, a fat edge plays the role of an edge in planar graphs: it is impermeable to any other fat or ordinary edge. This observation is due to Binucci et al. [5], who proved the following:

Lemma 6. For every fan-crossing embedding of K7 and every pair of vertices u and v there is a path of segments in which at least one endpoint is a crossing point.
Thus, each pair of vertices is connected if the uncrossed edges are removed. There are (at least) three fan-crossing embeddings of K7 with K5 as in Figs. 4(a)-(c) and two vertices in the outer face, see Fig. 11. The embeddings in Figs. 4(d) and 4(e) cannot be extended to a fan-crossing embedding of K7 by adding two vertices in the outer face.

Fig. 11. Different fan-crossing embeddings of K7 that are obtained from different embeddings of K5 by adding two vertices in the outer face.

Theorem 2. There are fan-crossing graphs that are not fan-planar. In other words, configuration II is unavoidable.

Proof. Consider graph M from Fig. 12 with fat edges, representing K7, and ordinary ones. Up to the embedding of the fat edges, graph M has a unique fan-crossing embedding. This is due to the following fact. There is a fixed outer frame consisting of two 5-cycles with vertices U = {t′, v′, y′, a′, b′, t, v, y, a, b} and fat edges. If fat edges are contracted to edges or regarded as such, this subgraph is planar and 3-connected and as such has a unique planar embedding. By a similar reasoning, M[U] has a fixed fan-crossing embedding up to the embeddings of K7. There are two disjoint 5-cycles, since fat edges do not admit a penetration by any other edge. Hence, the edges {t, y} and {b, v} must be routed inside a face of the embedding of M[U], and they cross.

Consider the subgraph M[t, s, u, w, x, z] restricted to fat edges. Since vertex t is in the outer frame, it admits four fan-crossing embeddings with outer face (t, u, x, w, z), (t, u, x, z), (t, u, s), and (t, s, z), respectively. But the edges {u, a}, {u, b}, {v, w}, and {v, z} exclude the latter three embeddings, since the edges on the outer cycle are fat edges and do not admit any penetration by another edge. Edge {u, a} cannot cross {t, y}, since the latter is crossed by {v, z}. Hence, {t, y} is crossed by {w, v} and {z, v}. Finally, edge {t, x} must cross {u, w}. It cannot cross {v, z} without introducing an independent crossing. Hence, it must cross {u, a}, {u, b}, {u, v}, and {u, w}. Modulo the embeddings of K7, every fan-crossing embedding is as shown in Fig. 12, in which {u, v} is crossed by {t, x} from the right and by {t, y} from the left, and thus there is configuration II. Hence, graph M is fan-crossing and not fan-planar. ⊓⊔

Theorems 1 and 2 solve a problem of my recent paper on beyond-planar graphs [6]. Let FAN-PLANAR, FAN-CROSSING, and ADJ-CROSSING denote the classes of fan-planar, fan-crossing, and adjacency-crossing graphs. Then Theorems 1 and 2 show:

Fig. 12. Graph M with fat edges representing K7 and an unavoidable configuration II.

Corollary 1. FAN-PLANAR ⊂ FAN-CROSSING = ADJ-CROSSING.

Kaufmann and Ueckerdt [11] have shown that fan-planar graphs of size n have at most 5n − 10 edges, and they posed the density of fan-crossing and adjacency-crossing graphs as an open problem.
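As a trivial numeric companion to this bound (not part of the paper), the inequality m ≤ 5n − 10 can be checked directly; for example, K7 with n = 7 and m = 21 satisfies it, since 21 ≤ 25.

```python
# Sketch: check the 5n - 10 upper bound on the number of edges m of a
# fan-planar graph with n vertices (meaningful for n >= 3).
def within_fan_planar_density(n: int, m: int) -> bool:
    return m <= 5 * n - 10

print(within_fan_planar_density(7, 21))   # K7: 21 <= 25 -> True
print(within_fan_planar_density(10, 41))  # 41 > 40 -> False
```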
Theorem 3. For every adjacency-crossing graph G there is a fan-planar graph G′ on the same set of vertices and with the same number of edges.

Proof. By Theorem 1 we can restrict ourselves to fan-crossing graphs. Let E(G) be a fan-crossing embedding of G and suppose there is an instance of configuration II in which the base {u, v} is crossed by {t, x} from the right and by {t, y} from the left, or vice versa. Augment E(G) and add edges {u, w} if they are fan-crossing and do not cross both {t, x} and {t, y}, and similarly add {v, w}.

Consider the cyclic order of edges or neighbors of u and v starting at {u, v} in clockwise order. Let a and b be the vertices encountered first. Vertices a and b exist, since a precedes x and b precedes y, where x = a or b = y are possible. Then a and b are both incident to both u and v, and there are two faces f1 and f2 containing a common segment of {u, v} and a and b, respectively, on either side of {u, v}. Otherwise, further edges can be added that are routed close to {u, v} and are crossed either by edges of fan(t) that are covered by u or by v.

We claim that there is no edge {a, b} in E(G). Therefore, observe that the base is covered by t, so that {a, b} cannot cross {u, v}. Note that there is a triangle-crossing if x = a and b = y and {u, v} crosses {a, b}, with triangle-crossing edge {u, v}. Edge {a, b} crosses neither {t, x} nor {t, y}. If a, b are distinct from x, y, then there is an independent crossing of {t, x} and {t, y}, respectively, by {a, b} and {u, v}. If a = x, then {t, x} and {x, b} are adjacent and do not cross, and {x, b} and {u, v} independently cross {t, y} if b ≠ y; and for b = y, {x, y} and {t, y} cannot cross as adjacent edges.

However, after a removal of the base {u, v}, vertices a and b are in a common face and can be connected by an uncrossed edge {a, b}, which clearly cannot be part of another instance of configuration II. Hence, we can successively remove all instances of configuration II and every time replace the base edge by a new uncrossed edge. ⊓⊔

In consequence, we solve an open problem of Kaufmann and Ueckerdt [11] on the density of fan-planar graphs and show that configuration II has no impact on the density.

Corollary 2. Adjacency-crossing and fan-crossing graphs have at most 5n − 10 edges.

4 Conclusion

We extended the study of fan-planar graphs initiated by Kaufmann and Ueckerdt [11] and continued in [4, 5], and clarified the situation around fan-crossings. We proved that triangle-crossings can be avoided, whereas configuration II is essential for graphs but not for their density. Thereby, we solved a problem by Kaufmann and Ueckerdt [11] on the density of adjacency-crossing graphs.

Recently, progress has been made on problems for 1-planar graphs [12] that are still open for fan-crossing graphs, such as (1) sparsest fan-crossing graphs, i.e., maximal graphs with as few edges as possible [8], or (2) recognizing specialized fan-crossing graphs, such as optimal fan-crossing graphs with 5n − 10 edges [7]. In addition, non-simple topological graphs with multiple edge crossings and crossings among adjacent edges have been studied [2], and they may differ from the simple ones, as is known for quasi-planar graphs [3]. Non-simple fan-crossing graphs have not yet been studied.

5 Acknowledgements

I wish to thank Christian Bachmaier for the discussions on fan-crossing graphs and his valuable suggestions.

References

1. E. Ackerman, J. Fox, J. Pach, and A. Suk. On grids in topological graphs. Comput. Geom., 47(7):710–723, 2014.
2. E. Ackerman and G. Tardos. On the maximum number of edges in quasi-planar graphs. J. Comb. Theory, Ser. A, 114(3):563–571, 2007.
3. P. K. Agarwal, B. Aronov, J. Pach, R. Pollack, and M. Sharir. Quasi-planar graphs have a linear number of edges. Combinatorica, 17(1):1–9, 1997.
4. M. A. Bekos, S. Cornelsen, L. Grilli, S. Hong, and M. Kaufmann. On the recognition of fan-planar and maximal outer-fan-planar graphs. Algorithmica, 79(2):401–427, 2017.
5. C. Binucci, E. Di Giacomo, W. Didimo, F. Montecchiani, M. Patrignani, A. Symvonis, and I. G. Tollis. Fan-planarity: Properties and complexity. Theor. Comput. Sci., 589:76–86, 2015.
6. F. J. Brandenburg. A first order logic definition of beyond-planar graphs. J. Graph Algorithms Appl., 2017. Accepted for publication.
7. F. J. Brandenburg. Recognizing optimal 1-planar graphs in linear time. Algorithmica, published online October 2016, doi:10.1007/s00453-016-0226-8.
8. F. J. Brandenburg, D. Eppstein, A. Gleißner, M. T. Goodrich, K. Hanauer, and J. Reislhuber. On the density of maximal 1-planar graphs. In M. van Kreveld and B. Speckmann, editors, GD 2012, volume 7704 of LNCS, pages 327–338. Springer, 2013.
9. O. Cheong, S. Har-Peled, H. Kim, and H. Kim. On the number of edges of fan-crossing free graphs. Algorithmica, 73(4):673–695, 2015.
10. H. Harborth and I. Mengersen. Drawings of the complete graph with maximum number of crossings. Congressus Numerantium, 88:225–228, 1992.
11. M. Kaufmann and T. Ueckerdt. The density of fan-planar graphs. CoRR, abs/1403.6184, 2014.
12. S. G. Kobourov, G. Liotta, and F. Montecchiani. An annotated bibliography on 1-planarity. Computer Science Review, 25:49–67, 2017.
13. G. Ringel. Ein Sechsfarbenproblem auf der Kugel. Abh. aus dem Math. Seminar der Univ. Hamburg, 29:107–117, 1965.
ai_researcher
2
CURATe_Benchmarking_Personalised_Alignment_of_Conversational_AI_Assistants.pdf
Augmented Understanding and Automated Adaptation of Curation Rules

Alireza Tabebordbar

A thesis in fulfilment of the requirements for the degree of Doctor of Philosophy

School of Computer Science and Engineering, Faculty of Engineering

March 2020

arXiv:2007.08710v1 [cs.IR] 17 Jul 2020

Acknowledgements

Firstly, I would like to express my special thanks to my Ph.D. supervisor Dr. Amin Beheshti. Amin was not only a knowledgeable and expert scientist in the field of data science and Artificial Intelligence, but also supportive, loyal, honest, trustworthy, and a true friend. Amin is a credible and effortless research academic, who supported me throughout my study and helped my growth as a Ph.D. research student. Thank you for all your support and comments; I really enjoyed working with you during these years.

I would like to express my appreciation to my supervisor, Prof. Boualem Benatallah, who is a passionate scientist and an excellent forward thinker. I gained valuable insight from his comments during the last three years.

I gratefully thank my co-supervisor, Dr. Hamid Reza Motahari-Nezhad, for his insightful comments on my study. Hamid is an excellent and inspiring scientist, and I really appreciated the opportunity to have his suggestions during my study.

I would also like to express my sincere appreciation to UNSW staff, especially ICT, for providing equipment to facilitate my research.

I would like to thank my sponsor, Data to Decisions Cooperative Research Centre (D2D CRC), for funding my study during the last three and a half years.

I would like to thank Reza Nouri for his technical support and the configurations he has made for running my code.

I would like to thank the UNSW Learning Centre for providing advanced academic writing courses and helping me to improve my writing skills.

Abstract

Over the past years, there have been many efforts to curate and increase the added value of raw data. Data curation has been defined as the activities and processes an analyst undertakes to transform raw data into contextualized data and knowledge. Data curation enables decision-makers and data analysts to extract value and derive insight from raw data. However, to curate raw data, an analyst needs to carry out various curation tasks, including extraction, linking, classification, and indexing, which are error-prone, tedious, and challenging. Besides, deriving insight requires analysts to spend a long period of time scanning and analyzing curation environments. This problem is exacerbated when the curation environment is large and the analyst needs to curate a varied and comprehensive list of data. To address these challenges, in this dissertation we present techniques, algorithms, and systems for augmenting analysts in curation tasks. We propose: (1) a feature-based and automated technique for curating raw data; (2) an autonomic approach for adapting data curation rules; (3) a solution to augment users in formulating their preferences while curating data in large-scale information spaces; and (4) a set of APIs for automating basic curation tasks, including Named Entity extraction, POS tagging, and classification.

In this dissertation, we automate many of the tedious and time-consuming
We assist an- alysts to adapt data curation rules in dynamic curation environments. Our solution, autonomic-ally learns the optimal modification for rules using an online learning algorithm. We present a novel approach for augmenting user comprehension of curation environments. We explain techniques for formu- lating user preferences in large and varied environments. We discuss how summarization techniques help users to understand curation environments without scanning and synthesizing a large amount of data. We present a sys- tem, which allows users to retrieve their information using a set of high-level concepts such as persons, locations, and topics. We conduct different experiments to highlight the applicability of our solutions: (1) We discuss how our proposed feature-based approach signif- icantly enhances users in curating data and extraction of knowledge. We study both scalability and precision of our approach in curating social data. (2) We show how our solution can learn to curate data without needing an- alysts. We present the performance of our adaptation technique in adapting curation rules. We compare our results with systems relying on analysts and compare the precision and recall of our solution with analysts. (3) We intro- duced our system, namely ConceptMap, which aids users to comprehend the information space without constantly scanning or querying the information space. Our results show ConceptMap can significantly lower the user’s work- load in understanding a curation environment and extracting value. Our results prove that ConceptMap can significantly lower the user’s workload and time in understanding the data. 4 Publications • A Tabebordbar, A Beheshti, B Benatallah, and M C Barukh, Adap- tive rule adaptation in unstructured and dynamic environ- ments, International Conference on Web Information Systems Engi- neering, Springer, 2019, pp. 326–340. • A Tabebordbar, A Beheshti, and B Benatallah, Conceptmap: A conceptual approach for formulating user preferences in large information spaces, International Conference on Web Information Systems Engineering, Springer, 2019, pp. 779–794. (Selected as the top five paper among 250 submissions) • A Tabebordbar and A Beheshti, Adaptive rule monitoring system, 2018 IEEE/ACM 1st International Workshop on Software Engineering for Cognitive Services (SE4COG), IEEE, 2018, pp. 45–51 (Best paper award). • A Tabebordbar, A Beheshti, B Benatallah, and M C Barukh, Feature- based Rule Adaptation in Unstructured and Dynamic Envi- ronments, Data Science and Engineering (DSE) Journal (2020). • A Tabebordbar, A Beheshti, B Benatallah, Augmenting user’s com- prehension of curation environments using social exploratory 5 search. World Wide Web Journal, 2020, Accepted (minor revision). • A Beheshti, A Tabebordbar, B Benatallah, and Reza Nouri, On au- tomating basic data curation tasks, In companion proceedings of the 26th International Conference on World Wide Web (WWW), In- ternational World Wide Web Conferences Steering Committee, 2017, pp. 165–169. • A Beheshti, B Benatallah, A Tabebordbar, H R Motahari-Nezhad, M C Barukh, and R Nouri, Datasynapse: A social data curation foundry, Distributed and Parallel Databases Journal (2018), 1–34. • A Beheshti, A Tabebordbar, B Benatallah, iStory: Intelligent Sto- rytelling with Social Data, In companion proceedings of the Inter- national Conference on World Wide Web (Web) Conference, Taipei, 2020. • A Beheshti, A Tabebordbar, B Benatallah, Data curation APIs, Tech. 
Report UNSWCSE-TR-201617, The University of New South Wales, Sydney, Australia, 2016.

• A Beheshti, K Vaghani, B Benatallah, and A Tabebordbar, CrowdCorrect: a curation pipeline for social data cleansing and curation, International Conference on Advanced Information Systems Engineering, Springer, 2018, pp. 24–38.

• A Beheshti, B Benatallah, R Nouri, and A Tabebordbar, CoreKG: a knowledge lake service, Proceedings of the VLDB Endowment 11 (2018), no. 12, 1942–1945.

Contents

Acknowledgements
Abstract
Publications
1 Introduction
  1.1 Introduction, Background and Aims
  1.2 Preliminaries
    1.2.1 Knowledge Extraction
    1.2.2 Adapting Data Curation Rules
    1.2.3 Data Comprehension
  1.3 Key Research Issues
    1.3.1 Transforming the Raw Data and Extracting Knowledge
    1.3.2 Rule Adaptation in Dynamic Curation Environments
    1.3.3 Comprehension of Curation Environments
  1.4 Contributions Overview
    1.4.1 Automated and Feature-Based Data Curation
    1.4.2 Adaptive Rule Adaptation in Dynamic Curation Environments
    1.4.3 Augmenting User's Comprehension of Curation Environments
  1.5 Dissertation Structure
2 Background and State of the Art
  2.1 Introduction
  2.2 Data Curation
    2.2.1 Data Curation Frameworks
  2.3 Transforming the Raw Data and Extracting Knowledge
    2.3.1 Data Warehouse
    2.3.2 Data Lake
    2.3.3 Knowledge Lake
    2.3.4 Automated Data Curation
  2.4 Data Curation Rules
    2.4.1 Curation Rule Languages
    2.4.2 Curation Rule Enrichment
    2.4.3 Rule Refinement
  2.5 Sensemaking of the Curation Environment
    2.5.1 Sensemaking Challenges
  2.6 Conclusion and Discussions
3 Feature Based and Automated Data Curation Foundry
  3.1 Introduction
  3.2 Related Works and Background
  3.3 Solution Overview
    3.3.1 Feature Extraction
    3.3.2 Data Curation Services
  3.4 Knowledge Lake
    3.4.1 Building Knowledge Lake
  3.5 Implementation and Experiment
    3.5.1 Implementation
    3.5.2 Dataset
    3.5.3 System Setup
    3.5.4 Evaluation
    3.5.5 Analysing Budget-KB Accuracy
  3.6 Conclusion and Future Work
4 Feature-Based Rule Adaptation in Dynamic and Constantly Changing Environment
  4.1 Introduction
  4.2 Related Works
    4.2.1 Rule Adaptation
    4.2.2 Multi Armed Bandit Algorithm
    4.2.3 Feature Extraction
  4.3 Preliminaries and Problem Statement
    4.3.1 Preliminaries
    4.3.2 Problem Statement
    4.3.3 Solution Overview
  4.4 Adaptive Rule Adaptation
    4.4.1 Feature Extraction
    4.4.2 Observation
    4.4.3 Estimation
    4.4.4 Adaptation
  4.5 Gathering Workers Feedback
    4.5.1 Stopping Condition
  4.6 Experiments
    4.6.1 Experiment Settings and Dataset
    4.6.2 Experiment Scenarios
    4.6.3 Result
  4.7 Conclusion and Future Works
5 Enhancing Users Comprehension of the Curation Environment
  5.1 Introduction
  5.2 Related Work
    5.2.1 Formulating User Preferences
    5.2.2 Comprehension and Sensemaking of the Information Space
    5.2.3 Topic Modeling Techniques
  5.3 ConceptMap
    5.3.1 Design Components
  5.4 Solution Overview
    5.4.1 Attributes Recognition
    5.4.2 Knowledge Lake
    5.4.3 Summarization
  5.5 Experiments
    5.5.1 ConceptMap Architecture and Datasets
    5.5.2 Experiment Settings
  5.6 Discussions
    5.6.1 ConceptMap Interface
    5.6.2 Limitations and Future Works
  5.7 Conclusion
6 Automating Basic Data Curation Tasks (Software Prototype)
  6.1 Introduction
  6.2 Curation Services Overview
  6.3 Demonstration Scenarios
  6.4 Conclusion
7 Conclusion and Future Works

Chapter 1

Introduction

1.1 Introduction, Background and Aims

The expansion of Web, social media, and sensor data has created a deluge of raw data. This data can be generated across various platforms and is available in different forms, from structured to unstructured (i.e., atomic data that has not been processed for use). This availability of raw data, coupled with the continued improvement in the capabilities of big data processing systems, has introduced a new era for deriving insight from raw data. Data curation is a quintessential part of every big data processing system, which aims at transforming raw data into contextualized data and knowledge. Data curation may include processes and activities for principled and controlled data creation, maintenance, and management [74]. Typically, a curation task consists of a set of mathematical, statistical, and computational models to help data curators in extracting actionable insight from raw data [181]. This paradigm often utilizes various big data processing sub-tasks, including machine learning algorithms (e.g., Bayesian models and regression), enrichment (e.g., knowledge bases and knowledge graphs), annotation, summarization, and visualization. For example, consider a social media platform, e.g., Twitter (https://twitter.com/), that enables users to express their opinions and receive feedback. A data curation system may analyze users' Tweets to investigate their opinions about their community. The curation system may extract various information, e.g., keywords, part of speech, named entities, synonyms, and stems, from users' Tweets and link the extracted data to external knowledge bases to derive a deeper understanding of users' opinions regarding their communities [37].

Over the past years, different curation systems have been proposed to help organizations and data curators in transforming their raw data into knowledge. Trending applications include: improving government services [76, 86], predicting intelligence activities [108, 248], unravelling human trafficking activities [13, 16, 69], understanding the impact of news on stock markets [62], analyzing financial risks [14, 83], accelerating scientific discovery [243], as well as improving national security and public health [129, 144]. However, to curate data, analysts (in this dissertation, we use the terms data curator and analyst interchangeably) often need to handle a large number of painstakingly difficult, error-prone, and time-consuming tasks. These challenges are exacerbated in dynamic curation environments, as curation algorithms typically fail to curate data and analysts need to continuously update their comprehension of curation environments to capture the salient aspects of the data. Thus, in this dissertation, we focus on approaches for augmenting analysts in curating data and augmenting their understanding of curation environments. Overall, we can summarise our contributions as below:

1. We propose an automated and feature-based framework for Extracting Knowledge from the raw data and developing insight.

2. We propose a learning algorithm for Adapting Data Curation Rules in dynamic and constantly changing curation environments.

3. We propose a system for augmenting a user's Comprehension of Curation Environments and lowering the user's cognitive load in formulating her preferences.

The rest of this chapter is organized as follows.
We first introduce the central concepts discussed in this dissertation in Section 1.2. Then, in Section 1.3, we describe the key research issues tackled in this dissertation. Finally, we summarize our contributions in Section 1.4 and describe the organization of the dissertation in Section 1.5.

1.2 Preliminaries

1.2.1 Knowledge Extraction

Data curation promotes contextualization of the raw data into knowledge by unravelling hidden patterns and associations [181]. Data curation acts as a glue between the raw data and analysis, and greatly assists analysts in interpreting the data and extracting value [37]. Curation of data starts with identifying open, social, and private data islands, and the processing elements that need to be used in the curation task. It divides each curation task into smaller sub-tasks and provides end-to-end velocity by eliminating errors and diminishing bottlenecks and latency.

A robust pipeline of curation tasks removes many barriers involved in curating data and provides a smooth, automated flow of data from one source to another. A data curation pipeline consists of various curation elements, including ingestion, cleansing, integration, transforming, and adding value. In the following, we briefly discuss the different curation tasks that may be involved in transforming the raw data into knowledge (see the sketch after this list).

• Ingestion is the process of obtaining data from different sources for immediate use and storage [3]. Data can be ingested as a stream or in batches. Stream processing systems capture data in real time as it is emitted from a source, while in batch processing systems data is imported in big chunks at periodic intervals. Examples of data ingestion systems are Apache Kafka (https://kafka.apache.org/), Airflow (https://airflow.apache.org/), and Amazon Kinesis (https://aws.amazon.com/kinesis/).

• Cleansing is the process of repairing or removing unwanted data from a dataset [2]. In many cases, data is incomplete, poorly formatted, or contains duplicated values. Data cleansing prepares data for processing by removing outliers.

• Integration aims at combining data from multiple sources into a central repository [184]. The successful integration of data needs to address several challenges, including schema integration, detecting and resolving inconsistencies, and removing duplicates and redundant values.

• Transforming aims at smoothing, summarising, generalizing, or normalizing the data. Transformation can remove noise from data and normalizes the data within a specified range, e.g., -1.0 to 1.0 or 0.0 to 1.0.

• Adding Value focuses on deriving insight from data and consists of several activities, including:

  – Extraction focuses on extracting actionable insight, e.g., named entities, parts of speech, keywords, and synonyms, from the raw data. Examples of extraction tools are Stanford CoreNLP [173] and NLTK [58].

  – Similarity approximates the similar features or aspects between two data items using similarity metrics, such as edit distance [92], Jaccard [189], and TF-IDF [7].

  – Linking links data items, e.g., named entities, part of speech tags, and keywords, to external knowledge sources for further enrichment and analysis. Examples of existing knowledge bases are Wikidata (https://www.wikidata.org), Google Knowledge Graph (https://developers.google.com/knowledge-graph), and GeoNames (http://geonames.org/).

  – Summarising focuses on identifying and grouping similar items within the data. Examples of summarization techniques include clustering, sampling, compression, and histograms.
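As a concrete illustration of the Similarity sub-task above, the following is a minimal sketch (not the thesis implementation) of the Jaccard metric over naive whitespace tokens; a production pipeline would tokenize with the extraction services first.

```python
# Minimal sketch of the Similarity sub-task: Jaccard similarity between
# the token sets of two items. Whitespace tokenization is a simplification.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

print(jaccard("broken light on main street",
              "main street light is broken"))   # 4 shared / 6 total = 0.666...
```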
Over the past years, several solutions [37, 49, 74, 78, 200, 222] have been proposed to assist analysts in curating data by adopting different learning algorithms for deriving insight and extracting knowledge. Usually, relying on these solutions, an analyst investigates the curation environment and performs a feature extraction task to identify the content-bearing features that best describe the data. An example of such a curation system is Snorkel [202], which relies on a set of user-defined labeling functions to train a generative model and curate the data.

1.2.2 Adapting Data Curation Rules

Today, a large number of curation tasks happen in dynamic and constantly changing environments. An example of such an environment is social media, e.g., Twitter and Facebook (https://www.facebook.com), where data is generated as a never-ending and ever-changing stream [118]. In a dynamic curation environment, the curation system needs to be updated iteratively to remain applicable and precise. Let us go back to our example regarding capturing citizens' opinions in their communities, which was introduced in the previous sections. A citizen may face a new problem in her community, e.g., a broken light, traffic, or a light rail delay, and create a new hashtag on social media to express her topic of interest. Consequently, the curation system needs to be updated to capture such changes in order to stay applicable.

In the past years, several solutions [118, 126, 164, 183, 250, 259] have been proposed to curate data in dynamic environments. Normally, these approaches rely on learning algorithms [34, 164, 202, 203, 250] (e.g., regression, naive Bayes, and SVM) to adapt a curation system to recent changes. For example, one may train an initial model to label the data relevant to her topic of interest; then, over time, the system is updated with new data to capture changes in the curation environment. However, relying on pure algorithmic approaches for curating data suffers from several problems [118]: (1) algorithms are complex and difficult to interpret, and require an expert for tuning and training; (2) algorithms are designed for a specific context and cannot be easily adapted to work in another context; and (3) in many cases, algorithms require a large amount of training data, which may not be available or may be difficult to obtain.

In recent years, several solutions have augmented algorithms with curation rules to curate data in dynamic and changing environments. These systems [27, 65, 78, 80, 164, 182] rely on a set of hand-crafted rules, and on analysts for adapting the rules (removing imprecise rules or adding new ones) to keep the curation system applicable over time. The advantages of augmenting algorithms with rules are manifold: (1) writing rules is more straightforward than designing algorithms, and a rule can be added to a curation system much faster than an algorithm [118]; (2) correcting mistakes is faster for rules than for learning algorithms [118]; and (3) rules can cover cases that learning algorithms cannot yet cover. In cases where a curation system needs to curate data for a new topic, e.g., transportation and bus schedules, an analyst can easily add new rules to the system, whereas algorithms need to be trained with new training data, which may not be available or may be difficult to obtain [99].
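To make the preceding contrast concrete, the following is a minimal, hypothetical keyword rule of the kind an analyst hand-crafts; the KeywordRule class and tags are illustrative, not the thesis rule language.

```python
# Sketch of a hand-crafted curation rule: tag an item when it mentions
# any keyword/hashtag of the rule. Extending coverage to a new topic is
# a one-line change, in contrast to retraining a learned model.
from dataclasses import dataclass

@dataclass(frozen=True)
class KeywordRule:
    tag: str
    keywords: frozenset

    def applies(self, text: str) -> bool:
        return bool(set(text.lower().split()) & self.keywords)

rule = KeywordRule("transportation", frozenset({"#bus", "tram", "timetable"}))
print(rule.applies("new tram timetable starts monday"))   # True

# A new citizen concern appears -> add a keyword rather than retrain:
rule = KeywordRule(rule.tag, rule.keywords | {"#lightraildelay"})
```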
Although coupling rules with learning algorithms enhances the performance of curation systems in curating data, an analyst still needs to continuously monitor the rules' performance to identify and adapt the imprecise ones. Over the past years, several approaches [27, 118, 182, 183, 235] have relied on interactive techniques for adapting rules. These systems adapt a rule by identifying the potential modifications through interaction with the analyst. In the next sections, we discuss how an analyst can be aided to comprehend the curation environment without iteratively scanning and querying the data.
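Returning to the learning-based adaptation discussed earlier in this subsection, the following is a minimal sketch (not one of the cited systems) of updating a model in place as the environment changes, using scikit-learn's out-of-core API; the texts and labels are illustrative.

```python
# Sketch: a text classifier that is updated incrementally as the curation
# environment changes, instead of being retrained from scratch.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)  # stateless, stream-friendly
model = SGDClassifier()

classes = ["relevant", "irrelevant"]
texts = ["broken street light on 5th avenue", "great movie last night"]
labels = ["relevant", "irrelevant"]
model.partial_fit(vectorizer.transform(texts), labels, classes=classes)

# Later, a new hashtag appears in the stream; the model is updated
# in place with the newly labelled items:
model.partial_fit(vectorizer.transform(["#lightraildelay again this morning"]),
                  ["relevant"])
```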
1.2.3 Data Comprehension

Understanding of data involves the processes and activities a user undertakes to explore the curation environment in order to describe and determine the quality of the data. Typically, to understand the data, a user needs to re-represent the data in a format that allows planning, evaluation, and reasoning [199]. Text-based queries are one of the main techniques that have been used to scan the curation environment, derive insight, and extract value [238].

From the early days of computers, text-based queries have been used to explore and scan curation environments [127]. Today, text-based queries and the search button have become a universal user interface component across operating systems and Web applications. Usually, when a user has a limited information need, text queries in conjunction with search engines, e.g., Google or Bing, can adequately accommodate the user's searches [127]. The user expresses her information need as a set of keywords or phrases, and a search engine returns results as a ranked list of items based on their relevancy to the user's queries. However, when the aim of the information-seeking task is not to look up a few or individual documents, the user needs to go beyond current text-based queries to conduct her searches [175]. Exploratory search refers to search activities that require learning and investigation [174]. In this context, the data curation process can help users to scan and comprehend the curation environment to retrieve items relevant to their information needs. Overall, users' behaviour in seeking their information needs can be divided into three steps [174]: lookup, learn, and investigate. In the following, we discuss each of these steps in detail:

• Lookup is the essential character of a search task and has been widely supported by search engines and database management systems. Lookup tasks retrieve both discrete and structured objects such as names, statements, files, numbers, or media. An example of a lookup search is retrieving fast and accurate records of data using a database management system. Mostly, lookup searches are considered "fact retrieval" or "question answering" search tasks [174]. Lookup searches are also suitable for analytical search approaches that begin with a set of precisely designed queries and retrieve accurate results without the need for further comparison and examination [174].

• Learning search tasks involve multiple iterations and return sets of results that require additional processing and interpretation [127]. These results can be generated in various formats, e.g., graphs, texts, videos, and maps, and often require the user's judgement and comparison. Learning search tasks allow users to make sense of data and develop new knowledge. Bloom's taxonomy [113] defines the aims of learning search tasks as: knowledge acquisition, comprehension of concepts or skills, interpretation of ideas, and comparisons or aggregations of data and concepts. Social search is another type of learning search, where a user aims at finding communities of interest in social media, e.g., Twitter, Facebook, and Instagram [67]. Overall, learning search aims at locating, analyzing, and assessing similar results, and much of users' time is devoted to examining and reformulating their queries. Learning search tasks can be embedded with lookup searches to guide the user to better locate the information and capture the salient aspects of the data.

• Investigative search considers a much broader search space and requires multiple iterations that take place over very long periods of time [174]. Investigative search results may be critically assessed before being integrated into personal and professional knowledge bases. These searches often include explicit annotation of the search results and may be done to support planning and forecasting, or to transform the existing data into new data or knowledge. Another usage of investigative search is to identify gaps in information and to avoid a "dead-end alley" [116] in research. Investigative searches can also be used for alerting-service profiles that need to be executed systematically and automatically. Serendipitous browsing [178] is another example of an investigative search. Investigative searching is more concerned with recall and aims at retrieving the maximum number of relevant results rather than minimizing the irrelevant results. These searches are not a good fit for today's Web search engines, which are highly tuned to retrieve the most relevant results first.

Over the past years, different techniques [96, 97, 174, 175, 194, 209] have been proposed to support users' comprehension of the curation environment. A large number of these approaches focus on lowering the user's cognitive load through visualization, e.g., bar charts, tables, and stacked bars. Visual encoding of a curation environment maps the data into a visual structure to enhance the user's understanding of the data [97, 121]. Visual encoding boosts the user's memory in absorbing information, helping them to better extract and locate their information needs.

1.3 Key Research Issues

This section outlines the key research issues tackled in this dissertation. We intend to facilitate the curation of data in dynamic and constantly changing environments. We describe techniques to support analysts in transforming the raw data and deriving insight. Finally, we accentuate approaches for augmenting the user's comprehension of the curation environment.

1.3.1 Transforming the Raw Data and Extracting Knowledge

One of the challenges in data curation systems is to effectively transform a large amount of structured/unstructured data ingested from different sources into contextualized data and knowledge. Usually, an analyst needs to examine the curation environment and write code for performing her curation tasks, which is painstakingly time-consuming and error-prone. Over the past years, several curation systems in both academia [156, 244] and industry [27, 79, 235] have been proposed to assist analysts in curating the data. These systems offer a set of tools or algorithms for helping analysts in curation tasks.
Examples of such systems in industry include Talend (https://www.talend.com), which offers services for integrating, cleansing and masking large amounts of data. Informatica (https://www.informatica.com) is an Extraction-Transform-Load (ETL) tool and comes with a variety of components, including data quality, data replica, data management, and data virtualization. Alteryx (https://www.alteryx.com) comes with several elements for discovering, preparing and analyzing raw data. Alation (https://www.alation.com) is an interactive data curation tool for data annotation and data governance, which contributes user knowledge to curating data. Although data curation tools lower analysts' burden in curating raw data, with current solutions analysts still require extensive knowledge of the curation environment to extract and identify features that adequately describe their curation needs. Feature extraction has proven to be painstakingly time-consuming and error-prone, as analysts need to spend an extensive period of time scanning and analyzing the data within the curation environment.

1.3.2 Rule Adaptation in Dynamic Curation Environments

Rule-based systems have been used increasingly to augment machine learning-based algorithms for annotating data in unstructured and continuously changing environments. Rules can alleviate many of the shortcomings inherent in pure algorithmic approaches. However, coupling rules with a learning algorithm raises two issues: (1) There is a need for an analyst to craft and adapt rules. Adapting rules is challenging and error-prone, as the analyst needs to spend an extended period identifying the potential modifications that make a rule applicable and precise. This problem is exacerbated in dynamic environments, as rule adaptation is not a one-shot rule modification task and the analyst needs to adapt the rule over time. (2) Typically, an analyst adapts a rule at the syntactic level, e.g., keywords and regular expressions. Adapting a rule using syntactic-level features limits the ability of the rule to annotate items when a curation system needs to curate a varied and comprehensive list of data.

1.3.3 Comprehension of Curation Environments

In a large curation environment, an analyst often needs to iteratively investigate the data to retrieve items relevant to her topic of interest. Investigating the curation environment is both time-consuming and challenging, as the user needs to issue different queries to retrieve items relevant to her information needs. In recent years, several visualization techniques [96, 97, 121, 128, 252] have been proposed to enhance users' understanding of data in large curation environments. These techniques augment user comprehension of the curation environment with various visualization elements such as line charts [121], tilebars [128], or tables [252]. Although relying on visual elements lowers the user's cognitive load in absorbing information, with current techniques a user needs to explicitly specify her preferences for curation systems in the form of keywords or phrases. Text-based queries require iteratively scanning the curation environment and fail to retrieve the user's information needs when the user is seeking a varied and comprehensive list of items.

1.4 Contributions Overview

In the previous sections, we discussed different challenges in curating data.
In this section, we explain our solutions to address those challenges. In particular, (1) we propose an automated and feature-based data curation foundry for transforming the raw data and deriving insight, (2) we propose an adaptive approach for adapting data curation rules in dynamic and changing environments, and (3) we propose a conceptual system for augmenting a user's comprehension of curation environments.

1.4.1 Automated and Feature-Based Data Curation

To support analysts in curating data and reduce the time spent on curation tasks, we introduced the Knowledge Lake [35] and automated data curation [53] services. The proposed solution offloads analysts from many time-consuming and error-prone curation tasks and allows analysts to transform raw social media data (e.g., a Tweet in Twitter) into contextualized knowledge without spending a large amount of time. The Knowledge Lake offers a customizable feature extraction service to harness desired features from diverse data sources by leveraging a cross-document co-reference resolution technique. The curation services provide a microservice-based architecture (publicly available on GitHub, supporting networks such as Twitter, Facebook, and LinkedIn) that offloads analysts from many time-consuming curation tasks. Additionally, we introduce a simple rule language to facilitate the interaction of analysts with the Knowledge Lake in querying the data and performing analytical tasks.

1.4.2 Adaptive Rule Adaptation in Dynamic Curation Environments

In a dynamic curation environment, an analyst needs to adapt curation rules to keep them applicable and precise. Rule adaptation is both time-consuming and error-prone. Thus, we propose an autonomic approach for adapting curation rules. We utilize a Bayesian multi-armed-bandit algorithm [212], an online learning algorithm, which determines the adequate forms of a curation rule by gathering feedback from the curation environment over time. To frame the problem as a Bayesian multi-armed bandit, we propose a reward and demote schema. The schema rewards a rule when it identifies an item the rule tagged correctly (a tag is a label, e.g., "Mental Health", that a rule assigns to a curated item, e.g., a Tweet, to describe it), and demotes the rule when it identifies an item the rule tagged incorrectly. Over time, by observing the accumulated rewards and demotes, the algorithm learns a better adaptation for the rule [106]. In addition, we propose a technique to adapt rules at the conceptual level, e.g., topics, rather than at the syntactic level. Conceptual-level adaptation boosts rules to annotate a larger number of items.
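As a rough intuition for how a reward/demote schema can drive a bandit, the following is a minimal Thompson-sampling sketch under our own simplifying assumptions (each candidate form of a rule is an arm with a Beta posterior, and feedback arrives as simulated boolean verifications); it illustrates the general technique, not the algorithm as implemented in Chapter 4.

```python
import random

class RuleVariantBandit:
    """Each candidate form of a curation rule is an arm; feedback from the
    curation environment rewards or demotes the form that produced a tag."""

    def __init__(self, variants):
        # Beta(rewards + 1, demotes + 1) posterior per variant.
        self.stats = {v: [1, 1] for v in variants}

    def choose(self):
        # Sample a plausible precision for each variant; play the best draw.
        draws = {v: random.betavariate(a, b) for v, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def feedback(self, variant, correctly_tagged):
        a, b = self.stats[variant]
        self.stats[variant] = [a + 1, b] if correctly_tagged else [a, b + 1]

# Hypothetical candidate forms of a rule (illustrative only).
bandit = RuleVariantBandit(["health AND service", "health AND mental"])
for _ in range(200):
    v = bandit.choose()
    # Stand-in for verification feedback from the curation environment.
    bandit.feedback(v, correctly_tagged=random.random() < (0.8 if "mental" in v else 0.4))
print(bandit.stats)  # the more precise variant accumulates more rewards
```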
1.4.3 Augmenting User's Comprehension of Curation Environments

Understanding data allows users to better formulate their information needs when seeking information in large curation environments [195, 252]. Thus, to enhance users' comprehension of data, we propose a method that provides a conceptual summary of curation environments and allows users to specify their preferences implicitly as a set of concepts. Our approach lowers users' cognitive load in ranking and exploring data in a curation environment. Contrary to previous techniques that require users to formulate their preferences explicitly, e.g., as keywords and phrases, our approach focuses on creating a conceptual summary of the curation environment to help users understand the data and relate it to their preferences. Hence, we focus on boosting users' cognitive skills in understanding the data and formulating that understanding to extract information relevant to their topic of interest. We do this by taking advantage of deep learning and a Knowledge Lake to provide a conceptual summary of the information space. Users can specify their preferences implicitly as a set of concepts without the need to iteratively investigate the information space. The system provides a 2D Radial Map of concepts where users can rank items relevant to their preferences through dragging and dropping. Our experimental results show that our approach can help users formulate their preferences better when they need to retrieve a varied and comprehensive list of information across a large curation environment [238].

1.5 Dissertation Structure

The remainder of this dissertation is organized as follows. We start by presenting the current state of the art on data curation in Chapter 2. We explain in more depth how a curation system can aid analysts in transforming the raw data and extracting knowledge. We continue our discussion on curating data in dynamic and changing environments. We discuss different components of data curation rules and techniques for enriching and adapting rules. We wrap up the chapter with a discussion on the sensemaking of curation environments and how users can be aided to comprehend the data while formulating their preferences.

In Chapter 3, we discuss our proposed solution for transforming the data and extracting knowledge. We discuss related works and our proposed solution to build a Knowledge Lake. We explain the steps for constructing the Knowledge Lake and how it enhances analysts in feature extraction. Next, we discuss curation services and how they aid analysts in curation tasks. Finally, we discuss a usage scenario and results to illustrate the usability of our approach.

In Chapter 4, we present our proposed solution for adapting data curation rules. We discuss related works and present a case study to demonstrate the usage of our approach. Then, we explain how online learning can learn to adapt a curation rule without relying on analysts. Finally, we wrap up the chapter with results and a conclusion.

In Chapter 5, we present our proposed solution for augmenting users' comprehension of curation environments. We offer a data visualization system that utilizes deep learning and a Knowledge Lake to provide a visual summary of curation environments. We discuss our proposed approach and how it generates different types of data summaries. Then, we discuss the components of our system and how it interacts with a user to formulate her preferences. We conclude the chapter with experiments and a conclusion.

In Chapter 6, we present a software prototype for automating data curation tasks. The proposed system facilitates the data curation process and enhances the productivity of researchers and developers in transforming their raw data into curated data. The curation APIs enable developers to easily: (1) add features - such as extracting keywords, parts of speech, and named entities (e.g., persons, locations, organisations, companies, products, diseases, and drugs), (2) provide synonyms and stems for extracted information items leveraging lexical knowledge bases for the English language (e.g., WordNet [104]), (3) link extracted entities to external knowledge bases, (4) discover similarity among the information items, (5) classify, sort and categorize data into various types, forms or any other distinct class, and (6) index structured and unstructured data.
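To give a feel for how such services compose, here is a hedged sketch of calling a curation API over HTTP; the base URL, endpoint names, and payload shapes are hypothetical placeholders of ours, since the concrete API surface is introduced in Chapter 6.

```python
import requests

# Hypothetical local deployment of the curation services; illustrative only.
BASE = "http://localhost:8080/curation/api"

tweet = {"text": "Mental health services in NSW need more funding."}

# (1) extract keywords and named entities from the raw text
extracted = requests.post(f"{BASE}/extract", json=tweet).json()

# (2) enrich extracted items with synonyms and stems (e.g., via WordNet)
enriched = requests.post(f"{BASE}/enrich", json=extracted).json()

# (3) link enriched entities to an external knowledge base
linked = requests.post(f"{BASE}/link", json=enriched).json()
print(linked)
```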
Finally, in Chapter 7, we present the concluding remarks of this dissertation and discuss possible directions for future work.

Chapter 2
Background and State of the Art

In this chapter, we discuss the state of the art in data curation models, and accentuate techniques for transforming the raw data, adapting data curation rules, and augmenting user comprehension of curation environments. This chapter is organized as follows: In Section 2.1, we briefly introduce some of the challenges that exist in data curation systems. Then, we discuss data curation models and accentuate techniques for extracting value from the raw data (Section 2.3). In Section 2.4, we discuss solutions for adapting data curation rules in dynamic and constantly changing environments. Finally, in Section 2.5, we discuss techniques for enhancing user comprehension and sensemaking of a curation environment, before concluding the chapter in Section 2.6.

2.1 Introduction

Over the past years, there has been increasing recognition of the need to curate and increase the added value of raw data. Data curation increases the visibility of data and supports enterprises in outperforming their peers in output and productivity [8, 71, 84, 167, 172, 191]. Today, many companies and enterprises have realized the importance of data curation for deriving insight and extracting value. However, the expansion of data generation platforms, e.g., sensors, social media, and the Web, has made curating and analyzing data more challenging. Many enterprises and companies are struggling to implement practices and policies for curating and organizing their raw data. As an ongoing and emerging field, data curation lacks clear answers to several fundamental problems, including: (1) lack of a cohesive and robust framework to support analysts in transforming and increasing the value of data, (2) lack of support for curating data in dynamic and changing environments, and (3) lack of systems to support analysts in comprehending and analyzing curation environments. In this chapter, we aim to discuss the above problems after digging into data curation models and the activities associated with them.

2.2 Data Curation

Data curation is defined as the activities a user undertakes to preserve the value of data [167, 185]. The Digital Curation Centre (DCC, http://www.dcc.ac.uk) [131] is one of the communities that attempts to define data curation under a unified terminology. In general, data curation is defined as the processes and activities related to the long-term management of data throughout its lifecycle to extract value and derive insight. In the following, we discuss two frameworks provided to establish a baseline for data curation activities and processes.
2.2.1 Data Curation Frameworks

In this section, we briefly discuss two frameworks proposed for framing data curation: (1) the Digital Curation Centre (DCC) model [131], and (2) the Open Archival Information System (OAIS) model [24]. The former provides a holistic view of the activities and actions associated with each stage of curation tasks. The latter aims at providing a conceptual framework for curating and preserving the data.

1. Digital Curation Centre (DCC) Model

DCC is a curation model to identify and assess the risks associated with managing data. It depicts the relationships between different stages of a curation task and provides a set of recommendations for transforming and preserving the value of data. The model categorizes curation tasks into three kinds of actions: (1) lifecycle actions, (2) sequential actions, and (3) occasional actions. Lifecycle actions encompass activities for describing and representing information. Sequential actions cover activities for the ingestion, transformation, and conceptualization of information. Finally, occasional actions consider activities for disposing of and migrating information. Figure 2.1 shows the different stages of data within the DCC curation model.

Figure 2.1: Overview of the Digital Curation Centre (DCC) model (Source: [131])

2. Open Archival Information System Model

The Open Archival Information System (OAIS) is a curation model that acts as a 'starting point' for building a sustainable pipeline for curating data and extracting value [24]. The OAIS model defines terminologies to enhance the common understanding of curation tasks between data curators, and standards for better preservation, development, and assessment of data. The OAIS model is made up of two components: a functional model and an information model. The functional model defines activities for ingesting and preserving the data, which can be fulfilled either by humans or by machines such as computer systems. The information model defines different activities for the dissemination and understanding of data, and specifies how the different types of information relate to each other and how they are structured.

In the next sections, we focus on techniques proposed for transforming and representing data (refer to the DCC curation model). In particular, we discuss how an analyst can be aided to transform the raw data and extract knowledge. We then discuss techniques for adapting data curation rules in dynamic and constantly changing environments, and how to enhance users' comprehension of curation environments.

2.3 Transforming the Raw Data and Extracting Knowledge

As data grows and diversifies, many organizations have realized that traditional methods of managing information are becoming difficult and outdated. Thus, there is a need for solutions that effectively leverage the implications of the new data generation platforms for organizations and enterprises to transform their raw data and extract knowledge. Over the past years, different technologies have been proposed to manage the growth of data and augment analysts in deriving insight and making decisions. The Data Warehouse [151] and the Data Lake [192] are the most common and widely used technologies for managing and transforming the data.

1. Data Warehouse: A data warehouse is a database optimized to analyze relational data produced by transactional systems.
The data in a data warehouse is structured, with a predefined schema to enable users to perform fast and effective information retrieval. Typically, data stored in a data warehouse is cleaned, enriched, and transformed, so it can act as the 'single source of truth' [81] that users can trust.

2. Data Lake: A data lake is a centralized repository that allows storing structured and unstructured data [34]. A data lake stores data as-is, without requiring a schema up front, and can provide the ability to perform different types of analytics, including visualizations, big data processing, real-time analytics, and machine learning.

In the following, we discuss each of these technologies and how they contribute to augmenting analysts in transforming the raw data and deriving insight.

2.3.1 Data Warehouse

Combining sparse data collected from different sources into a comprehensive and central repository provides several advantages for businesses and enterprises in deriving insight [47, 48, 50, 236]. For example, in a sales system, a data warehouse might incorporate customer information from several sources, including a company's point-of-sale systems, mailing lists, and comment sections. Alternatively, it might include employees' data, including time cards, demographic data, and salary information [22], allowing the company to analyze customer and employee interactions.

Over the past years, a large number of works [28, 87, 177, 179, 204, 253, 257] leveraged data warehouse technology for curation tasks. Many of these works focused on integrating curation results with data warehouses. For example, Croset et al. [87] proposed a graph-based method to identify and remove the erroneous records of a curation process and integrate the results with a data warehouse. OntoBrowser [204] is a collaborative and continuous data warehousing system for mapping expert-reported terms to ontologies. The system is designed to facilitate continuous data integration and mapping tasks in an evolutionary ecosystem. YeastMine [28] is a data warehouse system with a multifaceted search and retrieval interface. YeastMine allows data curators to search and retrieve a diverse set of genes using query customization.

Another line of work mainly aims at leveraging the conceptual aspect of data warehouses for developing insight. These approaches rely on annotating and enriching curation results with information collected from data warehouses. For example, Sellam et al. [222] introduced an automated data warehouse exploration system that detects the fundamental aspects of data using approximation and greedy search. Karlgren et al. [145] proposed an incremental system for learning the semantic components of data to distinguish the topical impact of different terms within a data warehouse. The system examines the local context of terms and their neighbourhood to identify semantic quality. Beheshti et al. [32] introduced a framework for scalable graph-based OLAP analytics over process execution data. The system facilitates the analytics of OLAP systems by summarising the process graph and providing multiple views of data at different levels of granularity. Besides, many works (e.g., [102, 115, 240]) have relied on visualization to aid analysts in developing insight and detecting the best view of multidimensional datasets. These systems mainly focused on providing a 2-dimensional scatter-plot of data or analyzing and materializing every possible 2D view of the data.
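The operational difference between the two technologies discussed above is often summarized as schema-on-write versus schema-on-read. The following is a minimal, self-contained sketch of that contrast (our illustration, using SQLite and JSON as stand-ins for warehouse and lake storage):

```python
import json
import sqlite3

# Schema-on-write (data warehouse style): the schema is fixed up front,
# and every record must conform before it is stored.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (customer TEXT, amount REAL, region TEXT)")
db.execute("INSERT INTO sales VALUES (?, ?, ?)", ("Acme", 120.0, "NSW"))

# Schema-on-read (data lake style): heterogeneous raw records are stored
# as-is; structure is imposed only when the data is read for an analysis.
raw_records = [
    '{"customer": "Acme", "amount": 120.0, "region": "NSW"}',
    '{"tweet": "great service!", "lang": "en"}',  # a differently-shaped record
]
parsed = [json.loads(r) for r in raw_records]
sales_view = [r for r in parsed if "amount" in r]  # schema applied at read time
print(sales_view)
```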
2.3.2 Data Lake

The data lake analogy aims at handling and storing multiple types of data without changing their formats. Data lakes provide a high degree of flexibility and scalability for companies and businesses that need to manage a large amount of data. According to Aberdeen et al. [105], the average company is seeing the volume of its data grow at a rate that exceeds 50% per year. Additionally, these companies are managing an average of 33 different data sources in their analyses. Thus, data lakes are indispensable for responding to the rapid growth of data volume and complexity. In the next section, we accentuate the opportunities data lakes bring for managing the complexity of data.

Data Lake Opportunities: A data lake empowers companies to apply more advanced and sophisticated techniques for transforming the raw data, developing insight and supporting decision-makers. The data lake architecture boosts scalability in handling the growth of data, so companies can adapt their strategies to changes in the business environment [34]. Besides, data lakes provide the flexibility to support analysts in a variety of sophisticated analyses within an adequate timeframe.

Over the past years, a large number of works leveraged the data lake concept for transforming the raw data or extracting value (e.g., [18, 33, 34, 37–40, 46, 53, 220, 223, 235]). A large body of these works relies on data processing and analysis algorithms, including machine learning-based algorithms for information extraction [80], item classification [138], record linkage [177], clustering [61], and sampling [94]. For example, CoreDB [34], a data lake service, offers a single REST API to organize, index and query data and metadata. CoreDB manages multiple database technologies and offers a built-in design for security and tracing. AsterixDB [1] is a BDMS (Big Data Management System) with a rich feature set and is well-suited for social data storage and analysis. AsterixDB provides facilities including data modelling, a query language, indexing, and transactions. Orchestrate [4] provides a cloud-agnostic service to unify all queries needed for creating interactive applications, such as geospatial, time-series, graph, full-text search, and key-value queries.

Another line of work (e.g., [11, 37, 217, 218]) has focused on coupling algorithmic approaches and data lake technologies for organizing the raw data and extracting insight. For example, to curate social media data (e.g., a text in Twitter), a machine learning algorithm can be used to cluster Tweets based on their topical similarity. Then, the results can be displayed using different visualization elements [41], e.g., bar charts and bubble graphs, to assist analysts in identifying the content-bearing topics [238]. CiViC [11] is a real-time data processing system, which clusters citizens' opinions by analyzing social media data, e.g., Twitter and Facebook, and news agencies' comments. The system relies on several machine learning algorithms and a data lake to store and analyze citizens' opinions regarding their communities.

Extraction-Transform-Load (ETL) systems have also been used for managing user data and deriving insight. For example, Apache UIMA (https://uima.apache.org) is an ETL system which facilitates the analysis of unstructured data and provides a common platform for analytics. PowerCenter (https://www.informatica.com/au/products/data) is a unified enterprise ETL platform for accessing, discovering, and integrating data.
SAP (https://www.sap.com) is a service-based ETL tool that provides pervasive and extensible support for analyzing text, big data, social, and spatial data. IBM InfoSphere Information Server (https://www.ibm.com/au-en/analytics/information-server) is a data integration platform that helps to understand, cleanse, transform and deliver data relevant to business initiatives.

2.3.3 Knowledge Lake

A Knowledge Lake [35–37] is defined as a contextualized data lake. It is made up of a set of facts, information, and insights extracted from the raw data using data curation techniques [35] such as extraction, linking, summarization, annotation, enrichment, classification and more. In particular, a Knowledge Lake is a centralized repository containing inexhaustible amounts of data that is readily available for performing analytical activities. The Knowledge Lake provides the foundation for deriving insight by automatically curating the raw data into a data lake. Beheshti et al. [35] introduced an open-source data and Knowledge Lake service, which offloads analysts from many curation tasks for deriving insight and extracting value. In another work, Beheshti et al. [37] proposed a generalized social data curation foundry for transforming social data. The system relies on a Knowledge Lake to extract features and uncover hidden patterns in data. Tabebordbar et al. [238] introduced a system which utilizes a Knowledge Lake for augmenting users' comprehension of curation environments and formulating their preferences. In Chapter 3, we explain how a Knowledge Lake can aid users to transform and extract knowledge from the raw data.

2.3.4 Automated Data Curation

Typically, for transforming the raw data into knowledge and deriving insight, an analyst may need to perform various curation tasks. These tasks are not only time-consuming and challenging, but the analyst also needs extensive knowledge of data curation and the curation environment to accomplish them. Automated data curation aims at offloading analysts from many tedious and challenging curation tasks [43, 44, 53, 54, 171], such as the following (a sketch of how these tasks chain into a pipeline follows the list):

1. Extraction: extracting features such as keywords, parts of speech, and named entities (persons, locations, organizations, companies, products, and more) from unstructured texts [43].

2. Enrichment: enriching the extracted features by providing synonyms and stems, leveraging lexical knowledge bases for the English language, such as WordNet [104].

3. Linking: linking the extracted and enriched features to external knowledge bases (such as the Google Knowledge Graph [227] and Wikidata [251]) as well as to contextualized data islands.

4. Annotation: annotating features using different similarity metrics, classification, and clustering algorithms [202].
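The following is a minimal, self-contained sketch of how these tasks chain together; the token pattern, synonym table, and tagging logic are illustrative stand-ins of ours for the NLP and knowledge-base services a real pipeline would call.

```python
import re

SYNONYMS = {"health": ["wellbeing", "wellness"]}  # assumed enrichment table

def extract(text):
    # Extraction: pull candidate keyword features (naive token matching).
    return [t for t in re.findall(r"[a-z]+", text.lower()) if len(t) > 3]

def enrich(keywords):
    # Enrichment: attach synonyms/stems to each extracted keyword.
    return {k: SYNONYMS.get(k, []) for k in keywords}

def annotate(enriched, tag, triggers):
    # Annotation: assign a tag if any trigger term (or a synonym) appears.
    vocab = set(enriched) | {s for syns in enriched.values() for s in syns}
    return tag if vocab & set(triggers) else None

features = enrich(extract("Mental health services need funding"))
print(annotate(features, "MENTAL HEALTH", {"wellbeing", "service"}))
```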
Several works have been proposed for automating curation tasks. For example, Alex et al. [9] proposed a system for automating the curation of biomedical research papers. The system utilizes different natural language processing techniques, including named-entity/relation extraction and term identification, to form a pipeline of curation tasks and extract documents. Kurator [100] is a data curation system, which automates data curation pipelines by offering several services for constructing a workflow of curation tasks. The system provides curation services for the modelling, execution, provenance [43, 45, 51], and management of data curation tasks. Song et al. [228] proposed a declarative and semi-automated approach for workflow design. The system implements a set of data curation actors, including name validators, summary validators, and annotation validators, to assist data curators in curation processes.

In Chapter 3, we explain how automation reduces many of analysts' tedious and time-consuming curation tasks.

2.4 Data Curation Rules

One of the key principles in a curation task is the need to maintain the quality of data. Gartner (https://www.gartner.com) estimates that at least 25% of the data in top companies is flawed. Extracting quality data has a significant impact on business outputs, particularly when it comes to decision-making processes within organizations [90]. The increasing availability of open data on the Web, and the generation of data across different platforms, produces an unprecedented volume of data, which increases the challenges of curating quality data [63, 133]. This problem is exacerbated when the curation environment is dynamic or changing constantly, e.g., in Twitter and Facebook. In such environments, the curation system needs to be updated continuously to capture changes and remain applicable.

Over the past years, many solutions coupled humans with knowledge bases and learning algorithms for curating data in dynamic and changing environments. These algorithms focused on identifying and removing residue information by continuously updating the curation using analysts' or crowds' feedback [12, 15, 42]. For example, Volks et al. [250] proposed a declarative data cleaning system coupled with a probabilistic classifier to assist analysts in repairing inaccurate records in a database. He et al. [126] introduced an interactive data cleaning system that rectifies errors in a database using humans' feedback and a set of generated SQL update queries. The system uses SQL update queries for repairing database fields. DataSynapse [37] is a feature-based data curation pipeline, which utilizes several knowledge bases and a co-reference resolution technique for creating a Knowledge Lake and annotating the data. Ratner et al. [202] proposed a learning system for the rapid generation of training data. The system relies on weak supervision and a set of user-defined labeling functions to train a generative model and label the data. De et al. [93] proposed DeepDive, a method for knowledge base construction that extracts information from unstructured text and tables. DeepDive relies on statistical inference and machine learning for the extraction, cleaning, and integration of data into a knowledge base.

Another line of work (e.g., [27, 80, 118, 182, 183, 235]) relied on curation rules for curating data in dynamic and changing environments. Curation rules can annotate data within a curation environment to enhance the interpretability of data for both humans and machines. In the next sections, we accentuate approaches that leverage rules for curating data. First, we introduce rule languages proposed for curating data. Then, we discuss techniques for adapting curation rules, after describing rule enrichment techniques.

2.4.1 Curation Rule Languages

Over the past years, different rule languages [79, 161, 176, 247] have been proposed for curating data, such as SystemT, JADE, Odin, AQL, DEL, and AIML. Rule languages mainly rely on pattern matching to extract users' information needs.
These languages extract information using a set of regular expressions or lexical tokens and return results that satisfy the user-specified patterns. In the following, we review some of these rule languages.

1. Data Extraction Language (DEL) [161]: an XML-based rule language for describing the data conversion process. DEL specifies how to extract and locate pieces of data from an input document. It outputs the results in a well-formed XML document and locates data fragments using pattern matching and regular expressions.

2. Odin Runes [247]: a grammar-based rule language that applies cascades of finite-state automata over both surface text and syntactic dependency graphs. The rule language aims at augmenting analysts in crafting rules by coupling both lexical and syntactic automata.

3. AQL [79]: an SQL-like rule language to extract semi-structured and structured information from text. AQL is the primary component in many information extraction systems, including InfoSphere [57] and BigInsights [59]. The syntax of AQL is similar to that of the Structured Query Language (SQL); it is case insensitive and removes the need for regular expressions to format information extraction tasks. However, AQL does not support SQL features like recursive queries and sub-queries, but relies on extract statements for retrieving information.

4. SystemT [79]: a declarative rule language, which extracts information from both unstructured and semi-structured data using an SQL-like syntax. SystemT has been used in a wide array of enterprise applications and many information extraction systems. The rule language is made up of three components: (1) AQL, a declarative rule language with a syntax similar to SQL, (2) an optimizer, which generates high-performance algebraic execution plans for AQL statements, and (3) an execution engine, which executes the algebraic plans and performs information extraction over input documents.

2.4.2 Curation Rule Enrichment

Over the past years, several rule enrichment techniques [110, 118, 230, 235, 237, 238, 245] have been proposed to enhance data curation systems. These techniques mostly rely on algorithms for similarity, extraction, classification, linking, summarization, and so on. For example, to enrich a rule that curates Tweets relevant to 'mental health', it is possible to extract information, e.g., keywords and named entities, from Tweets, link them to a knowledge base, and generate a graph of related entities to reveal hidden information in the data [29, 30, 37, 52, 123]. In the following, we discuss different techniques proposed for enriching curation rules.

1. Knowledge-Graph Based Enrichment: incorporating information extracted from Knowledge Graphs (KGs) to enrich rules. For example, consider rule R1:

R1 = IF Tweets contains ('health') AND Tweets contains ('service') THEN tag as "MENTAL HEALTH"

This rule tags a Tweet with 'mental health' if the Tweet contains the 'health' and 'service' keywords. However, there exists a large number of Tweets relevant to mental health which rule R1 skips, as those Tweets may not contain both keywords. To alleviate this problem, an analyst may utilize an ontology, e.g., WordNet, to enrich rule R1 with synonyms, modifying the rule to:

R1 = IF Tweets contains ('health'|'wellbeing'|'wellness') AND Tweets contains ('service') THEN tag as "MENTAL HEALTH"

The enriched rule tags a Tweet if it contains any of the 'health', 'wellbeing', or 'wellness' keywords together with the 'service' keyword.
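Such synonym lookups are straightforward to script. The following is a small sketch using NLTK's WordNet interface (an assumption on our side; any lexical knowledge base from Table 2.1 would do), which gathers lemma names for a keyword so an analyst can splice them into a rule. It requires the WordNet corpus to be downloaded first via nltk.download('wordnet').

```python
from nltk.corpus import wordnet as wn

def synonyms(keyword: str) -> set:
    """Collect WordNet lemma names that could enrich a rule keyword."""
    lemmas = set()
    for synset in wn.synsets(keyword):
        for lemma in synset.lemmas():
            lemmas.add(lemma.name().replace("_", " ").lower())
    return lemmas - {keyword}

# Candidate disjuncts for enriching R1's 'health' predicate; in practice
# an analyst would still prune the irrelevant senses by hand.
print(synonyms("health"))
```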
Rule enrichment has gained considerable attention in recent years [118, 235, 238, 260]. A large number of lexicons and knowledge bases exist for enriching a set of rules, such as WordNet [104], ConceptNet [231], Wikimedia [56], the Google Knowledge Graph [227], BabelNet [187], Yago [205], KnowItAll [107], and DBpedia [25] (see Table 2.1). Typically, an analyst enriches a rule by extracting keywords, phrases, or entities that are relevant to her information needs. Enrichment augments a rule to curate a larger number of items. For example, Kaufmann et al. [146] assist analysts in enriching rules by utilizing several ontologies and Natural Language Processing (NLP) techniques. Lopez et al. [169] propose PowerAqua, a Question Answering (QA) system, which combines several knowledge sources to enrich queries and retrieve information stored in heterogeneous knowledge resources. Zamanirad [260] proposed an approach for synthesizing natural language expressions to determine the proper API call. The technique resolves the user's intention over an enriched knowledge graph of APIs.

2. Similarity-Based Enrichment: similarity-based enrichment is a syntactic-level enrichment, which enriches rules by quantifying the similarity between the rule and the data. Several similarity metrics exist: to measure the similarity between string values, metrics such as edit distance, Jaccard similarity, TF-IDF [216], and cosine similarity can be used to enrich a rule, while for encoded or fixed-length values, Hamming distance [190] and phonetic encodings such as Soundex [23] can be utilized. One embodiment of similarity metrics in enriching rules is to classify data based on hashtags. As an example, consider a community advertising drastic weight-loss measures to youngsters. Suppose the social media content initially circulated using the hashtag #thighgap. Over time, a group of health advocates attempts to counteract these drastic and negative messages by writing rules that identify the posts containing #thighgap, and posting materials that promote healthy weight choices. The supporters of drastic weight loss might be displeased and evolve their hashtag into misspelled versions, say the hashtag #thyhgapp. Using similarity metrics, the supporters of healthy weight loss can enrich their rule to capture such changes.
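To make the hashtag-drift example concrete, the following is a minimal sketch (our illustration, not drawn from the cited systems) that uses character-bigram Jaccard similarity to decide whether an unseen hashtag is close enough to a rule's keyword to be treated as a misspelled variant:

```python
def bigrams(s: str) -> set:
    return {s[i:i + 2] for i in range(len(s) - 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over character bigrams: |A ∩ B| / |A ∪ B|."""
    x, y = bigrams(a), bigrams(b)
    return len(x & y) / len(x | y) if x | y else 1.0

RULE_TAG = "thighgap"
THRESHOLD = 0.35  # assumed cut-off; in practice tuned per curation task

for seen in ["thyhgapp", "throwback", "thigh_gap"]:
    score = jaccard(RULE_TAG, seen)
    print(seen, round(score, 2), "-> enrich rule" if score >= THRESHOLD else "-> ignore")
```

With this threshold, #thyhgapp is flagged as a near-duplicate of #thighgap while an unrelated tag such as #throwback is not.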
3. Pattern-Matching Based Enrichment: pattern matching, or regular expressions, has been used for a long time to enrich rules (e.g., [75, 110, 230, 245]). As an illustrative embodiment, pattern-matching based enrichment can be adopted to provide additional keywords or classifications by determining common keywords that co-occur with those specified in a rule. For example, Cayrol et al. [75] proposed fuzzy pattern matching for enhancing rules to extract information by considering the similarity between referents designated in the data and in the pattern, respectively. Irena Spasić et al. [230] designed a system to enrich curation rules by exploiting the morphological and lexical aspects of data. Ozlem Uzuner et al. [245] proposed a hybrid system (rules plus machine learning) for extracting information from medical text. The system relies on pattern-matching based enrichment to extract phrases and eliminate irrelevant information, then uses the collected information to train learning algorithms and extract the information. Fatemi et al. [110] proposed an approach to enrich the representation of video content through a combination of semantic concepts and their co-occurrences. The approach leverages an existing partial set of semantic concepts for video archives and exploits their relationships using association rules.

Table 2.1: A sample list of knowledge bases

1. Wikidata: Wikidata [107] is a knowledge base hosted by the Wikimedia Foundation. It focuses on representing concepts, objects or topics of terms. Examples of a term include '1988 Summer Olympics' and 'love'. Each term within Wikidata has a unique identifier, prefixed with the letter Q, known as a "QID".

2. Yago: YAGO (Yet Another Great Ontology) [129] is an open-source knowledge base with more than 10 million entities and 120 million facts about these entities. The information in YAGO is obtained from Wikipedia, WordNet, and GeoNames, and is linked to several ontologies including DBpedia and SUMO. The accuracy of the extracted data was manually evaluated to be above 95% on a sample of facts.

3. DBpedia: DBpedia [131] is a crowd-sourced knowledge graph which provides structured content from the information created in various Wikimedia projects. DBpedia data is served as Linked Data and provides different interfaces for extracting or crawling the information. For querying, it provides an SQL-like query language known as SPARQL.

4. KnowItAll: KnowItAll [130] is a knowledge base that populates concepts, facts, and relationships by extracting information across the Web. It is designed to be scalable and high-throughput in retrieving and accessing information.

5. BabelNet: BabelNet [128] is a multilingual ontology which is created automatically by linking Wikipedia and WordNet. The integration is done using an automatic mapping and by filling the lexical gaps through statistical machine translation.

6. WordNet: WordNet [71] is a lexical database that groups English words into sets of synonyms called synsets. WordNet provides different information about words, including a short definition, their usage, and hypernym and hyponym relations.

7. ConceptNet: ConceptNet [126] is a freely-available semantic network, designed to help computers understand the meanings of the words that people use. ConceptNet originated from the crowdsourcing project Open Mind Common Sense, which was launched in 1999 at the MIT Media Lab.

8. DeepDive: DeepDive (http://deepdive.stanford.edu) [114] is a data processing system that has demonstrated the ability to extract structured SQL-like databases from unstructured text and tables ("dark data") with higher quality than human annotators. DeepDive is used in different applications, including anti-human-trafficking applications with NGOs and law enforcement, a handful of enterprise companies, and scientific efforts in genomics, drug repurposing, electronic medical records, and paleobiology.
2.4.3 Rule Refinement

Rule refinement (in this dissertation, we use the terms refinement and adaptation interchangeably) is the process of modifying a rule to make it better suited to the curation environment [106]. For example, consider an analyst who is interested in curating Tweets relevant to 'mental health'. The analyst may examine the curation environment and, after scanning a set of Tweets, craft the rule R1, which curates items that contain both the 'health' and 'service' keywords:

R1 = IF Tweets contains ('health') AND Tweets contains ('service') THEN tag as "MENTAL HEALTH"

However, after curating a set of items, the analyst may find the rule imprecise and in need of adaptation. Typically, to adapt a rule, an analyst examines different modifications to determine the optimal one. For example, after several changes, the analyst may adapt rule R1 to

R1' = IF Tweets contains ('health') AND Tweets contains ('mental') THEN tag as "MENTAL HEALTH"

Rule adaptation is time-consuming and error-prone, and has been studied in several areas, including information retrieval [80, 164], fraud detection [182, 183], and database integration [250]. A large number of works on adapting rules relied on a ground truth of manually annotated items [27, 164, 182, 183, 235, 250]. In these solutions, an analyst uses a ground truth to determine whether an adaptation could improve the rule's precision or not. For example, Milo et al. [183] used a ground truth for assessing the performance of rules in a fraud detection system. Liu et al. [164] relied on a ground truth for assisting analysts in adapting rules and assessing the impact of their modifications. However, these solutions have focused on adapting rules that operate in structured and more static environments, where the ground truth does not need to be updated frequently.

Although relying on a ground truth can reduce the analyst's burden in determining the performance of rules, in environments where the distribution of data is changing, e.g., social media, the analyst needs to iteratively adapt a rule to keep it applicable and precise [5, 6]. Thus, for adapting rules in dynamic and changing environments, several solutions coupled crowd workers and analysts (e.g., [27, 118]). For example, Sun et al. [235] coupled analysts and crowd workers in adapting rules. The approach relies on workers to verify items curated by rules and on the analyst to determine the optimal modifications for the rule. Bak et al. [27] proposed a voting technique for validating rule performance in information extraction applications. The approach relies on crowd workers' feedback to determine whether an adaptation of a rule produces a positive impact on extracting information or not.

Alternatively, in recent years some solutions focused on offloading analysts from adapting rules [106, 237]. These solutions consider a rule as a set of features and determine the performance of the rule by adding or removing features [106, 237]. GC et al. [118] relied on a relevance feedback algorithm [207] to determine the performance of features for adapting a rule. Based on the analyst's feedback, the algorithm proposes an adaptation to make the rule applicable and precise.
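As a concrete illustration of this feature-centric view (our own sketch, not the algorithm of [118] or of Chapter 4), one can treat a rule as a set of keyword features, score each candidate modification against a handful of verified items, and keep the most precise variant:

```python
# Verified (text, is_relevant) pairs standing in for analyst/crowd feedback.
verified = [
    ("mental health support line", True),
    ("health insurance service fees", False),
    ("mental wellbeing service", True),
]

def precision(features, items):
    """Precision of a rule that tags items containing all of its features."""
    tagged = [rel for text, rel in items if all(f in text for f in features)]
    return sum(tagged) / len(tagged) if tagged else 0.0

# Candidate modifications of R1, each expressed as a set of keyword features.
candidates = [{"health", "service"}, {"health", "mental"}, {"wellbeing"}]
best = max(candidates, key=lambda fs: precision(fs, verified))
print(best, precision(best, verified))  # picks the R1' variant here
```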
2.5 Sensemaking of the Curation Environment

This section explains techniques focused on enhancing users' comprehension of curation environments. In particular, we discuss solutions proposed for assisting users in the sensemaking of data and formulating their preferences. In a large curation environment, users' information needs can range from relatively simple tasks, e.g., looking up disputed facts or finding weather information, to rich and complex ones, e.g., job seeking and planning vacations.

Typically, a user's interaction with a curation environment may vary based on the amount of time and effort the user can invest in the curation task and the level of her expertise [127]. The most common interfaces for interacting with a curation environment are search engines, e.g., Google and Bing. These interfaces are most appropriate for information lookup tasks, finding information relevant to websites or answers to questions. However, as Marchionini [174] explained, search engine interfaces are inherently limited for many of the user's curation tasks, especially when a user needs to retrieve a varied and comprehensive list of information across a large amount of data. Marchionini [174] makes a distinction between information lookup and exploratory search. Lookup tasks are suitable for the retrieval of discrete data, question answering, numbers, and dates, as well as names of files and Web sites. Standard Web search interactions work well for these retrieval tasks.

On the other hand, exploratory search considers much broader information-seeking tasks, which require learning and investigation. During learning, users need to issue queries and retrieve, scan, and incorporate a large amount of data. Investigating refers to a much longer search activity that requires continuous reformulation of queries and assessment of the results. The investigation may take place over an extended period, and results may need to be analyzed before being integrated into users' knowledge sources [174]. In an investigation, a user mostly focuses on recall rather than precision. Examples of investigative search are litigation research and academic research.

More broadly, exploratory search can be seen as part of a more significant task, known as sensemaking [199, 210, 211]. Sensemaking is an iterative process and is defined as the activities a user undertakes to frame the curation environment in a logical schema [199]. Search and information seeking play a crucial role in the curation of data. Search allows users to grasp the curation environment by retrieving information relevant to their information needs. However, to make sense of data, a user needs to scan and read a large amount of information and continuously reformulate her queries, which has proven to be painstakingly difficult and time-consuming. Examples of sensemaking tasks include the legal discovery process, epidemiology (disease tracking), studying customer complaints to improve service, and obtaining business intelligence. Pirolli et al. [199] framed the sensemaking process into four steps:

Information → Schema → Insight → Product

Figure 2.2: Overview of the sensemaking loop (Source: [199]).

Figure 2.2 shows the stages within each step. In the following, we explain each of them in detail:

1. Information

The first step in the sensemaking of a curation environment is retrieving information to support users in understanding the data (stages 1 to 7 in Figure 2.2). This stage is also known as the 'foraging or learning loop complex' (Figure 2.3), where a user investigates the curation environment to identify a good representation of data. In this stage, the user's information needs may evolve as the user learns about the curation environment by analyzing the retrieved information [31].

Typically, to retrieve information, a user starts with a set of imprecise queries to approximately fetch the relevant part of the data.
Then, the user reformulates her queries by examining the retrieved information (see Section 2.5 for an explanation of how reformulation happens). In past years, many curation systems [26, 60, 95, 112, 135, 170, 229, 246] have attempted to support users in retrieving their information needs by elaborating their vague queries and recommending better ones. Many of these systems relied on logs accumulated from previous searches. An example of such systems, DirectHit [89], reformulates users' preferences by suggesting new query terms to narrow down their information retrieval tasks.

Another technique that users rely on to investigate a curation environment is Boolean operators. Boolean operators have been supported by a large number of data curation systems. However, Boolean operators are difficult to use, and users can hardly apply these operators to curating their information needs [98, 124, 130, 132]. For example, an examination of a search engine log of over 1.5M queries revealed that only 9.7% of queries contained Boolean operators [139]. Another study in 2006 over nearly 600,000 user queries revealed that only 1.1% of the queries contained Boolean operators (double quotes, +, -, and site:) and only 8.7% of the users used an operator at any time [127, 255].

Figure 2.3: Learning Loop Complex.

2. Schema

The second step of the sensemaking process is to create a mental structure of the curation environment by analyzing the results retrieved from users' queries (stages 8 to 10 in Figure 2.2). In this stage, a user attempts to encode the curation environment in a new representation, which is more compact and better describes her information needs. The re-representation may occur informally in the mind of the user, aided by pen and paper, or even computers. Often, in re-representation, the user tries to discard the residue information to identify information relevant to her information needs [210]. Figure 2.4 shows how users explore the curation environment to create a mental schema of it. Initially, to create a mental structure of the curation environment, the user begins with a broad set of documents and then narrows down that set into successively smaller rings.

Figure 2.4: Exploration Cycle in a Curation Environment.

Patterson et al. [194] discuss this as a trade-off between exploring, enriching, and exploiting data. The following explains each of these steps in detail:

1. Exploring: focuses on increasing the span of the retrieved information and corresponds to improving the recall of the information search.

2. Enriching: focuses on collecting more significant, higher-precision sets of documents by removing the residue data.

3. Exploiting: in this process, a user engages more in activities such as reading, extraction, and generating inferences.

Over the past years, many data curation systems (e.g., [97, 118, 235, 238]) have attempted to support users in creating a mental structure of data. Many of these solutions rely on augmenting user comprehension of data through different visualization elements.
Ranked lists of items are one of the most common techniques that have been used to aid users in creating a mental schema of data. A ranked list ranks documents based on their relevance to the user query (e.g., [85, 114]). The advantage of ranked lists is that users are familiar with the presentation arrangement and know where to start their scan for documents that seem relevant to their information needs [97]. A study by Shani et al. [224] notes that augmenting ranked lists with bars lowers the users' cognitive load in grasping the curation environment. On the other side, ranked lists limit the number of items a user can examine within the curation environment, as they imply a sequential search and only a small subset of items is visible to users [97]. In addition to ranked lists, other techniques for supporting users in creating a mental structure of curation environments are:

1. Focus + Context [72]: enables users to examine objects relevant to their information needs in full detail, while getting an overview impression of all other available information at the same time. Focus+Context systems keep the information of interest in the foreground and the rest of the information in the background. The design is made up of three components: (1) it provides both overview and detail information together, (2) the information provided in the overview can differ from that presented in detail, and (3) the context and the overview information can be combined within a single (dynamic) display.

2. Overview + Detail [68]: focuses on simultaneously displaying both an overview and a detailed view of a curation environment. This design shows the overview and the detail in distinct presentation areas. For example, consider two images that are used for presentation. In an Overview + Detail interface, the first image shows an overview of the whole curation environment, while the second image shows a small portion of the curation environment and visualizes details.

3. Insight

The third step of sensemaking is developing insight by manipulating the representation created in the previous step (stages 11 to 13 in Figure 2.2). In this step, a user examines the curation environment to extract evidence relevant to her information needs. The user examines different hypotheses and concludes the relevancy of evidence to her information needs by analyzing the relationships between documents. Pirolli et al. [199] provide the following guidelines for verifying hypotheses and evidence:

1. Span of attention for hypotheses and evidence: humans have a limited memory capacity for absorbing information, which limits the number of hypotheses, pieces of evidence, and relations between hypotheses and evidence that a user can heed. This problem is exacerbated when users need to reason over the extracted evidence and hypotheses, as such reasoning has an exponential cost structure.

2. Generating alternative hypotheses: typically, human comprehension is biased towards interpreting information according to prejudged expectations. Human reasoning is also biased by heuristics that deviate from 'normative rationality' [199]. This problem limits the ability of people to generate new hypotheses.
Besides, factors such as time pressure and data overload decrease humans' ability to produce, manage, and evaluate their hypotheses effectively.

3. Confirmation bias: people typically fail to consider the diagnosticity of evidence and the disconfirmation of hypotheses. A solution would be to ensure users distribute their attention to profoundly suggestive evidence and also search for disconfirming relations within the information space.

Over the past years, data curation systems have focused on augmenting users in deriving insight and verifying their curation hypotheses. These solutions mainly focus on supporting users while they reformulate their preferences. An early study [140] of search engine logs showed that, during a curation task, at least 50% of users modify their queries to discover their information. Query recommendation is one of the conventional techniques that have been employed by Web search systems to aid users in deriving insight. A query recommendation system helps users to better verify their hypotheses by showing terms related to their queries. Examples of such systems are spelling correction or suggestion systems [88, 157, 163].

Additionally, query expansion is another technique for supporting users in formulating their preferences. Query expansion focuses on formulating users' information needs based on previous users' searches. A study by Jansen et al. [139] suggests that at least 6% of users who were exposed to query suggestion systems chose to click on them [21].

Relevance feedback [207] is another method proposed to help users derive insight through reformulating their queries. The main idea of relevance feedback is to determine the relevancy of documents and queries. In some variations of relevance feedback, users specify the terms within documents that are relevant to their queries [154]. Then the system computes a new query using the feedback received from the user. Although relevance feedback has been successfully integrated with non-interactive systems, it was not successful from a usability standpoint and could not be incorporated into data curation interfaces [17, 147, 214].
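The query recomputation step admits a compact formulation. One classic instantiation, Rocchio's formula, stated here for concreteness as a representative reading of relevance feedback (not necessarily the exact variant of [207]), updates the query vector as

\[ \vec{q}_{new} = \alpha \, \vec{q} \; + \; \frac{\beta}{|D_r|} \sum_{\vec{d} \in D_r} \vec{d} \; - \; \frac{\gamma}{|D_{nr}|} \sum_{\vec{d} \in D_{nr}} \vec{d} \]

where D_r and D_nr are the sets of documents the user marked relevant and non-relevant, and the weights α, β, γ balance the original query against the positive and negative feedback.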
(c) Faceted Categories: Faceted categories utilize both flat and hierarchical categories and are suitable for organizing curation environments with a large number of documents.

2. Clustering: Clustering refers to grouping a set of items that share some measure of similarity. In document clustering, similarity is computed using the commonality among features, where a feature can be a keyword or a phrase [91]. Clustering presents a fully automated strategy for representing the information within a curation environment. An example of clustering is to group documents by the language they are written in, e.g., English, German, and Japanese. However, clustering algorithms require high computational power and are difficult to use in real-time Web search or information retrieval tasks.

2.5.1 Sensemaking Challenges

Sensemaking of the information space is the quintessential part of every data curation system [174]. Sensemaking is known as a challenging and time-consuming task, especially when a user needs to extract information across a large amount of data [195]. As we discussed in the previous sections, over the past years different solutions have been proposed to aid users in the sensemaking of a curation environment. These solutions focus on augmenting users' comprehension of the information space through different visualization elements, e.g., ranked lists, tables, keyword expansion, clustering and categorization. Although relying on such solutions lowers users' cognitive load in understanding the curation environment, users still need to scan and examine a large amount of data to identify the relationships between different attributes in the curation environment. Besides, in large curation environments many of these relations remain invisible, either due to users' limited memory capacity in absorbing information or due to visual clutter [234]. To alleviate this problem, in Chapter 5 we discuss our solution for enhancing the user's comprehension and sensemaking of a curation environment. We propose a summarization technique that generates a conceptual summary of data without the need to scan or investigate the curation environment. Our approach automatically discovers associations among different attributes. It boosts the user's comprehension of data by formulating their preferences as a set of high-level concepts such as topics, categories, and locations.

2.6 Conclusion and Discussions

In this chapter, we reviewed the state of the art on data curation systems. We started the chapter by defining data curation and the frameworks proposed for framing curation tasks. Then, we continued our discussion with techniques proposed for transforming the raw data. We discussed their strengths and weaknesses and explained that current solutions require analysts to conduct various time-consuming and tedious curation tasks to curate the data. We also demonstrated that analysts need to spend an extended period scanning curation environments to identify and extract features that best describe their curation needs. In the next chapter, we present our proposed feature-based solution for curating the raw data. We introduce different types of features that can be extracted from data and tools to automate many curation tasks. We present the notion of the Knowledge Lake and compare it with a data lake in deriving insight and extracting knowledge.

Next, we discussed data curation rules. We explained different techniques for curating data in dynamic and changing environments.
We discussed why relying on algorithmic approaches fails to curate data in dynamic environments. We noted how rules could complement curation algorithms for curating data in dynamic and constantly changing environments. We introduced different rule languages and enrichment techniques for enriching data curation rules. Finally, we wrapped up the section by reviewing methods proposed for adapting data curation rules. In Chapter 4, we discuss our proposed solution for adapting curation rules. We explain how learning algorithms can be utilized to offload analysts from adapting rules in dynamic and changing environments. Besides, we explain how rules can be boosted to curate data at the conceptual level to annotate a larger number of items.

Finally, we reviewed the state of the art on augmenting users' comprehension of curation environments. We discussed the importance of sensemaking in formulating users' preferences and curating data. Then, we explained the sensemaking process, including information, schema, insight, and product, and how these steps may impact the user's comprehension of a curation environment. We wrapped up the section with challenges in the sensemaking process and possible solutions to augment users' understanding of data. In Chapter 5, we discuss our proposed solution for the sensemaking of data. We discuss how summarization helps users comprehend the data without the need to scan or investigate the curation environment.

Chapter 3

Feature-Based and Automated Data Curation Foundry

In this chapter, we present a feature-based data curation foundry for extracting value. We introduce a set of APIs that are made publicly available to automate curation tasks. We discuss an algorithm for creating a Knowledge Lake (i.e., a contextualized data lake) to facilitate the transformation of the raw data (e.g., a Tweet in Twitter) into a curated item. The rest of this chapter is organized as follows: We introduce the research problem in Section 3.1. In Section 3.2, we provide background and related works. We present our solution in Section 3.3 and the notion of the Knowledge Lake in Section 3.4. In Section 3.5, we present the implementation and the evaluation results of our approach. Finally, we conclude the chapter with remarks for future directions in Section 3.6. The content of this chapter is derived from the following papers:

• A Beheshti, B Benatallah, A Tabebordbar, H R Motahari-Nezhad, M C Barukh, and R Nouri, DataSynapse: A social data curation foundry, Distributed and Parallel Databases Journal (2018), 1–34 (ERA Rank A).

• A Beheshti, A Tabebordbar, B Benatallah, and R Nouri, On automating basic data curation tasks, Proceedings of the 26th International Conference on World Wide Web Companion (2017), 165–169.

3.1 Introduction

With the expansion of various data generation platforms, e.g., social media, the Web, and sensors, big data processing systems have become the quintessential method for extracting knowledge and deriving insights from vastly growing data [205]. This increase in the volume of data has created opportunities for organizations and governments [48, 213] to extract knowledge and generate value. For example, over the last few years, several companies started mining social media contents to personalize advertisements in elections [241], analyse citizens' opinions on urban issues [11], improve government services [76], predict intelligence activities [168], unravel human trafficking activities [93], as well as to improve national security and public health [241].
Social media, e.g., Twitter, LinkedIn, and Facebook, have provided an unprecedented opportunity for data generation platforms to propagate people's opinions in real time. In this context, a fundamental principle is to provide an efficient technique to transform the raw data generated by users into curated data, i.e., contextualized data and knowledge that is maintained and made available for use by end-users and applications. This process significantly enhances business operations, especially when it comes to decision-making processes and analysis. Data curation involves various curation tasks, including identifying relevant data sources, extracting data and knowledge, cleaning, maintaining, merging, enriching and linking data and knowledge (see Chapter 1 for more detail). For example, a government can analyze citizens' opinions regarding urban issues [11] by curating their Tweets, Posts and comments on social media platforms, or an organization may target an advertisement at a group of users based on the contents posted on their social media pages. Thus, data curation acts as the glue between the raw data and analytics, providing an abstraction layer that relieves analysts from time-consuming, tedious and error-prone curation tasks.

Despite widespread efforts in data analytics, big data systems are still in their preliminary stages, with several unsolved theoretical and technical challenges stemming from the lack of adequate support for complex data curation tasks [233]. At present, current approaches mostly rely on: (1) purely algorithmic approaches [118]; while these approaches are dominant in a pre-defined context, they cannot be easily adapted to the large number of curation tasks that suffer from a lack of sufficient training data; and (2) scripting languages; while offering increased flexibility, they demand sophisticated programming and mastery of the associated low-level libraries to create and maintain complex curation tasks.

To facilitate the curation process, in this chapter we present the notion of the Knowledge Lake (i.e., a contextualized data lake) [34, 242], which provides a foundation for data analytics by automatically curating the raw data into actionable insights. We leverage a Cross-Document Co-reference Resolution (CDCR) algorithm to assist analysts in linking the raw data to domain knowledge and deriving insight. The algorithm facilitates the transformation of data by offering customizable feature extraction to harness the desired features from the data. The unique contributions of this chapter can be summarised as:

1. We present the notion of the Knowledge Lake to facilitate data analytics by automatically curating the raw data and preparing it for deriving insights. The term Knowledge here refers to a set of facts, information, and insights extracted from the raw data using data curation techniques such as extraction, linking, summarization, annotation, enrichment, and classification.

2. We propose an approach for transforming the raw data using a 'feature-based' data extraction technique. We also define a set of service-based APIs to facilitate data curation tasks, e.g., ingesting, extracting, cleaning, summarizing and classifying data, as well as extracting features. Examples of APIs in the category of 'extraction' include: named entities1, keywords2, synonyms3, stems and parts-of-speech4.

3. We offer an algorithm for linking the extracted data to the domain knowledge by producing a summary of data and developing its contextualization.
To do so, we leverage a CDCR technique [50] to identify coreferent entities within the data. For example, considering an analyst who is interested in gaining an accurate and deep understanding of cyberbullying, a keyword-based summary can enhance her comprehension of the threats that exist within the data.

4. We provide a simple rule language to assist analysts in querying the Knowledge Lake to facilitate analytical tasks.

We have implemented our approach as a set of reusable APIs, which are publicly available on GitHub5. We adopt a typical scenario for analyzing urban issues from Twitter. We demonstrate how our approach improves the quality of extracted data compared to the classical curation pipeline (in the absence of feature extraction and domain-linking contextualization). Figure 3.1 illustrates the proposed data curation foundry.

1 A named entity is a phrase that clearly identifies one item from a set of other things that have similar attributes, such as people, organizations and places.
2 A word or concept of great significance.
3 A word or phrase that means exactly or nearly the same as another word or phrase in the same language.
4 Part-of-speech is a category to which a word is assigned in accordance with its syntactic functions, such as noun, adjective and verb.
5 https://github.com/unsw-cse-soc/Data-curation-API.git

Figure 3.1: Overview of the Proposed Data Curation Foundry [37].

3.2 Related Works and Background

Data curation has been studied extensively in the past years. One of the main applications of data curation is transforming social data into actionable insights. Data curation has been defined as the active and ongoing management of data through its lifecycle of interest and usefulness [74]. In this chapter, we primarily aim at data creation and value generation, rather than the maintenance and management of data over time. More specifically, we focus on curation tasks that transform the raw social data (e.g., a Tweet in Twitter) into contextualized data and knowledge, including extracting, enriching, linking, annotating and summarizing social data.

The contributions of this chapter aim at breathing meaning into the raw data generated on social media and transforming it into contextualized knowledge for effective consumption in social analytics and insight discovery. For example, information extracted from Tweets is often enriched with metadata on geolocation. Current approaches in data curation rely mostly on data processing and analysis algorithms, including machine learning-based algorithms for information extraction, item classification, record linkage, clustering, and sampling. Snorkel [202] is an example of an algorithmic curation system for the rapid generation and annotation of raw data. The system relies on weak supervision and a set of user-defined functions to train the learning algorithms and label the data. DeepDive [93] is an algorithmic curation system for knowledge base construction. The system relies on statistical inference and machine learning for the extraction, cleaning, and integration of data into a knowledge base.
For example, consider a system that extracts named entities from Tweets (e.g., 'ISIS' and 'Palmyra' in 'There are 1800 ISIS terrorists in Palmyra, only 300 are Syrians'). The system may link the entities to a knowledge base (e.g., Wikidata6, Google Knowledge Graph7) to annotate and classify the Tweets into a set of predefined topics (e.g., using a naive Bayes classifier). Learning algorithms are undoubtedly the core components of data-curation platforms, where high-level curation tasks may require a non-trivial combination of several algorithms [19]; e.g., the IBM Watson question-answering system uses hundreds of algorithms to produce an answer [111].

6 https://www.wikidata.org
7 https://developers.google.com/knowledge-graph/

Another set of related works [46, 145, 200, 222, 254] focuses on the semantic analysis of social media contents to breathe meaning into information extracted from the raw data. Many of these approaches are directed at creating, enriching or reusing Knowledge Graphs (KGs). KGs are large knowledge bases that contain a wealth of information about entities (e.g., millions of people, organizations, places, topics, and events) and their relationships (for a list of existing knowledge bases and their descriptions, please refer to Chapter 2). A knowledge base can be curated manually or automatically. Many knowledge bases are interlinked at the entity level (i.e., a Web of linked data) to provide more insights and knowledge, which in turn is an excellent asset for facilitating data curation pipelines. For example, cognitive applications, knowledge-centric services, deep question answering, and semantic search and analytics can all benefit from such knowledge bases.

Alternatively, many data analytics platforms rely on scripting languages (rule- and query-based languages) for curating data [186]. Examples of these languages in academia include DEL8 (Data Extraction Language) and AQL [79] (for more detail on curation languages and their descriptions, please refer to Chapter 2). Typically, scripting languages use regular expressions, dictionaries and taxonomies to curate user-defined information needs [118]. Overall, even sophisticated and professional data scientists' tools force analysts to use scripting languages to retrieve and curate their information [186].

8 https://w3c.org

Finally, there has been a considerable amount of work on curating open data. These works provide domain-specific solutions for different curation tasks, including leveraging crowdsourcing techniques to extract keywords from Tweets in Twitter [42, 241], named entity recognition in Tweets [206], linking entities for enriching and structuring social media content [244], and sentiment analysis and identifying mental health cases on Facebook [208]. However, to the best of our knowledge, no work has proposed a general-purpose approach for curating open data. Our proposed solution enables analysts to automatically link the data and knowledge generated on different social networks, uncover hidden patterns and generate insight.

Motivating Scenario. Consider an analytic task related to 'understanding a government budget in the context of urban issues': A typical government budget denotes how policy objectives are reconciled and implemented in various categories and programs.
In particular, budget categories (e.g., 'health', 'social services', 'transport', and 'employment') define a hierarchical set of programs (e.g., 'medicare benefits' in health, and 'aged care' in social services). These programs refer to a set of activities or services that meet specific policy objectives of the government [150]. With traditionally adopted budget systems, it would be challenging to evaluate government services' requirements and performance. For example, it is paramount to stabilize the economy through timely and dynamic adjustments to expenditure plans that consider related social issues, for instance, problems or conflicts raised by society, ranging from local to national issues such as health, social security, public safety, welfare support, and domestic violence [150]. Therefore, the opportunity to link ongoing social problems to budget categories provides the public with increased transparency, and government agencies with real-time insight into how to make decisions.

3.3 Solution Overview

Social media allows people from different walks of life to share their ideas and views by tagging, commenting on or retweeting each other's Posts. Examples of social media networks include Twitter9, Facebook10, and LinkedIn11. Analyzing social media posts allows companies and business owners to promote brands, connect to new customers and foster their business. However, this requires businesses to transform the raw social media data into meaningful insights. In this context, we propose an automated and feature-based data curation foundry. We augment users in curating data by suggesting a set of curation services that offloads analysts from many tedious and time-consuming curation tasks. We propose a set of features that help analysts in curation tasks to grasp the salient aspects of data. An example of a feature is mentions of a person in Tweets or other social media Posts. In the following, we describe the different types of features we extract from social media data.

9 www.twitter.com
10 www.facebook.com
11 www.linkedin.com

3.3.1 Feature Extraction

In this section, we introduce the different types of features we extract from social media contents. Currently, we extract two types of features: surface-level features and semantic-level features. Surface-level features are those that can be extracted from social media contents by analyzing their syntactic characteristics. In contrast, semantic-level features describe social media contents semantically.

1. Surface Level Features

(a) Schema-based features: This feature is related to the information we extract from the properties of a social item. For example, according to the Twitter schema12, a Tweet may have attributes such as text, source and language, and a user may have attributes such as username, description and timezone.

(b) Lexical-based features: This feature extracts information from social media texts, e.g., keywords, topics, phrases, abbreviations, special characters (e.g., a quotation in the text of a Tweet), slang, informal language and spelling errors.

(c) Natural language-based features: This feature is related to entities that can be extracted through the analysis and synthesis of natural language (NL) and speech, such as part-of-speech tags (e.g., verb, noun) and named entity types (e.g., person, organization, product).

12 https://developer.twitter.com/en/docs/tweets/data-dictionary/overview/tweet-object
For example, 'Malcolm Turnbull' is an instance of the entity type 'person'.

(d) Time-based features: This feature is related to the information that can be extracted from the time fields in the schema of social media contents (e.g., 'Tweet.Timestamp' and 'user.TimeZone'). For example, a Tweet in Twitter or a Post in Facebook may contain a date, e.g., '3 May 2017'.

(e) Location-based features: This feature is related to mentions of locations in the schema of items, for example, 'Tweet.GEO' and 'user.Location' in Twitter; the content of a Tweet may also contain mentions of locations, e.g., 'Sydney', a city in Australia.

(f) Metadata-based features: This feature is related to a set of data that describes and gives information about the social items. For example, it is important to know the number of followers (followers count) and friends (friends count), the number of times a social item has been visited (view count) or liked (like count), or the sentiment of the content posted on social media.

2. Semantic Level Features

(a) Schema-based semantics: We use knowledge services such as Google Cloud Platform13, AlchemyAPI14, Microsoft Computer Vision API15 and Apache PredictionIO16 to extract various features from the social media properties. For example, if a Tweet contains an image, it is possible to extract objects (e.g., people) from the image.

13 https://cloud.google.com/
14 https://www.ibm.com/watson/alchemy-api.html
15 https://azure.microsoft.com/en-gb/services/
16 https://github.com/PredictionIO/

(b) Lexical-based semantics: We leverage knowledge sources such as dictionaries and WordNet17 to enrich lexical-based features with their synonyms, stems, hypernyms18, hyponyms19 and more.

(c) Natural language-based semantics: We leverage knowledge sources such as WikiData, Google KG and DBPedia to enrich natural language-based features with similar and related entities. For example, Malcolm Turnbull is similar to Tony Abbott (they both acted as the prime minister of Australia), but Malcolm Turnbull is related to the University of Sydney.

(d) Temporal-based semantics: We leverage different knowledge sources and services (such as events and storyline mining) to enrich time-based features. For example, a Tweet posted from Australia might be enriched with all events within that time frame. For instance, if a Tweet was posted on '3 May 2017', we enrich the Tweet as related to the 'Australian Budget', as we know from knowledge bases that the Australian Treasurer hands down the budget on 3 May every year.

(e) Metadata-based semantics: We use metadata-based features (such as followers count and share count) to calculate semantics such as the influence of a user. These semantics enable analysts to get more insight from social media posts and analyze a user's capacity to affect the character, development, or behaviour of other social users.

17 https://wordnet.princeton.edu/
18 A hypernym is a word with a broad meaning constituting a category into which words with more specific meanings fall. For example, colour is a hypernym of red.
19 A hyponym is a word of more specific meaning than a general or superordinate term applicable to it. For example, a spoon is a hyponym of cutlery.
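To make the feature taxonomy above concrete, the following minimal sketch extracts a few surface-level features from a Tweet represented as a Python dictionary. The field names follow the Twitter schema discussed above, but the helper patterns and the function itself are illustrative simplifications of ours, not part of the actual curation APIs.

import re

def surface_features(tweet):
    # Extract a few surface-level features from a Tweet dictionary.
    text = tweet.get("text", "")
    user = tweet.get("user", {})
    return {
        # Schema-based features: read directly from the item's properties.
        "lang": tweet.get("lang"),
        "source": tweet.get("source"),
        # Lexical-based features: simple syntactic analysis of the text.
        "keywords": re.findall(r"[A-Za-z]+", text.lower()),
        "hashtags": re.findall(r"#\w+", text),
        # Time- and location-based features: read from the schema.
        "timestamp": tweet.get("created_at"),
        "location": user.get("location"),
        # Metadata-based features: data describing the item itself.
        "followers_count": user.get("followers_count"),
    }

tweet = {"text": "NSW hospitals need more funding #health",
         "lang": "en", "source": "web", "created_at": "3 May 2017",
         "user": {"location": "Sydney", "followers_count": 120}}
print(surface_features(tweet))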
Identifying and writing features is an extremely time-consuming and tedious task, especially when an analyst needs to extract features from a very large dataset [19]. We have designed a set of curation APIs to assist analysts in curating data and extracting features. We implemented a set of uniformly accessible micro-services, which can be cascaded to produce analysts' desired features. For example, to identify Tweets of a positive sentiment that relate to the 29th prime minister of Australia (Malcolm Turnbull), we may craft a high-level feature that combines sentiment analysis (a metadata-based feature) with named entities (a natural language-based feature). Figure 3.2 illustrates examples of features that can be extracted from a Tweet.

Figure 3.2: Syntactical and semantical features that can be extracted from a Tweet [50].

3.3.2 Data Curation Services

To augment users in extracting features, we propose a set of curation APIs. The APIs are implemented as micro-services and provide services such as extraction, classification, linking, and indexing. The curation services use natural language processing technology and machine learning algorithms for curating the raw data; for example, they extract semantic metadata from social media contents, such as information on people, places, and companies, and link it to knowledge graphs such as WikiData and Google Knowledge Graph using similarity techniques, or classify the extracted entities using classification services. We also provide curation services for performing content analysis on internet-accessible Web pages, HTML or text content. The full description of the services is available in Chapter 6. Also, a technical report [50] is available on arXiv, which further guides analysts on utilizing the services for their curation tasks.
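To illustrate how such micro-services could be cascaded into the high-level feature described above (positive Tweets that mention Malcolm Turnbull), consider the following minimal sketch. The two service functions are simplified stand-ins of ours, not the actual curation APIs; in practice, any sentiment analyzer and named-entity recognizer could be substituted.

def sentiment_service(text):
    # Stand-in for the sentiment micro-service (a metadata-based feature).
    positive = {"great", "good", "improve", "launch"}
    negative = {"bad", "debacle", "fail"}
    words = set(text.lower().split())
    return "positive" if len(words & positive) > len(words & negative) else "negative"

def entity_service(text):
    # Stand-in for the named-entity micro-service (a natural language-based feature).
    known_people = ["Malcolm Turnbull", "Jillian Skinner"]
    return [p for p in known_people if p in text]

def positive_mention(text, person):
    # Cascade the two micro-services into one high-level feature.
    return sentiment_service(text) == "positive" and person in entity_service(text)

print(positive_mention("Malcolm Turnbull announced a great health plan",
                       "Malcolm Turnbull"))  # True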
Overall, the curation APIs provide four essential services for automating the curation tasks: Extraction Service, Linking Service, Classification Service, and Indexing Service.

1. Extraction Service: This service extracts syntactical features from social media contents. The Extraction Service can obtain features from both structured and unstructured data. The service provides a wide range of APIs, including named entity recognition, part-of-speech tagging, synonym, stem, and URL extraction.

2. Linking Service: We rely on the Linking Service to automate the extraction of semantic-level features. This service can be used for enriching social media contents, summarization, and computing the similarity between objects. The enrichment service utilizes knowledge bases such as Google Knowledge Graph and Wikidata. The summarization service identifies and groups the semantically related keywords.

3. Classification Service: This service facilitates utilizing machine learning algorithms for classifying social media contents. The Classification Service assigns social media contents to a set of pre-defined target categories or classes. This service facilitates the usage of algorithms such as naive Bayes, Support Vector Machine (SVM), decision tree, random forest, linear regression, logistic regression, and neural networks.

4. Indexing Service: This service enables analysts to scan and retrieve a curation environment quickly without the operational burden of managing it. For indexing the social media contents, we utilized ElasticSearch20, which speeds up querying the data and deriving insight.

20 https://www.elastic.co/

3.4 Knowledge Lake

In the previous section, we discussed the different features that we can extract from social media contents. However, to transform the raw data from a large number of independently-managed datasets, such as Twitter, Facebook, and LinkedIn, into actionable knowledge, there is a need to organize and facilitate the way users deal with these datasets. In recent years, the data lake has been introduced as a centralized repository containing limitless amounts of raw data ingested from different data sources. The rationale behind the data lake is to store the raw data and let the data analyst decide how to curate it later. Instead, we introduce the notion of the Knowledge Lake, a contextualized data lake. The term Knowledge in a Knowledge Lake refers to a set of facts, information, and insights that create a contextualization layer for transforming the raw data into knowledge. A Knowledge Lake provides the foundation for big data analytics by automatically curating the raw data in a data lake and preparing it for deriving insights. On top of the Knowledge Lake, we provide a rule language to enable analysts to query and retrieve the data. Figure 3.3 illustrates the architecture and the main components of a Knowledge Lake. Technical details of the Knowledge Lake and how it organizes the information can be found in [35].

Figure 3.3: Overview of a Knowledge Lake [35].

In the rest of this section, we discuss how we can automatically link information extracted from social media contents to a Knowledge Lake.
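As a minimal sketch of the layered organization described above, assuming a simple dictionary-based representation of ours (the actual implementation described in [35] is more elaborate), a curated item in a Knowledge Lake could preserve the raw item alongside its extracted features and its links into domain knowledge:

def contextualize(raw_item, features, links):
    # Keep the raw, featurized and contextualized layers of an item together.
    return {"raw": raw_item,        # the original item, stored untouched
            "features": features,   # surface- and semantic-level features
            "links": links}         # links into domain knowledge (e.g., Budget-KB)

item = contextualize(
    raw_item={"text": "Lidcombe Hospital praised for its new eHealth strategy"},
    features={"keywords": ["hospital", "ehealth"], "sentiment": "positive"},
    links=[{"mention": "Lidcombe Hospital",
            "kb_entity": "Bankstown-Lidcombe Hospital",
            "category": "Health"}])
print(item["links"][0]["kb_entity"])

Keeping the three layers together lets analysts query the raw data, the featurized data, or the contextualized knowledge, depending on the task at hand.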
3.4.1 Building Knowledge Lake

Data extracted from social media may be interpreted in many different ways. To make sense of the extracted data and to augment users' comprehension, it is beneficial to enrich the data with different features to produce contextualized knowledge. We do this by building a Knowledge Lake that implements a rich structure of relevant entities, their semantics, and relationships. We then utilize a CDCR technique to link the information extracted from social media contents to entities in the Knowledge Lake. In this manner, we can discover hidden relationships and knowledge amongst extracted entities, group related entities (text or non-text), or find paths describing the relationships among entities.

Step 1: Constructing Budget-KB. In this section, we explain the technique used to build Budget-KB, a domain-specific knowledge base that represents entities relevant to the Australian Government's budget. The Budget-KB is made up of a set of concepts related to the Australian budget organized into a taxonomy, instances for each concept, and relationships among these concepts. Figure 3.4 illustrates a sample fragment of the Budget-KB.

Figure 3.4: A sample fragment of the Budget-KB [37].

To build the knowledge base, we first identified the list of budget categories and their related programs provided by Australian government data services21. Then, we filtered out the irrelevant categories and selected popular ones. For example, we have identified:

21 http://data.gov.au/

1. people, from GPs and nurses to health ministers and hospital managers,
2. organizations, such as hospitals, pharmacies and the Nursing Federation,
3. locations, states, cities and suburbs in Australia,
4. health funds, such as Medibank, Bupa and HCF,
5. drugs, such as amoxicillin, tramadol and alprazolam,
6. diseases, such as cancer, influenza and tuberculosis,
7. medical devices, such as gas control, blood tube and needle,
8. job titles, such as GP, nurse, hospital manager, secretary of NSW Health and NSW health minister, and
9. keywords, such as healthcare, patient, virus, vaccine and drug.

We also extracted a set of concepts for each category using the introduced curation APIs [53]: locations from auspost22, doctors from the Australian doctors directory23 (including GPs, specialists and nurses), hospitals from myHospitals24, health funds from health-services25, drugs from drug-index26, diseases from medicine-net27, medical devices from the FDA28, job titles from compdata29, and keywords from the Australian national health and medical research council30. The concepts work as the seed data for the categories, which are enriched using several readily available knowledge bases such as Wikidata31, Google Knowledge Graph32 and WordNet33.

22 http://auspost.com.au/postcode/
23 https://www.ahpra.gov.au/
24 https://www.myhospitals.gov.au/browse-hospitals/
25 http://www.privatehealth.gov.au/
26 http://www.rxlist.com/
27 http://www.medicinenet.com/
28 http://www.fda.gov/
29 http://compdatasurveys.com/compensation/healthcare
30 https://www.nhmrc.gov.au/
31 https://www.wikidata.org/
32 https://developers.google.com/knowledge-graph/
33 https://wordnet.princeton.edu
For example, we extract relationships from Wikidata to form a relationship graph, e.g., 'Bankstown Lidcombe Hospital' located-in 'Bankstown, Sydney, NSW, Australia', and we have used the Google Knowledge Graph API to link entities to Wikipedia; e.g., by using 'Jillian Skinner' as an input, we have learned that 'Jillian Skinner' is-a 'person', linked-to 'https://en.wikipedia.org/wiki/Jillian_Skinner', is 'member-of' 'New South Wales Legislative Assembly', and is-a 'New South Wales Minister for Health' for Australia. Figure 3.4 shows a small snippet of the created domain knowledge, which illustrates the above notions. As presented, 'Jillian Skinner' is a 'person' and is the 'Health Minister for New South Wales in Australia' (see the link between this person and the job title in Figure 3.5). As another example, 'Lidcombe Hospital' is an instance of a hospital and is located in Western Sydney (a location, a suburb in NSW, Australia).

Figure 3.5: A typical scenario for analyzing urban social issues from Twitter as they relate to the government budget, highlighting how our solution transforms the raw data into contextualized data by leveraging domain knowledge [37].

Step 2: Linking Features and the Budget-KB. So far, we have presented how we construct the Budget-KB and how we leverage the curation APIs to curate the data. In this section, we explain a method to link the curated data to Budget-KB categories. Identifying and linking entities across various information sources can be considered the basis of knowledge acquisition and is at the heart of analytics. We achieve this by identifying coreferences between entities extracted from the data (e.g., Tweets, Posts) and those that exist in the Budget-KB [50]. More precisely, we find the similarity among the data objects in Tweets (e.g., named entities that have been extracted from the text of the Tweet) and the entities in the Budget-KB (e.g., keywords and named entities related to health, such as hospitals, GPs and drugs).

We have designed a similarity API to find similarity not only among strings, numbers and dates, but also among entities (e.g., finding similarity among attributes and their values), using a wide range of similarity techniques such as dice, cosine, TF-IDF, jaccard, euclidean, city block and levenshtein. For example, 'Bankstown-Lidcombe Hospital' is related to an item in our Knowledge Lake (Budget-KB) or an external knowledge graph (e.g., Google-KG or a Webpage in Wikipedia)34.

34 https://en.wikipedia.org/wiki/Bankstown_Lidcombe_Hospital
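A minimal sketch of two of the token-based similarity functions mentioned above, Jaccard and cosine over word tokens, is given below; the actual similarity API covers many more measures and also compares attributes and values, not just plain strings.

import math
from collections import Counter

def jaccard(a, b):
    # Jaccard similarity between the two token sets.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def cosine(a, b):
    # Cosine similarity between the two token-frequency vectors.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = math.sqrt(sum(v * v for v in ca.values()))
    norm *= math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

print(jaccard("Bankstown Lidcombe Hospital", "Lidcombe Hospital"))  # ~0.67
print(cosine("Bankstown Lidcombe Hospital", "Lidcombe Hospital"))   # ~0.82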
Scalability: To provide a scalable approach, we divide the CDCR-similarity process into several stages and assign each stage to a specific MapReduce (MR) job. In the first MR job, we pre-process the information item based on the social network schema. After this phase, we use the curation micro-services to generate the surface-level and semantic-level features. In the final MR job, we generate the (cross-document) coreference entities and classify them into related summaries to assist analysts in deriving insights from the contextualized knowledge (Figure 3.6); a minimal sketch of the blocking idea behind this process is given after the summary list below.

Figure 3.6: Scalable CDCR-similarity process [37].

For example, consider an analyst who is interested in identifying Tweets on Twitter discussing a social issue related to 'health'. Identifying such Tweets is largely subjective: one analyst may consider a Tweet relevant to the 'health' social issue if it only contains the 'health' keyword, while another analyst may consider a Tweet relevant to the 'health' social issue if it contains mentions of Australia's current health minister, 'Hon Greg Hunt', and a negative opinion (i.e., a negative sentiment). To respond to the analyst's needs in extracting the information, we generate the following summaries using the adopted CDCR process:

1. Keyword-based summaries (in the category of lexical-based features): for example, the feature keyword('health') can be used to identify Tweets that contain mentions of the keyword 'health'.

2. Named-entity summaries (in the category of natural language-based features): for example, the feature named-entity('Hon Greg Hunt', person) can be used to identify Tweets that contain mentions of Australia's health minister, Mr Hunt.

3. Negative-sentiment summaries (in the category of metadata-based features): for example, the feature sentiment('negative') can be used to identify Tweets that express a negative opinion.
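The following sketch illustrates the blocking idea behind the scalable CDCR-similarity process referenced above: entities are first grouped by a cluster key (here, the entity type), so that pairwise similarity is only computed within groups rather than over the full cross-product. It is an in-memory simplification of ours, not the actual MapReduce implementation.

from collections import defaultdict
from itertools import product

def candidate_pairs(tweet_entities, kb_entities):
    # Partition entities by a cluster key (here, the entity type), so that
    # similarity is computed within partitions, not over the full cross-product.
    buckets = defaultdict(lambda: ([], []))
    for name, etype in tweet_entities:
        buckets[etype][0].append(name)
    for name, etype in kb_entities:
        buckets[etype][1].append(name)
    for etype, (left, right) in buckets.items():
        for a, b in product(left, right):
            yield etype, a, b   # each pair is scored by a similarity function downstream

tweets = [("Lidcombe Hospital", "Hospital"), ("cancer", "Disease")]
kb = [("Bankstown-Lidcombe Hospital", "Hospital"), ("influenza", "Disease")]
for pair in candidate_pairs(tweets, kb):
    print(pair)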
The preceding steps focused on the extraction of raw data, followed by contextualizing the data. However, even when an analyst has a clear goal of her information needs, it is important to pinpoint her required insight effectively. For example, the analyst may assume a Tweet as relevant to Australian budget if the Tweet has mentions of Australia (an instance of type Location) and Tweeted on or around 3 May 2017 (in Australia, the Treasurer handed down the budget each year on 3 May). To assist analysts to formulate their information needs through our pro- posed summaries, we relied on curation rules (See Chapter 5 for detailed description on data curation rules). We define a curation rule as a set of features in forms of: < F eature >::=< Dataset > . < F unction > . < Operator > (< string|integer|boolean >) < Rule >::=< F eature1 > [AN D|OR|N OT < F eature2 >] Where Dataset represents the source a rule operates for curating the data. F unction performs the curation task, and Operator represents the condition for a rule to curate an item. For example, for curating Tweets contains ‘health’ and ‘Hon Greg Hunt’ the curation rule will be inform of: T weet.Keyword.Contains(‘Health(cid:48)) AN D T weet.Entity.P erson(‘HonGregHunt(cid:48)) Rule1 = or to extract a Tweet that contains health with negative sentiment, the an- alyst may write a rule as: T weet.Keyword.Contains(‘Health(cid:48)) AN D T weet.Sentiment.N egative(‘true(cid:48)) Rule1 = 84 3.5 Implementation and Experiment 3.5.1 Implementation We identify and implement a set of APIs and made them available (on GitHub 35) to researchers and developers to assist them in adding features easily – such as extracting keyword, part-of-speech, and named-entities (e.g., persons, locations, organizations, companies, products, diseases, drugs, etc.) providing synonyms and stems for extracted information items leveraging lexical knowledge bases for the English language (e.g., WordNet), linking ex- tracted entities to external knowledge bases (e.g., Google Knowledge Graph and Wikidata), discovering similarity among the extracted information items, (e.g., calculating the similarity between string, number, date and time data), classifying, sorting and categorizing data into various types, and indexing structured and unstructured data - into their applications. The technical implementation of these APIs can be found in Chapter 6. 3.5.2 Dataset The Australian government budget sets out the economic and fiscal outlook for Australia and shows the government’s social and political priorities. The Treasurer handed down the budget 2016-17 at 7.30 pm on Tuesday 3 May 2016. To properly analyze the proposed budget, we have collected all Tweets from one month before and two months after this date. In particular, for these three months, we have selected 15 million Tweets, persisted and indexed them in MongoDB 36. We analyzed the performance of our approach by examining its accuracy using precision and recall. Besides, we study the efficiency of our 35https://github.com/unsw-cse-soc/Data-curation-APIs.git 36mongodb.com 85 approach over 1 million Tweets, of which 409,364 were identified as relevant to the ‘health’ category. 3.5.3 System Setup All the experiments were performed on Amazon EC2 machines (aws.amazon.com/ec2), Sydney Australia region, using instances running Ubuntu Server 14.04. To demonstrate the usability of our approach, we experiment its Accuracy, using metrics such as recall and precision. 
We also demonstrate the efficiency of our approach in pairing entities in terms of execution time. For the efficiency experiments, we have scaled the experiment over three different configurations on Amazon EC2: a single machine, four machines and eight machines.

3.5.4 Evaluation

For the initial evaluations, we focus on the efficiency of our approach in terms of pairing entities (e.g., hospitals, health organizations, pharmaceutical companies, health services, drugs, diseases and people) with the categories in the Budget-KB. We examined the efficiency using different similarity metrics, including edit distance, Q-grams, jaccard, and cosine. We calculated the average similarity score and use it for linking the entities and categories. Here, edit distance and Q-grams are character-based functions, while jaccard and cosine are token-based functions. Figures 3.7 and 3.8 show the execution times taken in making coreference decisions by comparing entities. In particular, generating entity pairs and computing similarity among them is a time-consuming task and requires high-performance computing resources on very large datasets such as Twitter. For example, for around 20k entities, the algorithm generated about 9 million pairs, which highlights that pairwise entity comparison will become exponential across Tweets (for further detail on efficiency, please see [37]).

Figure 3.7: Execution time: one machine, four machines, and eight machines (approximately 3998, 623 and 304 minutes, respectively).

Figure 3.8: Execution time of linking entities and categories using different similarity metrics (times in seconds).

Entity Type (Concept)       # in Budget-KB   # in Tweets   Edit distance   Q-grams   Jaccard   Cosine
Hospital                    728              5264          28              96        15        10
Health Organization         25               1012          5               24        9         8
Pharmaceutical Companies    1183             1105          20              61        12        10
Health Services             117              21912         948             2060      287       203
Drugs                       24420            6233          451             1158      198       126
Diseases                    4627             1401          21              29        8         6
People                      17189            656           11              34        17        13
Health Related Keywords     649              409364        1471            6489      627       361

3.5.5 Analysing Budget-KB Accuracy

In this section, we demonstrate the accuracy of our approach in curating data using metrics such as precision and recall. Precision is the number of
The approach that better improves the precision and recall would have considered as the successful one. For the proposed featurization tech- nique, we have used three different features: (i) We used Budget-KB Entity Matching to link an entity in a Tweet to the entities in the KB, (ii) We have used Google Knowledge Graph API to indicates the existence of a health- related entity in the Google KG with an entity in a Tweet, and (iii) We have used URL Entity Matching to analyze the content of the URLs provided in the tweet and to identify the health-related entities and keywords. Then, we created a set of binary classifiers to classify the extracted Tweets. A binary classifier receives a collection of input data as the training set and creates a model to identify an item is relevant or not. For example, in our scenario, a 88 classifier predicts a Tweet is related to ‘health’ or not. Next section explains how we created the training set and classifiers in detail. To train binary classifiers, we created two different training sets. The first training set was created through KEYM approach, and the second training set was created through the proposed featurization technique. Consider- ing that we have around 15 million tweets, using the KEYM approach, we identified 50 thousand Tweets as relevant to health. Next, we applied some preprocessing on tweets: for example, we eliminated Tweets containing less than four keywords, Tweets that contain non English words, and the URLs. We also removed the duplicate Tweets (e.g., retweeted tweets). Finally, we have generated around 20 thousand preprocessed Tweets. We labelled the extracted Tweets as relevant and fed them as an input to the machine learn- ing algorithm (naiveBayes, KNN and SVM classifiers). In addition, we feed the classifier with a dataset of irrelevant Tweets from our previous works [42] which manually labelled through crowds. For the test set, we have manu- ally labelled 600 Tweets which contain 322 health-related and 278 unrelated Tweets. We consider each Tweet as a document, and process it by stemming, removing stop words, punctuations and numbers, and lower casing the entire Tweet. We followed the same procedure to create the second dataset for evaluating the performance of the featurization technique. As illustrated in Figure 3.9, our proposed approach significantly im- proves the quality of extracted knowledge compared to the classical curation pipeline (in the absence of feature extraction and domain-linking contex- tualization). The proposed technique could identify many relevant Tweets (that should be contained in the returned results) and accordingly, the ac- curacy of the result can be improved. Notice that, accuracy is the proxim- ity of measurement results to the true value, and calculated as accuracy = 89 Figure 3.9: (A) Comparison between featurized and classical classification, (B) Sample of classified Tweets Using the proposed solution (T P + T N )/(T P + T N + F P + F N )), where T P is True Positive, T N is True Negative, F P is False Positive, and F N is False Negative. 
As ongoing work, to improve the precision and recall: (i) we are going to use rules in combination with the machine learning approach for further filtering of results, (ii) we will use some refinement techniques, e.g., merging the results obtained from the KEYM approach with the Budget-KB, (iii) we will add more features to our model, and (iv) we are enhancing the model, which currently supports unigrams, to support n-grams, and leveraging multiple machine learning techniques for further filtering of the results.

Social Issues. Identifying social issues is challenging, as it requires the budget analyst to understand the candidate Tweets properly. To provide the candidate Tweets, we identified the Tweets having negative sentiments. To achieve this goal, we used the classified Tweets. For example, our proposed approach classified 5823 Tweets linked to anxiety, 2934 Tweets related to diabetes, 22430 Tweets related to cancer, and 16931 Tweets associated with mental health. We reused the sentiment classifier implemented in Apache PredictionIO (http://prediction.io) to identify the Tweets with negative sentiment. For example, out of 2934 diabetes-related Tweets, the algorithm identified 615 Tweets with negative sentiment. As another example, we identified 1549 Tweets with negative sentiment in the mental health category. Later on, the analyst is able to use the proposed declarative language to analyze the candidate Tweets based on a specific goal, e.g., to identify Tweets discussing a social issue related to health, and specifically about Medicare37 or about public hospital services; a minimal sketch of evaluating such a rule is shown below.

37 Australian federal health insurance program
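As an illustration of how such a declarative rule (in the language of Section 3.4.1) could be evaluated over curated Tweets, consider the following sketch; the feature functions are simplified stand-ins of ours for the actual curation services, and the Tweets are invented examples.

def keyword_contains(tweet, word):
    return word.lower() in tweet["text"].lower()

def entity_person(tweet, name):
    return name in tweet.get("persons", [])

def sentiment_negative(tweet, flag):
    return (tweet.get("sentiment") == "negative") == flag

def rule(tweet):
    # Tweet.Keyword.Contains('Health') AND Tweet.Sentiment.Negative('true')
    return keyword_contains(tweet, "Health") and sentiment_negative(tweet, True)

tweets = [{"text": "Mental health budget cuts are a debacle",
           "sentiment": "negative", "persons": []},
          {"text": "Great news for health services",
           "sentiment": "positive", "persons": ["Hon Greg Hunt"]}]
print([t["text"] for t in tweets if rule(t)])  # keeps only the negative health Tweet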
3.6 Conclusion and Future Work

Big data analytics has become the quintessential engine for extracting knowledge and deriving insights from the vastly growing amounts of local, external and open data. With the advent of widely available data capture and management technologies, coupled with intensifying global competition and fluid business and social requirements, organizations are rapidly shifting to the datafication of their processes [48]. For example, understanding and analyzing open data is now recognized as a strategic priority for governments. In this context, the data curation process becomes a vital analytics asset for understanding the data. To address this need, we have introduced a general-purpose data curation pipeline. The goal is to facilitate analytical tasks by transforming raw data into featurized data (through the proposed feature engineering approach) and, after that, contextualized data (which requires constructing the domain knowledge and linking the extracted data to it) [239]. We have designed and implemented a set of reusable APIs to assist analysts through the curation process. As ongoing and future work, we are extending the Budget-KB by identifying further relevant concepts and their instances in other budget categories and programs.

In the next chapters, we extend our declarative rule language. We explain techniques to adapt curation rules to enable analysts to query and analyze the data more conveniently. We also discuss how a Knowledge Lake enhances users' understanding of data to better formulate their preferences.

Chapter 4

Feature-Based Rule Adaptation in Dynamic and Constantly Changing Environment

In this chapter, we present an adaptive technique for adapting data curation rules in a dynamic and changing environment. Curation rules have been used increasingly to augment learning algorithms in cases where algorithms are not working well or lack enough training data. However, in dynamic curation environments, there is a need for an analyst to adapt rules to keep them applicable and precise. Rule adaptation has been proven to be painstakingly difficult, error-prone, and time-consuming. We propose an adaptive approach for adapting curation rules. Our approach utilizes an online learning algorithm to learn the optimal modifications for a rule based on the feedback collected from the curation environment. We also propose a summarization technique to boost rules to curate a larger number of items.

In Section 4.1, we present an overview of data curation rules. We discuss the related works in Section 4.2. Then, in Section 4.3, we explain the research problem. We discuss our solution in Section 4.4. Next, we present the performance of our approach on three different curation domains: mental health, domestic violence, and budget. The experimental results showed that our approach can significantly improve the precision of rules in annotating data (by as much as 29% in precision compared to the initial results). Finally, we discuss future works and conclude the chapter in Section 4.7. The content of this chapter is derived from the following papers:

• A Tabebordbar, A Beheshti, B Benatallah, and M C Barukh, Adaptive rule adaptation in unstructured and dynamic environments, International Conference on Web Information Systems Engineering, Springer, 2019, pp. 326–340 (ERA Rank A).

• A Tabebordbar, A Beheshti, B Benatallah, and M C Barukh, Feature-Based and Adaptive Rule Adaptation in Dynamic Environments,
Data Science and Engineering, 2020 Jun 25:1–7.

• A. Tabebordbar and A. Beheshti, Adaptive rule monitoring system, 2018 IEEE/ACM 1st International Workshop on Software Engineering for Cognitive Services (SE4COG), IEEE, 2018, pp. 45–51. (Best paper award)

4.1 Introduction

Data curation denotes the processes and activities related to the integration, annotation, publication, and presentation of data throughout its lifecycle [37]. One category of data curation is data annotation, which aims at labelling raw data to generate value and increase productivity. Data annotation has been used extensively in various machine learning tasks, such as information extraction, item classification, and record linkage [34, 202, 203]. However, in dynamic environments, e.g., Twitter and Facebook, where data is continuously changing, relying on purely algorithmic approaches does not scale to the needs of businesses that must annotate data over an extended period, because algorithms make predictions based on historical data only. In dynamic environments, the distribution of data changes, and algorithms need to be updated to capture these changes, which is expensive and time-consuming.

In recent years, several pioneering solutions (e.g., [27, 118, 164, 183, 235, 259]) have been proposed to augment algorithms with rule-based techniques. Rules can alleviate many of the shortcomings inherent in purely algorithmic approaches. Rules can be written by non-technical analysts, which is less expensive than training algorithms through experts [118]. Updating rules is faster than retraining algorithms, and rules can supplement algorithms in cases where they are not working well [235]. To keep a rule applicable and precise [27, 118, 183, 235, 259], an analyst needs to adapt the rule (i.e., modify it to become better suited to the curation environment) based on changes in that environment. Rule adaptation has been proven to be painstakingly difficult, as the analyst needs to understand the context of the data and the impact of the modifications she applies to the rule [182]. In many cases, the analyst needs to apply different changes to identify the optimal one. This problem is exacerbated in dynamic curation environments, as adaptation is not a single rule-modification task: the rule needs to be updated over its lifetime.

In this chapter, we take a first step toward creating an adaptive rule adaptation model for dynamic and constantly changing environments. While previous approaches rely on analysts to identify the optimal modifications for rules, we propose a different learning task. We focus on incrementally adapting a rule based on the changes in the curation environment; hence, we focus on offloading analysts and updating a rule autonomously. We do this by utilizing a Bayesian multi-armed-bandit algorithm, which learns the optimal modification by observing a rule's performance over time. Besides, previous systems adapt rules at the syntactic level, e.g., keywords and regular expressions. Syntactic-level adaptation limits a rule's ability to annotate data, as the rule skips a large number of semantically related items. Our work therefore couples syntactic-level features with conceptual features to boost rules to annotate a larger number of items.

Overall, our solution is made up of the following stages (a skeleton of this loop is sketched after the list): (1) each time a rule annotates a set of items, we extract a set of candidate features (e.g., syntactic and conceptual features) as the potential modifications; (2) then, a Bayesian multi-armed-bandit algorithm determines the optimal modification for the rule by estimating a probability distribution for the candidate features; and (3) over time, by annotating more items, the algorithm learns the performance of the candidate features better and modifies the rule to keep it applicable and precise.
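The following Python skeleton illustrates these three stages end to end. It is a conceptual sketch rather than the actual implementation: rule, extract_features, collect_feedback, bandit, and adapt are hypothetical placeholders for the components introduced in the remainder of this chapter.

    # Conceptual sketch of the adaptation loop; every helper named here is a
    # hypothetical stand-in for a component described later in the chapter.
    def adaptation_loop(rule, batches, extract_features, collect_feedback, bandit, adapt):
        for items in batches:                              # stream of incoming items
            annotated = [i for i in items if rule.matches(i)]
            candidates = extract_features(annotated)       # stage 1: syntactic + conceptual
            bandit.update(candidates, collect_feedback(annotated))  # stage 2: reward/demote
            rule = adapt(rule, bandit.best(candidates))    # stage 3: keep rule precise
        return rule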
Example: Rule Adaptation Through an Analyst. Consider a government that intends to analyze the quality of its social services, e.g., mental health services, domestic violence services, and aged care services, based on citizens' opinions. Social media is one of the sources that decision-makers may rely on to understand public satisfaction with their services. However, it is particularly challenging to take a representative sample of such data to train learning algorithms to analyze citizens' opinions, because social media users contribute millions of pieces of data every day, trickling in through Tweets and posts. Alternatively, rules can be used to augment learning algorithms for annotating data in dynamic environments. However, as the public uses different keywords and hashtags to express their opinions, rules need to be updated to remain applicable and precise. For example, consider Rule1, which annotates Tweets relevant to 'Mental Health' if a Tweet contains both the 'Mental' and 'Service' keywords:

Rule1 = IF tweet contains ('Mental') keyword AND tweet contains ('Service') keyword THEN tag with 'Mental Health'.

To keep Rule1 applicable, the analyst needs to modify the rule based on changes in the curation environment. This task is particularly challenging as social media data is 'ever changing and never ending' [118]. Besides, the analyst does not know the universe of the data, or may need to consider too many complex conditions, which might be difficult to integrate into the rule. Thus, the analyst may not craft the perfect rule that adequately annotates the data. Figure 4.1 shows a typical workflow for adapting rules through the analyst.

Figure 4.1: The overview of adapting rules through analysts and crowd workers. (The workflow: the rule annotates input items; a sample of annotated items is sent to the crowd to examine the precision of the rule; the crowd sends feedback on the number of items the rule correctly/incorrectly annotated; the analyst then modifies the rule based on the crowd's feedback, yielding the adapted rule.)

Contributions. This chapter makes the following contributions.

1. Rule adaptation is error-prone and challenging, as analysts need to examine different rule modifications to identify the optimal one. We propose an autonomic approach that adapts rules without relying on analysts. We utilize a Bayesian multi-armed-bandit algorithm that learns to modify a rule based on changes in the curation environment.

2. To frame rule adaptation as a Bayesian multi-armed-bandit problem, we propose a reward-and-demote schema. The schema assigns a reward if the algorithm identifies that a rule correctly annotated an item, and demotes the rule if it annotated an irrelevant item. Over time, the algorithm (by observing rewards and demotes) learns a better adaptation for the rule.

3. We propose a summarization technique to boost rules to annotate a larger number of items. The technique identifies the semantic relationships between keywords to annotate data at the conceptual level.
4.2 Related Work

In this section, we discuss prior work related to rule adaptation (Section 4.2.1) and online learning algorithms (Section 4.2.2). In particular, we discuss the usage of Bayesian multi-armed-bandit algorithms in unstructured and constantly changing environments. Besides, we consider it appropriate to discuss approaches proposed for feature extraction and how they differ from our proposed summarization technique (Section 4.2.3).

4.2.1 Rule Adaptation

Rule adaptation is a continuous process focused on modifying a rule to better fit the curation environment. However, rule adaptation is a challenging and error-prone task; thus many solutions [118, 126, 164, 183, 250, 259] have been proposed to assist analysts in adapting rules. Several solutions [126, 164, 182, 183, 250] focused on interactively adapting rules. In these solutions, a system proposes possible adaptations for a rule, and an analyst adapts the rule by interacting with the system. For example, Milo et al. [183] proposed a cost-benefit approach for generalizing or specializing fraud detection rules. The approach developed a heuristic algorithm to interactively adapt rules with domain experts until the desired set of rules is obtained. Volks et al. [250] proposed a cost function to adapt integrity constraint (IC) rules. The approach relies on the analyst's feedback to update the cost function and resolve the inconsistencies in IC rules. Liu et al. [164] proposed an interactive approach for refining a rule using a set of positive and negative results. The method uses a provenance graph to identify candidate changes that can eliminate negative results. However, these solutions focus on adapting rules that operate on structured data, where a rule may be adapted with a limited number of features. Besides, many of these solutions assume the analyst has access to a ground truth, e.g., a dataset of items tagged with the correct label, to verify the effectiveness of an adaptation.

Alternatively, to adapt rules in unstructured and dynamic environments, some solutions [118, 235, 259] focused on augmenting interactive rule adaptation systems by coupling crowds with analysts. These solutions rely on crowd workers to determine the precision of rules. For example, Xie et al. [259] proposed an approach for validating rules for information extraction purposes. The approach relies on a voting technique to identify whether an adaptation of a rule produces a positive impact on extracting information or not. GC et al. [118] designed an interactive system coupling analysts and crowds for adapting rules. The system verifies items annotated with a rule using crowd workers, and assists analysts in identifying the optimal modification using a relevance feedback algorithm (Rocchio). Sun et al. [235] proposed a rule-based technique (Chimera) for large-scale data classification systems. First, the approach identifies the misclassified items in cooperation with crowd workers; then it forwards the items to analysts, who write rules to address the errors. Bak et al. [27] rely on visualization, showing the result of applying a rule on a set of data records. The system requires crowd workers to verify the outcome of applying the rule on the data records, indicating the optimal adjustment for the rule.
Although coupling crowd workers with interactive systems provides more flexibility for adapting rules in dynamic environments, these systems still rely on analysts to identify the optimal modification of rules. In contrast, our approach not only offloads analysts but also autonomically modifies a rule in response to changes in the curation environment.

4.2.2 Multi-Armed-Bandit Algorithms

In this section, we discuss how Bayesian multi-armed-bandit algorithms have been used in dynamic and constantly changing environments. These algorithms are increasingly used in large-scale randomized A/B experimentation by technology companies [155]. One area of work that has used Bayesian multi-armed-bandit algorithms is educational learning, to improve learners' learning rates. For example, Williams et al. [258] proposed a system (AXIS) to improve explanation generation for online learning materials by employing a combination of crowds and a Bayesian multi-armed-bandit algorithm. Clement et al. [82] used a multi-armed-bandit algorithm in intelligent tutoring systems to choose activities that provide better learning for students. Other areas that have relied on Bayesian multi-armed-bandit algorithms are feature engineering [20], gaming [166], and online marketing [70]. In this context, we follow a similar trend by employing a Bayesian multi-armed-bandit algorithm together with crowd workers. Over time, based on the collected feedback, the algorithm determines an adaptation for the rule to keep it applicable and precise.

4.2.3 Feature Extraction

In addition to interactive systems for helping analysts adapt rules, we consider it appropriate to include approaches to feature extraction, to position our proposed summarization technique. Feature extraction is the process of identifying a set of variables that best describe the data [99]. Feature extraction is an ongoing task and requires iteratively exploring the curation environment to identify features that capture the salient aspects of the data. Several approaches have been proposed to aid analysts in feature extraction (e.g., [65, 77, 160, 249, 256]). For example, Anderson et al. [19] proposed BrainWash, a system that provides a pipeline to ease the process of feature extraction in large datasets. The system focuses on helping a user to explore, extract, and evaluate features faster. Cheng et al. [78] relied on crowd workers for feature extraction. The approach refines the performance of machine learning algorithms based on the feedback received from crowds. Veeramachaneni et al. [249] proposed an approach to engage crowd workers in extracting features and predicting students' stopout in Massive Open Online Course (MOOC) systems. The approach provides a pipeline for evaluating and examining the relevancy of features through crowd workers.

Another type of work focuses on easing feature extraction through visualization techniques. For example, Patel et al. [193] relied on visualizing the confused regions of machine learning classifiers to help analysts extract features. Brooks et al. [65] provide a visual summary of the data to aid a user in creating a dictionary of features. Stoffel et al. [232] relied on visualization for examining machine learning feature errors. The system iteratively interacts with a user to remove ineffective features.

In contrast, we propose a summarization technique that identifies the semantic relationships among keywords and extracts features at the conceptual level.
Each conceptual feature represents a group of semantically related keywords, which boosts rules to annotate a larger number of items.

4.3 Preliminaries and Problem Statement

We first introduce the components of the rules used in this chapter (Section 4.3.1). We then describe the problem in Section 4.3.2. Finally, we provide an overview of our solution in Section 4.3.3.

4.3.1 Preliminaries

Feature. We express a rule R in terms of features, where each feature f ∈ R corresponds to a function of the form

⟨Dataset.Function.Operator⟩ → Value

where Dataset is the data source, such as Twitter or Facebook; Function performs the curation task (e.g., feature extraction); Operator represents the condition for a feature to curate the data; and Value is the output of the feature. Examples of features are extraction functions, e.g., named-entity or similarity extraction. Expressing a feature as a function allows us to leverage standard data types as the feature's operator. For example, if a feature operates over textual data, its operator will be a string operator, such as contains or exact. Similarly, if a feature curates integer data, its operator will be an integer operator, such as equals or less-than. As an example, consider the feature f1 = ⟨Tweet.Keyword.Contains('Mental')⟩, which curates Tweets that contain the 'Mental' keyword. In this example, Tweet represents the dataset over which the feature operates, Keyword represents the function of the feature, and Contains('Mental') is the operator and represents the condition for curating a Tweet.

Rule. We represent a rule R as a tree of features, where each feature f ∈ R can have K children. We denote a path p in the tree as a sequence of features f1, ..., fm, where f1 represents the root feature and fm represents the last feature in the path. More precisely, a path p is a conjunction of features of the form f1 ∧ ... ∧ fm. For an item to be curated by a rule, the item should be annotated with all features within a path. Notice that we do not need to invent our own rule language; rather, because rules are expressed as features, we can adopt any suitable functional or rule-expression language for our purpose.

Tag. A tag is the label, e.g., 'Mental Health', that a rule assigns to a curated item, e.g., a Tweet, to describe the item. In this chapter, we use the terms tag and annotate interchangeably. As an example, consider the rule presented in Figure 4.2. This rule is made up of three features {f1, f2, f3} and tags a Tweet with 'Mental Health' if the Tweet is curated with features f1 ∧ f2 or f1 ∧ f3. More clearly, Rule1 tags a Tweet with 'Mental Health' if the Tweet contains the 'Mental' and 'Health' keywords, or the Tweet contains 'Mental' and a keyword related to the 'Medical' topic.
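To make these definitions concrete, the sketch below encodes the example rule in Python. It is an illustrative model only, not the thesis implementation: each feature is a predicate over a Tweet, a path is a conjunction of features, and the rule assigns its tag if any path matches. The toy topic lexicon and all names are our own assumptions.

    # Illustrative encoding of a rule as paths of feature predicates; the
    # MEDICAL_TOPIC lexicon is a hypothetical stand-in for a topic function.
    MEDICAL_TOPIC = {"doctor", "physician", "dentist"}

    def f1(tweet): return "mental" in tweet.lower()   # Tweet.Keyword.Contains('Mental')
    def f2(tweet): return "health" in tweet.lower()   # Tweet.Keyword.Contains('Health')
    def f3(tweet):                                    # Tweet.Topic.Contains('Medical')
        return any(w in MEDICAL_TOPIC for w in tweet.lower().split())

    RULE1 = {"tag": "Mental Health", "paths": [[f1, f2], [f1, f3]]}  # f1^f2 or f1^f3

    def apply_rule(rule, tweet):
        """Return the rule's tag if any path (a conjunction of features) matches."""
        for path in rule["paths"]:
            if all(feature(tweet) for feature in path):
                return rule["tag"]
        return None

    print(apply_rule(RULE1, "My doctor says mental wellbeing matters"))  # -> Mental Health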
4.3.2 Problem Statement

In the following, we discuss two major problems in rule-based systems.

Adaptation Through Analysts. Typically, to adapt a rule, an analyst examines correctly/incorrectly annotated items to identify the potential modifications that make the rule precise [118, 164, 182, 183]. However, rule adaptation is challenging and error-prone, as the analyst needs to evaluate the impact of each modification she applies to the rule. Such a problem falls under the category of online learning problems, where an analyst does not have access to the entire knowledge needed to craft an adequate rule; instead, over time, she learns to better adapt the rule by examining the annotated items.

To offload analysts from adapting rules, we formulated the problem as a Bayesian multi-armed-bandit problem. This algorithm is suitable when the information required for making a decision is provided serially, piece by piece. Each time a rule annotates a set of items, the algorithm collects feedback on the number of items the rule correctly/incorrectly annotated; over time, by receiving more feedback, the algorithm learns to adapt the rule better. For example, consider a rule R that operates over a dataset and must annotate data with precision above a threshold ε. Assume that at time τ_i rule R annotated a set of items I_τi = {i1, i2, ..., in}. We denote by P[R_τi] the precision of the rule observed at time τ_i. Our algorithm adapts rule R at time τ_{i+1} such that P[R_τi+1] > ε.

Syntactic-Level Data Annotation. Typically, an analyst adapts a rule at the syntactic level, e.g., using keywords or regular expressions. Syntactic-level features allow the analyst to modify a rule conveniently by replacing irrelevant keywords or phrases with new ones. However, relying on syntactic-level features limits the capacity of a rule in annotating data, as these features skip a large number of semantically related items. For example, consider the rule:

Rule11 = Tweet.Keyword.Contains('Mental') ∧ Tweet.Keyword.Contains('Health') : 'Mental Health'

This rule tags a Tweet if the Tweet contains the 'Mental' and 'Health' keywords. However, there exists a large number of Tweets relevant to 'Mental Health' that cannot be tagged with Rule11, as those Tweets may not contain both keywords.

4.3.3 Solution Overview

The overview of our proposed solution is shown in Figure 4.2. The approach consists of four steps: feature extraction, observation, estimation, and adaptation.

Figure 4.2: The overview of the proposed approach for adapting rules [238]. (The pipeline: an annotation rule, e.g., F1 = Tweet.Keyword.Contains("Mental"), annotates input items; feature extraction performs pre-processing and extracts syntactic and conceptual candidate features; observation samples the annotated items and verifies the rule's precision using crowd workers; estimation rewards/demotes candidate features and estimates their performance using a Bayesian multi-armed-bandit algorithm; adaptation examines the precision of the features associated with the rule and removes/replaces imprecise features with candidate features, yielding a new annotation rule, e.g., with F2 = Tweet.Keyword.Contains("Health") and F3 = Tweet.Topic.Contains("Medical").)

Feature Extraction. The initial step in the workflow is feature extraction, which extracts a set of candidate features T = {t1, t2, ..., tn} from the annotated items. The approach extracts candidate features at both the syntactic and conceptual levels. Each syntactic-level feature represents a keyword extracted from items annotated with the rule, while a conceptual feature represents a group of semantically related keywords. For extracting conceptual features, we propose a summarization technique, which is made up of two steps: (1) we map each syntactic-level feature to an abstract concept using a knowledge base, and (2) we group features with the same concept and consider each group as a conceptual candidate feature.
In Section 4.4.1, we explain how our approach extracts candidate features for adapting a rule.

Observation. The second step in the workflow is observation, which gathers feedback to update a Bayesian multi-armed-bandit algorithm about changes in the curation environment. For gathering feedback, we rely on crowd workers (recruited through https://www.figure-eight.com/). Each time a rule annotates a set of items I = {i1, i2, i3, ..., in}, the algorithm receives feedback on a sample of annotated items S = {i′1, i′2, i′3, ..., i′n}, where S ⊂ I, to identify the rule's latest performance in annotating the data. Crowds verify whether the rule tagged an item correctly or not. In Sections 4.4.2 and 4.5, we review how crowd workers contribute to verifying items.

Estimation. The third step in the workflow is estimation, where a Bayesian multi-armed-bandit algorithm determines the performance of candidate features by estimating a probability distribution θ. The algorithm calculates the performance of features using the workers' collected feedback. To formulate the workers' feedback as a Bayesian multi-armed-bandit problem, we propose a reward/demote schema. Each time the rule annotates a set of items, the schema calculates a reward/demote for the candidate features to update the algorithm about changes in the curation environment. In Section 4.4.3, we review how the approach estimates the probability distributions for candidate features.

Adaptation. Given a set of candidate features T along with their probability distributions θ, we identify potential modifications that keep the rule applicable and precise. We do this by removing or restricting features that deteriorate the rule's performance. In Section 4.4.4, we review how our approach modifies a rule.

4.4 Adaptive Rule Adaptation

In this section, we explain the components (feature extraction, observation, estimation, and adaptation) of our proposed solution.

4.4.1 Feature Extraction

The first step in our workflow is feature extraction, where we extract a set of candidate features as the potential modifications for a rule. Each time a rule annotates items, we extract a set of candidate features and calculate their performance in adapting the rule. We extract two types of candidate features: syntactic and conceptual. A syntactic-level feature represents a keyword within an annotated item, while the latter represents a group of semantically related keywords. The following explains how we extract these features.

Syntactic Candidate Features: For extracting syntactic candidate features, we conduct a preprocessing task on the annotated items I. The preprocessing performs tokenization, normalization, and noise removal. In tokenization, we split each item i ∈ I into smaller tokens. Normalization removes stop words and conducts stemming, and noise removal skips certain characters, e.g., emojis and URLs, that occur in items. We consider the remaining tokens as candidate features of type keyword.

Conceptual Candidate Features: Conceptual candidate features are proposed to alleviate the shortcomings of annotating data using syntactic features. Although syntactic features allow an analyst to modify a rule conveniently, relying on these features cannot capture the salient aspects of the data and limits a rule's capacity in annotating items. Thus, there is a need for more productive features to boost rules to annotate a larger number of items.
We propose a summarization technique, which extracts and groups semantically related keywords to form a concept. Summarization consists of two steps: (1) mapping and (2) grouping. In the mapping step [197, 215], we map each syntactic feature to an abstract concept using a knowledge base and associate a descriptor with it. In the grouping step, we group features with an identical descriptor and consider each group as a conceptual candidate feature. The following explains how our proposed technique extracts two new conceptual features using two readily available knowledge bases: WordNet [104] and Empath [109]. Algorithm 8 shows the pseudo-code of the summarization technique.

Algorithm 8: Feature summarization
    Function Feature_Summarization():
        Input: T
        Output: T′
        foreach t ∈ T do
            set_map.Add(Abstract[t])
        foreach t_map ∈ set_map do
            foreach t ∈ T do
                if Abstract[t] == t_map then
                    T′[t_map].Add(t)
        return T′

1. WordNet: The first knowledge base we rely on for extracting conceptual features is WordNet (https://wordnet.princeton.edu/). WordNet is a semantic lexicon that groups English words into sets of synonyms called synsets. We use WordNet to identify semantic relations between keywords using their hypernym relations. A hypernym is a relationship between a generalized term and a specific instance of it. For example, based on the hypernym relationship in WordNet, we can describe the keyword 'doctor' as a 'medical_practitioner'. Thus, in the mapping step, we map each keyword, using its hypernym relation, to a more generalized form, where the hypernym acts as the descriptor for the keyword. Next, in the grouping step, we group features with the same descriptor and consider each group as a conceptual candidate feature. For example, consider Rule′11, which tags Tweets with mental health:

Rule′11 = Tweet.Keyword.Contains('Mental') ∧ Tweet.Topic.Contains('Medical_Practitioner') : 'Mental Health'

This rule tags a Tweet if the Tweet contains the 'Mental' keyword and a keyword relevant to 'Medical_Practitioner'. The topic 'Medical_Practitioner' represents a large number of semantically related keywords, including doctor, physician, and dentist.

2. Empath: The second knowledge base we rely on for extracting conceptual features is Empath [109]. Empath is a deep learning skip-gram network that categorizes text over 200 built-in categories. It represents a token as a vector using a Vector Space Model (VSM) [196] and assigns tokens to categories based on their vector similarity. To extract conceptual candidate features, we query the Empath vector space model to map each keyword to a category. We use categories to represent keywords as abstract concepts. Then, we group keywords with the same categories and consider each group as a conceptual candidate feature. For example, consider the keywords T = {t1: fund, t2: illness, t3: budget, t4: disease}. To generate conceptual features, we query the Empath vector space model to map each keyword to a category. Assume the following categories are identified: T = {t1: Economy, t2: Health, t3: Economy, t4: Health}. Then, we group keywords with identical categories and represent the {fund, budget} keywords as the Economy topic, and the {disease, illness} keywords as the Health topic.
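The mapping and grouping steps can be prototyped in a few lines. The sketch below is one possible realization of Algorithm 8 using NLTK's WordNet interface; it requires the WordNet corpus to be downloaded, and it naively takes each keyword's first noun sense, whereas a production pipeline would disambiguate senses.

    # Sketch of Algorithm 8 using WordNet hypernyms as the Abstract[] mapping.
    from collections import defaultdict
    from nltk.corpus import wordnet as wn  # assumes nltk.download('wordnet') was run

    def feature_summarization(keywords):
        groups = defaultdict(list)
        for kw in keywords:
            synsets = wn.synsets(kw, pos=wn.NOUN)
            if not synsets or not synsets[0].hypernyms():
                continue  # no abstract concept available for this keyword
            descriptor = synsets[0].hypernyms()[0].lemma_names()[0]  # mapping step
            groups[descriptor].append(kw)                            # grouping step
        # groups with 2+ keywords act as conceptual candidate features
        return {c: ks for c, ks in groups.items() if len(ks) > 1}

    print(feature_summarization(["doctor", "dentist", "physician", "budget"]))
    # e.g. -> {'medical_practitioner': ['doctor', 'dentist', 'physician']}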
4.4.2 Observation

The second step in our proposed approach is observation, which gathers feedback to update a Bayesian multi-armed-bandit algorithm about changes in the curation environment. For gathering feedback, we rely on crowd workers. Each time a rule annotates a set of items, we take a sample of the items S = {i′1, i′2, i′3, ..., i′n}, where S ⊂ I, to send to the crowd. The crowd workers verify whether an item was correctly tagged by the rule or not; e.g., if a rule tags an item with 'Mental Health', the task is to confirm whether the item is relevant to 'Mental Health' or not. For taking samples, we divided the annotated items into subgroups [136] and represented each subgroup by a candidate feature; the population of each subgroup is determined by the frequency of the corresponding candidate feature in the annotated items. More clearly, consider the candidate features T = {t1: fund, t2: illness, t3: budget, t4: economy} extracted from items annotated with a rule. The approach divides the annotated items into four subgroups, where feature t1 represents items containing fund, feature t2 represents items containing illness, and so forth. This sampling strategy helps the Bayesian multi-armed-bandit algorithm (see Section 4.4.3) learn the performance of features in adapting a rule faster: with a more obvious technique, such as random sampling, the algorithm would consider all items equally likely and would thus take longer to learn the performance of the candidate features.
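A minimal sketch of this feature-based sampling strategy follows. The item representation (plain strings) is an illustrative assumption, as is the 3% sample rate, which matches the rate reported later in Section 4.5.

    # Sketch of subgroup sampling: one subgroup per candidate feature, with a
    # sample size proportional to the feature's frequency in the annotated items.
    import random

    def stratified_sample(annotated_items, candidate_features, rate=0.03):
        sample = []
        for feature in candidate_features:
            subgroup = [i for i in annotated_items if feature in i.lower()]
            if not subgroup:
                continue
            k = max(1, round(rate * len(subgroup)))  # proportional allocation
            sample.extend(random.sample(subgroup, min(k, len(subgroup))))
        return sample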
4.4.3 Estimation

The third step in our proposed approach is estimation, which computes a probability distribution θ for the candidate features to determine their performance in adapting the rule. This step consists of two components: (i) a reward/demote schema, which calculates a reward/demote for candidate features using the workers' feedback, and (ii) a Bayesian multi-armed-bandit algorithm, which estimates the performance of candidate features based on their collected rewards/demotes.

Reward/Demote Schema: To adapt rules, we formulated rule adaptation as a Bayesian multi-armed-bandit problem. This algorithm is suitable when a system needs to improve its decisions over time: based on the feedback collected from the curation environment, the algorithm learns consistent patterns of change and takes the decision that maximizes its performance. A Bayesian multi-armed-bandit algorithm is a good fit for our problem because each time a rule annotates a set of items, it gets updated by the workers' feedback. To frame rule adaptation as a Bayesian multi-armed-bandit problem, we propose a reward and demote schema using the feedback collected from workers. The schema assigns a reward/demote to the candidate features t ∈ T that appear in annotated items. The schema rewards (r) a candidate feature if it appears in an item that is verified as relevant; similarly, it demotes (d) a candidate feature if the feature appears in an irrelevant item. Over time, as the rule annotates more items, the schema updates the candidate features' rewards/demotes, allowing the Bayesian multi-armed-bandit algorithm to update its estimation of the features' performance in adapting the rule.

As each conceptual candidate feature represents a group of keywords, we calculate the reward/demote for these features based on the rewards/demotes collected by their associated keywords. More precisely, consider a conceptual candidate feature t. Suppose t = {t′1, t′2, ..., t′n}, where each t′ represents a keyword associated with t. We calculate the reward as r_t = Σ_{t′=1..n} r_{t′} and the demote as d_t = Σ_{t′=1..n} d_{t′}. More clearly, consider the candidate feature 'Medical_Practitioner' introduced in the previous section, and suppose the following features are associated with it: doctor, dentist, and physician. We calculate the reward/demote for 'Medical_Practitioner' by summing up the rewards/demotes collected by doctor, dentist, and physician.

Algorithm 1: Estimating the expected performance of candidate features
    Function Est_Probability_Dist():
        Input: T
        Output: θ
        foreach t ∈ T do
            foreach i′ ∈ S do
                if t ∈ i′ and i′ is verified as Irrelevant then
                    d_t += 1
                else if t ∈ i′ and i′ is verified as Relevant then
                    r_t += 1
            θ_t ← Beta(r_t, d_t)
        return θ

Bayesian Multi-Armed-Bandit Algorithm: This section explains how a Bayesian multi-armed-bandit algorithm estimates the performance of candidate features. We utilized Thompson sampling [212], a Bayesian multi-armed-bandit algorithm that has shown a near-optimal regret bound (over a given period of time, the regret is the difference between the probability distribution θ the algorithm estimated for the optimal action and that of the action it actually selected). Thompson sampling provides a dynamic policy for choosing which feature should be selected for adapting a rule, and an algorithm for incorporating new information to update this policy based on the candidate features' rewards/demotes. Thompson sampling stores an estimated probability distribution θ for each candidate feature to indicate its performance in adapting the rule. The algorithm continuously observes the curation environment and gathers new feedback to update the probability distributions estimated for the candidate features, reflecting their performance in adapting the rule. Each time the algorithm receives a set of candidate features T = {t1, t2, ..., tn} along with their rewards/demotes, it updates the candidate features' probability distributions θ = {θ1, θ2, ..., θn}, where 0 < θ < 1, using the Bayesian formula:

P(θ | t) = P(t | θ) × P(θ) / P(t) ∝ P(t | θ) × P(θ)

Here P(t | θ) represents the likelihood and P(θ) is the prior. The likelihood is a Bernoulli distribution and the prior is a Beta distribution:

P(t | θ) = θ^r (1 − θ)^(n−r), where r is the total reward collected over n observations

P(θ_n) = θ_n^(α_n − 1) (1 − θ_n)^(β_n − 1) / B(α_n, β_n)

α and β are the prior parameters. The initial values of α and β indicate our initial belief about the performance of the candidate features. We have chosen α = β = 1, which means that initially we considered all features to have the same performance in adapting the rule. The prior is updated continuously based on the likelihood of the feedback we gather from the curation environment. The posterior is proportional to the product of the prior and the likelihood, with the likelihood updated continuously after receiving the workers' feedback. This update is easy to implement because the Beta and Bernoulli distributions are conjugate. Algorithm 1 shows how the approach estimates the value of θ for the candidate features.

As an example, consider the following rule, which tags Tweets with 'Mental Health':

Rule1 = Tweet.Keyword.Contains('Mental') : 'Mental Health'

Assume the candidate features T = {t1: medical, t2: health, t3: wellbeing, t4: care, t5: qanda} are extracted from the annotated items as the potential modifications for the rule. First, to identify the performance of the candidate features, the algorithm calculates their rewards/demotes using the workers' feedback. Then, the Bayesian multi-armed-bandit algorithm estimates a probability distribution θ for the candidate features. Each time the rule annotates a set of items, the algorithm updates the value of θ based on the feedback gathered from workers, to better reflect the features' performance in adapting the rule.
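The estimation step can be sketched compactly in Python. The snippet below is an illustrative implementation of Algorithm 1's bookkeeping combined with Thompson sampling draws; the class and method names are ours, and Beta(r+1, d+1) encodes the uniform α = β = 1 prior chosen above.

    # Sketch of the Thompson-sampling estimator over candidate features.
    import random

    class ThompsonEstimator:
        def __init__(self):
            self.r = {}  # rewards per candidate feature
            self.d = {}  # demotes per candidate feature

        def update(self, feature, relevant):
            counts = self.r if relevant else self.d
            counts[feature] = counts.get(feature, 0) + 1

        def sample_theta(self, feature):
            # draw from the Beta posterior; the +1 encodes the alpha=beta=1 prior
            return random.betavariate(self.r.get(feature, 0) + 1,
                                      self.d.get(feature, 0) + 1)

        def best(self, features, k=1):
            # Thompson sampling: rank features by a single posterior draw each
            return sorted(features, key=self.sample_theta, reverse=True)[:k]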
4.4.4 Adaptation

In this section, we explain how our approach modifies a rule. Recall from Section 4.3.1 that we introduced a rule R as a tree of features, where each feature f ∈ R can have K children. We also defined a path p in a rule as a conjunction of a set of features of the form p = f1 ∧ f2 ∧ ... ∧ fn. First, to adapt a rule, we identify imprecise paths that annotate data with a precision below a threshold ε, which represents the minimum precision a path should have to be considered precise. We determine the precision of paths by calculating the number of relevant/irrelevant items their features annotated. After identifying imprecise paths, we determine whether to replace or further restrict their features.

We replace a feature in an imprecise path if the number of items it annotated is below the average number of items annotated by its siblings, indicating the feature is imprecise and incapable of adequately annotating data. Conversely, we restrict a feature if the number of annotated items is greater than or equal to the average, indicating the feature is applicable but should be restricted to become precise. For replacing or restricting features, we select the candidate features that yielded the highest probability distributions θ estimated by the Bayesian multi-armed-bandit algorithm.

Figure 4.3: Adapting a rule through replacing/restricting its features [106]. (Panel (a) shows Rule1 with root feature f1 and paths p1 = f1 ∧ f2, p2 = f1 ∧ f3, p3 = f1 ∧ f4; panel (b) shows restricting feature f1 to make the rule precise; panel (c) shows replacing an imprecise feature (f2) with a candidate feature (fK).)

Example: Suppose that, after annotating a set of items at time τ_i, the algorithm identifies that the rule is imprecise, i.e., it annotates data with a precision below ε. It thus examines the number of annotated items and adapts the rule by appending the K candidate features that yielded the highest probability distributions (restriction) (Figure 4.3b); since f1 is the root feature, it annotates data above the average and thus satisfies the restriction condition. After this adaptation, Rule1 annotates an item if the item is curated with the features in path p1 = f1 ∧ f2, p2 = f1 ∧ f3, or p3 = f1 ∧ f4. Alternatively, the algorithm may replace a feature if it identifies that the feature annotates data below the average number of items annotated by its siblings. For example, suppose that at time τ_{i+n} feature f2 is identified as imprecise and incapable of annotating data adequately, i.e., it annotates fewer items than the average of its siblings {f3, f4}. The algorithm then removes feature f2 and replaces it with the candidate feature that yielded the highest probability distribution value (Figure 4.3c). To select a candidate feature, the algorithm performs a feature extraction task and estimates the candidate features' probability distributions θ based on the rewards/demotes the features accumulated from time τ_1 to τ_{i+n}.

The proposed adaptation strategy allows rules to be adapted according to changes in the curation environment. For example, by replacing an imprecise feature with a content-bearing feature that obtained a high value of θ over an extended period of time, we keep the rule applicable, as the new feature better captures the salient aspects of the data. Similarly, by restricting an imprecise feature that annotates a large number of items, we make the rule precise by filtering out the irrelevant items.
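The replace-versus-restrict decision can be summarized in a short sketch. The data structures are illustrative assumptions: counts maps each feature to the number of items it annotated, and estimator is the Thompson-sampling estimator sketched in Section 4.4.3.

    # Sketch of the adaptation decision for one feature on an imprecise path.
    def adapt_feature(feature, siblings, counts, estimator, candidates, k):
        sibling_avg = sum(counts[s] for s in siblings) / max(len(siblings), 1)
        if counts[feature] >= sibling_avg:
            # applicable but imprecise: restrict by appending K high-theta candidates
            return ("restrict", estimator.best(candidates, k))
        # low coverage: replace the feature with the single best candidate
        return ("replace", estimator.best(candidates, 1))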
4.5 Gathering Workers' Feedback

This section explains how we engage workers to verify items annotated with rules. We created a task on the Figure Eight (https://www.figure-eight.com/) micro-tasking market. The workers' task was to confirm whether an item is relevant to the tag assigned by a rule or not. Workers could choose 'Yes' if they identified that the item is related to the tag, and 'No' if they determined the item is irrelevant. In cases where workers could not verify an item, they could choose 'I don't know'. For example, we present workers with a Tweet that a rule tagged as relevant to 'Mental Health'; the workers' task is then to verify whether the Tweet is related to 'Mental Health' or not. Besides, we provided workers with textual instructions explaining how to confirm items (for example: "In this job, we will have you identify whether a Tweet is expressing an issue relevant to mental health or not. An issue can be a shortcoming that exists in services provided for mental health, a threat that a lack of mental health services may cause to society, or a suggestion that helps to improve the quality of mental health services."). We explained the steps workers need to follow and provided them with three positive examples (e.g., "Mental health services facing serious shortages of mental health nurses decrease of 12% since 2010 psychiatrists") and three negative examples (e.g., "if I have to hire a car and drive home from Belgium i am going to go mental stupid french air traffic control wanks on strike"). For verifying each item, we paid 1 cent, and each worker verified ten items per page. At each round of the annotation task, we sent 3% of the annotated items to workers. Figure 4.4 shows a sample question presented to workers.

Figure 4.4: Sample of questions to workers to verify the tag of items.

4.5.1 Stopping Condition

In the previous section, we explained how workers verify annotated items. However, continuously sending items to crowds increases the cost of the adaptation task. Thus, there is a need to identify when a rule has stabilized, in order to stop verifying more items. To address this problem, we developed a solution using the probabilistic policy defined in the Thompson sampling algorithm to determine whether a path in a rule has stabilized or not. For each path, we estimate a probability distribution θ based on the number of relevant/irrelevant items it annotated. Then, we define a smoothing window Q to record the value of θ. We set the size of the smoothing window to Q = 3 and use the average as the smoothing function. We consider a path as stabilized if the value of Q increases or remains stable within 3ε, where ε = 0.01 (we set the values of ε and Q experimentally, using simulated data). More clearly, consider the path p3 = f1 ∧ f4 presented in Figure 4.3. Each time the rule annotates a set of items, the algorithm records the value of θ for the path. Then, the approach computes the value of Q, where Q1 = AVG(θ1, θ2, θ3), Q2 = AVG(θ2, θ3, θ4), and so forth. The algorithm stops sending items to workers when Q_{i+1} + 3ε ≥ Q_i, indicating that the path has stabilized.
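This stopping test is easy to express in code. The sketch below assumes θ is recorded once per annotation round for a given path, and uses the window size and ε reported above.

    # Sketch of the stopping condition: smooth theta with a window of 3 and
    # stop once the smoothed value no longer drops by more than 3*eps.
    def path_stabilized(theta_history, window=3, eps=0.01):
        if len(theta_history) < window + 1:
            return False  # not enough rounds observed yet
        q_prev = sum(theta_history[-window - 1:-1]) / window  # Q_i
        q_curr = sum(theta_history[-window:]) / window        # Q_{i+1}
        return q_curr + 3 * eps >= q_prev

    print(path_stabilized([0.62, 0.71, 0.74, 0.75]))  # -> True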
4.6 Experiments

First, in Section 4.6.1, we discuss the dataset used for examining the performance of our proposed approach. Then, in Section 4.6.2, we explain the three scenarios that have been defined to show the applicability of our approach. Finally, we discuss the results in Section 4.6.3.

4.6.1 Experiment Settings and Dataset

The core components of the techniques described in the previous sections are implemented in Python. Three months of Twitter data (Australian region, from May 2017 to August 2017), comprising ≈ 15 million Tweets, were used as the input dataset. MongoDB and ElasticSearch were used for storing and indexing the input dataset. We demonstrate the performance of our approach in three different curation domains (domestic violence, mental health, and budget) and show how our approach learns to adapt a rule to annotate data more precisely over time. As the initial rules for annotating the data, we used rules that contain only one feature. For example, the initial rule for annotating Tweets in the mental health domain was of the form Tweet.Keyword.Contains('Mental') : 'Mental Health', which tags Tweets that contain the 'Mental' keyword. Then, each time the rules annotate a set of items, our approach adapts them to make them more precise. We demonstrate the performance of the approach over five rounds of rule adaptation.

4.6.2 Experiment Scenarios

To evaluate the performance of our solution and the applicability of the proposed algorithm, we defined three different experiment scenarios:

1. Evaluating the performance of a Bayesian multi-armed-bandit algorithm in adaptation: This scenario represents the performance of a Bayesian multi-armed-bandit algorithm in adapting rules. We demonstrate how the algorithm keeps a rule precise and applicable by adding or removing features. We adapt rules with two different numbers of features (K = 10, K = 20) (see Section 4.3.1). Adapting a rule with a higher number of features allows the rule to annotate a larger number of items, but with less precise ones.

2. Evaluating the proposed feature-based adaptation: This scenario demonstrates the performance of the proposed feature-based technique in augmenting rules to annotate a larger number of items. We demonstrate the improvement rules make in the number of annotated items when adapting rules using both syntactic- and conceptual-level features. We also compare the obtained results with a technique that adapts rules at the syntactic level only.

3. Comparison with existing studies: In the third scenario, we conducted a controlled experiment and compared the performance of our approach with the system proposed by GC et al. [118]. The proposed system is an interactive rule adaptation system that relies on analysts for adapting rules. Each time a rule annotates a set of items, the system sends a sample of items to crowds and receives feedback on the number of items correctly/incorrectly tagged by the rule. Then, the system tokenizes the items and weights every token using the TF-IDF weighting scheme. Subsequently, the system ranks tokens based on their TF-IDF weights and iteratively shows tokens to an analyst to adapt a rule. The system continues showing tokens until the analyst is satisfied with the resulting rule. To help the analyst adapt the rule more effectively, the system incorporates the analyst's feedback by adjusting the weights of tokens using a relevance feedback algorithm [207]: whenever the analyst selects a token, the algorithm increases the weight of the other candidate tokens that co-occurred with the selected token.

Table 4.1: Precision of the approach in adapting rules using the summarization approach (K = 20).

    Curation Domain      Round 1   Round 2   Round 3   Round 4   Round 5
    Budget               54.56     73.12     78.72     81.11     84.21
    Mental Health        54.74     57.35     71.40     80.16     80.61
    Domestic Violence    74.32     83.59     84.60     85.75     84.43

Table 4.2: Precision of rules adapted through participants in the Budget, Mental Health, and Domestic Violence domains.
    Budget Domain            Round 1   Round 2   Round 3   Round 4   Round 5
    Participant 1            54.56     79.75     82.02     84.21     87.19
    Participant 2            54.56     83.22     85.62     88.20     90.86
    Participant 3            54.56     86.56     86.77     86.67     86.59

    Mental Health Domain     Round 1   Round 2   Round 3   Round 4   Round 5
    Participant 1            54.74     72.88     80.65     87.19     86.32
    Participant 2            54.74     70.38     75.09     83.58     85.01
    Participant 3            54.74     71.63     81.61     85.04     84.14

    Domestic Violence Domain Round 1   Round 2   Round 3   Round 4   Round 5
    Participant 1            74.32     87.65     88.78     90.90     90.82
    Participant 2            74.32     88.63     90.28     91.36     92.59
    Participant 3            74.32     85.37     86.51     87.04     86.42

4.6.3 Results

Performance of a Bayesian multi-armed-bandit algorithm in adapting rules

In this section, we demonstrate the performance of a Bayesian multi-armed-bandit algorithm in adapting rules (see Section 4.4.3). We show the precision of the rules adapted with two different numbers of candidate features (k = 10, k = 20) that yielded the highest probability distributions θ. As presented in Figure 4.5, by adapting rules with 10 candidate features, the algorithm could significantly improve the rules' precision in all curation domains. For example, in the budget domain, the algorithm improved the precision by 36.65%, from 54.56% to 91.21%. Similarly, in the domestic violence and mental health domains, the algorithm improved the precision by 18.20% and 32.47%, respectively. Also, to demonstrate the applicability of the algorithm in adapting rules, we repeated the experiment with a higher number of features (K = 20). This boosts rules to annotate a larger number of items, but with less precise features. Figure 4.5 shows the obtained results for each domain. As presented, adapting rules with a higher number of features decreases the precision of the rules; however, the algorithm could still learn the performance of the features and adapt the rules to improve their precision over time. For example, in the mental health domain the precision improved by 30.81%, and in the budget and domestic violence domains the precision improved by 33.22% and 16.36%, respectively. In this experiment, we considered features that annotate data with a precision below 75% (<75%) as imprecise.

Discussion on rule performance: As presented in Figure 4.5, the initial rules added to the curation system were imprecise and annotated a large number of irrelevant items. For example, the initial precision of the rules in the budget and mental health domains was below 55%. However, after collecting a set of feedback, the algorithm identifies the need to restrict the rules by adding a new set of features. Although restricting the rules could improve their precision, it limited the rules to annotating only those items that contain the features selected by the algorithm during the adaptation. As presented in Figures 4.6, 4.7, and 4.8, after adaptation the rules annotate fewer items compared to their initial states. For example, in the budget domain the number of annotated items was reduced by 15240 after two rounds of adaptation.
We can see similar trends in the other curation domains as well. The promising fact, however, is that a Bayesian multi-armed-bandit algorithm can learn a better adaptation for rules by incrementally collecting more feedback over time. This can be seen in Figure 4.5: the algorithm could dramatically improve the rule precision. For example, in the budget domain the difference in precision between the adaptations that occurred at τ_2 and τ_5 is over 10%; for the mental health domain this difference is over 20%. Based on the obtained results, we conclude that a Bayesian multi-armed-bandit algorithm learns a better adaptation for rules over time by collecting more feedback, and that if we can adapt rules with more robust features we can improve both precision and recall. This fact can be confirmed by comparing the precision and the number of annotated items between Figure 4.5 and Figures 4.6, 4.7, and 4.8: by adapting rules with a higher number of features (K = 20), the algorithm could annotate a larger number of items while maintaining the rules' precision. In the next section, we discuss how feature-based adaptation augments the performance of rules to annotate a larger number of items.

Feature-Based Adaptation. As discussed above, adaptation limits the ability of rules to annotate items. To alleviate this problem, we noted that adapting rules with a higher number of features could boost rules to annotate a larger number of items. However, increasing the number of features is negatively correlated with precision (as the number of features used in adaptation increases, the precision of the rules drops). Thus, to diminish the impact of an adaptation while maintaining the performance of a rule in annotating items, we proposed feature-based adaptation. In feature-based adaptation, we hypothesize that adapting a rule with a group of semantically related features has an impact on the rule's precision similar to adapting it with a single feature. In this section, we therefore study the impact of feature-based adaptation on rule performance. The goal is to study whether adapting a rule with a group of related features can enhance the rule's ability to annotate a larger number of items while maintaining its precision. To test our hypothesis, we conducted two sets of experiments. First, we discuss the precision of rules adapted through our approach; then, we compare the number of annotated items with rules adapted using syntactic-level features.

Table 4.1 shows the precision of rules adapted using the feature-based technique. The obtained results confirm that feature-based adaptation can dramatically increase the performance of a rule in annotating items. At the same time, the Bayesian multi-armed-bandit algorithm could learn the performance of the features and improve the rules' precision over time. This improvement is 10.11% for the domestic violence domain, and 25.87% and 28.47% for the mental health and budget domains, respectively. Although the learning rate of the algorithm using the feature-based approach is slower than with syntactic-level features, the algorithm could still improve the rules' precision in all domains.

In addition, Figures 4.6, 4.7, and 4.8 compare the number of items annotated by rules adapted using the syntactic and feature-based approaches. As presented, adapting rules with different features could boost the rules to annotate a larger number of items. For example, in the domestic violence domain, the rule could annotate over 12000 items.
In the mental health and budget domains, rules could annotate 13574 and 8304 items, respectively. These numbers are much higher than those obtained by adapting rules using syntactic-level features. For example, in the budget domain, the rule (k = 10) could annotate only 2137 items. Annotating data using syntactic-level features in the mental health and domestic violence domains shows a similar trend: the rules could only annotate 5198 and 4127 items, respectively.

Discussion on feature-based adaptation: An advantage of feature-based adaptation is that it allows users to better investigate their information needs when exploring topics that contain a large number of topical subspaces. Suppose a user intends to curate data relevant to 'mental health'. There exists a large number of keywords, e.g., health, disorder, and service, that are relevant to mental health but may not receive enough feedback to be considered for adapting the rule. Using feature-based adaptation, we group all keywords that are associated with a topic; thus the rule can easily curate a varied and comprehensive list of items relevant to the user's information need.

Comparison with existing studies. In this section, we compare the performance of our approach with the state-of-the-art technique on rule adaptation. We implemented the system proposed by GC et al. [118] (see Section 4.6.2) and conducted a controlled experiment.

We asked three Ph.D. students in a lab who were familiar with the concepts of learning algorithms, e.g., true positive rate and false positive rate, to participate in the experiment. We explained to them how the system works and how they can use it to adapt rules, and we allowed them to work with the system to gain the understanding required for adapting rules. To better compare the performance of our approach with the interactive system, we asked the participants to adapt rules in all domains. Then, in each curation domain, we selected the rule with the highest obtained precision and compared it with the rules adapted by our approach. In this experiment, we asked participants to adapt rules with 20 features (k = 20). Table 4.2 shows the results. As presented, our approach has comparable performance to interactive systems. For example, in the budget domain participants could adapt the rule to 90.86% precision, which is 3.08% higher than our proposed approach. In the domestic violence and mental health domains, participants could adapt rules to precisions of 92.59% and 86.32%, respectively. Besides, Figure 4.9 shows the number of items annotated by the rules adapted by participants; the figure shows the most precise rules in each domain. Although our approach and the participants showed similar performance when using syntactic-level features for adapting rules, using the proposed feature-based technique our approach could annotate a significantly larger number of items. For example, the number of annotated items in the budget domain is higher by 3233 items; the differences in the mental health and domestic violence domains are 5320 and 4305, respectively. The overall cost we paid for verifying items was $35.10 in the mental health domain, $29.92 in the budget domain, and $21.22 in the domestic violence domain.

By comparing the precision and the number of items annotated by our approach and by the participants, we believe that our adaptive approach outperforms current rule adaptation techniques.
In particular, considering the prohibitive cost of analysts for adapting rules, our proposed approach can help companies and data enthusiasts that need to annotate data in unstructured and constantly changing environments on a limited budget.

4.7 Conclusion and Future Work

In this chapter, we proposed an approach for adapting data annotation rules in unstructured and changing environments. Our approach offloads analysts from adapting rules and autonomically modifies rules based on changes in the curation environment. We utilize a Bayesian multi-armed-bandit algorithm, an online learning algorithm that learns the optimal modifications for rules using the feedback gathered from the curation environment. In addition, our approach adapts rules at the conceptual level, which boosts rules to annotate a larger number of items compared to current methods that rely on syntactic similarity, e.g., keywords and regular expressions, for adapting rules. We evaluated the performance of our approach on three months of Twitter data in three different curation domains: domestic violence, mental health, and budget. The evaluation results showed that our approach has comparable performance to systems relying on analysts for adapting rules.

There are several exciting directions for future work. In this chapter, we introduced a summarization technique, which boosts rules to annotate data at the conceptual level. As part of future work, we plan to identify more features for adapting rules. Specifically, we will focus on adapting rules with three other types of features: entities, word2vec, and relations. We believe that adapting rules with different kinds of conceptual features will not only enhance the performance of rules in annotating a more significant number of items, but will also allow rules to better capture the salient aspects of the data.

In the next chapter, we further expand our summarization technique and explain how it augments users' comprehension of curation environments. Specifically, we discuss how named entities and deep learning can be coupled with the summarization technique to enable users to better formulate their preferences while seeking a varied and comprehensive list of items.

Figure 4.5: The performance of a Bayesian multi-armed-bandit algorithm in adapting rules. As presented, the algorithm could improve the rules' precision in all domains. (Precision per annotation round, recovered from the chart data: Budget k=10: 54.56, 84.54, 85.56, 89.72, 91.21; Budget k=20: 54.56, 77.30, 78.97, 87.82, 87.78; Mental Health k=10: 54.74, 66.60, 85.54, 86.74, 87.21; Mental Health k=20: 54.74, 64.72, 84.17, 85.18, 85.55; Domestic Violence k=10: 74.32, 88.89, 92.20, 93.52, 92.52; Domestic Violence k=20: 74.32, 88.54, 89.32, 90.71, 90.68.)

Figure 4.6: Comparison between the number of items annotated using conceptual and syntactic level features (Budget domain).

Figure 4.7: Comparison between the number of items annotated using conceptual and syntactic level features (Mental Health domain).
Figure 4.8: Comparison between the number of items annotated using conceptual and syntactic level features (Domestic Violence Domain). (Figures 4.6-4.8 each plot the number of curated items over five annotation rounds, comparing feature summarization with k = 20 against keyword-level rules with k = 10 and k = 20.)

Figure 4.9: Number of items annotated with rules adapted by participants after five rounds of annotation in three different curation domains: Budget, Mental Health, and Domestic Violence.
Chapter 5

Enhancing Users' Comprehension of the Curation Environment

In this chapter, we present a technique for augmenting the user's understanding and sensemaking of a curation environment. In a large curation environment, a user often conducts exploratory search to identify and extract information relevant to her topic of interest. Often, however, a user needs to iteratively investigate the curation environment to formulate her preferences for Information Retrieval (IR) systems. In recent years, several visualization techniques have been proposed to help a user better formulate her preferences. However, using current techniques, a user needs to explicitly specify her preferences for IR systems in the form of keywords or phrases. To address this problem, we present ConceptMap, a system that provides a conceptual summary of the curation environment and allows a user to specify her preferences implicitly as a set of concepts. ConceptMap provides a 2D Radial Map of concepts within the information space and allows a user to rank items relevant to her preferences through dragging and dropping.

We discuss the problem of comprehending the curation environment in Section 5.1. In Section 5.2, we discuss related work on formulating user preferences and comprehending the curation environment. Section 5.3 describes the interface of ConceptMap. In Section 5.4, we discuss our proposed approach for generating a conceptual summary of the information space. Then, in Section 5.5, we describe experiments and usage scenarios on retrieving citizens' opinions about issues in the 'Health Care System'. Section 5.6 details some of the challenges we encountered in implementing ConceptMap, and Section 5.7 summarizes the implications of this work. The content of this chapter is derived from the following paper(s):

• A Tabebordbar, A Beheshti, and B Benatallah, ConceptMap: A conceptual approach for formulating user preferences in large information spaces. In International Conference on Web Information Systems Engineering, Springer, 2019, pp. 779-794 (ERA Rank A).

• A Beheshti, A Tabebordbar, and B Benatallah, iStory: Intelligent Storytelling with Social Data. In Companion Proceedings of the Web Conference 2020, pp. 253-256 (ERA Rank A*).
Figure 5.1: The ConceptMap interface is made up of four components: (a) the main data view, which shows a summary of concepts within the information space using a Radial Map; a user can choose summaries from the Action Box. The Control Panel (b) allows a user to observe and modify attributes, e.g., keywords and Named Entities, associated with concepts. The Query Box (c) provides two interfaces (Concept and Concept + Rule) for a user to formulate her preferences. The Documents List (d) ranks documents based on their relevancy to concepts [238].

5.1 Introduction

Information Retrieval (IR) systems have been extensively used to extract and locate users' information. These systems retrieve a ranked list of items ordered by their relevancy and allow a user to skim and pick items from the list. Exploratory search is part of an information exploration process in which a user is unsure about the way to retrieve her information needs, and often becomes familiar with the information space over time. Usually, in an exploratory search, a user relies on text-based queries for formulating her preferences. Text queries are made up of a few keywords or phrases [139, 140] and allow a user to explore and retrieve the information. However, formulating queries has proven to be painstakingly difficult, as a user needs to read and synthesize a large amount of information iteratively. This problem is exacerbated because humans have a limited memory capacity for absorbing information, which can lead to information overload or attention-management problems [199]. In past years, several studies [98, 118, 235] have been conducted on formulating a user's preferences through rules, e.g., Boolean operators. However, these studies have concluded that comprehension of the information space is needed for a user to formulate her preferences accurately. In recent years, several pioneering solutions [66, 153, 201, 209, 226, 234] have been proposed to couple Human-Computer Interaction (HCI) techniques with IR systems to aid users in developing insight and absorbing greater amounts of information. These solutions fuse traditional text-based queries with various visualization elements, such as bar graphs [121], tables [252], and relevance maps [195]. Although visual encoding lowers a user's cognitive load [97, 121], the user still needs to iteratively explore the information space to identify the relations between attributes (e.g., keywords, phrases, named entities) in documents to formulate her preferences. This process is challenging for several reasons:

• In many cases, an exploratory search scenario contains too many topical subspaces, making it difficult for a user to formulate her preferences in the form of keywords or phrases.

• Sensemaking of the information space is incomplete, as text queries only retrieve a small part of the information space and the rest remains invisible.

• Relying on text queries is time-consuming, as the user is not familiar with the information space and needs to comprehend a large amount of data.

For example, consider a user who intends to analyse citizens' opinions on social media platforms, e.g., Twitter and Facebook, to identify issues in 'Health Care Services' that need improvement. Currently, the user needs to read and scan the information space to identify the query terms that properly retrieve items relevant to a large number of topical subspaces, e.g., 'medical centres', 'aged care services', and 'mental health'.
Such a search scenario requires the user to spend a long period of time identifying the content-bearing terms associated with each subtopic. Alternatively, Carterette et al. [73] highlighted that users are more willing to express their preferences relatively, instead of precisely specifying the attributes associated with them. In this context, we follow a similar trend by generating a conceptual summary of the information space and helping a user formulate her preferences implicitly as a set of concepts.

In this chapter, we present ConceptMap, a system for lowering the user's cognitive load in ranking and exploring the information space. While previous systems allow a user to formulate her preferences explicitly, e.g., through keywords and phrases, and to observe changes in rankings to understand the data, we provide a different ranking and data presentation approach. Our work focuses on creating a conceptual summary of the information space to help a user understand the data and relate it to her preferences. Hence, we focus on boosting a user's cognitive skill in understanding the data and formulating that understanding to extract information relevant to her topic of interest. We do this by interacting with the user to explore her preferences in a 2D Radial Map. A user can refine her preferences by dragging and dropping concepts into a Query Box to update document rankings, representing the relevance of concepts and documents.

ConceptMap is built on two main technical achievements. The Knowledge Lake [35, 37] is a centralized repository containing several knowledge bases; it provides a contextualization layer for annotating attributes within the information space with a set of facts and information. Summarization [109] takes advantage of a deep learning skip-gram embedding network [180] to learn the associations between attributes and group similar ones. We discuss two usage scenarios for our technique: (1) illuminating how the conceptual summary lowers the user's cognitive load in formulating her preferences, and (2) how the conceptual summary and the insight developed from it can motivate a user to formulate her preferences through more advanced IR system features, such as rules, to retrieve relevant documents more precisely. Overall, this chapter's contributions include:

• We introduce ConceptMap, a system that automatically generates a conceptual summary of the information space and allows a user to formulate her preferences implicitly as a set of concepts, such as topic, category, and Named Entity.

• We study how a conceptual summary of data helps a user understand the information space and formulate her preferences through rules.

• We present two usage scenarios using Twitter data, which demonstrate how ConceptMap helps a user explore and retrieve a varied and comprehensive list of information across a large amount of data.

5.2 Related Work

In this section, we discuss prior work related to formulating user preferences (Section 5.2.1), comprehension and sensemaking of the information space (Section 5.2.2), and topic modeling approaches (Section 5.2.3).

5.2.1 Formulating User Preferences

The relevance judgement for IR systems has traditionally been made on a binary scale, where a document is considered relevant to a query or not [73]. Such judgement requires a user to precisely formulate her preferences to locate and retrieve the relevant documents.
Typically, for formulating preferences, a user needs to conduct exploratory search by iteratively investigating the information space to develop insight and create a mental structure of it [174]. Exploratory search is beyond the basic information-seeking task of looking for a few relevant documents. In an exploratory search, a user has no predetermined goal or understanding of the information space and learns to formulate her preferences by investigating and learning from the context over time [152]. This makes formulating user preferences challenging and time-consuming, especially in broad information spaces. Previous works have focused on augmenting users' comprehension of the information space with visual encoding to formulate their preferences more precisely [125, 128, 148, 225]. However, recently some solutions [73, 252] have shown that it is easier for a user to make a relative judgement of her preferences rather than explicitly specifying the attributes associated with them. For example, a user may formulate 'mental health' as a 'disorder', but be unable to precisely determine the attributes associated with it. In this context, we allow a user to formulate her preferences implicitly, as a set of abstract concepts, such as topic, category, and Named Entity. Our approach automatically identifies the relations among attributes within the information space, without requiring the user to specify the exact attributes associated with her preferences. In the next sections, we highlight approaches focused on augmenting users' comprehension of the information space.

5.2.2 Comprehension and Sensemaking of the Information Space

Sensemaking of the information space is defined as the processes and activities a user undertakes to frame the information space in an understandable schema [199]. Sensemaking has been identified as a quintessential task of information retrieval [174], especially when a user has varied information needs across a large amount of data [195]. During the past years, several solutions have been proposed to enhance user comprehension and sensemaking of the data. One category of these solutions focused on augmenting the ranked lists of search results with different visualization elements. For example, TileBars [128] represents the relevancy of ranked documents to query terms with shaded blocks. LineUp [121] used bar charts to visualize the ranking of multi-attribute data, while other approaches highlight ranked lists with stacked bars [97], metaphor-based layouts [188], and snippet-based layouts [120]. However, ranked lists can only support scenarios where a user has limited information needs and is seeking a few relevant documents. Another line of work has coupled visualization with HCI techniques to help a user gain a better understanding of the information space. Comprehension of the information space allows a user to discover the aboutness of data and develop a mental structure of it [67]. For example, Wall et al. [252] proposed a table layout to present a holistic view of the information space for multi-attribute ranking systems. Di-Sciascio et al. [96] boost user comprehension of the information space by contributing previous users' search terms. Peltonen et al. [195] provided a topical overview of the information space by interacting with the user to visualize the association between keywords on a relevance map. Di-Sciascio et al. [97] focused on transparent and controllable recommendation systems to enhance user understanding of data.
Figure 5.2: The Control Panel is made up of two components: (a) the Details View, which allows a user to examine attributes associated with a concept, and (b) the Evidence Box, which stores potential concepts relevant to a user's information needs [238].

However, these approaches focused on enhancing users' mental capacity to better identify the relations between attributes in documents. In a large information space, many of these relations remain invisible to the user, either due to the user's inability to identify them or due to visual clutter [103]. Instead, ConceptMap provides a conceptual summary of the data and relieves the user from iteratively investigating the information space to discover associations between attributes in documents.

5.2.3 Topic Modeling Techniques

In addition to interactive methods for augmenting user comprehension, we consider it appropriate to include approaches that provide a topical overview of the information space. Topic modelling is a generative approach, which aims at discovering groups of words that frequently co-occur in documents [61]. Latent Dirichlet Allocation (LDA) is the most common topic modelling algorithm and has been used extensively for providing a topical overview of the information space. For example, TIARA [165] is one of the early works on topic-based text summarization; it creates a visual summary of the information space by visualizing the result of the LDA algorithm with a stacked graph. Serendip [10] is a topic modelling system focused on structuring the exploration of information to support multi-level discovery. TopicNets [122] is an interactive topic modelling system that visualizes documents as a connected network graph. In addition, some techniques relied on hierarchical topic modelling to augment user comprehension of the information space. For example, TopicLens [149] combined a lens technique with tree-based topic modelling for exploring the data. PolyZone [142] proposed an interactive technique that progressively builds a hierarchy of focus regions to allow users to explore and magnify the topics of their interest. Other researchers coupled topic modeling with different visual encodings, such as panning [198], overview+detail [137], or scrolling [141]. Although topic modelling can provide an overview of the information space, these algorithms cannot identify the semantic relations between attributes in the information space [134], which are required in exploratory search scenarios [97]. Moreover, topic modelling algorithms have high performance requirements and are too computationally expensive to rely on for dynamic and real-time search scenarios.

Figure 5.3: The Query Box provides two interfaces for formulating a user's preferences: (a) the Concept Only interface, which allows a user to formulate her preferences as a set of concepts, and (b) the Concept + Rule interface, which aids a user in formulating her preferences through rules [238].

On the other hand, ConceptMap discovers the semantic relations between attributes and groups them based on their similarity. This differs from approaches that create topics based on word co-occurrence.
Let us return to the example introduced in the previous section: to retrieve citizens' opinions about issues in 'Health Care Services', ConceptMap may help a user by providing a conceptual summary of the people who are in charge of health care services, the organizations that provide health care services, the locations associated with health care services, etc. Then, the user can rank Tweets based on the provided summaries and retrieve items relevant to her information needs. Thus, the user only needs to focus on her preferences rather than investigating the information space to identify the relations between attributes in documents. This speeds up the exploration of the information space in cases where a user has varied information needs that span too many topical subspaces and are difficult to formulate for IR systems.

5.3 ConceptMap

In this section, we describe ConceptMap, a system that provides a conceptual summary of the information space. ConceptMap discovers the semantic relations between attributes and enables a user to formulate her preferences as a set of abstract concepts. In the following, we discuss the components of the ConceptMap interface.

5.3.1 Design Components

The ConceptMap interface is made up of four components: the Radial Map (Figure 5.1a), the Control Panel (Figure 5.1b), the Query Box (Figure 5.1c), and the Documents List (Figure 5.1d). The Radial Map is the central component of ConceptMap. It shows concepts discovered from the information space. The Control Panel provides the controllability for a user to create her topic of interest. It contains two tabs: the Details View, which shows attributes associated with a concept and lets a user manipulate the concept by adding and removing attributes, and the Evidence Box, which allows a user to develop a mental structure of the data by gathering concepts relevant to her topic of interest. The third component of ConceptMap is the Query Box, which enables a user to examine concept-document relevancy. The Query Box provides two interfaces: the Concept Only interface, which lets a user formulate her preferences implicitly as a set of concepts, and the Concept + Rule interface, which allows a user to formulate her preferences through Boolean (AND, OR) operators. The last component of ConceptMap is the Documents List, which shows a ranked list of documents based on their relevancy to the user-selected concepts. In the following, we explain each component in detail.

1. Radial Map

The Radial Map is the main data view in ConceptMap and shows a summary of the most frequent concepts within the information space (Figure 5.1a). A user can select her preferred summaries by interacting with the Action Box. It has six toggles for visual encoding of the Radial Map: Persons, Organizations, Locations, Categories, Topics, and Keywords. Each toggle represents a specific summary and colors the summary if it is selected by a user. A user can observe concepts associated with summaries through the Radial Map by pressing the Summarize button. The coloring of concepts within the Radial Map corresponds to the Action Box: concepts associated with a summary mirror its color. For example, if ConceptMap colors the Topics summary red, the concepts associated with it within the Radial Map are also displayed in red. By default, ConceptMap divides the Radial Map into 50 wedges, where each wedge represents a concept. Each wedge is augmented with a grid line showing the relevancy of the concept to the information space.
The value of the grid line is between 0 and 1, where zero represents the least relevancy and one the highest. Augmenting wedges with grid lines enables a user to grasp, at a glance, an overview of the information space along with the relevancy of its concepts as a whole. The following explains the types of summaries ConceptMap displays to a user.

• Location: Provides a summary of places within the information space based on their geographical distances. For example, the location summary may represent a concept such as Suburbs in Sydney, Australia by grouping suburbs located within it, e.g., Five Dock, Canada Bay, and Kensington.

• Person: Identifies person names within the information space and groups people based on their title. For example, the person summary may create a concept like 'Health Ministers' by grouping persons such as 'Greg Hunt' (Health Minister of Australia) and 'Brad Hazzard' (Minister for Health and Medical Research in the NSW state of Australia).

• Organization: Identifies companies and organizations within the information space and groups them based on the services they provide. For example, the organization summary may place financial companies into a group and consider them as a concept.

• Topic: Provides a topical summary of the information space based on the keywords' semantic relationships. It examines the hypernym relationships of keywords and groups them based on their similarity. For example, this summary may extract the keywords pigeon, crow, eagle, and seagull from the information space and group them as bird.

• Category: Categorizes keywords within the information space into 200 pre-validated topics [109]. This summary computes the vector similarity of keywords and categories and assigns a keyword to the category that yields the highest similarity score.

• Keyword: Presents the most frequent keywords within the information space and lets a user manually examine the relations between keywords to create her topic of interest.

Providing several summaries of the information space not only enables a user to examine her preferences from different perspectives, but also enhances the user's comprehension and sensemaking of the data. For example, if ConceptMap only provided a topical overview of the information space, the user's comprehension of the information space might remain incomplete, as the user would not be able to examine the relations between attributes in documents from other perspectives. Our studies showed that providing different summaries of the information space boosts the user's understanding to better formulate her preferences, especially when seeking varied information across a large amount of data.

2. Control Panel

The second component of ConceptMap is the Control Panel, which allows a user to modify concepts based on her preferences. The Control Panel is made up of two components: the Details View (Figure 5.2a) and the Evidence Box (Figure 5.2b). We now explain each component in detail.

Details View: The Details View provides a detailed representation of concepts to a user. It shows attributes, e.g., keywords and named entities, associated with a concept and enables a user to modify the concept based on her preferences. Attributes within the Details View are colored to represent their relevancy to a concept. The opacity of the color shows the measure of relevancy between a concept and its attributes: a darker color means higher relevancy, while a lighter color shows lower relevancy.
A detailed description of attributes can be seen in a tooltip by hovering over them. The tooltip provides a small textual description and lets a user better judge the relevancy of attributes to the concept (Figure 5.2a). In cases where a user identifies an attribute as irrelevant, the user may remove the attribute by pressing the (×) button located at its top-right side. A user can organize the potential concepts relevant to her preferences into the Evidence Box by pressing the Add button, located below the Control Panel.

Evidence Box: The Evidence Box acts as a central repository and is designed to aid a user in gathering potential concepts relevant to her preferences. Collecting concepts together in one place allows a user to create a mental structure of the information space. This is particularly important in large information spaces, as humans have a limited memory capacity for absorbing information.

Figure 5.4: Overview of the proposed summarization technique: (1) extract the potential attributes from the information space, (2) annotate the attributes using the Knowledge Lake, and (3) perform analogous reasoning by mapping attributes onto a vector space [238].

The concepts within the Evidence Box follow the same coloring scheme applied to the Radial Map (Figure 5.2b), making it easier for a user to identify the type of summaries stored in the Evidence Box. A slider is placed horizontally below each concept to visually encode the concept's weight. Initially, the slider shows a pre-computed weight for each concept, which is the average TF-IDF score of the attributes associated with it. A user may change the weight of a concept by moving the slider indicator to the left or right. Dragging the slider indicator to the left decreases the importance of a concept, while moving the indicator to the right increases it.
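As a concrete illustration of the pre-computed weight, the following minimal Python sketch (using scikit-learn; the corpus and the concept's attribute set are illustrative, not the actual implementation) averages the TF-IDF scores of a concept's attributes over the corpus:

# Minimal sketch: pre-computed concept weight = average TF-IDF score
# of the attributes (keywords) associated with the concept.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [  # illustrative corpus
    "budget deficit and health fund cuts",
    "mental health services need more funding",
]
concept = {"health", "fund"}  # attributes grouped under one concept

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(documents)   # documents x vocabulary matrix
vocab = vectorizer.vocabulary_                # term -> column index

# Average each attribute's mean TF-IDF score across the corpus.
scores = [tfidf[:, vocab[t]].mean() for t in concept if t in vocab]
weight = sum(scores) / len(scores)
print(f"initial slider weight: {weight:.3f}")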
3. Query Box

In this section, we explain the Query Box, a component for examining document-concept relevancy. A user can interact with the Query Box through two interfaces: Concept Only (Figure 5.3a) and Concept + Rule (Figure 5.3b). The Concept Only interface allows a user to specify her preferences implicitly by dragging and dropping concepts from the Evidence Box. A user can drag and drop several types of concepts, e.g., Topic, Category, Person, and Organization, into the Query Box, allowing her to rank documents from different perspectives. Concepts within the Query Box are colored according to their corresponding summary type, helping a user identify the type of summary used in ranking documents. A user can examine the relevancy between documents and concepts by pressing the Rank button, whereupon ConceptMap arranges the documents accordingly in the Documents List.

The second interface, Concept + Rule, aids a user in formulating her preferences through rules. Rule-based techniques have been coupled with IR systems for an extended period of time. Often, however, users make mistakes when using rules to formulate their preferences [98]. The goal of this interface is to study whether providing a conceptual summary of the information space augments users' ability to understand and utilize rules more effectively. The Concept + Rule interface provides four operators for formulating a user's preferences through rules: '[', 'AND', 'OR', and ']'. '[' indicates the start of a rule clause, and ']' indicates the end of a rule clause. The 'AND' operator implies that a document must contain a specified concept to be ranked by ConceptMap. The 'OR' operator implies that at least one of the user-specified concepts must appear in a document for it to be ranked by ConceptMap. For example, consider the following rule:

Q1 = [Hospital AND Health] OR [Health Care AND Budget]

Q1 ranks only those documents that contain both the 'Hospital' and 'Health' concepts, or the 'Health Care' and 'Budget' concepts. Next, we explain how ConceptMap formulates a user's selected concepts for IR systems to rank documents.

As each concept represents a set of attributes rather than a single keyword, we need to transform concepts into text queries for ranking through IR systems. We do this by computing the Cartesian product of the concepts' attributes and using the resulting set for ranking documents. More formally, we denote a concept C = {c1, c2, c3, ..., cn} as a set of attributes, and a preference Q = {C1, C2, C3, ..., Cn} as the set of concepts that a user dropped into the Query Box. To score documents based on the user preference, we compute the Cartesian product of the attributes associated with the concepts: Q = C1 × C2 × C3 × ... × Cn = {(c1, c2, c3, ..., cn) | ci ∈ Ci}. The resulting set Q = {q1, q2, q3, ..., qn} contains the queries whose relevancy to documents ConceptMap computes. More concretely, suppose a user intends to analyse citizens' opinions about the government budget in the 'Health Care System'. Assume the user selects the 'Health Ministers' concept from the Persons summary, Health Ministers = {Greg Hunt, Brad Hazzard}, and the 'budget' concept from the Category summary, Budget = {fund, money, budget}. To generate the queries that represent the given concepts, ConceptMap computes the Cartesian product of the concepts as below:

Q = {(Greg Hunt, fund), (Greg Hunt, money), (Greg Hunt, budget), (Brad Hazzard, fund), (Brad Hazzard, money), (Brad Hazzard, budget)}

The following explains how ConceptMap computes the relevancy of queries and documents. ConceptMap arranges documents based on their relevancy to queries. To compute the relevancy of a query and a document, we implemented a Vector Space Model (VSM). The model transforms the document d and each query q ∈ Q into vectors and computes their relevancy using cosine similarity:

S(d, q) = ( Σc∈q tfidf(c, d) · WC ) / ( ||d|| · ||q|| )

where tfidf(c, d) is the tf-idf score for the attribute c in document d, WC is the weight a user specified for the concept C (see Section 5.3.1), and ||d|| and ||q|| are the Euclidean norms of the vectors d and q. ConceptMap arranges documents in descending order based on their cosine similarity scores to the queries.
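A minimal Python sketch of this ranking pipeline follows (illustrative only: the corpus is made up, the user-specified concept weights WC are omitted, and each document is scored by its best-matching query, a simplification of the scoring above):

# Minimal sketch of concept-based ranking (illustrative only).
from itertools import product
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Greg Hunt announces new health budget and hospital fund",
    "Traffic congestion worsens in Sydney suburbs",
]
# User preference: concepts dropped into the Query Box.
concepts = [["greg hunt", "brad hazzard"], ["fund", "money", "budget"]]

# Cartesian product of concept attributes -> set of text queries.
queries = [" ".join(attrs) for attrs in product(*concepts)]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(documents)
query_vecs = vectorizer.transform(queries)

# Score each document by its best cosine similarity over all queries,
# then arrange documents in descending order of relevancy.
sims = cosine_similarity(doc_vecs, query_vecs).max(axis=1)
for score, doc in sorted(zip(sims, documents), reverse=True):
    print(f"{score:.3f}  {doc}")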
4. Documents List

The Documents List (Figure 5.1d) provides a list of documents ranked based on the user's preferences. The Documents List relies on stacked bar charts for the visual encoding of documents. It shows a bar chart below each document, illuminating the relevancy of documents to user preferences. To aid a user in better comprehending document-concept relevancy, the Documents List applies the same coloring scheme as the Radial Map. The coloring allows a user to identify the contribution of each concept to a document and provides an explanation of why one document is ranked higher than another. The ranking of documents is updated as a user modifies her preferences by adding or removing concepts from the Query Box.

5.4 Solution Overview

In this section, we explain how ConceptMap generates a conceptual summary of the information space. To generate a summary, ConceptMap utilizes a Knowledge Lake [35, 37] and a deep learning skip-gram embedding network [180]. The Knowledge Lake is a central repository made up of several knowledge bases and provides a contextualization layer for transforming the raw data into contextualized knowledge. It annotates attributes within the information space with a set of facts and information. The deep learning network measures the conceptual commonality existing between attributes and groups attributes with similar characteristics. Overall, our approach is made up of three stages: Attribute Recognition, Knowledge Lake, and Summarization. The following explains each stage in detail.

5.4.1 Attributes Recognition

The initial step in generating the summary is the identification of the content-bearing attributes that exist within the information space. These attributes allow ConceptMap to discover the aboutness of the data. The current version of ConceptMap extracts two types of attributes: Keyword and Named Entity. To extract the keyword attribute type, ConceptMap performs a preprocessing task by removing stopwords (words, e.g., the, is, are, with little meaning that commonly occur within documents), keeping proper names capitalized, and filtering out ungrammatical and irrelevant tokens, e.g., URLs or emoji. Also, it applies the WordNet lemmatizer over the remaining tokens to increase the probability of matching between words with a common base, e.g., 'playing', 'playful', and 'plays' all reduce to the base form 'play'. The second attribute type is Named Entity. Named Entities are spans of words in a text which refer to real-world objects, such as person and company names, or gene and protein names. Examples of Named Entities include Barack Obama, New York City, Volkswagen Golf, or other proper names. The current version of ConceptMap extracts three types of named entities: persons, locations, and organizations. For recognizing named entities, we used our previous work [53], which provides a pipeline for various data curation tasks, including Named Entity Recognition, Information Linking, Similarity Computation, and Indexing. After extracting attributes, we annotate them through a Knowledge Lake to identify the concepts existing within the information space. In the next section, we explain how the Knowledge Lake contributes to generating the conceptual summary of the information space.
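Before turning to the Knowledge Lake, the following minimal Python sketch illustrates the keyword-preprocessing step described above (illustrative only; it relies on NLTK and assumes its stopword and WordNet data have been downloaded):

# Minimal sketch of the attribute-recognition preprocessing step
# (illustrative; assumes NLTK stopword/WordNet data is installed).
import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

STOP = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def extract_keywords(tweet: str) -> list[str]:
    text = re.sub(r"https?://\S+", " ", tweet)   # filter out URLs
    tokens = re.findall(r"[A-Za-z]+", text)      # drop emoji/punctuation
    kept = [t for t in tokens if t.lower() not in STOP]
    # Lemmatize lowercase words; keep proper names capitalized as-is.
    return [t if t[0].isupper() else lemmatizer.lemmatize(t, pos="v")
            for t in kept]

print(extract_keywords(
    "Malcolm Turnbull says his government is fixing the budget http://t.co/x"))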
5.4.2 Knowledge Lake

A Knowledge Lake enables a user to understand the attributes within the information space and provides a foundation for measuring the commonality between them. We utilize several readily available knowledge bases and taxonomies to create the Knowledge Lake: (1) GeoNames, a geographical database containing information on over 25 million geographical places around the world (http://geonames.org/); (2) Wikidata, a central storage for several Wikimedia datasets, including Wikipedia, Wikivoyage, and Wikisource (https://www.wikidata.org); (3) WordNet, a semantic lexicon that groups English words into sets of synonyms called synsets (https://wordnet.princeton.edu/citing-wordnet); (4) Empath, a deep learning skip-gram network that categorizes text over 200 built-in categories [109]; and (5) the Google Knowledge Graph, a knowledge base built on a graph database that provides information about real-world entities, including persons, locations, and businesses (https://developers.google.com/knowledge-graph/). These knowledge bases allow discovering the aboutness of data by annotating attributes with a more generalized or understandable form.

More formally, the Knowledge Lake K acts as a function K : c ↦ ℓ(c) that receives an attribute c as input and returns an annotation ℓ(c) describing the attribute. For example, consider the following Tweet: 'Malcolm Turnbull says his government will focus on growth rather than fix the budget deficit'. To understand the Tweet, ConceptMap annotates its attributes, e.g., 'Malcolm Turnbull', 'government', 'growth', 'budget', with the Knowledge Lake from different perspectives. It may annotate the Named Entity 'Malcolm Turnbull' as the 'former prime minister of Australia', the keyword 'budget' as a topic for 'fund', etc. ConceptMap applies the annotation to all attributes within the information space. In the following section, we explain how we employ a deep learning neural network to measure the commonality between attributes to generate a summary of the information space.

5.4.3 Summarization

In this section, we explain how ConceptMap generates a conceptual summary of the information space. For example, we explain how it identifies that two persons, e.g., 'Greg Hunt' (Health Minister of Australia) and 'Brad Hazzard' (New South Wales Minister for Health and Minister for Medical Research), can be similar and form a concept.

As discussed, ConceptMap annotates the attributes within the information space through the Knowledge Lake and uses these annotations to generate the summaries. (ConceptMap generates the topic and category summaries from the keyword attribute type, and the person, organization, and location summaries from the named entities.) For generating the Topic summary, ConceptMap groups attributes based on their hypernym relations. For example, it groups attributes such as {doctor, physician, dentist} as 'Medical_Practitioner', and attributes like {health, wellness} as 'wellbeing'.
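A minimal Python sketch of this hypernym-based grouping, using NLTK's WordNet interface, is shown below (an approximation of the described step; the exact groupings depend on the WordNet version, and the label selection is simplified):

# Minimal sketch: group keywords by a shared WordNet hypernym
# (an approximation of the Topic-summary grouping described above).
from collections import defaultdict
from nltk.corpus import wordnet as wn

keywords = ["doctor", "physician", "dentist", "health", "wellness"]

groups = defaultdict(list)
for word in keywords:
    synsets = wn.synsets(word, pos=wn.NOUN)
    if not synsets:
        continue
    hypernyms = synsets[0].hypernyms()
    # Label the group with the first hypernym, falling back to the synset.
    label = hypernyms[0].name() if hypernyms else synsets[0].name()
    groups[label].append(word)

for label, members in groups.items():
    print(label, "->", members)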
To generate the Person, Organization, and Location summaries, Con- ceptMap takes advantage of a deep learning skip-gram network [180] to pre- dict the semantic similarity between attributes within the information space. For example, the network may learn that the word ‘health’ may predict ‘med- ical’, but not of ‘happiness’. By training the skip-gram network, it learns a representation of words within the information space, which known as neural embeddings. The neu- ral embeddings construct a vector space model and allow to measure the similarity between attributes in an unsupervised fashion. We used word2vec neural embeddings model 11 to map attributes onto a vector space. For attributes annotated with the Knowledge Lake, ConceptMap encodes attributes as vectors by querying the vector space model trained on the word2vec. Then, it performs ‘analogous reasoning’ 12 [109] by conduct- ing the vector arithmetic on generated attributes vectors, e.g., the vector arithmetic for words ‘Women + King - Man’ generates a vector similar to ‘Queen’. The following explains how ConceptMap performs analogous rea- soning to measure the similarity between attributes. For each attribute c, ConceptMap tokenizes its annotation (cid:96)(c) into a set of words (cid:96)(c) = {(cid:96)(cid:48) 1(c), (cid:96)(cid:48) 2(c), (cid:96)(cid:48) 3(c), ..., (cid:96)(cid:48) n(c)}. Then, for each (cid:96)(cid:48)(c) ∈ (cid:96)(c), 11https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit 12A form of comparison to highlight respects in which two attributes can be similar 154 ConceptMap queries the vector space model and extracts the vector V ((cid:96)(cid:48)(c)) corresponds to it. It performs analogous reasoning by computing the vector sum of all V ((cid:96)(cid:48)(c)). The resulting vector V (c) represents the attribute c in vector space. V (c) = (cid:80)n i=1 (cid:96)(cid:48) i(c) ConceptMap groups similar attributes based on their vectors similarity using the cosine measure. For example to compute the vector similarity of two attributes, the cosine similarity computed as: cos(θ) = V (c1) . V (c2) ||V (c1)||. ||V (c2)|| Where V (c1) and V (c2) representing vectors of attributes c1 and c2, and ||V (c1)|| and V (c2) are their lengths. More clearly, consider the attribute c = {Greg Hunt}, which annotated with the Knowledge Lake as the (cid:96)(c) = {Health M inister of Australia}. To represent this attribute as a vector ConceptMap tokenizes (cid:96)(Greg Hunt) = {Health, M inister, Australia} and query a VSM to extract their corresponding vectors (cid:96)(Greg Hunt) = {V (Health), V (M inister), V (Australia)}. Then, it computes the vector sum of all attributes V (Greg Hunt) = V (Health) + V (M inister) + V (Australia). The resulting vector V (Greg Hunt), represents the attribute c = {Greg Hunt} in vector space, and allows ConceptMap to compute its similarity with other attributes vector using the cosine similarity. Con- ceptMap groups attributes that their cosine similarity is above a pre-defined threshold 13. Figure 5.4 illustrates an overview of our proposed summariza- tion technique. 13Currently, we consider attributes over 0.7 % cosine measure as similar. 155 5.5 Experiments In this section, we discuss the structure of the ConceptMap, as well as the dataset used in our experiments. Also, we discuss two usage scenarios to show how the ConceptMap helps a user to formulate her preferences. 
5.5 Experiments

In this section, we discuss the structure of ConceptMap, as well as the dataset used in our experiments. Also, we discuss two usage scenarios to show how ConceptMap helps a user formulate her preferences.

5.5.1 ConceptMap Architecture and Datasets

The core components of the techniques described in the previous sections are implemented in Python and JavaScript. We gathered over 300K Tweets (Australian region) relevant to health care and budget to create the input dataset. We used Elasticsearch (https://www.elastic.co/start) as the search engine to index and retrieve Tweets. In our preliminary implementation, we only indexed Tweets and performed the annotation and summarization simultaneously whenever a user interacted with ConceptMap. However, it turned out that annotation is the most time-consuming task, which hindered ConceptMap from responding effectively to user interactions. We improved ConceptMap's response time by separating the annotation and summarization tasks. First, for each summary, we indexed the attributes and their annotations along with the Tweet text. Then, whenever a user interacts with ConceptMap to fetch a summary of the information space, ConceptMap only computes the similarity between attributes (see Section 5.4.3) to group similar concepts. In this manner, it avoids annotating attributes constantly and can generate summaries promptly.

5.5.2 Experiment Settings

In this section, we study the performance of ConceptMap in formulating users' preferences with respect to a traditional keyword-based UI. The study followed a repeated-measures ANOVA design [119] with two independent variables: tool (ConceptMap, which consists of the Concept-Only and Concept+Rule interfaces, and a traditional Keyword-Based UI) and items. The Keyword-Based UI allows users to investigate the information space by entering their keywords and observing the resulting set, ordered by relevancy, in the Documents List. Also, to aid users in better identifying the relationships among keywords, we visualize the most frequent keywords that co-occur with users' selected keywords. To counterbalance the experiment, we conducted the study on two different topics relevant to social issues, Health Care and Budget, where topic was treated as a random variable.

The study simulates an exploratory search scenario where users need to write queries to retrieve Tweets relevant to the given topics. We divided the users' task into two subtasks: a focused exploratory search scenario and a broad exploratory search scenario. The focused search scenario requires users to investigate the information space to retrieve items for a limited number of topical subspaces, e.g., retrieve a list of Tweets containing information relevant to medical centres within Australia. The broad search scenario simulates cases where users need to retrieve items for a larger number of topical subspaces, e.g., retrieve citizens' opinions on Twitter about the people involved in the health care system, and Tweets about the quality of the services provided by health care centres.

We invited five post-graduate students from a research lab to take part in the experiments. None of the participants was knowledgeable about the topics selected for the study. For evaluating the performance of the tools, participants selected a topic and performed the search scenarios.
For each task, participants filled a 7-point likert scale NASA TLX questionnaire. The questionnaire is a multidimensional assessment tool that rates the perceived workload in order to assess a task or system. We limited the duration of focused search scenario to five minutes and the broad search scenario to 10 minutes. During the experiment, we reminded participants when their allotted time was almost over, but we didn’t force them to stop using the tools. Results: Workload and Performance Analysis: A repeated mea- sure ANOVA revealed the impact of ConceptMap in lowering participants overall workload F (1, 5) = 58.803, p = 0.05. This tendency can be ob- served in detail in Figure 5.5, which shows participants were more relaxed while interacting with ConceptMap for formulating their preferences. The results showed ConceptMap could significantly lowers participants temporal demand F (1, 5) = 162.00, p = 0.001. We also observed a similar impact on improving participants efforts F (1, 5) = 83.308, p = 0.003, and performance 158 01234567Mental.DemandPhysical.DemandTemporal.DemandPerformanceEffortFrustrationScore (lower is better)ConceptMapKeyword-Based UI F (1, 5) = 43.560, p = 0.007. Based on the obtained results, we concluded that providing several summaries of information space could boost partici- pants comprehension and sensemaking of data: this impact is more significant on improving users performance and time. We also analyzed the effectiveness of ConceptMap in aiding participants in formulating their preferences through rules. A repeated measure ANOVA showed that ConceptMap could lower participants overall workload F (1, 5) = 12.60, p = 0.05. The results revealed that ConceptMap reduces participants effort in crafting rules F (1, 5) = 18.00, p = 0.005. We also observed a similar impact on participants performance F (1, 5) = 22.04, p = 0.003. In addition to previous experiments, we analyzed the performance of tools by aggregating the top 20 items collected by participants and verifying their relevancy. Thus, we created two datasets from the aggregated items. The first dataset represents items collected by participants through interacting with the ConceptMap, and the second dataset contains items collected through in- teracting with the Keyword-Based UI. Then, we verified whether a retrieved item is relevant to the topics assigned by participants or not. The results showed that participants could retrieve items more precisely using the Con- ceptMap compared to the Keyword-Based UI. Figure 5.6a shows the average precisions obtained using the tools. Based on the observed results, allow- ing users to examine the relevancy of data from different perspectives has a positive impact on retrieving items. Results: Completion Time and Usability Analysis: In the second study, we analysed the completion time and the usability aspect of Con- ceptMap. We assigned a task to participants and let them accomplish the task using the tools in more natural settings, e.g., without times-up. The task was a broad exploratory search scenario for retrieving Tweets relevant to 159 Figure 5.6: (a) The average precision of participants obtained using Con- ceptMap and Keyword-Based UI, and (b) The time participants spent for retrieving their information. issues and people involved in budget planning. Then, we asked participants to fill a standard Software Usability Scale (SUS) questionnaire [64]. 
SUS provides subjective assessments of software usability: a statement is made, and respondents indicate their degree of agreement or disagreement on a five-point scale. Averaged over all participants and questions, the mean score amounted to 87.5 out of 100, which places ConceptMap in the 90-95 percentile range of the curved grading scale interpretation of SUS scores [219]. We also calculated the time participants spent accomplishing their task. We observed that, using ConceptMap, participants could accomplish their task in a shorter time compared to the Keyword-Based UI (Figure 5.6b). The results confirm the impact of ConceptMap on lowering participants' temporal demand.

At the end of the study, we asked participants to share their impressions of the strengths and weaknesses of ConceptMap. All participants agreed that ConceptMap could enhance users' comprehension and sensemaking of the information space. One of the participants pointed to the Evidence Box, which allowed her to store potentially relevant concepts in one place and shortened the time needed to examine the relevancy of concepts and documents. Another participant mentioned that providing several summaries of the information space could help him formulate his preferences from different perspectives. Another participant mentioned the potential of ConceptMap to support users in exploratory search scenarios where there is no well-defined goal.

5.6 Discussions

In this section, we discuss our goals for designing the ConceptMap interface, and its limitations.

5.6.1 ConceptMap Interface

To design the ConceptMap interface, we focused on providing the controllability and transparency a user needs to effectively formulate her preferences. The ConceptMap interface provides controllability over the information space by allowing a user to combine concepts from different perspectives. ConceptMap also lets a user modify concepts based on her preferences by removing attributes associated with them. In addition, in cases where a user cannot formulate her preferences using the provided summaries, ConceptMap enables the user to manually create her topic of interest using the Keyword summary. ConceptMap also provides transparency by relying on various visual encodings. It uses a consistent coloring scheme across all components to help a user understand the cause of retrieving an item. ConceptMap also provides, in a tooltip that can be seen by hovering the mouse over an attribute, an explanation of why the attribute is associated with a concept. The explanation facilitates the comparison of attributes grouped as a concept and allows a user to identify inconsistencies that may occur in summarizing the attributes.

5.6.2 Limitations and Future Works

Today, the ConceptMap Query Box provides two interfaces for capturing user preferences. However, the Concept + Rule interface supports the 'AND' and 'OR' operators only. To more effectively formulate user preferences through rules, ConceptMap needs to support other operators, like 'NOT'. We are planning to add this operator as part of future work. The second limitation is that, for each concept within the information space, ConceptMap labels the concept using the attribute with the highest frequency. For example, if ConceptMap generates a concept that represents hospitals within Sydney, it labels the concept with the hospital name that has the highest frequency within the information space.
However, in our experiments, we discovered that users prefer to see more descriptive names for the concepts visualized on the Radial Map.

5.7 Conclusion

In this chapter, we introduced ConceptMap, a system that automatically identifies the relations between attributes, e.g., keywords and named entities, within the information space. ConceptMap produces several summaries of the data, e.g., Topic, Category, Person, and Organization, and enables a user to formulate her preferences implicitly as a set of abstract concepts. To generate the summaries, ConceptMap relies on a Knowledge Lake and a deep learning skip-gram network, which groups attributes based on their conceptual similarity. Our results showed that providing a conceptual summary of the information space enables a user to better formulate her preferences, especially when seeking varied information in a large information space.

Chapter 6

Automating Basic Data Curation Tasks (Software Prototype)

In this chapter, we present a software prototype to assist analysts in curating raw data and deriving insight. We develop a set of APIs to automate different curation tasks and the creation of data curation pipelines. The APIs are available as open source on GitHub (https://github.com/unsw-cse-soc/Data-curation-API). Additionally, we have provided a set of REST services and a Web interface to support analysts in curating their data without writing code. The rest of this chapter is organized as follows: we introduce the problem of data curation in Section 6.1. In Section 6.2, we introduce the services provided by the curation APIs. Then, in Section 6.3, we discuss different usage scenarios and how the prototype assists analysts in deriving insight and extracting value. Finally, we conclude the chapter in Section 6.4. The content of this chapter is derived from the following paper(s):

• A Beheshti, A Tabebordbar, B Benatallah, and R Nouri, On automating basic data curation tasks. In Companion Proceedings of the 26th International Conference on World Wide Web Companion, 2017, pp. 165-169 (ERA Rank A*).

• A Beheshti, A Tabebordbar, and B Benatallah, iStory: Intelligent Storytelling with Social Data. In Companion Proceedings of the Web Conference 2020, pp. 253-256.
Then it is possible to link the extracted information to external knowledge graphs to enrich and annotate the raw data. Later, this information can be used to provide deeper interpretation of and better intelligence with the huge number of tweets in Twitter: every second, on average, around 6,000 tweets are tweeted on Twitter, which corresponds to over 350,000 tweets sent per minute, 500 million tweets per day and around 200 billion tweets per year.

In particular, the data curation process enables extracting knowledge and deriving insights from the vastly growing amounts of local, external and open data. This task is vital for recent data analytics initiatives, including improving government analytical services, personalizing advertisements in elections, and predicting intelligence activities [241].

In this chapter, we identify and implement a set of basic data curation APIs and make them available to researchers and developers as services to assist them in transforming their raw data into curated data. The curation services enable developers to easily add features such as: extracting keywords, parts of speech, and named entities such as Persons, Locations, Organizations, Companies, Products, Diseases, Drugs, etc.; providing synonyms and stems for extracted information items, leveraging lexical knowledge bases for the English language such as WordNet; linking extracted entities to external knowledge bases such as Google Knowledge Graph and Wikidata; discovering similarity among the extracted information items, such as calculating the similarity between string, number, date and time data; classifying, sorting and categorizing data into various types, forms or any other distinct class; and indexing structured and unstructured data in their data applications. These services can be accessed via a REST API, and the data is returned as a JSON file, an easy-to-parse structure that can be integrated into (data and machine learning) applications. The basic data curation APIs are available as an open-source project on GitHub. The technical details for the curation APIs can be found in a technical report [55].

The rest of the chapter is organized as follows. In Section 6.2, we present an overview of the curation services, and in Section 6.3, we further discuss the usage of our APIs by presenting a demonstration scenario.

6.2 Curation Services Overview

To assist users in extracting features, we proposed a set of curation APIs. The APIs are implemented as micro-services and provide services such as Extraction, Classification, Linking, and Indexing. The curation services use natural language processing technology and machine learning algorithms to transform the raw data, e.g., by extracting semantic metadata from content, such as information on people, places, and companies, and linking them to knowledge graphs such as Wikidata and Google KG using similarity techniques, or classifying the extracted entities using classification services. We provide curation services for performing content analysis on Internet-accessible web pages and posted HTML or text content. A technical report is also provided [50], which further guides analysts on how to utilize the services for their curation tasks.

Figure 6.1: Data Curation Services Screenshot - Named Entity Extraction [53].

In the following, we present an overview of the services.
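Before detailing the individual services, here is a hedged sketch of how such a REST extraction endpoint might be invoked; the endpoint URL and field names are hypothetical, as the concrete routes are documented in the technical report [55] and the GitHub repository:

```python
import requests

# Hypothetical endpoint; the real routes are listed in the
# technical report [55] and the GitHub repository.
ENDPOINT = "http://localhost:8080/api/curation/extract/named-entities"

payload = {"text": "Malcolm Turnbull announces new health funding "
                   "for Sydney hospitals"}

response = requests.post(ENDPOINT, json=payload, timeout=10)
response.raise_for_status()

# The services return JSON, an easy-to-parse structure that can be
# integrated into data and machine learning applications.
for entity in response.json().get("entities", []):
    print(entity)
```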
1. Extraction Services: The majority of the digital information produced globally is presented in the form of web pages, text documents, news articles, emails, and presentations expressed in natural language text. Collectively, such data is termed unstructured, as opposed to structured data that is normalized and stored in a database. The domain of information extraction is concerned with identifying information in unstructured documents. In most cases, this activity concerns processing human language texts utilizing Natural Language Processing (NLP). Accordingly, analysts may need a collection of natural language processing APIs to extract entities, Part of Speech (PoS) tags, keywords, synonyms, stems and more.

(a) Named Entity Recognition (NER) can be used to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, and percentages [49]. In particular, named entities carry important information about the text itself, and thus are targets for extraction. Accordingly, NER is a key part of information extraction systems: it supports the robust handling of proper names essential for many applications, enables pre-processing for different classification levels, and facilitates information filtering and linking. State-of-the-art NER systems for English produce near-human performance. Figure 6.1 illustrates a screenshot of the basic data curation services presenting an example of the named entity extraction task.

(b) Part-of-Speech (PoS): A part of speech is a category of words (or, more generally, of lexical items) with similar grammatical properties [143]. Words assigned to the same PoS display a similar behaviour in terms of syntax, the grammatical structure of sentences, and morphology. Commonly listed English PoS are noun, verb, adjective, adverb, pronoun, preposition, conjunction, interjection, and sometimes numeral, article or determiner.

(c) Keyword: In corpus linguistics, a keyword is a word which occurs in a text more often than we would expect by chance alone. Keywords are calculated by carrying out a statistical test which compares the word frequencies in a text against their expected frequencies derived from a much larger corpus, which acts as a reference for general language use. To assist analysts in filtering and indexing open data, it is important to extract keywords from unstructured data such as tweets' text.

(d) Synonym: A synonym is a word or phrase that means exactly or nearly the same as another word or phrase in the same language. An example of synonyms is the words begin, start, and commence. Words can be synonymous when meant in certain contexts, even if they are not synonymous within all contexts. For example, if we talk about a long time or an extended time, 'long' and 'extended' are synonymous within that context [173]. While analyzing open data, it is important to extract the synonyms of the keywords and consider them in the analysis steps. For example, sometimes two tweets can only be related if we include the synonyms of the keywords in the analysis, instead of focusing on exact keyword matches alone.

(e) Stem: A stem is a form to which affixes can be attached. To assist analysts in understanding and analyzing the textual content, it is important to extract the derived forms of the words in the text, as in the sketch below.
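A minimal sketch of such stem-based grouping, assuming NLTK's Porter stemmer (the Stem service's actual implementation may differ):

```python
from collections import defaultdict
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ["health", "healthy", "healthier", "healthful", "healthfully"]

# Group words that reduce to the same stem; words sharing a stem are
# treated as derived forms of one another. Coverage depends on the
# stemmer: some derivations may land in different groups.
groups = defaultdict(list)
for word in words:
    groups[stemmer.stem(word)].append(word)

for stem, forms in sorted(groups.items()):
    print(stem, "->", forms)
```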
For example, considering the keyword 'health', using the Stem service it is possible to identify derived forms such as healthy, healthier, healthiest, healthful, healthfully, and healthfulness, and more accurately identify the information items, e.g., tweets, that are related to health.

(f) Information Extraction from a URL: A Uniform Resource Locator (URL) is a reference to a Web resource that specifies its location on a computer network and a mechanism for retrieving it. Considering a tweet that contains a URL link, it is possible to extract various types of information, including: the Web page title, paragraphs, sentences, keywords, phrases, and named entities. For example, consider a tweet which contains URL links. It is possible to extract further information from the link content and use it to analyze the tweets.

2. Linking Services

(a) Similarity: Approximate data matching usually relies on the use of a similarity function, where a similarity function f(v1, v2) → s can be used to assign a score s to a pair of data values v1 and v2. These values are considered to be representing the same real-world object if s is greater than a given threshold t. Different similarity functions have been proposed for comparing [49]: strings (e.g., edit distance and its variations, Jaccard similarity, and TF-IDF based cosine functions), numeric values (e.g., Hamming distance and relative distance), images (e.g., Earth Mover's Distance) and more. Accordingly, analysts may need a collection of similarity metrics to measure the cosine similarity of two vectors in an inner product space, which compares the angle between them; the Jaccard similarity of two sets of character sequences; the length of the longest common sub-sequence between two strings using an edit distance algorithm; the Hamming distance between two strings of equal length; and more.

(b) Knowledge Base: While extraction services can assist analysts in extracting various features, e.g., named entities, keywords, synonyms, and stems, from a text, it is important to go one step further and link the extracted information to the entities in existing Knowledge Graphs (e.g., Google KG and Wikidata). For example, consider that we have extracted 'M. Turnbull' from a Tweet text. It is possible to identify a similar entity (e.g., 'Malcolm Turnbull') in Wikidata. As discussed earlier, the similarity API supports several functions such as Jaro, Soundex, QGram, Jaccard and more; for this pair, the Jaro function returns 0.74, and the Soundex function returns 1. To achieve this, we have leveraged the Google Knowledge Graph, Wikidata, and ConceptNet knowledge bases to link the entities extracted from the text to the concepts and entities in these knowledge bases.

3. Classification Services: Classification is a data mining function that assigns items in a collection to target categories or classes. The goal of classification is to accurately predict the target class for each case in the data. For example, a classification model could be used to identify loan applicants as low, medium, or high credit risks. A classification task begins with a dataset in which the class assignments are known. For example, a classification model that predicts credit risk could be developed based on observed data for many loan applicants over a period of time. In the terminology of machine learning, classification is considered an instance of supervised learning.
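As an illustration of such a supervised classifier on text, here is a hedged sketch using scikit-learn (illustrative only; the classification service itself wraps other implementations, e.g., the Apache PredictionIO sentiment classifier mentioned in Section 6.3):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set in which class assignments are known.
tweets = ["the new hospital funding is great news",
          "waiting times at the clinic are terrible",
          "excellent care from the nurses today",
          "the health budget cuts are a disaster"]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features fed to a linear classifier, trained on the labeled data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

# Predict the target class for an unseen tweet.
print(model.predict(["the vaccine rollout has been a disaster"]))
```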
At the same time, the unsupervised procedure is known as clustering, and involves grouping data into categories based on some measure of inherent similarity or distance [138]. An algorithm that implements classification, especially in a concrete implementation, is known as a classifier. Examples of classification algorithms include: linear classifiers, support vector machines, quadratic classifiers, kernel estimation and decision trees. Figure 6.2 illustrates a screenshot of the basic data curation services presenting an example of the classification task.

Figure 6.2: Data Curation Services Screenshot - Classification [53].

4. Indexing Services enable analysts to scan and retrieve the curation environment quickly without the operational burden of managing it. For indexing the content of the curation environment, we utilized Elasticsearch to allow the user to query the data and derive insight.

5. Converter Services: The basic curation APIs may be applied to different data sources and file formats. To facilitate this task, the converter API can be used to convert PDF, Word, PowerPoint, XPS, and HTML documents into a text file to be fed to the basic curation APIs, where the result is returned as a JSON file, an easy-to-parse structure, and can be integrated into data applications. As an ongoing work, we are identifying various data sources and file formats to facilitate converting documents without user interaction.

6.3 Demonstration Scenarios

In this section, we demonstrate a scenario to illustrate how an analyst can leverage different curation APIs to curate Tweets relevant to the Australian budget. Consider a data analyst, say Alan, who intends to analyze Tweets to understand citizens' opinions regarding the quality of services provided through the government. These services include a large number of government projects such as light rail, highways, road maintenance, public transportation, and health care. Analyzing and understanding citizens' opinions regarding these services allows decision-makers to prioritize them and better plan their budget. Using the curation APIs, Alan can leverage different curation services, including linking, extraction, classification, and indexing, to extract insight from Tweets. For example, Alan can use the extraction service to extract named entities, keywords, synonyms, PoS tags and stems from tweets. Then, he may use the linking service to link the extracted information to knowledge bases such as Wikidata and Google Knowledge Graph. Next, Alan may utilize the classification service to classify the Tweets that are related to these services and, finally, index the retrieved information for fast access and further analysis.

Additionally, the curation services allow Alan to perform a more focused analysis, such as extracting Tweets relevant to the health category of the government budget. For example, Alan can extract named entities such as (i) People, from GPs and Nurses to health ministers and hospital managers, from an Australian doctors directory, (ii) Organizations, such as
Hospitals, Pharmacies and the Nursing Federation from myHospitals, (iii) Locations, states, cities and suburbs in Australia from auspost, (iv) Health funds, such as Medibank, Bupa and HCF from health-services, (v) Drugs, such as Amoxicillin, Tramadol and Alprazolam from a drug index, (vi) Diseases, such as Cancer, Influenza and Tuberculosis from medicine-net, (vii) Medical Devices, such as Gas Control, Blood Tube and Needle from the FDA, (viii) Job titles, such as GP, Nurse, Hospital Manager, Secretary of NSW Health and NSW Health Minister from comp data, and (ix) Keywords, such as healthcare, patient, virus, vaccine and drug from the Australian National Health and Medical Research Council. Then he can enrich these named entities using Knowledge Bases such as Wikidata, Google Knowledge Graph and WordNet. Then, he can use the classification service to identify the Tweets with negative sentiment. Notice that, for the sentiment analysis, the classification API leverages the sentiment classifier implemented in Apache PredictionIO (http://prediction.io). For example, using the curation APIs, out of 2934 diabetes-related Tweets, we could identify 615 tweets with negative sentiment. As another example, we identified 1549 tweets with negative sentiment in the mental health category.

Example: Figure 6.3 shows two real Tweets and the different information that has been extracted, e.g., named entities, keywords, and hashtags, to generate a graph where the nodes are the main artifacts and the extracted information forms the relationships among them. As illustrated in this figure, Tweets are linked through named entities and hashtags, and this generates an interesting graph which reveals the hidden information among the nodes in the graph: for example, it is possible to see the path (transitive relationships among the nodes and edges) between user1 and user2 (in Twitter), which in turn reveals that these two users are interested in the same topics, and consequently may be part of some hidden micro-networks.

Figure 6.3: Use extracted features from Twitter to link related tweets [53].

6.4 Conclusion

In this chapter, we identified and implemented a set of curation services and made them available to researchers/developers to assist them in transforming their raw data into curated data. We have provided the technical details for the curation APIs in a technical report [55]. As an ongoing work, we are identifying and implementing more services to support enriching, annotating, summarizing and organizing raw data.

Chapter 7

Conclusion and Future Works

Data curation has become a vital asset for organizations and governments to derive insight and extract value from raw data. For example, over the last few years, enterprises have started to curate and analyze vastly growing open data to personalize advertisements in elections, improve government services, and predict intelligence activities, as well as to improve national security and public health. However, to curate the data, analysts often need to perform several time-consuming and challenging curation tasks to transform the raw data into contextualized data and make it available for use by end-users and applications.

In this dissertation, we discussed techniques and algorithms to facilitate the transformation and representation of raw data. We discussed how an analyst could be aided to transform the raw data into knowledge.
We presented techniques for adapting data curation rules in dynamic and constantly changing environments, and for enhancing users' comprehension of curation environments.

In Chapter 3, we introduced the notion of a Knowledge Lake, i.e., a contextualized Data Lake, to provide the foundation for big data analytics by automatically curating the raw social data and deriving insights. We presented a social data curation foundry to enable analysts to engage with social data to uncover hidden patterns and generate insight. Our approach offers customizable feature extraction to harness desired features from diverse data sources. To link contextualized information items to the Knowledge Lake, we presented a technique which leverages cross-document co-reference resolution (CDCR), assisting analysts in deriving targeted insights. Our experimental results showed that a featurized data curation solution could significantly improve the precision of curation tasks. As future work, we plan to provide a scalable approach to the CDCR process. For example, it is possible to split up a CDCR task into several stages and assign each stage to a specific MapReduce (MR)/Spark job. The first MR job can be used to pre-process and frame the information into a schema. Then, the second job can utilise different curation micro-services to extract potential features from the data. The final MR job can utilise CDCR on the extracted information and classify it into a set of related summaries. Our study showed that linking information (e.g., tweets) to the objects in the domain knowledge greatly assists with the interpretation of data in a given domain. Adding scalability to our algorithm would greatly assist analysts in further extracting knowledge and value from the raw data. Also, we are planning to identify a higher number of features from social data to better capture the salient aspects of the data. As an ongoing work, we are expanding the presented declarative language to assist analysts in querying and analyzing data more conveniently. We are also extending the Knowledge Lake to cover more concepts.

In Chapter 4, we proposed an approach for adapting data curation rules in dynamic and changing environments. We discussed the importance of rule-based curation systems in augmenting curation algorithms for curating data in unstructured and dynamic environments. Rules can alleviate many of the shortcomings inherent in pure algorithmic approaches. However, rule adaptation is a challenging and error-prone task, and there is a need for an analyst to adapt rules to keep them applicable and precise. Besides, analysts adapt rules at the syntactic level, e.g., keywords and regular expressions. Using syntactic-level features limits the ability of a rule to annotate items when the rule needs to curate a varied and comprehensive list of data. To alleviate the problems mentioned above, we presented an adaptive approach for adapting data curation rules in unstructured and continually changing environments. Our approach offloads rule adaptation from analysts and autonomically identifies the optimal modification for rules using a Bayesian multi-armed-bandit algorithm. Besides, our proposed approach adapts rules at the conceptual level, e.g., topic, to boost rules to annotate a larger number of items. We conducted experiments on different curation domains and compared the performance of our approach with systems relying on analysts.
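The Bayesian multi-armed bandit at the heart of this approach can be sketched as Thompson sampling over candidate rule modifications (a minimal illustration under Beta-Bernoulli assumptions, with hypothetical arm names and feedback; not the dissertation's exact algorithm):

```python
import random

# Candidate modifications for a rule (hypothetical); each arm keeps a
# Beta(successes + 1, failures + 1) posterior over its precision.
arms = {"add-topic-feature": [1, 1],
        "broaden-keyword": [1, 1],
        "tighten-regex": [1, 1]}

def annotation_is_correct(arm):
    # Stand-in for analyst/crowd feedback on an item annotated after
    # applying the chosen modification (hypothetical success rates).
    return random.random() < {"add-topic-feature": 0.7,
                              "broaden-keyword": 0.4,
                              "tighten-regex": 0.5}[arm]

for _ in range(1000):
    # Thompson sampling: draw from each posterior, apply the best arm.
    chosen = max(arms, key=lambda a: random.betavariate(*arms[a]))
    if annotation_is_correct(chosen):
        arms[chosen][0] += 1   # record a success
    else:
        arms[chosen][1] += 1   # record a failure

print(arms)  # the most precise modification accumulates most successes
```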
The experimental results showed that our approach achieved performance comparable to analysts in adapting rules. As a part of future work, we aim to identify more features for adapting rules. Specifically, we will focus on adapting rules with three other types of features: entities, word2vec, and relations. We believe that adapting rules with higher numbers of features will enhance the ability of rules to annotate a larger number of items. Additionally, it is possible to automate the generation of rules using Natural Language Processing (NLP) based approaches. Supporting analysts with NLP-based solutions may reduce the analyst's workload and effort in generating rules and curating the data. Hence, as a future plan, we may fuse NLP and machine learning algorithms to automatically generate and adapt rules. There are several directions in coupling machine learning and NLP-based approaches for generating rules, including: (1) whether NLP-based techniques can generate a higher number of rules for curating data; (2) scalability, which needs to be studied while coupling NLP-based approaches for generating rules; and (3) evaluating the analyst workload in generating and adapting rules using NLP-based solutions, which is an interesting topic that can be studied as well.

In Chapter 5, we discussed techniques for enhancing users' comprehension in formulating their preferences in a large curation environment. Understanding the data allows a user to better formulate her preferences when seeking information. However, exploring the curation environment has proven to be painstakingly time-consuming and challenging when a user has a varied information need spanning a large number of topical sub-spaces. In such environments, the user continuously issues different queries to scan and read the data and make sense of it. However, using current techniques, a user needs to explicitly specify her preferences for Information Retrieval (IR) systems in the form of keywords or phrases. Text queries limit the ability of users to comprehend the curation environment, as they only retrieve a small part of the data, and the rest remains invisible.

To address the above problem, we proposed a system that provides a conceptual summary of curation environments and allows users to specify their preferences implicitly as a set of concepts. Our approach lowered users' cognitive load in ranking and exploring the data. The system takes advantage of deep learning and a Knowledge Lake to provide a conceptual summary of the information space. A user can specify her preferences implicitly as a set of concepts without the need to investigate the information space iteratively. It provides a 2D Radial Map of concepts where users can rank items relevant to their preferences through dragging and dropping. Our experimental results showed that our approach could help users formulate their preferences better when they need to retrieve a varied and comprehensive list of information across a large curation environment. As future work, we will focus on enhancing the quality of summaries by incorporating other users' feedback. This will improve the precision of the system in retrieving the user's information need and reduce the time users need to spend formulating their preferences.

Bibliography

[1] AsterixDB, https://asterixdb.apache.org/.

[2] https://elitedatascience.com/data-cleaning.

[3] https://whatis.techtarget.com/definition/data-ingestion.

[4] orchstrate, orchstrate.io/.
[5] Bilal Abu-Salih, Pornpit Wongthongtham, Seyed-Mehdi-Reza Be- heshti, and Behrang Zadjabbari, Towards a methodology for social busi- ness intelligence in the era of big social data incorporating trust and semantic analysis, Proceedings of the Second International Conference on Advanced Data and Information Engineering, DaEng 2015, Bali, Indonesia, April 25-26, 2015, Lecture Notes in Electrical Engineering, vol. 520, Springer, 2015, pp. 519–527. [6] Bilal Abu-Salih, Pornpit Wongthongtham, Seyed-Mehdi-Reza Be- heshti, and Dengya Zhu, A preliminary approach to domain-based eval- uation of users’ trustworthiness in online social networks, 2015 IEEE International Congress on Big Data, New York City, NY, USA, June 27 - July 2, 2015 (Barbara Carminati and Latifur Khan, eds.), IEEE Computer Society, 2015, pp. 460–466. 183 [7] Akiko Aizawa, An information-theoretic perspective of tf–idf measures, Information Processing & Management 39 (2003), no. 1, 45–65. [8] Katherine G Akers, Fe C Sferdean, Natsuko H Nicholls, and Jennifer A Green, Building support for research data management: Biographies of eight research universities., IJDC 9 (2014), no. 2, 171–191. [9] Beatrice Alex, Claire Grover, Barry Haddow, Mijail Kabadjov, Ewan Klein, Michael Matthews, Richard Tobin, and Xinglong Wang, Au- tomating curation using a natural language processing pipeline, Genome Biology 9 (2008), no. 2, S10. [10] Eric Alexander, Joe Kohlmann, Robin Valenza, Michael Witmore, and Michael Gleicher, Serendip: Topic model-driven visual exploration of text corpora, 2014 IEEE Conference on Visual Analytics Science and Technology (VAST), IEEE, 2014, pp. 173–182. [11] Tooran Alizadeh, Somwrita Sarkar, and Sandy Burgoyne, Capturing citizen voice online: Enabling smart participatory local government, Cities 95 (2019), 102400. [12] Mohammad Allahbakhsh, Aleksandar Ignjatovic, Boualem Benatallah, Seyed-Mehdi-Reza Beheshti, Elisa Bertino, and Norman Foo, Collusion detection in online rating systems, Web Technologies and Applications - 15th Asia-Pacific Web Conference, APWeb 2013, Sydney, Australia, April 4-6, 2013. Proceedings, Lecture Notes in Computer Science, vol. 7808, Springer, 2013, pp. 196–207. [13] Mohammad Allahbakhsh, Aleksandar Ignjatovic, Boualem Benatallah, Seyed-Mehdi-Reza Beheshti, Norman Foo, and Elisa Bertino, An an- 184 alytic approach to people evaluation in crowdsourcing systems, arXiv preprint arXiv:1211.3200 (2012). [14] , Detecting, representing and querying collusion in online rating systems, arXiv preprint arXiv:1211.0963 (2012). [15] Mohammad Allahbakhsh, Aleksandar Ignjatovic, Boualem Benatallah, Seyed-Mehdi-Reza Beheshti, Norman Foo, and Elisa Bertino, Repre- sentation and querying of unfair evaluations in social rating systems, Comput. Secur. 41 (2014), 68–88. [16] Mohammad Allahbakhsh, Aleksandar Ignjatovic, Boualem Benatallah, Elisa Bertino, Norman Foo, et al., Reputation management in crowd- sourcing systems, 8th International Conference on Collaborative Com- puting: Networking, Applications and Worksharing (CollaborateCom), IEEE, 2012, pp. 664–671. [17] James Allan, Hard track overview in trec 2004 (notebook), high accu- racy retrieval from documents, The Thirteenth Text Retrieval Confer- ence (TREC 2004) Notebook, 2004, pp. 226–235. [18] Farhad Amouzgar, Amin Beheshti, Samira Ghodratnama, Boualem Be- natallah, Jian Yang, and Quan Z. 
Sheng, isheets: A spreadsheet-based machine learning development platform for data-driven process analyt- ics, Service-Oriented Computing - ICSOC 2018 Workshops - ADMS, ASOCA, ISYyCC, CloTS, DDBS, and NLS4IoT, Hangzhou, China, November 12-15, 2018, Revised Selected Papers, Lecture Notes in Com- puter Science, vol. 11434, Springer, 2018, pp. 453–457. [19] Michael Anderson and et al., Brainwash: A data system for feature engineering., CIDR, 2013. 185 [20] Michael R Anderson, Michael Cafarella, Yixing Jiang, Guan Wang, and Bochun Zhang, An integrated development environment for faster feature engineering, Proceedings of the VLDB Endowment 7 (2014), no. 13, 1657–1660. [21] Peter Anick and Raj Gopal Kantamneni, A longitudinal study of real- time search assistance adoption, Proceedings of the 31st annual in- ternational ACM SIGIR conference on Research and development in information retrieval, 2008, pp. 701–702. [22] Francisco Araque, Alberto Salguero, and Maria M Abad, Application of data warehouse and decision support system in soaring site recom- mendation, Information and Communication Technologies in Tourism 2006, Springer, 2006, pp. 308–319. [23] National Archives and Records Administration, The soundex indexing system, he Soundex Indexing System (2007-05-30). [24] CESSDA AS, The open archival information system (oais) reference model, (2004). [25] Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives, Dbpedia: A nucleus for a web of open data, The semantic web, Springer, 2007, pp. 722–735. [26] Ricardo Baeza-Yates, Carlos Hurtado, and Marcelo Mendoza, Query recommendation using query logs in search engines, International con- ference on extending database technology, Springer, 2004, pp. 588–596. [27] Peter Bak, Dotan Dolev, and Tali Yatzkar-Haham, Rule adjustment by visualization of physical location data, September 11 2014, US Patent App. 14/483,158. 186 [28] Rama Balakrishnan, Julie Park, Kalpana Karra, Benjamin C. Hitz, Gail Binkley, Eurie L. Hong, Julie Sullivan, Gos Micklem, and J. Michael Cherry, YeastMine—an integrated data warehouse for Sac- charomyces cerevisiae data as a multipurpose tool-kit, Database 2012 (2012), bar062. [29] Ahmed Barnawi, Omar Batarfi, Seyed-Mehdi-Reza Beheshti, Radwa El Shawi, Ayman G. Fayoumi, Reza Nouri, and Sherif Sakr, On character- izing the performance of distributed graph computation platforms, Per- formance Characterization and Benchmarking. Traditional to Big Data - 6th TPC Technology Conference, TPCTC 2014, Hangzhou, China, September 1-5, 2014. Revised Selected Papers, Lecture Notes in Com- puter Science, vol. 8904, Springer, 2014, pp. 29–43. [30] Omar Batarfi, Radwa El Shawi, Ayman G. Fayoumi, Reza Nouri, Seyed-Mehdi-Reza Beheshti, Ahmed Barnawi, and Sherif Sakr, Large scale graph processing systems: survey and an experimental evaluation, Cluster Computing 18 (2015), no. 3, 1189–1213. [31] Marcia J Bates et al., The design of browsing and berrypicking tech- niques for the online search interface, Online review 13 (1989), no. 5, 407–424. [32] Amin Beheshti, Boualem Benatallah, and Hamid Reza Motahari- Nezhad, Processatlas a scalable and extensible platform for business process analytics, Software Practice and Experience (2018), 842–866. [33] Amin Beheshti, Boualem Benatallah, and Hamid Reza Motahari- Nezhad, Processatlas: A scalable and extensible platform for business process analytics, Softw., Pract. Exper. 48 (2018), no. 4, 842–866. 
187 [34] Amin Beheshti, Boualem Benatallah, Reza Nouri, Van Munin Chhieng, HuangTao Xiong, and Xu Zhao, Coredb: a data lake service, Proceed- ings of the 2017 ACM on Conference on Information and Knowledge Management, ACM, 2017, pp. 2451–2454. [35] Amin Beheshti, Boualem Benatallah, Reza Nouri, and Alireza Tabebor- dbar, Corekg: a knowledge lake service, Proceedings of the VLDB En- dowment 11 (2018), no. 12, 1942–1945. [36] Amin Beheshti, Boualem Benatallah, Quan Z. Sheng, and Francesco Schiliro, Intelligent knowledge lakes: The age of artificial intelligence and big data, Web Information Systems Engineering - WISE 2019 Workshop, Demo, and Tutorial, Hong Kong and Macau, China, Jan- uary 19-22, 2020, Revised Selected Papers, Communications in Com- puter and Information Science, vol. 1155, Springer, 2019, pp. 24–34. [37] Amin Beheshti, Boualem Benatallah, Alireza Tabebordbar, Hamid Reza Motahari-Nezhad, Moshe Chai Barukh, and Reza Nouri, Datasynapse: A social data curation foundry, Distributed and Parallel Databases (2018), 1–34. [38] Amin Beheshti, Vahid Moraveji Hashemi, and Shahpar Yakhchi, To- wards context-aware social behavioral analytics, Proceedings of the 17th International Conference on Advances in Mobile Computing & Multi- media, 2019, pp. 28–35. [39] Amin Beheshti, Vahid Moraveji Hashemi, Shahpar Yakhchi, Hamid Reza Motahari-Nezhad, Seyed Mohssen Ghafari, and Jian Yang, personality2vec: Enabling the analysis of behavioral disorders in social networks, WSDM ’20: The Thirteenth ACM International Conference 188 on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020, ACM, 2020, pp. 825–828. [40] Amin Beheshti, Francesco Schiliro, Samira Ghodratnama, Farhad Amouzgar, Boualem Benatallah, Jian Yang, Quan Z. Sheng, Fabio Casati, and Hamid Reza Motahari-Nezhad, iprocess: Enabling iot plat- forms in data-driven knowledge-intensive processes, Business Process Management Forum - BPM Forum 2018, Sydney, NSW, Australia, September 9-14, 2018, Proceedings, Lecture Notes in Business Infor- mation Processing, vol. 329, Springer, 2018, pp. 108–126. [41] Amin Beheshti, Alireza Tabebordbar, and Boualem Benatallah, istory: Intelligent storytelling with social data, The World Wide Web Confer- ence, WWW 2020, ACM, 2020. [42] Amin Beheshti, Kushal Vaghani, Boualem Benatallah, and Alireza Tabebordbar, Crowdcorrect: A curation pipeline for social data cleans- ing and curation, Information Systems in the Big Data Era - CAiSE Forum 2018, Tallinn, Estonia, June 11-15, 2018, Proceedings, Lecture Notes in Business Information Processing, vol. 317, Springer, 2018, pp. 24–38. [43] Seyed-Mehdi-Reza Beheshti, Boualem Benatallah, and Hamid R. Mo- tahari Nezhad, Enabling the analysis of cross-cutting aspects in ad-hoc processes, Advanced Information Systems Engineering - 25th Interna- tional Conference, CAiSE 2013, Valencia, Spain, June 17-21, 2013. Proceedings, Lecture Notes in Computer Science, vol. 7908, Springer, 2013, pp. 51–67. 189 [44] Seyed-Mehdi-Reza Beheshti, Boualem Benatallah, Hamid R. Motahari Nezhad, and Mohammad Allahbakhsh, A framework and a language for on-line analytical processing on graphs, Web Information Systems Engi- neering - WISE 2012 - 13th International Conference, Paphos, Cyprus, November 28-30, 2012. Proceedings, Lecture Notes in Computer Sci- ence, vol. 7651, Springer, 2012, pp. 213–227. [45] Seyed-Mehdi-Reza Beheshti, Boualem Benatallah, Hamid R. 
Motahari Nezhad, and Sherif Sakr, A query language for analyzing business processes execution, Business Process Management - 9th International Conference, BPM 2011, Clermont-Ferrand, France, August 30 - September 2, 2011. Proceedings, Lecture Notes in Computer Science, vol. 6896, Springer, 2011, pp. 281–297.

[46] Seyed-Mehdi-Reza Beheshti, Boualem Benatallah, and Hamid Reza Motahari-Nezhad, Galaxy: A platform for explorative analysis of open data sources, Proceedings of the 19th International Conference on Extending Database Technology, EDBT 2016, Bordeaux, France, March 15-16, 2016, OpenProceedings.org, 2016, pp. 640–643.

[47] ———, Scalable graph-based OLAP analytics over process execution data, Distributed and Parallel Databases 34 (2016), no. 3, 379–423.

[48] Seyed-Mehdi-Reza Beheshti, Boualem Benatallah, Sherif Sakr, Daniela Grigori, Hamid Reza Motahari-Nezhad, Moshe Chai Barukh, Ahmed Gater, and Seung Hwan Ryu, Process analytics - concepts and techniques for querying and analyzing process data, Springer, 2016.

[49] Seyed-Mehdi-Reza Beheshti, Boualem Benatallah, Srikumar Venugopal, Seung Hwan Ryu, Hamid Reza Motahari-Nezhad, and Wei Wang, A systematic review and comparative analysis of cross-document coreference resolution methods and tools, Computing 99 (2017), no. 4, 313–349.

[50] ———, A systematic review and comparative analysis of cross-document coreference resolution methods and tools, Computing 99 (2017), no. 4, 313–349.

[51] Seyed-Mehdi-Reza Beheshti, Hamid R. Motahari Nezhad, and Boualem Benatallah, Temporal provenance model (TPM): model and query language, CoRR abs/1211.5009 (2012).

[52] Seyed-Mehdi-Reza Beheshti, Sherif Sakr, Boualem Benatallah, and Hamid R. Motahari Nezhad, Extending SPARQL to support entity grouping and path queries, CoRR abs/1211.5817 (2012).

[53] Seyed-Mehdi-Reza Beheshti, Alireza Tabebordbar, Boualem Benatallah, and Reza Nouri, On automating basic data curation tasks, Proceedings of the 26th International Conference on World Wide Web Companion, Perth, Australia, April 3-7, 2017, ACM, 2017, pp. 165–169.

[54] Seyed-Mehdi-Reza Beheshti, Srikumar Venugopal, Seung Hwan Ryu, Boualem Benatallah, and Wei Wang, Big data and cross-document coreference resolution: Current state and future opportunities, CoRR abs/1311.3987 (2013).

[55] Seyed-Mehdi-Reza Beheshti et al., Data curation APIs, Tech. Report UNSW-CSE-TR-201617, The University of New South Wales, Sydney, Australia, 2016.

[56] Mark Bergsma, Wikimedia architecture, Wikimedia Foundation Inc (2007).

[57] Alain Biem, Eric Bouillet, Hanhua Feng, Anand Ranganathan, Anton Riabov, Olivier Verscheure, Haris Koutsopoulos, and Carlos Moran, IBM InfoSphere Streams for scalable, real-time, intelligent transportation services, Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, ACM, 2010, pp. 1093–1104.

[58] Steven Bird, Ewan Klein, and Edward Loper, Natural language processing with Python: analyzing text with the natural language toolkit, O'Reilly Media, Inc., 2009.

[59] Marouane Birjali, Abderrahim Beni-Hssane, and Mohammed Erritali, Analyzing social media through big data using InfoSphere BigInsights and Apache Flume, Procedia Computer Science 113 (2017), 280–285.

[60] Roi Blanco, B Barla Cambazoglu, and Claudio Lucchese, The 8th workshop on large-scale distributed systems for information retrieval (LSDS-IR'10), ACM SIGIR Forum, vol. 44, ACM New York, NY, USA, 2011, pp. 54–58.
[61] David M Blei, Andrew Y Ng, and Michael I Jordan, Latent dirichlet allocation, Journal of machine Learning research 3 (2003), no. Jan, 993–1022. 192 [62] Jacob Boudoukh, Ronen Feldman, Shimon Kogan, and Matthew Richardson, Which news moves stock prices? a textual analysis, Tech. report, National Bureau of Economic Research, 2013. [63] Michael L Brodie and Jason T Liu, The power and limits of relational technology in the age of information ecosystems, Keynote at On The Move Federated Conferences, 2010. [64] John Brooke et al., Sus-a quick and dirty usability scale, Usability eval- uation in industry 189 (1996), no. 194, 4–7. [65] Michael Brooks, Saleema Amershi, Bongshin Lee, Steven M Drucker, Ashish Kapoor, and Patrice Simard, Featureinsight: Visual support for error-driven feature ideation in text classification, Visual Analytics Science and Technology (VAST), 2015 IEEE Conference on, IEEE, 2015, pp. 105–112. [66] Eli T Brown, Alvitta Ottley, Helen Zhao, Quan Lin, Richard Souvenir, Alex Endert, and Remco Chang, Finding waldo: Learning about users from their interactions, IEEE Transactions on visualization and com- puter graphics 20 (2014), no. 12, 1663–1672. [67] Peter Brusilovsky, Barry Smyth, and Bracha Shapira, Social search, Social Information Access, Springer, 2018, pp. 213–276. [68] Stefano Burigat and Luca Chittaro, On the effectiveness of overview+ detail visualization on mobile devices, Personal and ubiquitous com- puting 17 (2013), no. 2, 371–385. [69] Mary C Burke, Introduction to human trafficking: definitions and prevalence, Human trafficking: interdisciplinary perspectives. New York: Routledge (2013), 3–23. 193 [70] Giuseppe Burtini, Jason Loeppky, and Ramon Lawrence, Improving online marketing experiments with drifting multi-armed bandits., ICEIS (1), 2015, pp. 630–636. [71] Cunera M Buys and Pamela L Shaw, Data management practices across an institution: Survey and report., Journal of Librarianship & Scholarly Communication 3 (2015), no. 2. [72] Mackinlay Card, Readings in information visualization: using vision to think, Morgan Kaufmann, 1999. [73] Ben Carterette, Paul N Bennett, David Maxwell Chickering, and Su- san T Dumais, Here or there preference judgement for relevance, Euro- pean Conference on Information Retrieval, Springer, 2008, pp. 16–27. [74] José M Cavanillas, Edward Curry, and Wolfgang Wahlster, New hori- zons for a data-driven economy: a roadmap for usage and exploitation of big data in europe, Springer, 2016. [75] M Cayrol, H Farreny, and H Prade, Fuzzy pattern matching, Kybernetes 11 (1982), no. 2, 103–116. [76] Hsinchun Chen, Roger HL Chiang, and Veda C Storey, Business intel- ligence and analytics: From big data to big impact., MIS quarterly 36 (2012), no. 4. [77] Yushi Chen, Hanlu Jiang, Chunyang Li, Xiuping Jia, and Pedram Ghamisi, Deep feature extraction and classification of hyperspectral im- ages based on convolutional neural networks, IEEE Transactions on Geoscience and Remote Sensing 54 (2016), no. 10, 6232–6251. 194 [78] Justin Cheng and Michael S Bernstein, Flock: Hybrid crowd-machine learning classifiers, Proceedings of the 18th ACM conference on com- puter supported cooperative work & social computing, ACM, 2015, pp. 600–611. 
[79] Laura Chiticariu, Rajasekar Krishnamurthy, Yunyao Li, Sriram Ragha- van, Frederick R Reiss, and Shivakumar Vaithyanathan, Systemt: an algebraic approach to declarative information extraction, Proceedings of the 48th Annual Meeting of the Association for Computational Lin- guistics, Association for Computational Linguistics, 2010, pp. 128–137. [80] Laura Chiticariu, Yunyao Li, and Frederick R Reiss, Rule-based infor- mation extraction is dead! long live rule-based information extraction systems!, EMNLP, no. October, 2013, pp. 827–832. [81] Christopher G Chute, Scott A Beck, Thomas B Fisk, and David N Mohr, The enterprise data trust at mayo clinic: a semantically inte- grated warehouse of biomedical data, Journal of the American Medical Informatics Association 17 (2010), no. 2, 131–135. [82] Benjamin Clement, Pierre-Yves Oudeyer, Didier Roy, and Manuel Lopes, Online optimization of teaching sequences with multi-armed ban- dits, Educational Data Mining 2014, 2014. [83] Rama Cont, Statistical modeling of high-frequency financial data, IEEE Signal Processing Magazine 28 (2011), no. 5, 16–25. [84] Andrew Cox, Rosie Higman, and Stephen Pinfield, Research data man- agement and openness, Program: electronic library and information systems (2015). 195 [85] Maurice Coyle and Barry Smyth, Searchguide: Beyond the results page, International Conference on Adaptive Hypermedia and Adaptive Web- Based Systems, Springer, 2004, pp. 296–299. [86] J Ignacio Criado, Rodrigo Sandoval-Almazan, and J Ramon Gil-Garcia, Government innovation through social media, 2013. [87] Samuel Croset, Joachim Rupp, and Martin Romacker, Flexible data integration and curation using a graph-based approach, Bioinformatics 32 (2016), no. 6, 918–925. [88] Silviu Cucerzan and Eric Brill, Spelling correction as an iterative pro- cess that exploits the collective knowledge of web users, Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 2004, pp. 293–300. [89] Gary A Culliss, Personalized search methods including combining index entries for catagories of personal data, November 9 2004, US Patent 6,816,850. [90] Edward Curry, Andre Freitas, and Sean O’Riáin, The role of community-driven data curation for enterprises, Linking enterprise data, Springer, 2010, pp. 25–47. [91] Douglass R Cutting, David R Karger, Jan O Pedersen, and John W Tukey, Scatter/gather: A cluster-based approach to browsing large doc- ument collections, ACM SIGIR Forum, vol. 51, ACM New York, NY, USA, 2017, pp. 148–159. [92] Per-Erik Danielsson, Euclidean distance mapping, Computer Graphics and image processing 14 (1980), no. 3, 227–248. 196 [93] Christopher De Sa, Alex Ratner, Christopher Ré, Jaeho Shin, Feiran Wang, Sen Wu, and Ce Zhang, Deepdive: Declarative knowledge base construction, ACM SIGMOD Record 45 (2016), no. 1, 60–67. [94] Jeffrey Dean and Sanjay Ghemawat, Mapreduce: simplified data pro- cessing on large clusters, Commun. ACM 51 (2008), no. 1, 107–113. [95] Simon Dennis, Peter Bruza, and Robert McArthur, Web search- ing: A process-oriented experimental study of three interactive search paradigms, Journal of the American Society for Information Science and Technology 53 (2002), no. 2, 120–133. [96] Cecilia di Sciascio, Peter Brusilovsky, and Eduardo Veas, A study on user-controllable social exploratory search, 23rd International Confer- ence on Intelligent User Interfaces, ACM, 2018, pp. 353–364. 
[97] Cecilia di Sciascio, Vedran Sabol, and Eduardo E Veas, Rank as you go: User-driven exploration of search results, Proceedings of the 21st International Conference on Intelligent User Interfaces, ACM, 2016, pp. 118–129. [98] Jerome Dinet, Monik Favart, and Jean-Michel Passerault, Searching for information in an online public access catalogue (opac): the impacts of information search expertise on the use of boolean operators, Journal of Computer Assisted Learning 20 (2004), no. 5, 338–346. [99] Pedro M Domingos, A few useful things to know about machine learn- ing., Commun. acm 55 (2012), no. 10, 78–87. [100] L Dou, G Cao, Paul J Morris, Robert A Morris, Bertram Ludäscher, James A Macklin, and James Hanken, Kurator: A kepler package for 197 data curation workflows, Procedia Computer Science 9 (2012), 1614– 1619. [101] Susan Dumais, Edward Cutrell, and Hao Chen, Optimizing search by showing results in context, Proceedings of the SIGCHI conference on Human factors in computing systems, 2001, pp. 277–284. [102] Peter Eades, Seok-Hee Hong, An Nguyen, and Karsten Klein, Shape- based quality metrics for large graph visualization., J. Graph Algo- rithms Appl. 21 (2017), no. 1, 29–53. [103] Geoffrey Ellis and Alan Dix, A taxonomy of clutter reduction for infor- mation visualisation, IEEE transactions on visualization and computer graphics 13 (2007), no. 6, 1216–1223. [104] Andrea Esuli and Fabrizio Sebastiani, Sentiwordnet: A publicly avail- able lexical resource for opinion mining, Proceedings of LREC, vol. 6, Citeseer, 2006, pp. 417–422. [105] Aberdeen et al, Angling for insight in today’s data lake, (2017). [106] XX et al, Anonymised, Anonymised. [107] Oren Etzioni, Michael Cafarella, Doug Downey, Stanley Kok, Ana- Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S Weld, and Alexander Yates, Web-scale information extraction in knowitall: (pre- liminary results), Proceedings of the 13th international conference on World Wide Web, 2004, pp. 100–110. [108] Anthony Fader, Luke Zettlemoyer, and Oren Etzioni, Open question answering over curated and extracted knowledge bases, Proceedings of 198 the 20th ACM SIGKDD international conference on Knowledge dis- covery and data mining, ACM, 2014, pp. 1156–1165. [109] Ethan Fast, Binbin Chen, and Michael S Bernstein, Empath: Under- standing topic signals in large-scale text, Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, ACM, 2016, pp. 4647–4657. [110] Nastaran Fatemi, Florian Poulin, Laura E Raileany, and Alan F Smeaton, Using association rule mining to enrich semantic concepts for video retrieval, (2009). [111] David A Ferrucci, Introduction to ’this is watson’, IBM Journal of Re- search and Development 56 (2012), no. 3.4. [112] Bruno M Fonseca, Paulo B Golgher, Edleno S De Moura, Bruno Pôs- sas, and Nivio Ziviani, Discovering search engine related queries using association rules, Journal of Web Engineering 2 (2003), no. 4, 215–227. [113] Mary Forehand, Bloom’s taxonomy, Emerging perspectives on learning, teaching, and technology 41 (2010), no. 4, 47–56. [114] Jill Freyne and Barry Smyth, An experiment in social search, Interna- tional Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, Springer, 2004, pp. 95–103. [115] Jerome H Friedman and John W Tukey, A projection pursuit algorithm for exploratory data analysis, IEEE Transactions on computers 100 (1974), no. 9, 881–890. [116] Eugene Garfield, When is a negative search result positive? essays of an information scientist vol. 1, 12 august 1970. 
199 [117] Abhishek Gattani, Digvijay S Lamba, Nikesh Garera, Mitul Tiwari, Xi- aoyong Chai, Sanjib Das, Sri Subramaniam, Anand Rajaraman, Venky Harinarayan, and AnHai Doan, Entity extraction, linking, classifica- tion, and tagging for social media: a wikipedia-based approach, Pro- ceedings of the VLDB Endowment 6 (2013), no. 11, 1126–1137. [118] Paul Suganthan GC, Chong Sun, Haojun Zhang, Frank Yang, Narasimhan Rampalli, Shishir Prasad, Esteban Arcaute, Ganesh Kr- ishnan, Rohit Deep, Vijay Raghavendra, et al., Why big data indus- trial systems need rules and what we can do about it, Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, ACM, 2015, pp. 265–276. [119] Ellen R Girden, Anova: Repeated measures, no. 84, Sage, 1992. [120] Erick Gomez-Nieto, Frizzi San Roman, Paulo Pagliosa, Wallace Casaca, Elias S Helou, Maria Cristina F de Oliveira, and Luis Gustavo Nonato, Similarity preserving snippet-based visualization of web search results, IEEE transactions on visualization and computer graphics 20 (2014), no. 3, 457–470. [121] Samuel Gratzl, Alexander Lex, Nils Gehlenborg, Hanspeter Pfister, and Marc Streit, Lineup: Visual analysis of multi-attribute rankings, IEEE transactions on visualization and computer graphics 19 (2013), no. 12, 2277–2286. [122] Brynjar Gretarsson, John O’donovan, Svetlin Bostandjiev, Tobias Höllerer, Arthur Asuncion, David Newman, and Padhraic Smyth, Top- icnets: Visual analysis of large text corpora with topic modeling, ACM 200 Transactions on Intelligent Systems and Technology (TIST) 3 (2012), no. 2, 23. [123] Mohammad Hammoud, Dania Abed Rabbou, Reza Nouri, Seyed- Mehdi-Reza Beheshti, and Sherif Sakr, DREAM: distributed RDF en- gine with adaptive query planner and minimal communication, PVLDB 8 (2015), no. 6, 654–665. [124] Eszter Hargittai, Classifying and coding online actions, Social Science Computer Review 22 (2004), no. 2, 210–227. [125] Lane Harrison, Fumeng Yang, Steven Franconeri, and Remco Chang, Ranking visualizations of correlation using weber’s law, IEEE transac- tions on visualization and computer graphics 20 (2014), no. 12, 1943– 1952. [126] Jian He, Enzo Veltri, Donatello Santoro, Guoliang Li, Giansalvatore Mecca, Paolo Papotti, and Nan Tang, Interactive and deterministic data cleaning, Proceedings of the 2016 International Conference on Management of Data, 2016, pp. 893–907. [127] Marti Hearst, Search user interfaces, Cambridge university press, 2009. [128] Marti A Hearst, Tilebars: visualization of term distribution informa- tion in full text information access, Chi, vol. 95, 1995, pp. 59–66. [129] Jeffrey Heer, Joseph M Hellerstein, and Sean Kandel, Predictive inter- action for data transformation., CIDR, 2015. [130] Morten Hertzum and Erik Frøkjær, Browsing and querying in online documentation: a study of user interfaces and the interaction pro- 201 cess, ACM Transactions on Computer-Human Interaction (TOCHI) 3 (1996), no. 2, 136–161. [131] Sarah Higgins, The dcc curation lifecycle model, International journal of digital curation 3 (2008), no. 1. [132] Charles R Hildreth, General introduction; opac research: laying the groundwork for future opac design, The online catalogue: developments and directions. London: Library Association (1989), 1–24. [133] Doug Howe, Maria Costanzo, Petra Fey, Takashi Gojobori, Linda Han- nick, Winston Hide, David P Hill, Renate Kania, Mary Schaeffer, Su- san St Pierre, et al., Big data: The future of biocuration, Nature 455 (2008), no. 7209, 47. 
ai_researcher
1
Efficient_Distributed_Framework_for_Collaborative_Multi-Agent_Reinforcement_Learning.pdf
Collaborative Multi-Agent Video Fast-Forwarding

Shuyue Lan, Zhilu Wang, Ermin Wei, Amit K. Roy-Chowdhury, Fellow, IEEE, and Qi Zhu

Abstract—Multi-agent applications have recently gained significant popularity. In many computer vision tasks, a network of agents, such as a team of robots with cameras, could work collaboratively to perceive the environment for efficient and accurate situation awareness. However, these agents often have limited computation, communication, and storage resources. Thus, reducing resource consumption while still providing an accurate perception of the environment becomes an important goal when deploying multi-agent systems. To achieve this goal, we identify and leverage the overlap among different camera views in multi-agent systems for reducing the processing, transmission, and storage of redundant/unimportant video frames. Specifically, we have developed two collaborative multi-agent video fast-forwarding frameworks in distributed and centralized settings, respectively. In these frameworks, each individual agent can selectively process or skip video frames at adjustable paces based on multiple strategies via reinforcement learning. Multiple agents then collaboratively sense the environment via either 1) a consensus-based distributed framework called DMVF that periodically updates the fast-forwarding strategies of agents by establishing communication and consensus among connected neighbors, or 2) a centralized framework called MFFNet that utilizes a central controller to decide the fast-forwarding strategies for agents based on collected data. We demonstrate the efficacy and efficiency of our proposed frameworks on a real-world surveillance video dataset VideoWeb and a new simulated driving dataset CarlaSim, through extensive simulations and deployment on an embedded platform with TCP communication. We show that compared with other approaches in the literature, our frameworks achieve better coverage of important frames, while significantly reducing the number of frames processed at each agent.

Index Terms—Video fast-forwarding, multi-agent systems, reinforcement learning.

I. INTRODUCTION

With the rapid advancement of camera sensors, networks of agents with cameras are increasingly being explored for tasks such as search and rescue, wide-area surveillance, and environmental monitoring, where the cameras may be built-in cameras in robots, cameras on drones, or fixed surveillance cameras. In these systems, multiple cameras can observe the same environment and generate videos from different angles, often with overlapping views, so that the fusion of all their perceptions may lead to better scene understanding. For many application tasks, this fusion of large amounts of video data needs to be performed in real time or near real time. However, the agents often have limited computation, communication, storage, and energy resources, which makes processing and transmitting all the video data quite challenging. This motivates the development of methods that can select an informative subset of the video frames to focus on.

In the relevant literature, video summarization and video fast-forwarding both aim at generating a compact summary of the original video. In particular, video summarization methods often summarize videos in an offline manner, which requires the entire video to be available before processing it [1], [2], [3], [4], [5].
Multi-view summarization methods that summarize videos from multiple cameras have also been proposed [6], [7], [8], [9], [10]. However, as these methods process the entire videos and are often time-consuming, they are unsuitable for online and real-time applications. On the other hand, video fast-forwarding methods generate the video summary on the fly. Most such methods adjust the playback speed of a video [11], [12], [13], [14], [15], [16], [17] while processing the entirety of it. One exception is our previous work FFNet [18], which performs video fast-forwarding for a single camera in an online manner and only processes a fraction of the video frames by automatically skipping unimportant frames via reinforcement learning. This shows promising results in reducing system computation and storage load. In this work, we build upon this approach and develop our solution for multi-agent video fast-forwarding systems.

A. Solution Overview

Motivated by the observation that there is often significant overlap among videos captured by cameras from different angles in multi-agent systems, we pose the following question: Is it possible to leverage the overlap among different views in multi-agent perception to collaboratively perform fast-forwarding that is efficient, causal, online, and results in an informative summary of the scene in real time?

In this paper, we introduce two methods for multi-agent video fast-forwarding in distributed and centralized settings, respectively. We target scenarios where cameras at multiple agents observe the same environment from different angles. Each camera embeds a fast-forwarding agent with multiple strategies, i.e., it can skip the frames of its video input at different paces (e.g., slow, normal, or fast). During operation, each camera fast-forwards its own video stream based on a chosen pace and periodically updates its fast-forwarding strategy.

For the distributed setting, part of our work has appeared in [19], named DMVF, which chooses and updates fast-forwarding strategies by establishing communication and consensus among connected agents, as shown in the left part of Fig. 1. Agents are connected by a predetermined undirected communication network, where each agent can communicate with a set of neighboring agents. (Note that some agents may not be able to communicate with each other due to practical factors such as the connection capacity of camera nodes, the physical distances between the nodes, and the network bandwidth.) At every adaptation period, each agent evaluates the importance of the selected frames from itself and those from its neighbors by comparing their similarities. Then a system-wide consensus algorithm is run among the agents to reach an agreement on the importance score for every agent's view. Finally, based on the score ranking and the system requirement, each agent selects a fast-forwarding strategy for its next adaptation period.

Fig. 1. Illustration of collaborative multi-agent video fast-forwarding. Multiple cameras at different agents are observing the same environment from different overlapping views. Each camera performs video fast-forwarding according to its current fast-forwarding strategy, which is decided either via communication and consensus among neighboring agents in a distributed manner (left) or by a central controller that analyzes the data from each agent (right). The colored regions within the bars represent the important video segments that each agent sees in its view.

For the centralized multi-agent video fast-forwarding setting, we have developed a new framework in this work, named MFFNet, which contains a central controller to decide the fast-forwarding strategies for each agent (the right part of Fig. 1). During operation, each camera fast-forwards its own video stream based on a chosen pace given by the central controller, and periodically sends selected frames (i.e., fast-forwarded clips) to the central controller. The central controller receives the selected frames from every agent and composes a more compact summary video for the scene. Moreover, based on the data at hand, the central controller infers the strategy/pace that should be adopted by each agent for the next period and sends such instruction back to the agents. Intuitively, an agent whose view currently contains more important frames than others should be slowed down for the next period to collect more frames, while agents whose views have significant overlaps with the slowed-down agents can be given a faster pace to reduce their processing and transmission load.

In both distributed and centralized settings, each agent only processes a very small portion of frames with fast-forwarding, which significantly reduces the computation load. The agents also do not require transmitting or storing their entire video streams (often only a fraction of them). From the system perspective, both the intra-view redundancy at each agent and the inter-view redundancy across different agents are reduced. Furthermore, the online and causal nature of our proposed approaches enables users to begin fast-forwarding at any point when executing certain multi-agent perception tasks. Our approach is particularly useful for resource-constrained and time-critical systems such as multi-robot systems.

The main contributions of this paper include the following.
• We formulate the multi-agent video fast-forwarding problem as a collaborative multi-agent reinforcement learning problem. Each agent can fast-forward its video input without processing the entire video and can easily adapt to different fast-forwarding strategies/paces.
• Building upon our work in single-agent fast-forwarding (FFNet) [18] and distributed multi-agent fast-forwarding (DMVF) [19], we develop a new centralized framework MFFNet for multi-agent fast-forwarding, which uses a central controller to orchestrate the fast-forwarding strategies of agents for achieving better scene coverage with reduced computation and communication load.
• We demonstrate the effectiveness of MFFNet on a challenging multi-view dataset, VideoWeb [20], achieving real-time speed on an embedded platform with TCP communication. We compare MFFNet with DMVF, FFNet, and a few other methods in the literature.
• Moreover, for a more comprehensive comparison, we also include a newly generated multi-camera dataset for multi-agent video fast-forwarding, named CarlaSim, to further evaluate the various methods on moving platforms.

In particular, beyond our recent work [19], this paper introduces the new development of 1) the MFFNet method, 2) the new CarlaSim dataset, and 3) the experimental results and analysis of MFFNet, as well as its comparison with DMVF, FFNet, and other methods on VideoWeb and CarlaSim.

B. Paper Organization

This paper highlights our new contributions in MFFNet and also introduces our prior work in FFNet and DMVF, providing a holistic view of our solution to video fast-forwarding.
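Before detailing each component, the following is a minimal Python sketch of the per-period control loop that both settings share. All names here (FastForwardAgent, select_strategies, the stand-in importance score) are our own illustrative assumptions, not the authors' released code; the DMVF consensus scoring or the MFFNet controller logic described later would replace the placeholder select_strategies.

```python
import random

class FastForwardAgent:
    """Hypothetical per-camera fast-forwarding agent with switchable paces."""
    # Maximum skip step per strategy (values chosen to mirror the action-space
    # sizes reported later in Sec. III-B: slow=15, normal=25, fast=35).
    PACES = {"slow": 15, "normal": 25, "fast": 35}

    def __init__(self, strategy="normal"):
        self.strategy = strategy
        self.buffer = []  # selected (kept) frame indices for this period

    def fast_forward(self, start, period_len):
        """Process one adaptation period, keeping only the visited frames."""
        self.buffer.clear()
        t = start
        while t < start + period_len:
            self.buffer.append(t)  # "process" this frame
            t += random.randint(1, self.PACES[self.strategy])  # skip ahead
        return self.buffer

def select_strategies(buffers, n_slow=1, n_fast=1):
    """Placeholder for DMVF consensus / MFFNet controller: rank views by an
    importance score and assign slow/normal/fast paces accordingly."""
    scores = [len(b) for b in buffers]  # stand-in importance score
    order = sorted(range(len(buffers)), key=lambda i: -scores[i])
    paces = {i: "normal" for i in order}
    for i in order[:n_slow]:
        paces[i] = "slow"
    for i in order[-n_fast:]:
        paces[i] = "fast"
    return paces

agents = [FastForwardAgent() for _ in range(6)]
for period in range(3):  # a few adaptation periods
    buffers = [a.fast_forward(period * 100, 100) for a in agents]
    for i, pace in select_strategies(buffers).items():
        agents[i].strategy = pace  # update pace for the next period
    print(period, [a.strategy for a in agents])
```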
More specifically, FFNet is a single-agent video fast-forwarding method that we developed based on reinforcement learning, and we build a multi-strategy video fast-forwarding agent upon FFNet. Both DMVF and MFFNet use this multi-strategy fast-forwarding agent on their camera nodes – DMVF uses a distributed framework to decide the strategies each agent should use, while MFFNet uses a centralized framework to do so. Both methods are efficient and effective for collaborative video fast-forwarding in a network of resource-limited agents.

In the rest of the paper, we first present a review of relevant literature in Sec. II. This is followed by a review of our work in developing FFNet for single-agent video fast-forwarding in Sec. III, along with the development of a multi-strategy video fast-forwarding agent. In Sec. IV and Sec. V, we present our solutions to the multi-agent video fast-forwarding problem for distributed and centralized settings, i.e., DMVF and MFFNet, respectively. Experimental results on real-life data are presented in Sec. VI.

II. RELATED WORK

A. Video Summarization and Video Fast-forwarding

The objective of video summarization is to take an entire video as input and output a compact subset of frames that can describe the important content of the original video. Many single-view video summarization methods are developed with unsupervised learning [21], [22], [23], [24] and supervised learning techniques based on video-summary labels [2], [3], [4], [25], [26], [27]. There are methods proposed specifically for summarizing crawled web images/videos [28], [29], [30], [31] and photo albums [32], and online methods developed using submodular optimization [1], Gaussian mixture models [33], and online dictionary learning [5]. Beyond single-view, the multi-view video summarization problem has been addressed by random walk over spatio-temporal graphs [6], joint embedding and sparse optimization [7], [8], DPP (Determinantal Point Processes) [9], and a two-stage system with online single-view summarization and distributed view selection [10]. Different from these methods, our approaches do not process all the frames, which significantly reduces computation and communication load, and they collaboratively fast-forward multi-view videos, further improving efficiency and coverage.

Video fast-forwarding methods are used for skipping uninteresting/unimportant parts of a video. Commercial video players often offer users manual control of the playback speed, such as the Apple QuickTime player with 2x, 5x, and 10x fast-forward. In the literature, the playback speed can be automatically adjusted based on the similarity of each candidate clip to a query clip [14] and the motion activity patterns in videos [11], [34], [35]. Besides playback speed adjustment, some works develop the fast-forwarding policy based on mutual information between frames [36], [37], shortest path distance over the semantic graph built from frames [16], [17], and visual and textual features [38]. Hyperlapse is also widely studied for fast-forwarding videos aiming at speed-up and smoothing [15], [12], [13]. Different from these approaches that target single videos, our work focuses on multi-agent video fast-forwarding methods that collaboratively fast-forward videos from different views.

B. Reinforcement Learning

Deep reinforcement learning has been widely used in many computer vision tasks and has achieved promising performance, such as in action detection [39], object detection [40], image captioning [41], pose estimation [42], visual tracking [43], and query-conditioned video summarization [44]. There are also approaches applying reinforcement learning to the multi-agent domain, i.e., multi-agent reinforcement learning (MARL); see a detailed review in [45]. Some recent works have used MARL to address computer vision tasks, such as joint object search [46], multi-object tracking [47], and frame sampling for video recognition [48]. There are also works on building learnable communication protocols for collaborative multi-agent deep reinforcement learning [49], [50]. Our earlier work FFNet conducts single-video fast-forwarding via reinforcement learning [18], based on which we further develop two approaches for multi-agent video fast-forwarding in centralized and distributed settings.

C. Multi-agent System Optimization

A fundamental problem in distributed multi-agent systems is the minimization of a sum of local objective functions while maintaining agreement over the decision variable, often referred to as consensus optimization. Seminal work in [51] proposes a distributed consensus protocol for achieving agreement in a multi-agent setting by iteratively taking a weighted average with local neighbors. The work in [52] presents a distributed gradient descent (DGD) method, where each agent iteratively updates its local estimate of the decision variable by executing a local gradient descent step and a consensus step. Follow-up works [53], [54], [55] extend this method to other settings, including stochastic networks, constrained problems, and noisy environments. More recently, EXTRA [56], which takes a careful combination of gradient and consensus steps, is proposed to improve convergence speed and is shown to achieve linear convergence with a constant step size. In computer vision, consensus-based methods are used in applications such as human pose estimation [57], background subtraction [58], and multi-target tracking [59]. To the best of our knowledge, the DMVF framework (more details in [19]) we developed is the first distributed consensus-based framework to address multi-agent video fast-forwarding. In this paper, we further develop a centralized framework, MFFNet, which utilizes a central controller to adjust the fast-forwarding strategies for multi-agent video fast-forwarding.

III. SINGLE-AGENT VIDEO FAST-FORWARDING

A. Review of FFNet

FFNet [18] uses a Markov decision process (MDP) to formulate the video fast-forwarding problem and solves it using reinforcement learning, i.e., with a Q-learning agent that learns a policy to skip unimportant frames and present the important ones for further processing. Given the current frame, FFNet decides the number of frames to skip next. The MDP formulation of FFNet is defined as follows:

• State: A state $s_k$ describes the environment at time step $k$. It is defined as the feature vector of the current frame.
• Action: An action $a_k$ is performed by the system at step $k$ and leads to an update of the state. The action set includes the possible numbers of frames to skip.
• Reward: An immediate reward $r_k = r(s_k, a_k, s_{k+1})$ is received by the system at time step $k$ as
$$r_k = -SP_k + HR_k. \quad (1)$$
It consists of the "skip" penalty (SP) and the "hit" reward (HR). $SP_k$ defines the penalty for the skipping action over the skipped interval $t_k$ at step $k$:
$$SP_k = \frac{\sum_{i \in t_k} \mathbb{1}(l(i)=1)}{T} - \beta\,\frac{\sum_{i \in t_k} \mathbb{1}(l(i)=0)}{T}, \quad (2)$$
where $\mathbb{1}(\cdot)$ is an indicator function that equals 1 if the condition holds, $T$ is the largest number of frames we may skip, and $\beta \in [0, 1]$ is a trade-off factor between the penalty for skipping important frames and the reward for skipping unimportant frames. $HR_k$ defines the reward for jumping to an important frame or a position near an important frame, and is computed as
$$HR_k = \sum_{i=z-w}^{z+w} \mathbb{1}(l(i)=1) \cdot f_i(z), \quad (3)$$
where $f_i(z)$ extends the one-frame label at frame $i$ to a Gaussian distribution over a neighboring time window $w$, i.e., $z \in [i-w, i+w]$.
• Policy: With the definition of states, actions, and rewards, a skipping policy $\pi$ is learned for selecting the action that maximizes the expected accumulated reward $R$:
$$\pi(s_k) = \arg\max_a E[R \mid s_k, a, \pi], \quad (4)$$
where the accumulated reward $R$ is computed as
$$R = \sum_k \gamma^{k-1} r_k = \sum_k \gamma^{k-1} r(s_k, a_k, s_{k+1}), \quad (5)$$
where $\gamma \in [0, 1]$ denotes the discount factor for future rewards.

With Q-learning, the value of $E[R \mid s, a, \pi]$ is evaluated as $Q(s, a)$. The optimal value $Q^*(s_k, a_k)$ can be calculated by the Bellman equation in a recursive fashion:
$$Q^*(s_k, a_k) = r_k + \gamma \max_{a_{k+1}} Q^*(s_{k+1}, a_{k+1}). \quad (6)$$

Fig. 2. The model structure of FFNet. It takes a frame from an incoming video stream as input to a deep neural network and outputs the number of frames to skip.

The model of FFNet is shown in Fig. 2. When training this model, the mean squared error between the target Q-value and the output of the MLP is used as the loss function.

B. Multi-strategy Fast-forwarding Agent

To fit the multi-agent video fast-forwarding scenario, on each camera that captures a view of the scene, we leverage a multi-strategy fast-forwarding agent that can adaptively fast-forward the incoming video at different paces. Similar to [19], FFNet is derived into three different strategies/paces for fast-forwarding: normal-pace, slow-pace, and fast-pace. Note that our approach can be easily extended to other numbers of strategies/paces.

Normal-pace Strategy. The normal-pace strategy adopts the same immediate reward design as FFNet:
$$r_k(\mathrm{normal}) = -SP_k + HR_k. \quad (7)$$
For the normal-pace strategy, we use an action space of size 25, i.e., skipping from 1 to 25 frames.

Slow-pace Strategy. The slow-pace strategy aims at skipping fewer frames and thus retaining more frames in the selected buffer, possibly including more important frames. To meet this goal, we modify the immediate reward of FFNet at time step $k$ as
$$r_k(\mathrm{slow}) = (-SP_k + HR_k) \times \left(1 - \frac{\mathrm{sigmoid}(a_k)}{2}\right). \quad (8)$$
Intuitively, if the agent skips a larger step, it will receive a smaller immediate reward. We also reduce the action space to 15 to prevent the agent from skipping too much.

Fast-pace Strategy. The goal of the fast-pace strategy is to skip more unimportant frames for more efficient processing and transmission. Thus, we modify the immediate reward at time step $k$ as
$$r_k(\mathrm{fast}) = (-SP_k + HR_k) \times \left(1 + \frac{\mathrm{sigmoid}(a_k)}{2}\right). \quad (9)$$
This reward definition ensures that the agent gets a larger immediate reward if it skips a larger step. The action space is set to 35 to allow the agent to skip larger steps.

Each agent can flexibly switch among these strategies to adaptively fast-forward its own videos.
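As an illustration, the three reward variants of Eqns. (7)–(9) can be expressed compactly in code. The sketch below is ours, not the authors' implementation: it assumes the per-step values sp_k and hr_k have already been computed from the frame labels as in Eqns. (2)–(3), and that a_k is the chosen skip length (in practice one may want to normalize a_k before the sigmoid, since raw skip counts saturate it).

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def immediate_reward(sp_k: float, hr_k: float, a_k: float, pace: str) -> float:
    """Reward variants of Eqns. (7)-(9): the base FFNet reward, scaled down
    for the slow pace (discouraging long skips) or scaled up for the fast
    pace (encouraging them). `a_k` is the skip length chosen at step k."""
    base = -sp_k + hr_k
    if pace == "slow":
        return base * (1.0 - sigmoid(a_k) / 2.0)  # Eqn. (8)
    if pace == "fast":
        return base * (1.0 + sigmoid(a_k) / 2.0)  # Eqn. (9)
    return base                                   # Eqn. (7), normal pace
```

Because only the reward and the action-space size differ among the three variants, the same Q-learning machinery of Sec. III-A can train all of them.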
During training, an ϵ-greedy strategy is utilized to better explore the state space: it picks a random action with probability ϵ and the action that maximizes $Q^*(s, a)$ with probability 1 − ϵ.

IV. DMVF: DISTRIBUTED MULTI-AGENT VIDEO FAST-FORWARDING

A. Overview

In this section, we review our approach for addressing the multi-agent video fast-forwarding problem by adapting the skipping strategy of each agent in an efficient, online, and distributed manner, named DMVF (more details in [19]). Fig. 3 shows the workflow design of our framework (taking one agent i for illustration). Given the incoming multi-view video streams V = {v_1, · · · , v_N} captured at different agents, our goal is to generate a final summary F = {f_1, · · · , f_N} for the scene while reducing the computation, communication, and storage load.

Fig. 3. The workflow of DMVF. At every adaptation period t, each agent i first fast-forwards its video input with current strategy s_i^t and selects a set of frames f_i. It then receives neighbor agents' selected frames (e.g., f_j and f_k) and computes an initial importance score for itself and its neighbors. Afterwards, agent i refines and finalizes the importance score with other agents via a system-wide maximal consensus algorithm. Based on this importance score vector x, agent i chooses its strategy for the next period s_i^{t+1} (so does every other agent).

TABLE I: NOTATIONS USED IN DMVF.
M — number of available fast-forwarding strategies
N — number of camera views / agents
V — the set of N views {v_i}, i ∈ [1, N]
S — the set of available strategies {s_m}, m ∈ [1, M]
s_i^t — strategy being used by agent i at adaptation step t
s_i^{t+1} — strategy for agent i in the next adaptation step t + 1
F — summary of the scene: {f_1, · · · , f_N}
x — importance score vector after consensus
T — period of strategy update

In our framework, the fast-forwarding agent of each view is modeled as a reinforcement learning agent with multiple available strategies S = {s_m, m = 1, · · · , M}. During operation, at every adaptation period t (with period length T), each agent i fast-forwards its own video stream with a current strategy s_i^t ∈ S and selects a subset of frames f_i. Note that the frames being skipped are not processed, transmitted, or saved. Agent i then communicates with its neighbors and receives their selected frames, e.g., f_j and f_k as shown in the figure. Based on such information, agent i computes an initial importance score for itself and its neighbors. Afterward, agent i refines its initial score together with other agents in the system via a system-wide consensus algorithm, including first an update of its own score and then multiple iterations to reach system-wide consensus. Note that during the consensus process, only scores are transmitted (not selected frames). After running the consensus algorithm, each agent will have the same copy of the final importance scores for their selected frames in the current period, defined as x = [x_1, x_2, ..., x_N]. Agent i then chooses its fast-forwarding strategy for the next period s_i^{t+1} based on the rank of its importance score x_i. The notations are summarized in Table I.

B. Local-neighbor Importance Score Computation

In this step, for every agent i, we compute an initial importance score for itself and its neighbors by comparing the
This algorithm only requires the number of consensus steps to be the diameter of the graph G to reach an agreement (the convergence is guaranteed). In the end, every agent will have the same copy of the importance score vector for all agents, i.e., ⃗xi = ⃗x = [x1, x2, ..., xN ]. D. Strategy Selection Based on the final importance scores in ⃗x, the agents with higher scores could be assigned with a slower strategy for the next period, while the agents with lower scores could be faster. Given the system requirement, the portions of different strategies are pre-defined, which means there should be a fixed number of agents under each strategy after every update. V. MFFNET: CENTRALIZED MULTI-AGENT VIDEO FAST-FORWADING A. Overview In this section, we present a new method to address the multi-agent video fast-forwarding problem by utilizing a cen- tral controller to analyze the data from each agent and adapt the fast-forwarding strategies of agents in an efficient online manner, named MFFNet. Fig. 4 shows the workflow design of our framework. Given the incoming multi-view video streams V = {v1, · · · , vN } captured at different agents, the goal of MFFNet is to generate a final summary F = {f1, · · · , fN } for the scene while reducing the computation, communication, and storage load. The fast-forwarding agent of each camera view is modeled as a reinforcement learning agent with multiple available strategies {sm, m = 1, · · · , M }. During operation, each agent n fast-forwards its own video stream with a current strategy ¯sn and keeps the selected frames in its buffer Bn. The frames being skipped are not processed, transmitted, or saved. After a period of time T , each agent sends the selected frames in its buffer to the central controller. The central controller receives selected frames of the last period from all agents and computes their similarity. Based on the similarity computation, the controller chooses the strategy ˆsn for each agent n in the next period and notifies them immediately. Such computation and decision are very fast and only performed once every Agent 1S1ഥ𝑠1𝑆𝑀Buffer 1Central Controllerഥ𝑠1ෝ𝑠1ෝ𝑠1S2. . .View 1Agent NS1𝑠𝑁𝑆𝑀Buffer N𝑠𝑁ෞ𝑠𝑁ෞ𝑠𝑁S2. . .View NFinal SummaryAgentBuffer View (Multiple views omitted for better illustration)Strategy ComputationSummary Generation. . .. . .Input: 𝑏1, … , 𝑏𝑁Similarity Computation𝑏1𝑏𝑁Output: 𝑓1,…,𝑓𝑁Input: 𝑏1, … , 𝑏𝑁𝑉′𝑉−𝑉′ M N vn V V ′ sm ¯sn ˆsn {An} {Bn} {bn} F T ρ number of available fast-forwarding strategies number of camera views / agents the video of view n, n ∈ [1, N ]. the set of N views the subset of V containing selected main views available strategy m, m ∈ [1, M ] strategy being used in agent n strategy for agent n in the next period set of fast-forwarding agents, {A1, · · · , AN } set of buffers, {B1, · · · , BN } set of data received by controller, {b1, · · · , bN } summary of the scene: {f1, · · · , fN } period of strategy update the threshold for matching frames TABLE II NOTATIONS USED IN MFFNET. period. The central controller also generates a more compact summary of the selected frames and stores them. The notations are highlighted in Tab. II. B. Central Controller The responsibility of the central controller is to decide the pace for each agent and generate a more compact summary of the scene. At every period T , it receives the selected frames {b1, · · · , bN } from all agents. With those data, it first computes similarity among frames from different agents. 
Based on the similarity, the central controller decides the new strategies {ˆs1, · · · , ˆsn} for all agents and sends them back. Meanwhile, the controller further reduces redundancy by generating a compact summary F = {f1, · · · , fn}. The central controller consists of three modules: similarity computation, strategy computation, and summary generation. Similarity Computation. From each agent n, the central con- troller receives a set of frames bn per period. In this module, the similarity between two frames is defined in Eqn. (10) in Sec. IV. A threshold ρ is used to match frames. If the similarity of two frames is greater than ρ, we consider them as a match. In order to further compute the strategies for each agent, we define a function named match count M (·, ·), which matches frames from two sources and returns the number of matching frames, as shown below: (cid:88) (sim(x, y)) > ρ), (14) M (u, v) = I(max y∈v x∈u where I(·) is an indicator function that equals 1 if the condition holds. Strategy Computation. The goal of the strategy computation module is to infer the strategies for all agents in the next period, i.e., { ˆs1, · · · , ˆsn}. Intuitively, if a view contains a larger number of important frames, it should receive more attention and should not be skipped too much. Following this idea, we formulate the strategy computation problem as an optimization problem for selecting a subset of views V ′ as the main views from V to better represent the whole scene. The set of main views is selected by V ′ = arg max ¯V (cid:80) i∈V − ¯V M (bi, (cid:83) j∈ ¯V len(bj) (cid:80) j∈ ¯V bj) Algorithm 1 Main View Set Selection Algorithm 1: Input: a set of data received by the controller, 7 Similarity[i, j, k, l] = sim(bi[k], bj[l]) for j = 1 to N , j ̸= i do for l = 1 to Size(bj) do for k = 1 to Size(bi) do {b1, · · · , bN }, the similarity threshold ρ 2: Output: A set of selected main views V ′ 3: Initialize the similarity array Similarity 4: for i = 1 to N do 5: 6: 7: 8: 9: M axScore = 0 10: for δ = 1 to (2N − 2) do 11: 12: 13: 14: 15: ¯V = {}, sz = 0, score = 0 for i = 1 to N do ¯V ← ¯V ∪ {i} sz ← sz + Size(bi) for i = 1 to N , i /∈ ¯V do if the i-th bit of δ is 1 then 16: 17: 18: 19: 20: 21: 22: 23: 24: 25: 26: for k = 1 to Size(bi) do match = 0 for j ∈ ¯V do for l = 1 to Size(bj) do if sim(bi[k], bj[l]) > ρ then match ← 1 score ← score + match score ← score/sz if score > M axScore then V ′ ← ¯V , M axScore ← score where bi and bj are the frames sent back by the fast-forward agents i and j. len(·) represents the number of frames in a fast-forwarded segment. The set of main views is selected as the subset of views that can cover the most of other views. To avoid the effect of the main view size, we divide the sum of match counts of other views by the total number of frames in the main view set. The detailed algorithm for main view set selection is shown in Algorithm 1. Here, sz denotes the total number of frames in the main views. score is the main view score of the subset ¯V , as in Eqn. (15). For the views in the main view set V ′, they can cover more content than other views and have more important information. Thus, we use the slow fast-forwarding strategy for each of them. For any other views in V − V ′, they can be covered significantly by the main views. Therefore, we expect them to fast-forward at a faster speed. More specifically, we decide their strategies by their matching percentage to the main view set. 
The matching percentage of view n is computed as mp(n) = M (bn, (cid:83) j∈ ¯V bj)/len(bn). If the matching per- centage of a view is smaller than a threshold τ , it will be instructed to maintain the normal pace; otherwise, it will be instructed to use the fast strategy, as below: ˆsn =    slow, normal, f ast, if n ∈ V ′, if n ∈ V − V ′, mp(n) < τ, if n ∈ V − V ′, mp(n) > τ. (16) , (15) Data Buffer and Strategy Update. For each agent n, there is a data buffer Bn for storing the selected frames. At every time period T , the agent will send those frames in the buffer to the central controller. The agent will also receive a new strategy/pace instruction from the central controller and adapt it accordingly in the next period. Summary Generation. After matching the frames among views and choosing the main view, the next step is to generate a more compact summary for the scene. We use the following policy for further reducing redundancy: 1) for the set of main views V ′, we keep all the frames from its buffer in the summary, and 2) for the other views, we remove the frames that are matched with the main views (i.e., similar to some frames in the main views) and only keep the remaining ones in the summary. Please note that when generating the summary, we restrict the reduction of frames within a certain time window. That is, if two frames are similar with respect to the similarity threshold ρ and are close to each other in time, we consider them as a match and drop it. Finally, similarly to [18], we also include some neighboring frames of the selected frames in the summary (with selected ones as the window centroids). All these summary frames are denoted as {f1, · · · , fn}. C. Central Controller Using RL In addition, we design another central controller using deep reinforcement learning with our framework. The central controller acts as a feedback-loop controller system, which can be formulated as an MDP with the following definitions of key elements. State. In our scenario, the fast-forwarded videos of period k from multiple agents are integrated into a single description, which is taken as the state sk. To be more specific, we consider the concatenation of the average feature vector as a state, which is based on the fast-forwarded frames of different agents. Action. At each control period k, we consider the action as the combination of different fast-forwarding strategies used in each agent. As we have M available fast-forwarding strategies for all N agents, i.e. S = {sm, m ∈ [1, M ]}, the entire action space AS = (cid:8)a1, a2, ..., aP (cid:9), where P = M N . Reward. After taking one action, i.e., selecting the proper fast-forwarding for each agent, the system transits from state sk to another state sk+1 and an immediate reward rk = r(sk, ak, sk+1) is received by the system. The accumulated reward is further defined as R = (cid:88) k γk−1r(sk, ak, sk+1), (17) where γ ∈ [0, 1] is the discount factor for the rewards in the future. The goal of the central controller is to control the fast-forwarding paces of agents to maximize the coverage of the important scenes across multiple views and reduce the redundancy in the final summarized videos, by taking a sequence of actions. For a video available in the training set, we assume that the label of it is a binary vector, in which 1 indicates an important frame and 0 means an unimportant one. 
C. Central Controller Using RL
In addition, we design another central controller for our framework using deep reinforcement learning. This central controller acts as a feedback-loop control system, which can be formulated as an MDP with the following key elements.
State. In our scenario, the fast-forwarded videos of period k from the multiple agents are integrated into a single description, which is taken as the state sk. More specifically, we take as the state the concatenation of the average feature vectors of the fast-forwarded frames from the different agents.
Action. At each control period k, the action is the combination of the fast-forwarding strategies used by the agents. As there are M available fast-forwarding strategies for all N agents, i.e., S = {s_m, m ∈ [1, M]}, the entire action space is A_S = {a_1, a_2, ..., a_P}, where P = M^N.
Reward. After taking one action, i.e., selecting the proper fast-forwarding strategy for each agent, the system transits from state sk to state sk+1 and receives an immediate reward rk = r(sk, ak, sk+1). The accumulated reward is defined as
R = \sum_{k} \gamma^{k-1} r(s_k, a_k, s_{k+1}),   (17)
where γ ∈ [0, 1] is the discount factor for future rewards. The goal of the central controller is to control the fast-forwarding paces of the agents, through a sequence of actions, so as to maximize the coverage of the important scenes across multiple views and reduce the redundancy in the final summarized videos. For a video in the training set, we assume that its label is a binary vector in which 1 indicates an important frame and 0 an unimportant one.
After receiving the strategy instruction from the central controller, each agent fast-forwards its own video stream with the corresponding model and transmits the fast-forwarded video segment during the current control period. During a period of time T, an agent n sends its fast-forwarded frames bn and the corresponding binary vector of selected frames ŷn back to the central controller. With this information from all agents, the central controller receives the immediate reward at step k, computed as
r_k = \sum_{n=1}^{N} g(\hat{y}_{n,k})^T g(y_{n,k}) + \alpha \frac{\|g(\bar{y}_k)\|_1}{\sum_{n=1}^{N} \|\hat{y}_{n,k}\|_1},   (18)
where ŷn,k is the binary vector indicating the selected frames of agent n at time step k, and yn,k is the ground-truth binary vector of the view of agent n during the current period. ȳk is the global ground-truth binary vector of the scene at time step k, generated by
\bar{y}_k = \min\Big(\sum_{n=1}^{N} y_{n,k},\ 1\Big),   (19)
where the minimum is taken element-wise.
The first term in Eqn. (18) gives higher rewards to fast-forwarding actions that select frames matching the ground truth better. As neighboring frames are often similar and share the same content, we want to match the fast-forwarded result to the ground truth in a smoother fashion: if an agent selects a frame that is close to an important frame, it should still receive some reward rather than none. To achieve this, we transform the binary selection vectors of both the ground truth yn,k and the transmitted result ŷn,k into Gaussian distributions within a time window, denoted by the function g(·). The second term in Eqn. (18) is used to reduce the redundancy of the fast-forwarded result: if the agents select more frames, the central controller receives a smaller reward for the current strategy selection.
Policy. The policy π decides the action to be executed by the system at each time step, i.e., it chooses the action that maximizes the expected accumulated reward for the current step and the future, as shown in Eqn. (20). In other words, the policy finds the fast-forwarding strategy for each agent that yields a larger expected accumulated reward:
\pi(s_k) = \arg\max_{a} E[R \mid s_k, a, \pi].   (20)
Similar to the training of FFNet, we utilize Q-learning to obtain this policy, evaluating the value of E[R | s, a, π] as Q(s, a) and using a feed-forward neural network to approximate the Q-value.
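As a concrete reading of Eqns. (18)-(19), here is a minimal NumPy sketch of the immediate reward. The Gaussian smoothing width sigma and the guard against an empty selection are our assumptions; the paper does not specify them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def g(y, sigma=2.0):
    """Spread a binary selection vector into Gaussian bumps, so that
    selecting a frame near an important one still earns partial reward."""
    return gaussian_filter1d(np.asarray(y, dtype=float), sigma)

def immediate_reward(y_hat, y, alpha=1.0):
    """Eqn. (18): per-agent coverage term plus a redundancy penalty.
    y_hat, y: lists of per-agent binary vectors for the current period."""
    y_bar = np.minimum(np.sum(y, axis=0), 1)      # Eqn. (19), element-wise
    coverage = sum(float(g(yh) @ g(yt)) for yh, yt in zip(y_hat, y))
    n_selected = float(sum(np.sum(yh) for yh in y_hat))
    return coverage + alpha * np.sum(g(y_bar)) / max(n_selected, 1.0)
```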
VI. EXPERIMENTS
In this section, we first present the experimental results of our MFFNet framework and its overall comparison with FFNet and several single-agent fast-forwarding methods in the literature, followed by a further comparison with FFNet on the coverage-efficiency tradeoff and on high-redundancy cases. We then compare MFFNet in detail with our previous distributed multi-agent fast-forwarding framework DMVF. Finally, we also evaluate how communication issues may affect MFFNet, an important practical consideration.
A. Datasets
We evaluate the performance of the various methods on a multi-view video dataset with fixed cameras, VideoWeb [20], and on a self-built simulated multi-view dataset on moving platforms using the CARLA simulator [60], referred to as CarlaSim.
VideoWeb. This dataset is captured in a realistic multi-camera network environment that involves multiple persons performing many different repetitive and non-repetitive activities. As in [19], we use the Day 4 subset of the VideoWeb dataset, which contains multiple vehicles and persons. It has 6 scenes, and each scene has 6 views of videos. All videos are captured at 640 × 480 resolution and approximately 30 frames/second. The dataset includes labels for important activities, based on which we can generate a binary importance indicator for each frame: if a frame contains a labeled important activity, it is labeled as an important frame with the binary indicator as 1 (otherwise, as 0). With such a frame importance indicator, we can generate a global ground truth across views for evaluation purposes.
CarlaSim. CARLA is a simulator for urban autonomous driving. It provides open digital assets (urban layouts, buildings, and vehicles) and supports flexible specifications of sensor suites, environmental conditions, full control of all static and dynamic actors, map generation, and more. For generating the CarlaSim dataset, we utilize the Town3 environment in the CARLA simulator, which has a 5-lane junction, a roundabout, unevenness, a tunnel, and so on. The multi-view videos are captured by mounting multiple cameras on an autonomous car that runs with a built-in autonomous driving controller. Detailed specifications for generating the video data are shown in Tab. III. We generate a binary indicator for each video frame according to the existence of vehicles in the view: if a frame captures a nearby vehicle (i.e., with size > 150 pixels), it is labeled as an important frame. The global ground truth for evaluation is generated by the same method as for the VideoWeb dataset. Fig. 5 shows some example frames of the CarlaSim dataset. As the data is collected on a moving platform, it captures more dynamic scenarios than existing datasets that use stationary cameras (such as VideoWeb) and helps validate the efficacy of our methods in such dynamic scenarios.
TABLE III
SPECIFICATIONS IN CARLA FOR GENERATING THE CARLASIM DATASET.
Specification       Value
City                Town-3
Number of videos    18
Video length        10000
Resolution          720 x 480
Camera              front, front-left, front-right
Terrain             5-lane junction, roundabout, unevenness, tunnel
Weather             dynamic cloudiness, precipitation, sun angle
Fig. 5. Some illustrative example frames from the CarlaSim dataset. From left to right, the columns stand for frames from the front-left, front, and front-right views. The CarlaSim dataset has multiple weather conditions, such as cloudy, rainy, and sunny (rows 1-3). Different terrains exist in the map, such as the tunnel in row 4.
B. Experimental Setup
Implementation Details. Our MFFNet is implemented using the TensorFlow library. The fast-forwarding agents are all modeled as 4-layer neural networks. An ϵ-greedy strategy is used to better explore the state space during training. In the following experiments, we explore the scenarios of both 3 views (N = 3) and 6 views (N = 6), and set the similarity threshold to ρ = 0.525 and ρ = 0.575, respectively. The strategy computation threshold τ is set to 0.4. The strategy update period T is set to 100 frames of the raw video inputs. The 3 strategies used in our framework are FFNet and its variants, as defined in Sec. III-B. The operating points of agents with the slow, normal, and fast strategies are shown in Tab. IV for VideoWeb and Tab. V for CarlaSim. Each video frame is represented by the penultimate layer (pool5) of the GoogLeNet model [61] (1024 dimensions). Each baseline algorithm is evaluated with the same neighboring window extension as ours. We randomly use 80% of the videos for training and the remaining 20% for testing, and report the average performance over 5 rounds of experiments.
TABLE IV
OPERATING POINTS OF STRATEGIES ON VIDEOWEB.
Strategy   Processing rate (%)   3-view Coverage (%)   6-view Coverage (%)
Slow       8.69                  66.22                 73.45
Normal     6.02                  52.88                 61.91
Fast       3.73                  48.38                 55.89
TABLE V
OPERATING POINTS OF STRATEGIES ON CARLASIM.
Strategy   Coverage (%)   Processing rate (%)
Slow       80.78          18.06
Normal     67.83          14.76
Fast       60.78          7.09
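For illustration, a minimal sketch of the ϵ-greedy exploration over the joint M^N action space described in Sec. V-C; the q_network callable and its output layout (one Q-value per joint action) are assumptions, not the paper's exact architecture.

```python
import itertools
import random

STRATEGIES = ("slow", "normal", "fast")           # M = 3 paces per agent

def epsilon_greedy_action(q_network, state, n_agents, epsilon=0.1):
    """Pick one of the M^N joint pace assignments: explore uniformly with
    probability epsilon, otherwise follow the approximated Q-values."""
    actions = list(itertools.product(STRATEGIES, repeat=n_agents))
    if random.random() < epsilon:
        return random.choice(actions)
    q_values = q_network(state)                   # assumed: length M**n_agents
    best = max(range(len(actions)), key=lambda i: q_values[i])
    return actions[best]
```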
Evaluation Metrics. We evaluate the performance of the methods with a coverage metric and a processing rate metric. The coverage metric evaluates how well the resulting fast-forwarded videos across multiple agents cover the important frames in the ground truth. It is computed as the percentage of important frames that are included in the fast-forwarded videos across agents; in other words, an important frame is considered covered if it is included in the fast-forwarded video of any one of the agents. The processing rate metric measures the percentage of frames processed by the system.
Comparison Methods. We compare our MFFNet with the following methods for video fast-forwarding and video summarization: (1) Random, which skips the incoming frames randomly. (2) Uniform, which fast-forwards the video uniformly. (3) Online Kmeans (OK) [62], a clustering-based method that works in an online update fashion; the summary consists of the frames closest to the centroid of each cluster. (4) Spectral Clustering (SC) [63], a clustering-based method that derives several clusters from all the frames in a video; the summary is composed of the frames closest to each centroid. (5) Sparse Modeling Representative Selection (SMRS) [21], which takes the entire video as the dictionary and finds the representative frames based on the zero patterns of the sparse coding vector. (6) FFNet [18], the method we developed for single-agent video fast-forwarding. (7) DMVF [19], the distributed multi-agent fast-forwarding method we developed.
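Before turning to the comparisons, here is a minimal sketch of the two metrics defined above, under the simplifying assumption that all views are time-aligned to a common global timeline; the array layout is ours.

```python
import numpy as np

def coverage_percent(selected, ground_truth):
    """selected, ground_truth: (n_agents, T) binary integer arrays on a
    shared timeline. A time step counts as covered if any agent keeps a
    frame there and the global ground truth marks it as important."""
    covered_any = selected.max(axis=0)
    important = ground_truth.max(axis=0)          # global ground truth
    return 100.0 * (covered_any & important).sum() / max(important.sum(), 1)

def processing_rate_percent(n_processed, n_total):
    """Share of the raw frames the agents actually processed."""
    return 100.0 * n_processed / n_total
```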
C. Comparison of MFFNet with Single-agent Fast-forwarding Approaches
Tab. VI shows the coverage and the processing rate of the single-agent fast-forwarding approaches from the literature, FFNet, and MFFNet on the VideoWeb dataset for the 3-view and 6-view scenarios and on the CarlaSim dataset. Note that in the case of the single-agent approaches (including FFNet), every view/camera uses the same approach and configuration, whereas a multi-agent approach like MFFNet coordinates the operation of the multiple views. From the table, we can clearly see its advantage. More specifically:
• For the methods that require processing the entire video (processing rate of 100%), i.e., OK, SC, and SMRS, our approach MFFNet achieves higher coverage (more than a 25% increase) at a much lower processing rate.
• Compared with the Random and Uniform methods, MFFNet offers a significant improvement in coverage with only a modest increase in processing rate.
• Compared with FFNet, our state-of-the-art single-agent approach, MFFNet achieves slightly better coverage while reducing the processing rate by 9.3% in VideoWeb 3-view, 7.3% in VideoWeb 6-view, and 12.20% in CarlaSim. This shows that MFFNet further reduces the computation load of the fast-forwarding process while offering the same (or a higher) level of coverage of important frames.
TABLE VI
COMPARISON OF MFFNET WITH SINGLE-AGENT FAST-FORWARDING APPROACHES FOR BOTH THE VIDEOWEB AND CARLASIM DATASETS.
Metric                                Random   Uniform   OK      SC      SMRS    FFNet   MFFNet
VideoWeb 3-view Coverage (%)          41.33    27.79     39.92   42.10   31.10   52.88   53.66
VideoWeb 3-view Processing rate (%)   4.40     4.00      100     100     100     6.02    5.46
VideoWeb 6-view Coverage (%)          50.78    25.80     50.21   44.74   42.36   61.91   61.92
VideoWeb 6-view Processing rate (%)   4.20     3.70      100     100     100     6.02    5.58
CarlaSim Coverage (%)                 55.69    36.74     52.24   51.80   46.85   67.83   68.65
CarlaSim Processing rate (%)          6.50     5.40      100     100     100     14.76   12.96
D. Further Comparison of MFFNet with FFNet
Enabling Flexible Coverage-Efficiency Tradeoff. When deploying a video fast-forwarding strategy, the goal of achieving high efficiency (i.e., a low processing rate) conflicts with the goal of maintaining high coverage, and designers may want to trade off between the two metrics. To enable such a tradeoff, MFFNet incorporates a tunable parameter, the similarity threshold ρ. Fig. 6 shows that by changing ρ, different levels of tradeoff between coverage and efficiency can easily be achieved on the VideoWeb dataset for the 3-view and 6-view scenarios and on CarlaSim. This is much more flexible and systematic than simply deploying FFNet on each agent and manually trying different skipping speeds.
Fig. 6. Tradeoff between coverage and processing rate in MFFNet for the 3-view and 6-view scenarios of the VideoWeb dataset. By tuning the similarity threshold ρ (marked in the figure), different levels of tradeoff can be achieved.
Addressing High-redundancy Cases. The different views in the VideoWeb dataset have a modest level of redundancy across them. When the redundancy level is higher, the improvement of our MFFNet over FFNet becomes even more significant. Here we consider the extreme case where every view has the same video data, i.e., the highest level of redundancy. The fast-forwarding performance of MFFNet and FFNet on both VideoWeb and CarlaSim in this case is shown in Tab. VII. As CarlaSim only has 3 different camera views, no results are available for MFFNet-6v. Note that FFNet does not change its strategy across settings, so its results for 3-view and 6-view are the same in the extreme case. From the results, we can see that MFFNet achieves much higher coverage and a lower processing rate than FFNet.
TABLE VII
COMPARISON OF MFFNET AND FFNET IN THE EXTREME CASE, WHERE ALL VIEWS HAVE THE SAME DATA.
Method      VideoWeb Coverage (%)   VideoWeb Processing rate (%)   CarlaSim Coverage (%)   CarlaSim Processing rate (%)
FFNet       54.10                   8.69                           52.38                   14.76
MFFNet-3v   71.93                   5.30                           79.31                   8.09
MFFNet-6v   75.61                   4.53                           /                       /
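As a tiny harness sketch for tracing the tradeoff curve of Fig. 6: run_mffnet is a hypothetical entry point (not from the paper) that executes the full pipeline for a given ρ and returns a (coverage, processing rate) pair.

```python
def tradeoff_curve(run_mffnet, rhos=(0.40, 0.45, 0.50, 0.525, 0.55, 0.60)):
    """Sweep the similarity threshold and collect one operating point per
    value, mirroring how the curves in Fig. 6 are generated."""
    return [(rho, *run_mffnet(rho)) for rho in rhos]
```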
E. Comparison of MFFNet with the Distributed Multi-agent Framework DMVF
In this section, we compare MFFNet with our distributed multi-agent video fast-forwarding framework DMVF [19] on both the VideoWeb 6-view and CarlaSim datasets. The results are shown in Tab. VIII. We have the following findings:
• MFFNet and DMVF are comparable in coverage and processing rate on both datasets: DMVF achieves better coverage on VideoWeb, while MFFNet achieves better coverage on CarlaSim.
• On both datasets, MFFNet has a lower communication load (-44% for VideoWeb and -15% for CarlaSim) and a higher frame rate (+34% on VideoWeb and +93% on CarlaSim). This is because DMVF is a distributed method: the same information from one agent may need to be sent multiple times, and the framework needs to reach a consensus on the strategy update, which leads to a higher communication load and a longer communication delay.
TABLE VIII
COMPARISON OF MFFNET WITH THE DISTRIBUTED MULTI-AGENT FAST-FORWARDING FRAMEWORK DMVF.
Metric                        DMVF-VideoWeb   MFFNet-VideoWeb   DMVF-CarlaSim   MFFNet-CarlaSim
Coverage (%)                  65.87           61.92             64.54           68.65
Processing rate (%)           5.06            5.58              12.33           12.96
Communication p2p (GB)        0.18            /                 0.20            /
Communication central (GB)    /               0.10              /               0.17
Total communication (GB)      0.18            0.10              0.20            0.17
Summary to server (GB)        3.59            3.22              2.27            2.46
FPS                           313             419               119             230
While MFFNet has the advantages of a lower communication load and a higher frame rate, DMVF is more flexible to deploy, as it does not need a centralized infrastructure and the connections among agents can be adjusted according to system needs and agent capabilities. Both centralized and distributed methods can be suitable for improving the efficiency of a network of resource-limited camera agents, used in tasks such as search and rescue, wide-area surveillance, and environment monitoring. Considering the advantages of each method, the choice between them depends on the practical application scenario. If there is a stable centralized infrastructure and each agent can reliably connect to the central controller, the centralized MFFNet may be the better choice, as it further reduces the communication load and improves the overall efficiency. However, in some cases (e.g., in an adversarial environment) a stable and capable centralized infrastructure is not available, and some agents may not be able to reliably connect to the central node due to their physical distance or their own resource limitations, in which case DMVF may be the better choice.
F. Impact of Communication on MFFNet
For a multi-agent strategy such as MFFNet, communication issues such as desynchronization or packet losses could have a major impact in practice, especially in the case of wireless communication (in [19], the impact of network connectivity on DMVF was studied). In this section, we evaluate the performance of MFFNet under such communication issues, using the VideoWeb dataset for illustration.
Desynchronization. In this experiment, we consider the desynchronization of one view with respect to the others. For instance, frame 20 from one view may be taken physically at the same time as frame 0 of the other views, but is given the same time tag as frame 20 of the other views (this could happen due to the desynchronization of camera clocks). Fig. 7 shows the results on the 3-view scenario when one view is 20 or 100 frames desynchronized (either ahead or behind) with respect to the other views. We can see that the desynchronization indeed has some effect on the coverage of MFFNet, but the drop is not too significant. Similar results can be observed for the 6-view scenario. In practice, with a decent clock synchronization scheme, we should be able to keep the desynchronization under 20 frames.
Fig. 7. Effect of desynchronization on MFFNet in the 3-view VideoWeb scenario. The desynchronization has some effect on the coverage of MFFNet, but the drop is not too significant.
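A small sketch of how such a clock error can be emulated on recorded data: one view's per-frame labels are shifted by a fixed offset while its time tags stay unchanged. The function name and the zero-padding at the rolled-over ends are our choices, not the paper's exact procedure.

```python
import numpy as np

def desynchronize(labels, view, offset):
    """Shift one view's frame-aligned labels by `offset` frames (positive =
    ahead, negative = behind) to emulate a camera clock error; the other
    views keep their original time tags."""
    shifted = np.roll(labels[view], offset)
    # frames rolled around the ends carry no real content; zero them out
    if offset > 0:
        shifted[:offset] = 0
    elif offset < 0:
        shifted[offset:] = 0
    out = [l.copy() for l in labels]
    out[view] = shifted
    return out
```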
Packet Losses. We consider the case where a packet from an agent to the central controller may be lost due to communication disturbance. Each packet contains the fast-forwarded segments from 100 frames of raw video at an agent. Tab. IX shows the coverage of MFFNet when the packet loss probability varies from 2.5% to 10%. We can see that the drop is not very significant. Moreover, most of the coverage drop is due to the loss of the data itself rather than to the strategy selection process.
TABLE IX
EFFECT OF PACKET LOSSES ON MFFNET IN THE 3-VIEW AND 6-VIEW SCENARIOS ON VIDEOWEB. THE PERFORMANCE IS ONLY SLIGHTLY AFFECTED BY THE PACKET LOSS (LESS THAN 10% DEGRADATION AT 10% LOSS PROBABILITY).
Loss probability      2.5%    5.0%    7.5%    10.0%
3-view coverage (%)   52.00   50.98   49.00   49.43
6-view coverage (%)   60.43   60.08   57.89   59.90
G. Study of Central Controller Designs in MFFNet
The above results of MFFNet are based on the heuristic central controller design introduced in Sec. V-B, with explicit similarity computation and strategy computation. In this section, we compare that design with the RL-based design introduced in Sec. V-C, using the VideoWeb 3-view case as an example. The results are shown in Tab. X, where MFFNet is the heuristic central controller based on similarity computation, and MFFNet-DQN-0 and MFFNet-DQN-1 represent two RL-based central controllers using DQN models with the tradeoff factor α in Eqn. (18) set to 0 and 1, respectively. From the table, we have the following observations: 1) The RL-based central controllers using DQN achieve higher coverage than the heuristic one based on similarity computation, but also have a much higher processing rate. 2) Setting the tradeoff term α in the immediate reward to 1 helps lower the processing rate but also degrades the coverage. The choice of central controller thus depends on the preferred tradeoff between coverage and processing rate.
TABLE X
COMPARISON OF DIFFERENT CONTROLLERS IN MFFNET.
Method         Coverage (%)   Processing rate (%)
MFFNet         53.66          5.46
MFFNet-DQN-0   64.80          7.32
MFFNet-DQN-1   64.24          7.07
H. Deployment of MFFNet on an Embedded Platform
We deployed MFFNet on an actual embedded platform to evaluate its efficiency. The central controller is implemented on a Dell Precision 5820 Tower workstation with a 3.6 GHz Xeon W-2123 CPU and 16 GB of memory, and the agents run on Nvidia Jetson TX2 boards. The communication between the central controller and the agents is implemented over a wireless network using TCP. MFFNet achieves an average frame rate of 661 FPS for the 3-view scenario and 419 FPS for the 6-view scenario (note that only a fraction of these frames are actually processed), showing its capability to work efficiently and effectively at real-time speed on embedded processors.
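To illustrate the agent-to-controller exchange in such a TCP deployment, here is a minimal sketch of one period's upload; the JSON message layout, the length-prefix framing, and the function name are our assumptions, not the actual wire format used in the deployment.

```python
import json
import socket

def send_period_summary(host, port, agent_id, frame_ids, selected):
    """One period's upload from an agent to the central controller over
    TCP: the selected frame ids and their binary selection vector go up,
    and the controller's new pace instruction comes back."""
    payload = json.dumps({"agent": agent_id,
                          "frames": frame_ids,
                          "selected": selected}).encode()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(len(payload).to_bytes(4, "big") + payload)
        reply = sock.recv(64)                  # e.g., b"slow" / b"fast"
    return reply.decode()
```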
VII. CONCLUSION
In this paper, we first summarize our previous work on the single-agent video fast-forwarding method FFNet and the distributed multi-agent video fast-forwarding framework DMVF, and then present a new centralized multi-agent fast-forwarding framework, MFFNet. The MFFNet framework includes a set of multi-strategy fast-forwarding agents that can adapt to different fast-forwarding paces, and a central controller that chooses the proper pace for every agent and generates a compact summary of the scene. We conducted a series of experiments on a real-world surveillance video dataset and a new simulated driving dataset, for MFFNet, DMVF, FFNet, and several methods in the literature. Experimental results demonstrate that our two collaborative multi-agent video fast-forwarding approaches, MFFNet and DMVF, achieve better scene coverage and a lower frame processing rate than applying single-agent fast-forwarding approaches on multiple agents without coordination. The experiments also demonstrate the tradeoffs between MFFNet and DMVF, the impact of communication disturbances, and the effect of different central controller designs.
ACKNOWLEDGMENT
We gratefully acknowledge the support from NSF grants 1834701, 1724341, 2038853, 2024774, and ONR grant N00014-19-1-2496.
REFERENCES
[1] E. Elhamifar and M. C. D. P. Kaluza, "Online summarization via submodular and convex optimization," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[2] M. Gygli, H. Grabner, and L. Van Gool, "Video summarization by learning submodular mixtures of objectives," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[3] R. Panda, A. Das, Z. Wu, J. Ernst, and A. K. Roy-Chowdhury, "Weakly supervised summarization of web videos," in IEEE International Conference on Computer Vision (ICCV), 2017.
[4] K. Zhang, W.-L. Chao, F. Sha, and K. Grauman, "Video summarization with long short-term memory," in European Conference on Computer Vision (ECCV), 2016.
[5] B. Zhao and E. P. Xing, "Quasi real-time summarization for consumer videos," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[6] Y. Fu, Y. Guo, Y. Zhu, F. Liu, C. Song, and Z.-H. Zhou, "Multi-view video summarization," IEEE Transactions on Multimedia, vol. 12, no. 7, pp. 717–729, 2010.
[7] R. Panda, A. Das, and A. K. Roy-Chowdhury, "Video summarization in a multi-view camera network," in 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016, pp. 2971–2976.
[8] R. Panda and A. K. Roy-Chowdhury, "Multi-view surveillance video summarization via joint embedding and sparse optimization," IEEE Transactions on Multimedia, vol. 19, no. 9, pp. 2010–2021, 2017.
[9] M. Elfeki, A. Sharghi, S. Karanam, Z. Wu, and A. Borji, "Multi-view egocentric video summarization," arXiv preprint arXiv:1812.00108, 2018.
[10] S.-H. Ou, C.-H. Lee, V. S. Somayazulu, Y.-K. Chen, and S.-Y. Chien, "On-line multi-view video summarization for wireless video sensor network," IEEE Journal of Selected Topics in Signal Processing, vol. 9, no. 1, pp. 165–179, 2015.
[11] K.-Y. Cheng, S.-J. Luo, B.-Y. Chen, and H.-H. Chu, "Smartplayer: user-centric video fast-forwarding," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2009, pp. 789–798.
[12] T. Halperin, Y. Poleg, C. Arora, and S. Peleg, "Egosampling: Wide view hyperlapse from egocentric videos," IEEE Transactions on Circuits and Systems for Video Technology, 2017.
[13] N. Joshi, W. Kienzle, M. Toelle, M. Uyttendaele, and M. F. Cohen, "Real-time hyperlapse creation via optimal frame selection," ACM Transactions on Graphics, vol. 34, no. 4, p. 63, 2015.
[14] N. Petrovic, N. Jojic, and T. S. Huang, "Adaptive video fast forward," Multimedia Tools and Applications, vol. 26, no. 3, pp. 327–344, 2005.
[15] Y. Poleg, T. Halperin, C. Arora, and S. Peleg, "Egosampling: Fast-forward and stereo for egocentric videos," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[16] W. L. Ramos, M. M. Silva, M. F. Campos, and E. R. Nascimento, "Fast-forward video based on semantic extraction," in IEEE International Conference on Image Processing (ICIP), 2016.
[17] M. M. Silva, W. L. S. Ramos, J. P. K. Ferreira, M. F. M. Campos, and E. R. Nascimento, "Towards semantic fast-forward and stabilized egocentric videos," in European Conference on Computer Vision (ECCV), 2016.
[18] S. Lan, R. Panda, Q. Zhu, and A. K. Roy-Chowdhury, "FFNet: Video fast-forwarding via reinforcement learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[19] S. Lan, Z. Wang, A. K. Roy-Chowdhury, E. Wei, and Q. Zhu, "Distributed multi-agent video fast-forwarding," in Proceedings of the 28th ACM International Conference on Multimedia, 2020, pp. 1075–1084.
[20] G. Denina, B. Bhanu, H. T. Nguyen, C. Ding, A. Kamal, C. Ravishankar, A. Roy-Chowdhury, A. Ivers, and B. Varda, "Videoweb dataset for multi-camera activities and non-verbal communication," in Distributed Video Sensor Networks. Springer, 2011, pp. 335–347.
[21] E. Elhamifar, G. Sapiro, and R. Vidal, "See all by looking at a few: Sparse modeling for finding representative objects," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[22] M. Gygli, H. Grabner, H. Riemenschneider, and L. Van Gool, "Creating summaries from user videos," in European Conference on Computer Vision (ECCV), 2014.
[23] G. Guan, Z. Wang, S. Mei, M. Ott, M. He, and D. D. Feng, "A top-down approach for video summarization," ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 11, no. 1, p. 4, 2014.
[24] E. Elhamifar and Z. Naing, "Unsupervised procedure learning via joint dynamic summarization," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2019, pp. 6341–6350.
[25] B. Gong, W. Chao, K. Grauman, and F. Sha, "Diverse sequential subset selection for supervised video summarization," in Advances in Neural Information Processing Systems (NIPS), 2014.
[26] Z. Wu, C. Xiong, C.-Y. Ma, R. Socher, and L. S. Davis, "Adaframe: Adaptive frame selection for fast video recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 1278–1287.
[27] M. Rochan and Y. Wang, "Video summarization by learning from unpaired data," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 7902–7911.
[28] A. Khosla, R. Hamid, C.-J. Lin, and N. Sundaresan, "Large-scale video summarization using web-image priors," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
[29] G. Kim, L. Sigal, and E. P. Xing, "Joint summarization of large-scale collections of web images and videos for storyline reconstruction," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[30] Y. Song, J. Vallmitjana, A. Stent, and A. Jaimes, "Tvsum: Summarizing web videos using titles," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[31] R. Panda and A. K. Roy-Chowdhury, "Collaborative summarization of topic-related videos," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[32] G. A. Sigurdsson, X. Chen, and A. Gupta, "Learning visual storylines with skipping recurrent neural networks," in European Conference on Computer Vision (ECCV), 2016.
[33] S.-H. Ou, C.-H. Lee, V. S. Somayazulu, Y.-K. Chen, and S.-Y. Chien, "Low complexity on-line video summarization with Gaussian mixture model based clustering," in Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference on, 2014.
[34] K. A. Peker, A. Divakaran et al., "An extended framework for adaptive playback-based video summarization," in Internet Multimedia Management Systems IV, 2003.
[35] K. A. Peker, A. Divakaran, and H. Sun, "Constant pace skimming and temporal sub-sampling of video using motion activity," in IEEE International Conference on Image Processing (ICIP), 2001.
[36] J. Jiang and X.-P. Zhang, "A new player-enabled rapid video navigation method using temporal quantization and repeated weighted boosting search," in Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE Computer Society Conference on, 2010.
[37] ——, "A smart video player with content-based fast-forward playback," in Proceedings of the 19th ACM International Conference on Multimedia, 2011.
[38] W. Ramos, M. Silva, E. Araujo, L. S. Marcolino, and E. Nascimento, "Straight to the point: Fast-forwarding videos via reinforcement learning using textual data," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10931–10940.
[39] S. Yeung, O. Russakovsky, G. Mori, and L. Fei-Fei, "End-to-end learning of action detection from frame glimpses in videos," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[40] S. Mathe, A. Pirinen, and C. Sminchisescu, "Reinforcement learning for visual object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[41] Z. Ren, X. Wang, N. Zhang, X. Lv, and L.-J. Li, "Deep reinforcement learning-based image captioning with embedding reward," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[42] A. Krull, E. Brachmann, S. Nowozin, F. Michel, J. Shotton, and C. Rother, "Poseagent: Budget-constrained 6d object pose estimation via reinforcement learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[43] S. Yun, J. Choi, Y. Yoo, K. Yun, and J. Young Choi, "Action-decision networks for visual tracking with deep reinforcement learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[44] Y. Zhang, M. Kampffmeyer, X. Zhao, and M. Tan, "Deep reinforcement learning for query-conditioned video summarization," Applied Sciences, vol. 9, no. 4, p. 750, 2019.
[45] L. Bu, R. Babu, B. De Schutter et al., "A comprehensive survey of multiagent reinforcement learning," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 38, no. 2, pp. 156–172, 2008.
[46] X. Kong, B. Xin, Y. Wang, and G. Hua, "Collaborative deep reinforcement learning for joint object search," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[47] L. Ren, J. Lu, Z. Wang, Q. Tian, and J. Zhou, "Collaborative deep reinforcement learning for multi-object tracking," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 586–602.
[48] W. Wu, D. He, X. Tan, S. Chen, and S. Wen, "Multi-agent reinforcement learning based frame sampling for effective untrimmed video recognition," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2019, pp. 6222–6231.
[49] S. Sukhbaatar, R. Fergus et al., "Learning multiagent communication with backpropagation," in Advances in Neural Information Processing Systems, 2016, pp. 2244–2252.
[50] J. Foerster, I. A. Assael, N. de Freitas, and S. Whiteson, "Learning to communicate with deep multi-agent reinforcement learning," in Advances in Neural Information Processing Systems, 2016, pp. 2137–2145.
[51] J. N. Tsitsiklis, "Problems in decentralized decision making and computation," Massachusetts Inst of Tech Cambridge Lab for Information and Decision Systems, Tech. Rep., 1984.
[52] A. Nedic and A. Ozdaglar, "Distributed subgradient methods for multi-agent optimization," IEEE Transactions on Automatic Control, vol. 54, no. 1, pp. 48–61, 2009.
[53] A. Nedić, A. Ozdaglar, and P. A. Parrilo, "Constrained consensus and optimization in multi-agent networks," IEEE Transactions on Automatic Control, vol. 55(4), pp. 922–938, 2010.
[54] I. Matei and J. S. Baras, "Performance evaluation of the consensus-based distributed subgradient method under random communication topologies," IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 4, pp. 754–771, 2011.
[55] A. Nedić, "Asynchronous broadcast-based convex optimization over a network," IEEE Transactions on Automatic Control, vol. 56, no. 6, pp. 1337–1351, 2011.
[56] W. Shi, Q. Ling, G. Wu, and W. Yin, "Extra: An exact first-order algorithm for decentralized consensus optimization," SIAM Journal on Optimization, vol. 25, no. 2, pp. 944–966, 2015.
[57] I. Lifshitz, E. Fetaya, and S. Ullman, "Human pose estimation using deep consensus voting," in European Conference on Computer Vision. Springer, 2016, pp. 246–260.
[58] H. Wang and D. Suter, "Background subtraction based on a robust consensus method," in 18th International Conference on Pattern Recognition (ICPR'06), vol. 1. IEEE, 2006, pp. 223–226.
[59] A. T. Kamal, J. H. Bappy, J. A. Farrell, and A. K. Roy-Chowdhury, "Distributed multi-target tracking and data association in vision networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 7, pp. 1397–1410, 2015.
[60] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, "CARLA: An open urban driving simulator," in Proceedings of the 1st Annual Conference on Robot Learning, 2017, pp. 1–16.
[61] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[62] D. Arthur and S. Vassilvitskii, "k-means++: The advantages of careful seeding," in Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2007.
[63] U. Von Luxburg, "A tutorial on spectral clustering," Statistics and Computing, vol. 17, no. 4, pp. 395–416, 2007.
Shuyue Lan graduated from Northwestern University with a Ph.D. in Computer Engineering in 2021. She spent the first two years of her Ph.D. study at UC Riverside. Previously, she received her Bachelor's degree in Automation from the University of Science and Technology of China (USTC) in 2015. Her research interests include computer vision, machine learning, and cyber-physical systems. Currently, her work focuses on high-performance deep learning inference workflows.
Zhilu Wang graduated from Northwestern University with a Ph.D. degree in Computer Engineering in 2022. He began his doctoral career at the University of California, Riverside in 2016. Prior to that, he received his B.S. degree in Applied Physics from the University of Science and Technology of China in 2016.
He received the Best Paper Award at the 2022 ACM/IEEE DATE conference and the Best Thesis Award in Computer Engineering from Northwestern University. His research interests include formal verification, machine learning, real-time systems, and cyber-physical systems.
Ermin Wei is currently an Assistant Professor at the Electrical and Computer Engineering Department and the Industrial Engineering and Management Sciences Department of Northwestern University. She completed her PhD studies in Electrical Engineering and Computer Science at MIT in 2014, advised by Professor Asu Ozdaglar, where she also obtained her M.S. She received her undergraduate triple degree in Computer Engineering, Finance, and Mathematics, with a minor in German, from the University of Maryland, College Park. Wei has received many awards, including the Graduate Women of Excellence Award, the second place prize in the Ernst A. Guillemin Thesis Award, and the Alpha Lambda Delta National Academic Honor Society Betty Jo Budson Fellowship. Her team also won 2nd place in the Grid Optimization (GO) Competition 2019, an electricity grid optimization competition organized by the Department of Energy. Wei's research interests include distributed optimization methods, convex optimization and analysis, smart grid, communication systems, energy networks, and market economic analysis.
Amit K. Roy-Chowdhury received his PhD from the University of Maryland, College Park (UMCP) in 2002 and joined the University of California, Riverside (UCR) in 2004, where he is a Professor and Bourns Family Faculty Fellow of Electrical and Computer Engineering, Director of the Center for Robotics and Intelligent Systems, and Cooperating Faculty in the Department of Computer Science and Engineering. He leads the Video Computing Group at UCR, working on foundational principles of computer vision, image processing, and statistical learning, with applications in cyber-physical, autonomous, and intelligent systems. He has published over 200 papers in peer-reviewed journals and conferences, as well as two monographs: Camera Networks: The Acquisition and Analysis of Videos Over Wide Areas and Person Re-identification with Limited Supervision. He is on the editorial boards of major journals and the program committees of the main conferences in his area. His students have been first authors on multiple papers that received Best Paper Awards at major international conferences. He is a Fellow of the IEEE and IAPR, and received the Doctoral Dissertation Advising/Mentoring Award 2019 from UCR and the ECE Distinguished Alumni Award from UMCP.
Qi Zhu is an Associate Professor at the ECE Department of Northwestern University. He received a Ph.D. in EECS from the University of California, Berkeley in 2008, and a B.E. in CS from Tsinghua University in 2003. His research interests include design automation for cyber-physical systems (CPS) and the Internet of Things, safe and secure machine learning for CPS and IoT, cyber-physical security, and system-on-chip design, with applications in domains such as connected and autonomous vehicles, energy-efficient smart buildings, and robotic systems. He is a recipient of the NSF CAREER Award, the IEEE TCCPS Early-Career Award, and the Humboldt Research Fellowship for Experienced Researchers. He received best paper awards at DAC 2006, DAC 2007, ICCPS 2013, ACM TODAES 2016, and DATE 2022.
He is the Conference Chair of IEEE TCCPS, and Young Professionals Coordinator at IEEE CEDA. He is an Associate Editor for IEEE TCAD, ACM TCPS, and IET Cyber-Physical Systems: Theory & Applications, and has served as a Guest Editor for the Proceedings of the IEEE, ACM TCPS, IEEE T-ASE, Elsevier JSA, and Elsevier Integration, the VLSI journal.
Draft version March 9, 2021. Typeset using LaTeX preprint2 style in AASTeX61.
IDEALIZED WIND-DRIVEN OCEAN CIRCULATIONS ON EXOPLANETS
Weiwen Ji,1 Ru Chen,2 and Jun Yang1
1 Department of Atmospheric and Oceanic Sciences, School of Physics, Peking University, 100871, Beijing, China
2 University of California, 92521, Los Angeles, USA
ABSTRACT
Motivated by the important role of the ocean in the Earth climate system, here we investigate possible scenarios of ocean circulations on exoplanets using a one-layer shallow water ocean model. Specifically, we investigate how planetary rotation rate, wind stress, fluid eddy viscosity, and land structure (a closed basin vs. a reentrant channel) influence the pattern and strength of wind-driven ocean circulations. The meridional variation of the Coriolis force, arising from planetary rotation and the spherical shape of the planets, induces the western intensification of ocean circulations. Our simulations confirm that in a closed basin, changes of the other factors only enhance or weaken the ocean circulations (e.g., as wind stress decreases or fluid eddy viscosity increases, the ocean circulations weaken, and vice versa). In a reentrant channel, as in the Southern Ocean region on the Earth, the ocean pattern is characterized by zonal flows. In the quasi-linear case, the sensitivity of the ocean circulation characteristics to these parameters is also interpreted using simple analytical models. This study is a preliminary step toward exploring possible ocean circulations on exoplanets; future work with multi-layer ocean models and fully coupled ocean-atmosphere models is required for studying exoplanetary climates.
Keywords: astrobiology — planets and satellites: oceans — planets and satellites: terrestrial planets
Corresponding author: Jun Yang, [email protected]
1. INTRODUCTION
Recently, much attention has been paid to exoplanets, and "the trickle of discoveries has become a torrent" (Hecht 2016). An exoplanet, also termed an extrasolar planet, is a planet beyond our solar system that orbits a star. According to the latest data from NASA's Exoplanet Archive, over 3700 exoplanets have been confirmed (https://exoplanets.nasa.gov/). The ultimate goal of exoplanet detection is to find other habitable planets outside the solar system and even other intelligent lives in the universe. Among numerous exoplanets, Léger et al. (2004) suggested the presence of ocean planets around other stars. In 2006, the discovery of the cool planet OGLE-2005-BLG-390Lb (Beaulieu et al. 2006) indicated the opportunity to detect ocean planets in future missions. These ocean planets are supposed to have very deep oceans (possibly as deep as 100 km), and hence a lower planetary density compared to rocky planets.
Though the oceans on exoplanets remain largely unknown, the Earth ocean has been extensively studied and is found to be key to the Earth climate. The Earth ocean stores and exchanges a tremendous amount of water, heat, and biogeochemical tracers with the atmosphere and cryosphere. In particular, ocean circulations have large impacts on climate by transporting energy poleward from the tropics to the poles. Trenberth & Caron (2001) have shown that, though small at mid- and high-latitudes, the magnitude of oceanic heat transport in the tropics is comparable to that of the atmospheric heat transport.
When the ocean circulations change (i.e., weaken or strengthen), sea surface temperature and sea ice distribution respond correspondingly. For example, sea ice would extend over larger areas if ocean heat transport were turned off, and surface temperatures at high latitudes would thereby drop significantly (Winton 2003).
Considering the key role of ocean circulations in the Earth climate, it is reasonable to expect that ocean circulations would be important to exoplanetary climates as well, if oceans exist there. For example, ocean circulations have been shown to be crucial in determining the climate and habitability of exoplanets (Hu & Yang 2014; Cullum et al. 2016; Del Genio et al. 2017).
Our goal is to evaluate the ocean circulations on exoplanets, based on our knowledge of the ocean circulations on the Earth. Ocean circulations on the Earth have two components: the wind-driven circulations and the deep thermohaline circulations. The former is energetic and mainly located in the upper ocean, whereas the latter is relatively sluggish and can reach the deep ocean (Huang 2010). Boccaletti et al. (2005) demonstrated that the wind-driven circulations dominate the meridional oceanic heat transport. Specifically, they found that the shallow circulations in the upper 500 meters driven by wind stresses contribute nearly all of the heat transport in the Southern Hemisphere, and the wind-driven circulations also dominate the heat transport in the Northern Hemisphere. Therefore, we focus on the wind-driven circulations only in this paper.
Two typical types of continental configuration used to study idealized wind-driven ocean circulations on the Earth are the closed basin and the channel model. In a closed ocean basin, a system of circulating currents forms, termed an ocean gyre. Western intensification, which refers to an intense and narrow western boundary current (e.g., the Gulf Stream in the North Atlantic and the Kuroshio in the North Pacific Ocean, shown in Figure 1(a)), is the most remarkable feature of an ocean gyre. The origin of the western intensification is related to the meridional variation of the Coriolis parameter (Stommel 1948; Munk 1950). Among all the ocean currents, the western boundary current
Specifi- cally, we use a one-layer shallow water model to evaluate how exoplanet-relevant model param- eters can lead to different spatial structure and strength of wind-driven circulations in both a closed basin and a channel. The parameters we consider include planetary β (approximate vari- ation of the Coriolis parameter with latitude), wind stress, viscosity and ocean basin structure (closed or open in the zonal direction). Our work is built on previous knowledge of the wind- driven circulations on the Earth (e.g., Char- ney 1955; Lund et al. 2006; Hu et al. 2015; Mcwilliams et al. 1978; Orsi et al. 1995). This paper is organized as follows. Sections 2 and 3 introduce the numerical model and de- scribe the setup of the numerical experiments. Results and interpretations are presented in Sections 4 and 5. Section 6 provides the sum- mary and discussion. 2. MODEL DESCRIPTIONS We use a one-layer shallow water model to simulate the barotropic wind-driven ocean cir- culations. The shallow water approximation is used when the horizontal scale of the fluid is much larger than its depth, which implies that the large-scale vertical velocities are much smaller than horizontal velocities. The shallow water model is one of the most useful models in geophysical fluid dynamics, especially for study- ing the ocean and atmosphere (Vallis 2017). The one-layer shallow water model, which is one of the simplest model to produce ocean circu- lations, can reveal fundamental ocean dynam- ics and at the same time allows ease of inter- pretation. Recent work indicates that idealized shallow water model is useful for exploring fluid structures on giant planets and successfully pre- dicts the existence of polar cyclones on Jupiter (ONeill et al. 2015, 2016). This model is from one component set of the MIT General Circulation Model (MITgcm, Fer- reira & Marshall 2006; Marshall et al. 2007). The ocean circulations are driven by surface wind stresses and dissipated by eddy viscosity. The model is configured to represent a square enclosed box of water with a horizontal length L of 1,200 km × 1,200 km, a vertical depth of 5 km and a horizontal resolution of 20 km. Lat- eral eddy viscous dissipation is included in the model. The governing equations are Du Dt Dv Dt − f v + g + f u + g − Ah (cid:53)2 h u = τx ρ0 (cid:52) z , − Ah (cid:53)2 h v = 0, ∂η ∂t + + ∂vH ∂y = 0, ∂η ∂x ∂η ∂y ∂uH ∂x (1) (2) (3) where u, v are the x and y zonal and meridional velocities; D ∂y is the horizon- tal material derivative in the Cartesian coordi- ∂x + v ∂ ∂t + u ∂ Dt = ∂ 4 Ji, Chen and Yang R ∂y = 2Ω cos θ0 nate; f = f0 + βy is the Coriolis parameter, and β = ∂f is the meridional vari- ation of the Coriolis parameter at the latitude θ0 (Ω is the planetary rotation rate and R is the planetary radius); η represents sea surface height; g = 9.81 m s−2 is the acceleration due to gravity; Ah is the eddy viscosity coefficient; (cid:53)2 ∂y2 is the horizontal Laplacian op- erator; ρ0 is the reference water density, and we assume the density is constant; (cid:52)z = 5 km is the mean ocean depth; τx is the zonal wind stress; meridional wind is neglected in our ex- periments; and H = (cid:52)z + η is the entire ocean depth. ∂x2 + ∂2 h = ∂2 3. EXPERIMENTAL DESIGN Similar to the studies of Earth ocean circula- tions, we set the experiments into two types of terrains: a closed basin and a channel ocean. 
We examine the effects of three parameters, the variation of the Coriolis parameter (β), the viscosity parameter (Ah) and the surface wind stress (τ ), on the ocean circulations. The choice of parameters is motivated by the fact or possi- bilities that these exoplanets have sizes, rotation rates and thus β different from the Earth, and their ocean fluid viscosity, atmospheric circula- tions and thus wind stress can also be different from those of the Earth. In the closed basin, the ocean is surrounded by land at all four boundaries, whereas in the channel model, the land locates in the north- ern/southern boundary only. We use a no- slip and no normal flow boundary condition at the four boundaries of the closed basin and at the northern/southern boundary of the channel, that is, the velocities there are set to be zero. In the channel model, a periodic boundary con- dition is employed in the zonal direction, that is, the ocean is reentrant. The ranges of the pa- rameters (β, τ , and Ah) we investigate are 0.1, 0.5, 2, or 10 times the default values (see Table 1). We use zonal wind stress only, with a form of τx = −τ cos πy L (see Figure 1(b)), which is similar to the wind stress in the subtropics of the Earth. All the experiments reach steady state after forty years, except for the cases with the onset of instability. Table 1. Experimental Arrangements Oceans Parameters Runs Design Closed basin Control β Ah τ Control Channel ocean β Ah τ 1 3 3 3 1 2 2 2 Turbulence 1 2 , 1 2, 1 2, 1 planetary β: 10−11 m−1 s−1 viscosity Ah: 400 m2 s−1 wind stress τ : 0.1 N m−2 2, 1 10 × 10−11 m−1 s−1 2 , 1 10 × 400 m2 s−1 2 , 10 × 0.1 N m−2 no east-west boundaries planetary β: 10−11 m−1 s−1 viscosity Ah: 400 m2 s−1 wind stress τ : 0.1 N m−2 2 × 10−11 m−1 s−1 2, 1 2 × 400 m2 s−1 2, 1 2 × 0.1 N m−2 barotropic instability onset 2, 1 4. RESULTS OF A CLOSED BASIN Figure 1(c) shows the flow in a closed basin from the control experiment, driven by the wind It stress with a cosine profile (Figure 1(b)). is characterized by a clockwise gyre: In the oceanic interior, the flow is southward due to the Sverdrup balance, which denotes the vortic- ity balance between the meridional advection of planetary vorticity and wind stress curl (Val- lis 2017), roughly holds in the oceanic interior from both the control experiment here and the mid-latitude ocean (Wunsch 2011). Therefore, the southward flow in the oceanic interior is due Ocean Circulations on Exoplanets 5 (Stommel 1948). Similar to the velocity vectors, the sea surface height also has a gyre structure with the maximum value and the largest slope near the western boundary (see Figure 1(e)). The consistency between the sea surface height and velocity patterns is because of geostrophic balance, which denotes the balance between the Coriolis force and pressure gradient force and roughly holds for large-scale flows. The result of the experiment is generally consistent with the analytic solution shown in Figure 1(d), when the nonlinear advection terms are small. Both the magnitude and meridional struc- ture of the wind stress we used in our control experiment (Figure 1(b)) are similar to those in the subtropical gyre region in the Northern Hemisphere of the Earth Ocean. The simulated ocean currents, however, are about one order of magnitude weaker than the observations (com- paring Figures 1(c) with 1(a)). 
The reason is that the ocean depth is set to 5 km in our experi- ments, which is much greater than the depth of wind-driven ocean circulations on Earth, gen- erally less than 1 km (Talley 2011). Consis- tently, the simulated sea surface heights (Fig- ure 1(e)) are one order smaller than the obser- vations. Note that, though the strength of the gyre flow depends on the choice of ocean depth, the sensitivity of the oceanic circulations to pa- rameters, presented next, is not sensitive to the choice of ocean depth. Figure 2 shows the sensitivity of the oceanic circulations to three model parameters: β, vis- cosity coefficient Ah and the magnitude of the wind stress τ . The western boundary layer thickness, velocity magnitude and circulation patterns do change with those parameters. First, as Ah increases or β decreases, the width of the western boundary layer gets larger, and vice versa (Figures 2(a4) and 2(c4)). However, the western boundary layer thickness is not sensitive to the wind stress magnitude (Figure 2(b4)). Second, consistent with the momentum Figure 1. Ocean flows in observation of Earth and the control simulation of the model. (a) Ob- served ocean surface currents (5 m below sea level) in three regions, the Kuroshio, the Gulf Stream and the Antarctic Circumpolar Current (annual-mean data of 2009; Carton et al. 2018). (b) The zonal wind stress specified in the model. (c) Simulated steady flows. (d) Analytic solution when the ad- vection terms are neglected (Vallis 2017). (e) Sim- ulated results of sea surface height (SSH). to vorticity input from the negative wind stress curl. Note that the gyre flow is asymmetric in the zonal direction: the oceanic flow at the west- ern boundary is much stronger than that at the ocean interior and the eastern boundary. This phenomenon is termed as “western intensifica- tion” (Gill 1982) and it is due to the meridional variation of the Coriolis parameter (β), induced by rotation and the spherical shape of the Earth 6 Ji, Chen and Yang 1 tion of planetary vorticity, the characteristic width of the western boundary layer thickness is Lb ∼ ( Ah β ) 3 (Munk 1950). The value of Lb in the control experiment is 34 km, which is roughly consistent with the numerical result. As Ah increases or β decreases, Lb increases; consistently, the western boundary layer thick- ness from the numerical experiment also in- creases (Figures 2(a4) and 2(c4)). Second, in the oceanic interior, the Sverdrup balance holds in the linear case, that is βv = curl( τx ), where ρ0 v is the meridional velocity and curl(τx) is the wind stress curl. Thus, as β decreases or wind stress gets larger, the southward oceanic cur- rent in the oceanic interior gets stronger, and consistently to conserve mass, the northward western boundary current gets stronger. When Ah and β are larger or the wind stress magnitude is smaller, the equilibrated oceanic flow field is steady, with a gyre structure sim- ilar to that in the control experiment (Figures 2(a1), (b1) and (c1)).On the other hand, if Ah and β are smaller or the wind stress magni- tude is larger, the equilibrated oceanic flow is turbulent with eddies (Figures 2(a3), (b3) and (c3)). Strong nonlinearity leads to turbulence and from a quasi-geostrophic vorticity balance perspective, the degree of nonlinearity can be qualified by U βL2 , where U and L are charac- teristic velocity and length scales (Vallis 2017). Smaller β and Ah and larger wind stress magni- tude correspond to stronger nonlinearity, lead- ing to turbulence flow. 5. 
RESULTS OF A CHANNEL OCEAN Figures 3(a) and 3(b) show the oceanic flow and sea surface height in a channel, forced by the wind stress same as that used in the closed- basin control experiment. Here the oceanic cir- culations are dominated by zonal flow, which is generally much stronger than those in the closed basin mainly due to the absence of meridional boundaries. The zonal flow is westward in the southern part of the domain and eastward in Influences of different parameters on Figure 2. the flow field in a closed basin. (a1-a4) Varying the viscosity to 2, 0.5, and 0.1 of the default value. (b1-b4) Varying the wind stress to 0.5, 2, and 10 of the default value. (c1-c4) Varying the β to 2, 0.5, and 0.1 of the default value. (a3, b3, and c3) Snap- shots of unsteady states. (a4, b4, and c4) Profiles of north-south (meridional) velocities in the middle (y = 600 km) of the ocean. balance, when decreasing Ah or increasing τ , the ocean currents become stronger, and vice versa (Figures 2(a1-a3) and 2(b1-b3)). We also found that, as β increases (decreases), the cur- rent speed decreases (increases) (Figures 2(c1)- (c4)). The sensitivity of the oceanic circulations to those parameters, described above, can be interpreted from the vorticity budget under the quasi-geostrophic assumption (Vallis 2017), which roughly holds here. First, assuming a balance between friction and meridional advec- Ocean Circulations on Exoplanets 7 Then we can obtain an analytical solution by solving Equation (4) with the no-slip bound- ary condition (uy=0 = uy=L = 0) at the north- ern/southern boundary, u = −L2τ π2Ahρ0 (cid:52) z (cos ( πy L ) + y L − 1). (6) This analytical solution agrees well with the simulated zonal velocity (Figure 3(c)), and the slight mismatch could be attributed to numeri- cal dissipations in the model. Equation (6) re- veals that the zonal flow is not sensitive to the choice of β, and it increases with the decrease of Ah or increase of τ . These are both confirmed by our numerical experiments (Figures 3(d1), (e1) and (f1)). Figures 3(d2), 3(e2) and 3(f2) show the merid- ional profile of sea surface height. In the merid- ional direction, the momentum equation is re- duced to the geostrophic balance (Equation (5)), and thus, the meridional slope of sea sur- face height increases as the zonal velocity mag- nitude increases. Therefore, both zonal velocity and sea surface height slope increase, as Ah de- creases or τ increases. Note that although the zonal velocity is insensitive to the choice of β, the sea surface height profile slightly depends on β. This is because the Coriolis parameter f is equal to f + βy, and the magnitude of βy, ∼ 10−5 s−1, is one order smaller than the first part f0, ∼ 10−4 s−1. When we accelerate the flows in the channel ocean by reducing Ah or increasing τ , the flow field remains to be zonal currents without turn- ing into an unsteady state. Instability occurs when we change the form of wind stresses (Fig- ure 4(a1)), which drives the zonal currents to meet the necessary condition for barotrpic in- the expression β − ∂2u stability: ∂y2 changes its sign (Figure 4(a2), Rayleigh 1879). In this ex- periment, the adjusted wind stress is concen- trated in the middle part of the ocean and its magnitude enlarges to 0.5 N m−2, five times (a) Figure 3. Ocean flows in a channel ocean. The ocean velocities in the control run. (b) Sea surface height in the control run. (c) Comparisons between the numerical result and the analytic so- lution. 
Figures 3(d2), 3(e2) and 3(f2) show the meridional profile of sea surface height. In the meridional direction, the momentum equation is reduced to the geostrophic balance (Equation (5)); thus, the meridional slope of sea surface height increases as the zonal velocity magnitude increases. Therefore, both the zonal velocity and the sea surface height slope increase as Ah decreases or τ increases. Note that although the zonal velocity is insensitive to the choice of β, the sea surface height profile slightly depends on β. This is because the Coriolis parameter f is equal to f0 + βy, and the magnitude of βy, ∼10⁻⁵ s⁻¹, is one order smaller than the first part f0, ∼10⁻⁴ s⁻¹.

When we accelerate the flows in the channel ocean by reducing Ah or increasing τ, the flow field remains zonal without turning into an unsteady state. Instability occurs when we change the form of the wind stress (Figure 4(a1)), which drives the zonal currents to meet the necessary condition for barotropic instability: the expression β − ∂²u/∂y² changes its sign (Figure 4(a2); Rayleigh 1879). In this experiment, the adjusted wind stress is concentrated in the middle part of the ocean and its magnitude is enlarged to 0.5 N m⁻², five times larger than that in the control experiment. Meridional random fluctuations with a magnitude of 0.01 N m⁻² are added to trigger the instability. When the model has run for 160 days, the instability occurs (Figure 4(a3)). After 400 model days, the flow becomes large-scale waves (Figures 4(a4-a6)). The wavelength developed by the barotropic instability in the model may be limited by the model domain size in the east-west direction.

Figure 4. Specified zonal wind stress and the development of barotropic instability in a channel ocean. (a1) Wind stress τx = τ sin(π(3y − L)/L), y ∈ [L/3, 2L/3]. (a2) A snapshot of β − uyy on the 160th day, where warm colors mean positive values and cool colors mean negative values. (a3-a6) Snapshots of ocean currents on the 160th, 400th, 415th, and 430th days, showing the appearance of instability.
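The Rayleigh-Kuo criterion above is easy to evaluate numerically. The sketch below solves the viscous balance of Equation (4) for the mid-channel wind stress of Figure 4(a1) and checks whether β − ∂²u/∂y² changes sign; the parameter values are again illustrative assumptions, under which the necessary condition is met.

import numpy as np

L, A_h, rho0, dz = 1.0e6, 800.0, 1000.0, 5000.0
tau0, beta = 0.5, 2.0e-11   # enlarged wind stress; assumed beta

n = 801
y = np.linspace(0.0, L, n)
h = y[1] - y[0]

# Modified wind stress of Figure 4(a1), concentrated in the middle third.
tau_x = np.where((y >= L / 3) & (y <= 2 * L / 3),
                 tau0 * np.sin(np.pi * (3 * y - L) / L), 0.0)

# Steady viscous balance as in Eq. (4), with no-slip boundaries.
M = (np.diag(-2.0 * np.ones(n - 2)) + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1)) / h**2
u = np.zeros(n)
u[1:-1] = np.linalg.solve(M, tau_x[1:-1] / (rho0 * dz * A_h))

# Rayleigh-Kuo necessary condition: beta - u_yy changes sign.
q_y = beta - np.gradient(np.gradient(u, y), y)
print("beta - u_yy changes sign:", bool(q_y.min() < 0.0 < q_y.max()))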
6. CONCLUSIONS AND DISCUSSION

In the closed ocean basin, the cosine wind we choose generates a single gyre initially. Small planetary β favors the acceleration of the ocean circulation and the broadening of the western boundary layer; a decrease of the eddy viscosity Ah, which means less dissipation, also speeds up the ocean flow while shrinking the western boundary layer. The variation of the wind stress τ, which directly drives the circulation, influences the strength of the ocean velocities but not the thickness of the western boundary layer. In the channel ocean, the same wind stress produces zonal currents. Planetary β does not significantly influence the ocean currents in the channel ocean, while the eddy viscosity and wind stress affect the zonal velocity and the sea surface height. When the specified wind satisfies the necessary condition of barotropic instability, instability occurs in the channel ocean.

In the barotropic channel ocean, the zonal currents can develop turbulence only when they satisfy the necessary condition of barotropic instability. In a closed basin ocean, in contrast, the existence of east and west boundaries makes the ocean circulations turn into unsteady states easily as long as the current velocity is large enough.

The real ocean is stratified, has complicated land configurations, and is forced by wind stress and heat flux with rich structures over a range of spatiotemporal scales. The idealized model we employ here is much simpler than realistic three-dimensional fully coupled global models and the real ocean. The conclusions here may not exactly hold in a realistic model. However, the sensitivity of ocean circulations to various parameters, revealed from our simple model, can shed light on further investigations and understanding of ocean circulations, planetary climates and habitability.

Changes of ocean circulations might greatly influence the planetary climate through transporting heat, carbon and nutrients. Future work using fully coupled ocean-atmosphere models is needed to understand the coupling between the four different components of the climate system and the net effect of ocean circulations on planetary climates.

Acknowledgments: J.Y. acknowledges support from the National Science Foundation of China (NSFC) under grants 41675071, 41606060, 41761144072 and 4171101348.

REFERENCES

Beaulieu, J.-P., Bennett, D. P., Fouqué, P., et al. 2006, Nature, 439, 437
Boccaletti, G., Ferrari, R., Adcroft, A., Ferreira, D., & Marshall, J. 2005, Geophysical Research Letters, 32
Carton, J., Chepurin, G., & Chen, L. 2018, An updated reanalysis of ocean climate using the Simple Ocean Data Assimilation version 3 (SODA3), manuscript in preparation. http://www.atmos.umd.edu/~ocean/
Charney, J. G. 1955, Proceedings of the National Academy of Sciences, 41, 731
Cullum, J., Stevens, D. P., & Joshi, M. M. 2016, Proceedings of the National Academy of Sciences, 113, 4278
Del Genio, A. D., Way, M. J., Amundsen, D. S., et al. 2017, arXiv preprint arXiv:1709.02051
Ferreira, D., & Marshall, J. 2006, Ocean Modelling, 13, 86
Gill, A. E. 1982, International Geophysics Series
Hecht, J. 2016, Nature, 530, 272
Hu, D., & Cui, M. 1991, Chinese Journal of Oceanology and Limnology, 9, 1
Hu, D., Wu, L., Cai, W., et al. 2015, Nature, 522, 299
Hu, Y., & Yang, J. 2014, Proceedings of the National Academy of Sciences, 111, 629
Huang, R. X. 2010, Ocean circulation: wind-driven and thermohaline processes (Cambridge University Press)
Léger, A., Selsis, F., Sotin, C., et al. 2004, Icarus, 169, 499
Lund, D. C., Lynch-Stieglitz, J., & Curry, W. B. 2006, Nature, 444, 601
Marshall, J., Ferreira, D., Campin, J.-M., & Enderton, D. 2007, Journal of the Atmospheric Sciences, 64, 4270
Mcwilliams, J. C., Holland, W. R., & Chow, J. H. 1978, Dynamics of Atmospheres and Oceans, 2, 213
Munk, W. H. 1950, Journal of Meteorology, 7, 80
Nadeau, L.-P., & Ferrari, R. 2015, Journal of Physical Oceanography, 45, 1491
O'Neill, M. E., Emanuel, K. A., & Flierl, G. R. 2015, Nature Geoscience, 8, 523
—. 2016, Journal of the Atmospheric Sciences, 73, 1841
Orsi, A. H., Whitworth, T., & Nowlin, W. D. 1995, Deep Sea Research Part I: Oceanographic Research Papers, 42, 641
Pickard, G. L., & Emery, W. J. 2016, Descriptive physical oceanography: an introduction (Elsevier)
Rayleigh, L. 1879, Proceedings of the London Mathematical Society, 1, 57
Stommel, H. 1948, Eos, Transactions American Geophysical Union, 29, 202
Talley, L. D. 2011, Descriptive physical oceanography: an introduction (Academic Press)
Trenberth, K. E., & Caron, J. M. 2001, Journal of Climate, 14, 3433
Vallis, G. K. 2017, Atmospheric and oceanic fluid dynamics (Cambridge University Press)
Whitworth III, T., & Peterson, R. 1985, Journal of Physical Oceanography, 15, 810
Winton, M. 2003, Journal of Climate, 16, 2875
Wunsch, C. 2011, Journal of Marine Research, 69, 417
ai_researcher
1
Students_as_Assessors_A_Novel_Idea_on_Formative_Assessment.pdf
arXiv:2202.09531v1 [eess.AS] 19 Feb 2022

Can Social Robots Effectively Elicit Curiosity in STEM Topics from K-1 Students During Oral Assessments?

Alexander Johnson, Electrical and Computer Engineering, University of California, Los Angeles, USA, [email protected]
Alejandra Martin, Department of Education, University of California, Los Angeles, USA, [email protected]
Marlen Quintero, Department of Education, University of California, Los Angeles, USA, [email protected]
Alison Bailey, Department of Education, University of California, Los Angeles, USA, [email protected]
Abeer Alwan, Electrical and Computer Engineering, University of California, Los Angeles, USA, [email protected]

Abstract—This paper presents the results of a pilot study that introduces social robots into kindergarten and first-grade classroom tasks. This study aims to understand 1) how effective social robots are in administering educational activities and assessments, and 2) if these interactions with social robots can serve as a gateway into learning about robotics and STEM for young children. We administered a commonly-used assessment (GFTA3) of speech production using a social robot and compared the quality of recorded responses to those obtained with a human assessor. In a comparison done between 40 children, we found no significant differences in the student responses between the two conditions over the three metrics used: word repetition accuracy, number of times additional help was needed, and similarity of prosody to the assessor. We also found that interactions with the robot were successfully able to stimulate curiosity in robotics, and therefore STEM, from a large number of the 164 student participants.

Index Terms—K-12 STEM education, social robot, HRI, oral language

I. INTRODUCTION

Social robots have proven to be an effective aid to early childhood language acquisition [1]. Their welcoming designs and expressive movements make them engaging for children to speak and play with [2]. Several studies have shown that children experience psychological and educational benefits by spending time with social robots ([3], [4]), including improved academic and social outcomes and greater student engagement. The recent success of these devices with children may also imply that their introduction to educational settings will make children more curious about them and, more broadly, about robotics and STEM. Previous studies such as [5] and [6] show the effectiveness of robots as teaching tools in STEM lessons through collected user surveys, and [7] introduces the design for a conversational robotic interface that engages primary school children in STEM discussions. However, further work is needed to explore how robots can be effectively integrated into existing lessons and tasks for young children, rather than creating new robot-centered lessons. In addition, more free-response feedback is needed in order to determine which aspects of human-robot interaction pique young children's curiosity in robotics and could be used to motivate them to later pursue STEM education. Inspired by previous work using voice-enabled social robots to encourage children's oral language development through storytelling ([8], [9]), we apply social robots to the critical task of conducting assessments of language and early literacy-related skills in children.
Such assessments are particularly important for kindergarten and first-grade children: as they are beginning to learn to read, any oral language delays should be addressed before they negatively impact early literacy development. Children's early language abilities are also indicators of their later aptitude in other subjects, including writing, math, and science. Public schools typically do not have sufficient support from speech-language pathologists, reading specialists, or literacy coaches to conduct detailed, dialect-appropriate assessments of every child's oral language abilities on a routine basis, depriving some children in need of extra language support or intervention of such resources. A social robot enabled with automatic speech recognition (ASR) technology may be used to conduct some of these intensive assessments with children without the need for a trained language specialist. The presence of the social robot during these assessments can also become a seamless way to introduce children to robotics and STEM.

However, before these systems are implemented, it must be determined whether or not the presence of a robot has a profound effect on the children's speech. A person's tone, prosody, and word choice can vary with whomever they are speaking to. Therefore, further investigation is needed to verify whether or not children's speech productions differ significantly in the presence of a voice-enabled social robot, compared to a human oral assessment administrator, in a way that might hinder obtaining accurate assessments of speech production. In this work, we investigate 1) how effective social robots are in the direct assessment of children's speech production (phonological processing skills), and 2) if social robots can be used to promote young children's curiosity in robotics and STEM during necessary oral language assessments.

In order to investigate these questions, we conduct common oral language assessments with kindergarten and first-grade children in two scenarios. In one case, the assessments are administered by a human assessor, as typically conducted. In the other case, assessments are administered by JIBO¹, a social robot designed for interacting with children (pictured in Fig. 1), while the human assessor watches and intervenes only as necessary. We selected children of these grade levels because they are at the critical age range for language development and literacy acquisition ([10], [11]). To identify any significant differences in speech production between children in the two cases that may affect assessment scores, we investigate the students' changes in behavior, prosody, and accuracy of word repetition during the assessments across the two scenarios for both grade levels. We additionally documented the children's responses to interacting with JIBO that indicate curiosity in robotics or that may be used to further interest in STEM-related topics.

¹"Jibo Robot - He can't wait to meet you," Boston, MA, 2017. [Online]. Available: https://www.jibo.com

II. METHODS

A. Participants

A sample of 164 kindergarten and first-grade students, consisting of 53 kindergarteners and 111 first graders from a Southern California elementary school, was recruited for the study. Selection criteria included parental consent and completion of all tasks in the study. All of the students were English speaking, with some reporting additional language exposure (most often Spanish) at home.

B. Recording

Interactions with each student were recorded with a Logitech C920 webcam microphone with a sampling rate of 48 kHz.
Each student was approximately two feet from the test administrator (either JIBO or the human assessor), and the microphone was placed equidistantly between them. Recordings took place in an empty office at the school site during school hours to simulate a realistic environment in which JIBO could be used.

C. Experiment

Data collection followed the protocol described in [12]–[14]. The children's speech samples were collected using the Goldman-Fristoe Test of Articulation-3 (GFTA-3) [15] sounds-in-sentences protocol. Each student was first read a story appropriate for their grade level. The kindergarten students were read a story containing 20 simple sentences, and the first-grade students were read a story containing 15 sentences mixing both simple and compound structures. As a fictitious example (the actual assessment material is under copyright restrictions), a sentence such as, "The cat was fat, fuzzy, and orange," may have appeared in the story for first-graders. After the story was read once in its entirety, the story was repeated sentence by sentence, and the student was asked to repeat each sentence back to the test administrator. Ten kindergarten students and ten first-grade students were administered the test by a human assessor, and the others were administered the test by JIBO. Both gave the same prompts to the children. In giving each prompt, JIBO played a corresponding audio file containing a recording of an adult woman whose voice was pitch-shifted up to match the pitch of a child and slowed down. For sessions in which the child was given the prompts by JIBO, a human assessor was present and intervened if the child was unable to complete the task with only the social robot's instructions. In sessions in which the child was given the prompt by a human assessor, the assessor gave the prompt a second time if the child was unable to repeat the sentence after hearing it once. Any questions or comments that the child made referring to JIBO before, during, or after the assessment were documented and categorized by their different characteristics.

Fig. 1. A session in which JIBO administers the GFTA-3 (sounds-in-sentences assessment) to a student.

D. Assessment Quality Evaluation

The 10 kindergarteners and 10 first-graders who completed the assessment with only the human assessor were compared to a group of 10 kindergarteners and 10 first-graders who completed the same assessment administered by the social robot. The robot-assessed children in the comparison group were selected from the total participant pool on the basis that they were assessed in the same time period and overseen by the same human assessor as the human-assessed children. The following metrics were calculated for comparison between the two conditions:

1) Accuracy: each sentence that the students were asked to repeat was scored out of the total number of words in the sentence. One point was given for each word that the child repeated correctly in the correct order.
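A minimal sketch of this kind of ordered word-level scoring is shown below. It reads "repeated correctly in the correct order" as a longest-common-subsequence match between prompt and response; this interpretation and the plain whitespace tokenization are our assumptions, since the paper's scores were assigned manually rather than by this exact rule.

def word_accuracy(prompt: str, response: str) -> float:
    """Fraction of prompt words the child repeated in the correct order.

    Uses the longest common subsequence (LCS) so that out-of-order or
    skipped words are not credited. This is an assumed reading of the
    manual scoring rule, not the authors' exact rubric.
    """
    p = prompt.lower().split()
    r = response.lower().split()
    # Standard dynamic-programming LCS over the two word sequences.
    dp = [[0] * (len(r) + 1) for _ in range(len(p) + 1)]
    for i, pw in enumerate(p):
        for j, rw in enumerate(r):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if pw == rw
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(p)][len(r)] / len(p)

# Example: one dropped word out of seven -> about 85.7%.
print(word_accuracy("the cat was fat fuzzy and orange",
                    "the cat was fuzzy and orange"))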
2) Need for additional prompting: we noted each time that a child required additional prompting to repeat a sentence and noted the reason for the additional prompting. If the child was silent for at least 3 seconds after being given the prompt, the reason was noted as "reticence." If the child forgot the prompt or did not understand what to say, the reason was noted as "needs repetition." If the child said something unrelated to the prompt, then the reason was listed as "distracted."

3) Pitch correlation: an additional question is whether students attempt to follow the administrator's pitch and rhythm during the assessment. While not directly related to speech production assessment quality, differences in how the children incorporate the human's and the robot's prosody into their repetition of the prompt may indicate how engaged the children were with the assessor. For each prompt, the pitch contours of both the administrator (JIBO or the human assessor) uttering the prompt and the child's voice while repeating it were extracted using Praat's [16] pitch tracking algorithm, with parameters set to best measure a child's pitch. The Praat pitch contours were manually corrected if any errors occurred in the pitch tracking algorithm. Then the Pearson correlation coefficient between the child's pitch contour and the administrator's (either the robot or the human assessor) pitch contour was calculated and taken as a measure of the perceptual similarity of the two, as in [17].

Sentence word accuracy scores were scored manually by the researchers, who are experienced in evaluating children's speech. Reasons for the child needing additional prompting were also noted manually following the above criteria.
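The contour comparison itself reduces to a Pearson correlation between two pitch tracks of unequal length. The sketch below shows one plausible implementation with NumPy/SciPy, assuming the contours were already extracted (e.g., with Praat) and that unvoiced frames are dropped; resampling the child's contour onto the administrator's time base is our assumed alignment strategy, which the paper does not specify.

import numpy as np
from scipy.stats import pearsonr

def contour_correlation(f0_admin, f0_child):
    """Pearson correlation between two pitch contours (Hz), as a rough
    perceptual-similarity measure in the spirit of Hermes (1998)."""
    a = np.asarray(f0_admin, dtype=float)
    c = np.asarray(f0_child, dtype=float)
    a, c = a[a > 0], c[c > 0]              # drop unvoiced frames (f0 = 0)
    # Resample the child's contour onto the administrator's time base
    # (assumed alignment strategy for unequal-length utterances).
    t_a = np.linspace(0.0, 1.0, len(a))
    t_c = np.linspace(0.0, 1.0, len(c))
    c_resampled = np.interp(t_a, t_c, c)
    r, p_value = pearsonr(a, c_resampled)
    return r, p_value

# Toy example: a falling contour imitated with a shallower slope.
admin = np.linspace(300, 220, 50)
child = np.linspace(280, 240, 40) + np.random.default_rng(0).normal(0, 5, 40)
print(contour_correlation(admin, child))   # high r, small p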
E. Student Responses

The students were allowed to interact with the robot and speak freely before, during, and after the assessment. Any questions or comments made by students about JIBO that may be used to grow interest in robotics or STEM were characterized by the following five categories:

1) Mechanical: referring to how robots work or asking about fundamentals of robotics or other STEM concepts
2) Functional: referring to why JIBO looks and acts as it does or asking about engineering design choices
3) Relational: relating JIBO to something that the student has seen previously
4) Personifying: assigning human characteristics to JIBO
5) Hypothetical: exploratory questions about JIBO or related STEM concepts

Questions/comments could be characterized by more than one category. Some examples of children's comments are given in Table I.

TABLE I: SOME EXAMPLES OF COMMENTS OR QUESTIONS FROM CHILDREN WHEN INTERACTING WITH THE SOCIAL ROBOT.

Category     | Examples
Mechanical   | "You program him to do that?", "He didn't get rusty"
Functional   | "Why is JIBO have one eye?", "Why he dance like this?"
Relational   | "I have a robot at my home", "I know robot [that] is more strong"
Personifying | "Now, JIBO is looking at me", "You miss your mommy, JIBO or not?"
Hypothetical | "But what if you flip JIBO backwards?", "Can JIBO do a handstand?"

III. RESULTS

In this section, we present the results of the experiment. Subsections A-C show the results comparing 40 children (10 from each category: kindergarten or first-grade, assessed by the human or JIBO). Subsection D summarizes our observations of the 144 children assessed by JIBO (164 less the 20 assessed by only a human).

A. Accuracy of Repetition

Table II shows the percent word accuracy of the repetition task for children in each group. The maximum score, minimum score, and standard deviation for each group are also given.

TABLE II: AVERAGE PERCENT OF WORDS REPEATED CORRECTLY (ACCURACY) FOR EACH GROUP, WITH PERCENT STANDARD DEVIATION (STDV) AND MINIMUM AND MAXIMUM PERCENT SCORES OF EVERY CHILD IN THE CATEGORY.

Administration            | Mean  | STDV  | Min   | Max
Kindergarteners with JIBO | 95.08 | 12.62 | 86.15 | 100
Kindergarteners w/o JIBO  | 97.85 | 7.0   | 93.7  | 100
First Graders with JIBO   | 91.91 | 17.0  | 63.67 | 100
First Graders w/o JIBO    | 93.13 | 13.02 | 81.65 | 100

B. Additional Prompting

Table III gives the number of times students in the two groups required additional prompting to be able to complete the exercise and the reason for the need for additional prompting.

TABLE III: NUMBER OF TIMES A CHILD (10 CHILDREN IN EACH CATEGORY, 40 STUDENTS IN TOTAL) NEEDED ADDITIONAL PROMPTING FROM THE EXAMINER IN ORDER TO REPEAT THE PROMPT. RETICENT: THE CHILD TOOK 3 OR MORE SECONDS TO RESPOND AFTER BEING GIVEN A PROMPT. RETICENT AT BEGINNING: THE NUMBER OF RETICENT INSTANCES THAT OCCURRED AT THE BEGINNING OF THE SESSION. MAX RETICENT: THE LARGEST NUMBER OF TIMES A SINGLE CHILD SHOWED SIGNS OF BEING RETICENT IN A SESSION. NEEDS REPETITION: THE NUMBER OF INSTANCES IN WHICH A CHILD WAS UNABLE TO COMPLETELY SAY THE PROMPT WITHOUT BEING REMINDED OF SOME OR ALL WORDS. MAX NEEDS REP.: THE LARGEST NUMBER OF TIMES THAT A SINGLE CHILD NEEDED SOME OR ALL OF A PROMPT TO BE REPEATED. DISTRACTED: THE NUMBER OF TIMES A CHILD GAVE AN UNRELATED ANSWER.

Administration            | Reticent | Reticent at beginning | Max Reticent | Needs Repetition | Max Needs Rep. | Distracted
Kindergarteners with JIBO | 8        | 5                     | 3            | 4                | 2              | 1
Kindergarteners w/o JIBO  | 1        | 0                     | 1            | 0                | 0              | 2
First Graders with JIBO   | 3        | 2                     | 2            | 13               | 7              | 0
First Graders w/o JIBO    | 2        | 0                     | 1            | 6                | 2              | 1

C. Prosody

Average correlations between the assessor's and students' pitch contours when uttering the same prompt are shown in Table IV for each group. The percentage of repetitions spoken by students that are significantly correlated in pitch contour to the prompt given by the assessor is also given in this table.

TABLE IV: PEARSON'S CORRELATION COEFFICIENT BETWEEN THE EXAMINER'S PITCH CONTOUR AND THE CHILD'S PITCH CONTOUR UPON REPEATING THE EXAMINER'S UTTERANCE. THE PERCENTAGE OF CHILD UTTERANCES WHOSE PITCH CONTOUR CORRELATED SIGNIFICANTLY (p < 0.05) WITH THAT OF THE ORIGINAL PROMPT IS ALSO GIVEN.

Administration            | Mean Corr Coef | STDV  | % Sig
Kindergarteners with JIBO | 0.191          | 0.131 | 67.5
Kindergarteners w/o JIBO  | 0.258          | 0.151 | 72.5
First Graders with JIBO   | 0.145          | 0.108 | 54.0
First Graders w/o JIBO    | 0.179          | 0.143 | 56.7

D. Scientific Curiosity

In total, 43 kindergarteners and 101 first-graders were assessed using JIBO. Table V shows the percentage of these children in each grade level who made a comment or question in the given category. In total, approximately 45% of the students made comments or questions about JIBO pertaining to at least one of the categories.

TABLE V: PERCENT OF STUDENTS WHO MADE A QUESTION OR COMMENT OF A GIVEN TYPE (FUNCTIONAL, MECHANICAL, RELATIONAL, PERSONIFYING, OR HYPOTHETICAL) WHILE INTERACTING WITH JIBO. THE SAME CHILD COULD BE COUNTED IN MULTIPLE CATEGORIES IF THEIR QUESTIONS/COMMENTS FELL INTO MORE THAN ONE. THE TOTAL PERCENTAGE OF STUDENTS WHO MADE A QUESTION OR COMMENT IS ALSO GIVEN.

% Students | Func. | Mech. | Rel. | Person. | Hypoth. | Total
Kinder.    | 20.45 | 31.81 | 0.00 | 29.54   | 4.54    | 53.2
First      | 17.82 | 10.89 | 3.96 | 19.80   | 3.96    | 39.1
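The discussion that follows refers to two-sample significance tests on these group scores. A minimal version of such a comparison is sketched below using Welch's t-test; both the test choice and the synthetic per-child scores (drawn only to match the reported group means and standard deviations, since the raw data are not public) are our assumptions.

import numpy as np
from scipy import stats

# Synthetic per-child accuracy scores matching the reported group
# statistics (illustrative placeholders; raw scores are not public).
rng = np.random.default_rng(1)
with_jibo = np.clip(rng.normal(95.08, 12.62, 10), 0, 100)
without_jibo = np.clip(rng.normal(97.85, 7.00, 10), 0, 100)

# Welch's t-test (unequal variances): assumed choice of test.
t, p = stats.ttest_ind(with_jibo, without_jibo, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")   # expect p > 0.05 (no significant difference)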
IV. DISCUSSION

A. Evaluation of Speech Assessment Quality

As shown in Table II, the students' performance in the repetition task was not significantly affected by the presence of the social robot, JIBO. The table shows that when repeating words to the examiner, neither the kindergarten students (p > 0.1) nor the first-grade students (p > 0.05) were significantly worse at repeating words with JIBO than they were with a human. Although the mean sentence score was about 2% higher when the test was administered by a human assessor for both the kindergarten and first-grade students, this difference is not statistically significant (p > 0.05). There is, however, a significantly larger standard deviation in the sessions administered by JIBO (note the differences between the minimum and maximum sentence scores).

Table III shows that kindergarteners are more reticent when the test is administered by JIBO than by a human assessor. However, this reticence typically only occurs at the beginning of the session and disappears as the students are exposed more to the social robot. A possible explanation for this finding is that at this early stage of the activity, students may have been adapting to the activity. This trend in reticence does not extend to the first-grade students. Most of the first-grade students did need sentences to be repeated more often by the human facilitator present when the test was administered by JIBO. It is worth noting that the first-grade student who needed intervention most often when using JIBO predominantly spoke a language other than English at home. The prompts that needed repetition were most often those containing longer sentences with compound structures.

Table IV implies that the pitch contours of the students' repetitions of the prompt more closely match those of the prompt given by the human assessor. This means that while repeating sentences said by the assessor, the students' changes in pitch and rhythm matched the human assessor more closely than the robot. This may be due to the robot having less natural-sounding prosody, making it more difficult to match. However, these correlations are not significant, and we would need a larger sample size to definitively state that the use of a robot affects whether or not the students follow the speech prosody of the human assessor more. It is important to note that the correlations for first graders are lower than those of the kindergarteners. This is likely due to the fact that the prompts read to the first graders were longer and contained more complex sentence structures, requiring the students to turn their attention more towards remembering the words than matching the prosody given.

B. Motivating Scientific Curiosity

Almost half of the students made inquiries as to how robots work, why they are designed in the way that they are, and other ideas that are conducive to furthering children's interest in STEM. The kindergarteners most commonly asked questions related to how JIBO is able to move, speak, and show images. That is, their questions typically asked for explanations of observed phenomena. This implies that these interactions can be used to motivate kindergarten-level lessons on more visual concepts in robotics and STEM, like motors, cameras, and programming fundamentals.
The first-graders also asked a large number of questions of this nature, albeit fewer on average than the kindergarteners, while also asking personifying and hypothetical questions that delve past just what they can directly observe in the robot, such as, "Where does JIBO sleep?" and "Can you make him do [a different type of movement]?" These questions may be used to motivate further interest in engineering concepts such as biological inspiration for design and user-centered design. We also note that only the first graders related JIBO to other robots that they had seen previously. This may be because the younger students have had less exposure to other instances of robots.
We observed that students became excited to participate in the assessment when they learned that they would be working with a robot. This experience was, for most, if not all, of the students, their first interaction with a social robot. From the students' excitement and frequency of questions asked, we believe that JIBO can be used to prime young students for robotics- and STEM-centered lessons, possibly motivating them to further pursue these subjects in secondary and post-secondary education. We believe that many of the students who did not make comments or questions about JIBO were simply shy, as our work shows that even the students working with only the human assessor still displayed a large amount of reticence throughout the assessment. Further interactions with the robot to build familiarity may reduce that reticence over time and lead to additional questions from students in the future.

V. LIMITATIONS

Comparing our study with others on robot-directed speech ([18], [19]) using a power analysis, we noted that our results on children's human-directed and robot-directed speech are underpowered. However, our findings demonstrate strong tendencies and suggest directions for future research. In addition, variability might have been reduced if the same children had completed the assessment under both assessment conditions (with or without JIBO), given different, equally difficult prompts in each condition. However, in the current study, we had only a short duration in each session to work with the children, making it infeasible to give one child both assessment conditions within a session. We instead balanced the pools for each condition as effectively as possible within the constraints.

VI. CONCLUSIONS

Our pilot results show no detriment to the students' speech production performance as a result of working with the social robot. Students were sometimes more hesitant to talk to the robot at first but became more willing over the course of the session. The time spent with the social robot also appears to have elicited questions from the students that can be used to grow their interest in this STEM domain. Future work includes measuring students' curiosity and speech differences in a longitudinal study to investigate how these factors change with time.

VII. ACKNOWLEDGEMENTS

This work is supported in part by the NSF.

REFERENCES

[1] J. Kory, S. Jeong, and C. L. Breazeal, "Robotic learning companions for early language development," in J. Epps, F. Chen, S. Oviatt, and K. Mase (Eds.), Proceedings of the 15th ACM International Conference on Multimodal Interaction, pp. 71-72. ACM: New York, NY, 2013.
[2] T. Belpaeme, J. Kennedy, A. Ramachandran, B. Scassellati, and F. Tanaka, "Social robots for education: A review," Sci. Robot. 3, eaat5954, 2018.
[3] M. M. Neumann, "Social robots and young children's early language and literacy learning," Early Childhood Educ J 48, pp. 157-170, 2020.
[4] D. E. Logan et al., "Social robots for hospitalized children," Pediatrics, 2019.
[5] S. M. Mizanoor Rahman, "Metrics and methods for evaluating learning outcomes and learner interactions in robotics-enabled STEM education," 2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 2020, pp. 2103-2108, doi: 10.1109/AIM43001.2020.9158900.
[6] F. Tuluri, "Using robotics educational module as an interactive STEM learning platform," 2015 IEEE Integrated STEM Education Conference, 2015, pp. 16-20, doi: 10.1109/ISECon.2015.7119916.
[7] F. Mehdipour, M. Pashna, and A. Mahanti, "A 3-tier solution for facilitating STEM education in primary schools," 2018 International Conference on Learning and Teaching in Computing and Engineering (LaTICE), 2018, pp. 1-5, doi: 10.1109/LaTICE.2018.00-15.
[8] S. Spaulding, H. Chen, S. Ali, M. Kulinski, and C. Breazeal, "A social robot system for modeling children's word pronunciation: Socially interactive agents track," in Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1658-1666. International Foundation for Autonomous Agents and Multiagent Systems, 2018.
[9] G. Gordon, S. Spaulding, K. Westlund, J. Lee, L. Plummer, M. Martinez, M. Das, and C. Breazeal, "Affective personalization of a social robot tutor for children's second language skills," Proceedings of the 30th AAAI Conference on Artificial Intelligence (Palo Alto, CA), 2016.
[10] M. M. Clay, "Becoming literate: The construction of inner control," Portsmouth, NH: Heinemann Education, 1991.
[11] R. G. Lomax and L. M. McGee, "Young children's concepts about print and reading: Toward a model of word reading acquisition," Reading Research Quarterly, 22, pp. 237-256, 1987.
[12] A. L. Bailey, G. Yeung, M. Quintero Perez, A. Martin, A. Afshan, and A. Alwan, "MEET JIBO! Demonstration of a personalized learning companion robot designed for speech and language assessment in early childhood education settings," CRESST/UCLA Annual Conference, Los Angeles, CA, 2018.
[13] G. Yeung et al., "Toward the development of personalized learning companion robots for early speech and language assessment," poster presented at the annual meeting of the American Educational Research Association, Toronto, Canada, 2019.
[14] A. L. Bailey, A. Martin, A. Pogossian, M. Perez, G. Yeung, A. Alwan, and A. Afshan, "Early literacy and oral language ties: Extending the range of human-computer interface for early assessment," paper (cancelled) at the annual meeting of the American Educational Research Association, San Francisco, CA, 2020.
[15] R. Goldman and M. Fristoe, "GFTA 3: Goldman Fristoe 3 Test of Articulation," 2015.
[16] P. Boersma, "Praat, a system for doing phonetics by computer," Glot International 5:9/10, pp. 341-345.
[17] D. J. Hermes, "Measuring the perceptual similarity of pitch contours," J Speech Lang Hear Res, 1998, pp. 73-82.
[18] S. Kriz, G. Anderson, and J. G. Trafton, "Robot-directed speech: Using language to assess first-time users' conceptualizations of a robot," 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2010, pp. 267-274.
[19] S. Kriz, G. Anderson, J. G. Trafton, and M. Bugajska, "Robot-directed speech as a means of exploring conceptualizations of robots," 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), La Jolla, CA, USA, 2009, pp. 271-272.
ai_researcher
1
Dissimilarity-based_hypothesis_testing_for_community_detection_in_heterogeneous_networks.pdf
Matrix dissimilarities based on differences in moments and sparsity

Li Tuobang¹

arXiv:2406.02051v2 [q-bio.QM] 10 Sep 2024. This manuscript was compiled on September 11, 2024.

Generating a dissimilarity matrix is typically the first step in big data analysis. Although numerous methods exist, such as Euclidean distance, Minkowski distance, Manhattan distance, Bray-Curtis dissimilarity, Jaccard similarity and Dice dissimilarity, it remains unclear which factors drive dissimilarity between groups. In this paper, we introduce an approach based on differences in moments and sparsity. We show that this method can delineate the key factors underlying group differences. For example, in biology, mean dissimilarity indicates differences driven by up/down-regulated gene expression, standard deviation dissimilarity reflects the heterogeneity of the response to treatment, and sparsity dissimilarity corresponds to differences prompted by the activation/silencing of genes. Through extensive reanalysis of genome, transcriptome, proteome, metabolome, immune profiling, microbiome, and social science datasets, we demonstrate insights not captured in previous studies. For instance, the sparsity dissimilarity is as effective as the mean dissimilarity in predicting the alleviation effects of a COVID-19 drug, suggesting that sparsity dissimilarity is highly meaningful.

Metabolism | Moments | Mass spectra

Significance Statement: Handling high sparsity presents a significant challenge in big data analysis. Currently, there are many strategies, e.g., imputation, regularization, sampling, and dimensionality reduction. Yet, a fundamental question remains largely unanswered, particularly in the field of biology: to what degree does sparsity reflect meaningful disparities between groups, rather than being largely a result of technical errors in measurement? In this study, we conducted a systematic reanalysis of datasets encompassing the transcriptome, proteome, metabolome, immune profiling, microbiome, and social sciences. We demonstrate that sparsity differences are highly meaningful and carry as much significance as the more commonly used measures of dissimilarity based on mean differences.

¹To whom correspondence should be addressed. E-mail: [email protected]

Dissimilarity measures are critical in big data analysis. They quantify how different or similar two data points are. Different measures can significantly affect the performance of clustering algorithms and many other machine learning models. This is because different measures reflect distinct facets of the differences. For example, the Euclidean distance, the most well-known distance, is the square root of the sum of the squares of the differences in each dimension. This effectively captures the average shift in each dimension and is aptly termed a measure of mean changes. Other measures of mean changes include the Minkowski distance, Manhattan distance, and average distance. A notable limitation of these measures of mean changes is that the largest feature would dominate the others. Instead of normalizing the dataset, this report proposes a different solution: let P = (p1, p2, . . . , pn) and Q = (q1, q2, . . . , qn) be two points in R^n. The mean dissimilarity µ∆ˆLn between P and Q is defined as

    µ∆ˆLn(P, Q) = ˆLn{|p1 − q1|, |p2 − q2|, . . . , |pn − qn|},

where |pi − qi| represents the absolute difference between the i-th coordinates of P and Q, and ˆLn{·} denotes a location estimate of a set of values. In this report, the Hodges-Lehmann estimator (H-L) (1) is used.
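As a concrete reading of this definition, the sketch below computes the mean dissimilarity of two points with the one-sample Hodges-Lehmann estimator, i.e., the median of all pairwise Walsh averages; the quadratic-time enumeration is a simplification chosen for clarity.

import numpy as np

def hodges_lehmann(values):
    """One-sample Hodges-Lehmann location estimate:
    median of all pairwise averages (x_i + x_j)/2 with i <= j."""
    v = np.asarray(values, dtype=float)
    i, j = np.triu_indices(len(v))        # all pairs with i <= j
    return float(np.median((v[i] + v[j]) / 2.0))

def mean_dissimilarity(P, Q):
    """mu-Delta-L_n(P, Q): H-L location estimate of the coordinate-wise
    absolute differences."""
    return hodges_lehmann(np.abs(np.asarray(P) - np.asarray(Q)))

P = [1.0, 2.0, 3.0, 10.0]
Q = [1.5, 2.5, 2.0, 4.0]
print(mean_dissimilarity(P, Q))  # robust to the single large difference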
The mean dissimilarity µ∆ ˆLm between the two matrices A and B is then defined as: µ∆ ˆLm (A, B) = ˆLm(∆m). Then, analogously, for each row i, compute the standard deviation σai for the row in matrix A and σbi for the row in matrix B. Define the sets of absolute differences for standard deviation as ∆σ = {|σa1 − σb1 |, |σa2 − σb2 |, . . . , |σam − σbm |}. The standard deviation dissimilarity between the two matri- ces A and B are then defined as: σ∆ ˆLm (A, B) = ˆLm(∆σ). Dissimilarities based on higher-order standardized moments can also be defined, although their practical relevance may be less pronounced. Another kind of dissimilarity focuses on the commonal- ity of attribute values, exemplified by metrics such as the Bray–Curtis dissimilarity, Jaccard similarity and Dice dissimi- larity. These metrics can be adept at capturing the disparities in data sparsity between two points, hence we refer to them as measures of sparsity changes. In this report, we introduce sparsity dissimilarity defined as follows: for each row i, let zai be the percentage of zeros in the row in matrix A and zbi be the percentage of zeros in the row in matrix B. Also, let ¯ai Significance Statement Handling high sparsity presents a significant challenge in big data analysis. Currently, there are many strategies, e.g., impu- tation, regularization, sampling, dimensionality reduction. Yet, a fundamental question remains largely unanswered, partic- ularly in the field of biology: to what degree does sparsity reflect meaningful disparities between groups rather than be- ing largely a result of technical errors in measurement? In this study, we conducted a systematic reanalysis of datasets encompassing the transcriptome, proteome, metabolome, im- mune profiling, microbiome, and social sciences. We demon- strate that sparsity differences are highly meaningful and carry as much significance as the more commonly used measures of dissimilarity based on mean differences. 1To whom correspondence should be addressed. E-mail: [email protected] PNAS | September 11, 2024 | vol. XXX | no. XX | 1–2 Table 1. Standardized mean, standard deviation, and sparsity dissimilarities of Wyler et al.’s RNA sequencing dataset Treatment AAG AAG AAG DMSO DMSO DMSO CI (0.030,0.033) (0.035,0.039) (0.062,0.066) (0.023,0.025) (0.142,0.153) (0.089,0.095) AAG: 17-AAG, a kind of HSP90 inhibitor; DMSO: Solvent control. The sparsity dissimilarity is in units of 10−3. The relative dissimilarities can be found in the SI Dataset S1. Comparison SARS-CoV-2-Mock SARS-CoV-2-Mock SARS-CoV-2-Mock SARS-CoV-2-Mock SARS-CoV-2-Mock SARS-CoV-2-Mock CI (0.282,0.307) (0.274,0.294) (0.199,0.214) (0.236,0.264) (0.223,0.244) (0.267,0.288) CI (0.026,0.030) (0.029,0.033) (0.028,0.032) (0.027,0.031) (0.074,0.158) (0.056,0.071) Time 24h 48h 72h 24h 48h 72h σ∆∗ H-L 0.295 0.282 0.208 0.250 0.234 0.277 µ∆∗ H-L 0.032 0.037 0.064 0.024 0.148 0.091 s∆∗ H-L 0.028 0.031 0.029 0.030 0.110 0.062 and ¯bi represent the mean of the rows in matrices A and B, respectively. The sparsity dissimilarity s∆(A, B) is defined as: m 2. E Wyler, et al., Transcriptomic profiling of sars-cov-2 infected human cell lines identifies hsp90 as target for covid-19 therapy. IScience 24, 102151 (2021). s∆(A, B) = siwi X i=1 where si = |¯ai − ¯bi|, wi = |zai − zbi |. By substituting absolute differences with relative differ- ences, we obtain the relative mean, standard deviation, and sparsity dissimilarities, denoted as µδ ˆLm (A, B), σδ ˆLm (A, B), and sδ ˆLm (A, B), respectively. 
By substituting absolute differences with relative differences, we obtain the relative mean, standard deviation, and sparsity dissimilarities, denoted as µδˆLm(A, B), σδˆLm(A, B), and sδˆLm(A, B), respectively.

Finally, the mean, standard deviation, and sparsity dissimilarities and their relative counterparts can be standardized by the mean of {ā1, . . . , ām, b̄1, . . . , b̄m}, the mean of {σa1, . . . , σam, σb1, . . . , σbm}, and the half sum of {ā1, . . . , ām, b̄1, . . . , b̄m}, respectively. The standardized versions are denoted as µ∆*ˆLm(A, B), σ∆*ˆLm(A, B), s∆*ˆLm(A, B), µδ*ˆLm(A, B), σδ*ˆLm(A, B), and sδ*ˆLm(A, B).

Often, the variables in the matrices are interrelated, and these relations, in most cases, can be represented by a phylogenetic tree. By converting the phylogenetic tree into corresponding weights according to the branch lengths from the leaves to the root (Algorithm 1), the mean, standard deviation, and sparsity dissimilarities can be further weighted by the phylogenetic tree.

Results

In this study, we revisited the findings reported by Wyler et al. (2) regarding the protective effects of HSP90 inhibition (17-AAG) on alveolar epithelial cells (AECs) in the context of SARS-CoV-2 infection. We applied our dissimilarity measures to RNA-sequencing data obtained from these cells, whether exposed to SARS-CoV-2 or not. Our results (Table 1) show that treatment with the HSP90 inhibitor 17-AAG at a concentration of 200 nM significantly reduced the mean and sparsity dissimilarities (from infection to non-infection) compared to the solvent control, except in the 24-hour group, suggesting a weakened response to infection. Furthermore, the treatment has little impact on the standard deviation dissimilarity, indicating that the cellular response to this inhibitor is generally homogeneous. These findings underscore the potential of our dissimilarity measures as tools for quantitatively assessing the overall impact of therapeutic agents on cellular dynamics.

Table 1. Standardized mean, standard deviation, and sparsity dissimilarities of Wyler et al.'s RNA sequencing dataset

Treatment | Comparison      | Time | µ∆*H-L | CI             | σ∆*H-L | CI             | s∆*H-L | CI
AAG       | SARS-CoV-2-Mock | 24h  | 0.032  | (0.030, 0.033) | 0.295  | (0.282, 0.307) | 0.028  | (0.026, 0.030)
AAG       | SARS-CoV-2-Mock | 48h  | 0.037  | (0.035, 0.039) | 0.282  | (0.274, 0.294) | 0.031  | (0.029, 0.033)
AAG       | SARS-CoV-2-Mock | 72h  | 0.064  | (0.062, 0.066) | 0.208  | (0.199, 0.214) | 0.029  | (0.028, 0.032)
DMSO      | SARS-CoV-2-Mock | 24h  | 0.024  | (0.023, 0.025) | 0.250  | (0.236, 0.264) | 0.030  | (0.027, 0.031)
DMSO      | SARS-CoV-2-Mock | 48h  | 0.148  | (0.142, 0.153) | 0.234  | (0.223, 0.244) | 0.110  | (0.074, 0.158)
DMSO      | SARS-CoV-2-Mock | 72h  | 0.091  | (0.089, 0.095) | 0.277  | (0.267, 0.288) | 0.062  | (0.056, 0.071)

AAG: 17-AAG, a kind of HSP90 inhibitor; DMSO: solvent control. The sparsity dissimilarity is in units of 10^-3. The relative dissimilarities can be found in the SI Dataset S1.

Data and Software Availability

All data are included in the brief report and SI Dataset S1. All code has been deposited at github.com/johon-lituobang.

References
1. J. Hodges Jr and E. Lehmann, Estimates of location based on rank tests. The Annals of Mathematical Statistics 34, 598-611 (1963).
2. E. Wyler et al., Transcriptomic profiling of SARS-CoV-2 infected human cell lines identifies HSP90 as target for COVID-19 therapy. iScience 24, 102151 (2021).
ai_researcher
3
Multi-expert_Prompting_Improves_Reliability_Safety_and_Usefulness_of_Large_Language_Models.pdf
arXiv:2009.13347v1 [cs.NE] 28 Sep 2020

A Review of Evolutionary Multi-modal Multi-objective Optimization

Ryoji Tanabe, Member, IEEE, and Hisao Ishibuchi, Fellow, IEEE

Abstract—Multi-modal multi-objective optimization aims to find all Pareto optimal solutions including overlapping solutions in the objective space. Multi-modal multi-objective optimization has been investigated in the evolutionary computation community since 2005. However, it is difficult to survey existing studies in this field because they have been independently conducted and do not explicitly use the term "multi-modal multi-objective optimization". To address this issue, this paper reviews existing studies of evolutionary multi-modal multi-objective optimization, including studies published under names that are different from "multi-modal multi-objective optimization". Our review also clarifies open issues in this research area.

Index Terms—Multi-modal multi-objective optimization, evolutionary algorithms, test problems, performance indicators

I. INTRODUCTION

A multi-objective evolutionary algorithm (MOEA) is an efficient optimizer for a multi-objective optimization problem (MOP) [1]. MOEAs aim to find a non-dominated solution set that approximates the Pareto front in the objective space. The set of non-dominated solutions found by an MOEA is usually used in an "a posteriori" decision-making process [2]. A decision maker selects a final solution from the solution set according to her/his preference. Since the quality of a solution set is usually evaluated in the objective space, the distribution of solutions in the solution space has not received much attention in the evolutionary multi-objective optimization (EMO) community. However, the decision maker may want to compare the final solution to other dissimilar solutions that have an equivalent quality or a slightly inferior quality [3], [4]. Fig. 1 shows a simple example. In Fig. 1, the four solutions xa, xb, xc, and xd are far from each other in the solution space but close to each other in the objective space. xa and xb have the same objective vector. xc and xa are similar in the objective space. xd is dominated by these solutions. This kind of situation can be found in a number of real-world problems, including functional brain imaging problems [3], diesel engine design problems [5], distillation plant layout problems [6], rocket engine design problems [7], and game map generation problems [8].

Fig. 1: Illustration of a situation where the four solutions are identical or close to each other in the objective space but are far from each other in the solution space (a minimization problem).

If multiple diverse solutions with similar objective vectors like xa, xb, xc, and xd in Fig. 1 are obtained, the decision maker can select the final solution according to her/his preference in the solution space. For example, if xa in Fig. 1 becomes unavailable for some reason (e.g., material shortages, mechanical failures, traffic accidents, and law revisions), the decision maker can select a substitute from xb, xc, and xd.

R. Tanabe and H. Ishibuchi are with the Shenzhen Key Laboratory of Computational Intelligence, University Key Laboratory of Evolving Intelligent Systems of Guangdong Province, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China. E-mail: ([email protected], [email protected]). (Corresponding author: Hisao Ishibuchi)
A practical example is given in [4], which deals with two-objective space mission design problems. In [4], Schütze et al. considered two dissimilar solutions x1 = (782, 1288, 1788)^T and x2 = (1222, 1642, 2224)^T for a minimization problem, whose objective vectors are f(x1) = (0.462, 1001.7)^T and f(x2) = (0.463, 1005.3)^T, respectively. Although x1 dominates x2, the difference between f(x1) and f(x2) is small enough. The first design variable is the departure time from the Earth (in days). Thus, the departure times of x2 and x1 differ by 440 days (= 1222 − 782). If the decision maker accepts x2 with a slightly inferior quality in addition to x1, the two launch plans can be considered. If x1 is not realizable for some reason, x2 can be the final solution instead of x1. As explained here, multiple solutions with almost equivalent quality support a reliable decision-making process. If these solutions have a large diversity in the solution space, they can provide insightful information for engineering design [3], [5].

A multi-modal multi-objective optimization problem (MMOP) involves finding all solutions that are equivalent to Pareto optimal solutions [3], [9], [10]. Below, we explain the difference between MOPs and MMOPs using the two-objective and two-variable Two-On-One problem [11]. Figs. 2 (a) and (b) show the Pareto front F and the Pareto optimal solution set O of Two-On-One, respectively. Two-On-One has two equivalent Pareto optimal solution subsets O1 and O2 that are symmetrical with respect to the origin, where O = O1 ∪ O2. Figs. 2 (c) and (d) show O1 and O2, respectively. In Two-On-One, the three solution sets O, O1, and O2 (Figs. 2 (b), (c) and (d)) are mapped to F (Fig. 2 (a)) by the objective functions. On the one hand, the goal of MOPs is generally to find a solution set that approximates the Pareto front F in the objective space. Since O1 and O2 are mapped to the same F in the objective space, it is sufficient for MOPs to find either O1 or O2. On the other hand, the goal of MMOPs is to find the entire equivalent Pareto optimal solution set O = O1 ∪ O2 in the solution space. In contrast to MOPs, it is necessary to find both O1 and O2 in MMOPs. Since most MOEAs (e.g., NSGA-II [12] and SPEA2 [13]) do not have mechanisms to maintain the solution space diversity, it is expected that they do not work well for MMOPs. Thus, multi-modal multi-objective evolutionary algorithms (MMEAs) that handle the solution space diversity are necessary for MMOPs.

Fig. 2: (a) The Pareto front F and (b) the Pareto optimal solution set O of Two-On-One [11]. Figs. (c) and (d) show the two Pareto optimal solution subsets O1 and O2, respectively.
Two different solutions x1 and x2 are said to f (x2) be equivalent iff (cid:107) ≤ is an arbitrary norm of a, and δ is a non-negative where threshold value given by the decision maker. If δ = 0, the MMOP should find all equivalent Pareto optimal solutions. If δ > 0, the MMOP should find all equivalent Pareto optimal solutions and dominated solutions with acceptable quality. The main advantage of our definition of an MMOP is that the decision maker can adjust the goal of the MMOP by changing the δ value. Most existing studies (e.g., [9], [20], [21]) assume MMOPs with δ = 0. MMOPs with δ > 0 were discussed in [3], [4], [19], [22]. For example, xa, xb, and xc in Fig. 1 should be found for MMOPs with δ = 0. In addition, the non-Pareto optimal solution xd should be found for MMOPs with δ > 0 if (cid:107) ≤ Although there is room for discussion, MMOPs with δ > 0 may be more practical in real-world applications. This is because the set of solutions of an MMOP with δ > 0 can provide more options for the decision maker than that of an MMOP with δ = 0. While it is usually assumed in the EMO community that the final solution is selected from non- dominated solutions, the decision maker may also be interested in some dominated solutions in practice [3], [4]. Below, we use the term “MMOP” regardless of the δ value for simplicity. f (xd) (cid:107) f (xa) − δ. III. MMEAS This section describes 12 dominance-based MMEAs, 3 decomposition-based MMEAs, 2 set-based MMEAs, and a post-processing approach. MMEAs need the following three abilities: (1) the ability to find solutions with high quality, (2) the ability to find diverse solutions in the objective space, and (3) the ability to find diverse solutions in the solution space. MOEAs need the abilities (1) and (2) to find a solution set that approximates the Pareto front in the objective space. Multi-modal single-objective optimizers need the abilities (1) and (3) to find a set of global optimal solutions. In contrast, MMEAs need all abilities (1)–(3). Here, we mainly describe mechanisms of each type of MMEA to handle (1)–(3). (a) F (b) O (c) O1 (d) O2 Fig. 2: (a) The Pareto front F and (b) the Pareto optimal solution set O of Two-On-One [11]. Figs. (c) and (d) show the two Pareto optimal solution subsets O1 and O2, respectively. multi-objective optimization. This topic is not new and has been studied for more than ten years. Early studies include [3], [5], [11], [14]–[16]. Unfortunately, most existing studies were independently conducted and did not use the term “MMOPs” (i.e., they are not tagged). For this reason, it is difficult to survey existing studies of MMOPs despite their significant contributions. In this paper, we review related studies of MMOPs including those published under names that were different from “multi-modal multi-objective optimization”. We also clarify open issues in this field. Multi-modal single- objective optimization problems (MSOPs) have been well studied in the evolutionary computation community [10]. Thus, useful clues to address some issues in studies of MMOPs may be found in studies of MSOPs. We discuss what can be learned from the existing studies of MSOPs. This paper is organized as follows. Section II gives def- initions of MMOPs. Section III describes MMEAs. Section IV presents test problems for multi-modal multi-objective optimization. Section V explains performance indicators for benchmarking MMEAs. Section VI concludes this paper. II. 
DEFINITIONS OF MMOPS ∈ ⊆ → A solution x1 is said to dominate x2 iff fi(x1) 1) Definition of MOPs: A continuous MOP involves find- S RD that minimizes a given objective ing a solution x RM . Here, S is the D-dimensional function vector f : S solution space, and RM is the M -dimensional objective space. fi(x2) for all i and fi(x1) < fi(x2) for at least one index i. If x∗ is not dominated by any other solutions, it is called a Pareto optimal solution. The set of all x∗ is the Pareto optimal solution set, and the set of all f (x∗) is the Pareto front. The goal of MOPs is generally to find a non-dominated solution set that approximates the Pareto front in the objective space. 1, ..., M ∈ { ≤ } 8101214161820f1012345f2−2−1012x1−2−1012x2−2−1012x1−2−1012x2−2−1012x1−2−1012x2 1) Pareto dominance-based MMEAs: The most representa- tive MMEA is Omni-optimizer [9], [14], which is an NSGA- II-based generic optimizer applicable to various types of prob- lems. The differences between Omni-optimizer and NSGA-II are fourfold: the Latin hypercube sampling-based population initialization, the so-called restricted mating selection, the (cid:15)- dominance-based non-dominated sorting, and the alternative crowding distance. In the restricted mating selection, an indi- vidual xa is randomly selected from the population. Then, xa and its nearest neighbor xb in the solution space are compared based on their non-domination levels and crowding distance values. The winner among xa and xb is selected as a parent. The crowding distance measure in Omni-optimizer takes into account both the objective and solution spaces. For the i- th individual xi in each non-dominated front R, the crowding distance in the objective space cobj is calculated in a similar manner to NSGA-II. In contrast, the crowding distance value of xi in the solution space csol is calculated in a different 1, ..., D manner. First, for each j , a “variable-wise” } ∈ { crowding distance value of xi in the j-th decision variable csol i,j is calculated as follows:  (cid:16) xi+1,j −xi,j  j −xmin xmax (cid:16) xi,j −xi−1,j 2 xmax j −xmin  xi+1,j −xi−1,j j −xmin xmax else if xi,j = xmax if xi,j = xmin otherwise csol i,j = (1) (cid:17) (cid:17) 2 , j j i i j j j where we assume that all individuals in R are sorted based on their j-th decision variable values in descending order. In (1), xmin j = minx∈R{ . Unlike the } crowding distance in the objective space, an infinitely large value is not given to a boundary individual. j = maxx∈R{ and xmax xj xj } Then, an “individual-wise” crowding distance value csol i = ((cid:80)D is calculated as follows: csol i,j )/D. The average value csol avg of all individual-wise crowding distance values is avg = ((cid:80)|R| also calculated as follows: csol . Finally, the crowding distance value ci of xi is obtained as follows: j=1 csol i=1 csol i )/ | R | i (cid:40) ci = cobj max i { cobj min i { , csol i } , csol i } i > cobj if cobj otherwise avg or csol i > csol avg , (2) where cobj avg is the average value of all crowding distance values in the objective space. As shown in (2), ci in Omni-optimizer is the combination of cobj . Due to its alternative crowding distance, the results presented in [9] showed that Omni-optimizer finds more diverse solutions than NSGA-II. and csol i i In addition to Omni-optimizer, two extensions of NSGA- II for MMOPs have been proposed. DNEA [23] is similar to Omni-optimizer but uses two sharing functions in the objective and solution spaces. 
DNEA requires fine-tuning of two sharing niche parameters for the objective and solution spaces. The secondary criterion of DN-NSGA-II [24] is based on the crowding distance only in the solution space. DN-NSGA-II uses a solution distance-based mating selection. The following are other dominance-based MMEAs. An MMEA proposed in [25] utilizes DBSCAN [26] and the rake selection [27]. DBSCAN, which is a clustering method, is used for grouping individuals based on the distribution of 3 individuals in the solution space. The rake selection, which is a reference vector-based selection method similar to NSGA-III [28], is applied to individuals belonging to each niche for the environmental selection. SPEA2+ [5], [15] uses two archives Aobj and Asol to maintain diverse non-dominated individuals in the objective and solution spaces, respectively. While the environmental selection in Aobj is based on the density of individuals in the objective space similar to SPEA2 [13], that in Asol is based on the density of individuals in the solution space. For the mating selection in SPEA2+, neighborhood individuals in the objective space are selected only from Aobj. PQ,(cid:15)-MOEA [4], 4D-Miner [3], [29], and MNCA [19] are capable of handling dominated solutions for MMOPs with δ > 0. PQ,(cid:15)-MOEA uses the (cid:15)-dominance relation [30] so that an unbounded archive can maintain individuals with ac- ceptable quality according to the decision maker. Unlike other MMEAs, PQ,(cid:15)-MOEA does not have an explicit mechanism to maintain the solution space diversity. 4D-Miner was specially designed for functional brain imaging problems [3]. The population is initialized by a problem-specific method. 4D- Miner maintains dissimilar individuals in an external archive, whose size is ten times larger than the population size. The environmental selection in 4D-Miner is based on a problem- specific metric. Similar to DIOP [22] (explained later), MNCA simultaneously evolves multiple subpopulations P 1, ..., P S, where S is the number of subpopulations. In MNCA, the primary subpopulation P 1 aims to find an approximation that provides a target front for other of the Pareto front subpopulations P 2, ..., P S. While the update of P 1 is based on the same selection mechanism as in NSGA-II, the update of P 2, ..., P S is performed with a complicated method that takes into account both the objective and solution spaces. Although the above-mentioned MMEAs use genetic varia- tion operators (e.g., the SBX crossover and the polynomial mutation [12]), the following MMEAs are based on other approaches. Niching-CMA [20] is an extension of CMA- ES [31] for MMOPs by introducing a niching mechanism. The number of niches and the niche radius are adaptively adjusted in Niching-CMA. An aggregate distance metric in the objective and solution spaces is used to group individ- uals into multiple niches. For each niche, individuals with better non-domination levels survive to the next iteration. MO Ring PSO SCD [21], a PSO algorithm for MMOPs, uses a diversity measure similar to Omni-optimizer. However, MO Ring PSO SCD handles the boundary individuals in the objective space in an alternative manner. In addition, an index- based ring topology is used to create niches. Two extensions of artificial immune systems [32] have been proposed for MMOPs: omni-aiNet [18] and cob-aiNet [33]. These two methods use a modified version of the polynomial mutation [12]. 
The primary and secondary criteria of omni-aiNet are based on (cid:15)-nondomination levels [30] and a grid operation, respectively. In addition, omni-aiNet uses suppression and insertion operations. While the suppression operation deletes an inferior individual, the insertion operation adds new individuals to the population. The population size is not constant due to these two operations. The primary and secondary criteria of cob-aiNet are based on the fitness assignment method in SPEA2 [13] and a diversity measure with a sharing function in the solution space, respectively. The maximum population size is introduced in cob-aiNet. × × 2) Decomposition-based MMEAs: A three-phase multi- start method is proposed in [16]. First, (1, λ)-ES is carried out on each M objective functions K times to obtain M K best-so-far solutions. Then, an unsupervised clustering method is applied to the M K solutions to detect the number of equivalent Pareto optimal solution subsets s. Finally, s runs of (1, λ)-ES are performed on each N single-objective sub- problem decomposed by the Tchebycheff function. The initial individual of each run is determined in a chained manner. The best solution found in the j-th subproblem becomes an initial individual of (1, λ)-ES for the j + 1-th subproblem ). It is expected that s equivalent solutions (j } are found for each N decomposed subproblems. 1, ..., N ∈ { − 1 Two variants of MOEA/D [34] for MMOPs are proposed in [35], [36]. MOEA/D decomposes an M -objective problem into N single-objective subproblems using a set of weight vec- tors, assigning a single individual to each subproblem. Then, MOEA/D simultaneously evolves the N individuals. Unlike MOEA/D, the following two methods assign one or more individuals to each subproblem to handle the equivalency. The MOEA/D algorithm presented in [35] assigns K indi- viduals to each subproblem. The selection is conducted based on a fitness value combining the PBI function value [34] and two distance values in the solution space. K dissimilar individuals are likely to be assigned to each subproblem. The main drawback of the above methods [16], [35] is the difficulty in setting a proper value for K, because it is problem dependent. MOEA/D-AD [36] does not need such a parameter but requires a relative neighborhood size L. For each iteration, a child u is assigned to the j-th subproblem whose weight vector is closest to f (u), with respect to the perpendicular distance. Let X be a set of individuals already assigned to the jth-subproblem. If x in X is within the L nearest individuals from the child u in the solution space, x and u are compared based on their scalarizing function values g(x) and g(u). If g(u) g(x), x is deleted from the population and u enters the population. u also enters the population when no x in X is in the L neighborhood of u in the solution space. ≤ 3) Set-based MMEAs: DIOP [22] is a set-based MMEA that can maintain dominated solutions in the population. In the set-based optimization framework [37], a single solution in the upper level represents a set of solutions in the lower level (i.e., a problem). DIOP simultaneously evolves an archive A and a target population T . While A approximates only the Pareto front and is not shown to the decision maker, T obtains diverse solutions with acceptable quality by maximizing the following G indicator: G(T ) = wobjDobj(T ) + wsolDsol(T ). Here, wobj + wsol = 1. Dobj is a performance indicator in the objective space, and Dsol is a diversity measure in the solution space. 
In [22], Dobj and Dsol were specified by the hypervolume indicator [38] and the Solow-Polasky diversity measure [39], respectively. Meta-individuals in T that are (cid:15)- dominated by any meta-individuals in A are excluded for the calculation of the G metric. At the end of the search, T is likely to contain meta-individuals (i.e., solution sets of a 4 TABLE I: Properties of 18 MMEAs. µ and nmax denote the population size and the maximum number of evaluations used in each paper, respectively. “δ > 0” indicates whether each method can handle MMOPs with δ > 0. “U” means whether each method has an unbounded population/archive. Initial µ values are reported for omni- aiNet, cob-aiNet, PQ,(cid:15)-MOEA, and MOEA/D-AD. µ and nmax used in the post-processing step are shown for a method in [17]. MMEAs SPEA2+ [5], [15] Omni-optimizer [9], [14] 4D-Miner [3], [29] omni-aiNet [18] Niching-CMA [20] e A method in [25] c n a n i m o D PQ,(cid:15)-MOEA [4] cob-aiNet [33] MNCA [19] DN-NSGA-II [24] MO Ring PSO SCD [21] DNEA [23] . A method in [16] p m o c e D A method in [35] MOEA/D-AD [36] t DIOP [22] e S A method in [40] . A method in [17] P Year 2004 2005 2005 2006 2009 2010 2011 2011 2013 2016 2017 2018 2007 2018 2018 2010 2012 2009 µ 100 nmax 50 000 1 000 500 000 200 400 50 8 000 40 000 50 000 Not clearly reported 200 100 100 800 800 210 10 1 120 100 50 200 20 5 000 40 000 100 000 80 000 80 000 63 000 20 000 89 600 30 000 100 000 400 000 2 000 δ > 0 U (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) problem) (cid:15)-nondominated by meta-individuals in A. Another set-based MMEA is presented in [40]. Unlike DIOP, the proposed method evolves only a single population. Whereas DIOP maximizes the weighted sum of values of Dobj and Dsol, the proposed method treats Dobj and Dsol as meta two-objective functions. NSGA-II is used to simultaneously maximize Dobj and Dsol in [40]. 4) A post-processing approach: As pointed out in [17], it is not always necessary to locate all Pareto optimal solutions. Suppose that a set of non-dominated solutions A has already been obtained by an MOEA (e.g., NSGA-II) but not an MMEA (e.g., Omni-optimizer). After the decision maker has selected the final solution xfinal from A according to her/his preference in the objective space, it is sufficient to search solutions whose objective vectors are equivalent to f (xfinal). 1 x = = f (x) 2, f meta f (xfinal) 2 (cid:107) (x) A post-processing approach is proposed in [17] to han- dle this problem. First, the proposed approach formulates a meta constrained two-objective minimization problem where 2, and f meta 1 −(cid:107) (cid:107) − gmeta(x) = f meta θ < 0. The meta objective functions and f meta f meta represent the distance between x and xfinal in 2 1 the objective and solution spaces. Thus, smaller f meta (x) and f meta (x) indicate that x is similar to xfinal in the objective 2 space and far from xfinal in the solution space, respectively. The constraint gmeta with θ > 0 prevents f meta (x) from becoming an infinitely small value in unbounded problems. NSGA-II is used as a meta-optimizer in [17]. xfinal − − (cid:107) 1 2 5) Open issues: Table I summarizes the properties of the 18 MMEAs reviewed in this section. While some MMEAs require an extra parameter (e.g., L in MOEA/D-AD), Omni-optimizer does not require such a parameter. This parameter-less property is an advantage of Omni-optimizer. However, Omni-optimizer is a Pareto dominance-based MMEA. 
Since dominance-based MOEAs perform poorly on most MOPs with more than three objectives [28], Omni-optimizer is unlikely to handle many objectives. In addition to MMEAs, some MOEAs handling the solution space diversity have been proposed, such as GDEA [41], DEMO [42], DIVA [43], “MMEA” [44], DCMMMOEA [45], and MOEA/D-EVSD [46]. Note that solution space diversity management in these MOEAs aims to efficiently approximate the Pareto front for MOPs. Since these methods were not designed for MMOPs, they are likely to perform poorly for MMOPs. For example, “MMEA”, which stands for a model- based multi-objective evolutionary algorithm, cannot find mul- tiple equivalent Pareto optimal solutions [44]. Nevertheless, helpful clues for designing an efficient MMEA can be found in these MOEAs. The performance of MMEAs has not been well analyzed. The post-processing method may perform better than MMEAs when the objective functions of a real-world problem are computationally expensive. However, an in-depth investigation is necessary to determine which approach is more practical. Whereas the population size µ and the maximum number of evaluations nmax were set to large values in some studies, they were set to small values in other studies. For example, Table I shows that µ = 1 000 and nmax = 500 000 for Omni-optimizer, while µ = 50 and nmax = 50 000 for Niching-CMA. It is unclear whether an MMEA designed with large µ and nmax values works well with small µ and nmax values. While MMOPs with four or more objectives appear in real-world applications (e.g., five-objective rocket engine design problems [7]), most MMEAs have been applied to only two-objective MMOPs. A large-scale benchmarking study is necessary to address the above-mentioned issues. The decision maker may want to examine diverse dominated solutions. As explained in Section I, dominated solutions found by PQ,(cid:15)-MOEA support the decision making in space mission design problems [4]. The results presented in [29] showed that diverse solutions found by 4D-Miner help neuro- scientists analyze brain imaging data. Although most MMEAs assume MMOPs with δ = 0 as shown in Table I, MMEAs that can handle MMOPs with δ > 0 may be more practical. Since most MMEAs (e.g., Omni-optimizer) remove dominated they are unlikely to find individuals from the population, diverse dominated solutions. Some specific mechanisms are necessary to handle MMOPs with δ > 0 (e.g., the multiple subpopulation scheme in DIOP and MNCA). As explained at the beginning of this section, MMEAs need the three abilities (1)–(3). While the abilities (1) and (2) are needed to approximate the Pareto front, the ability (3) is needed to find equivalent Pareto optimal solutions. Most existing studies (e.g., [9], [20], [21], [36]) report that the abilities (1) and (2) of MMEAs are worse than those of MOEAs. For example, the results presented in [36] showed that Omni-optimizer, MO Ring PSO SCD, and MOEA/D- AD perform worse than NSGA-II in terms of IGD [47] (explained in Section V). If the decision maker is not interested in the distribution of solutions in the solution space, it would 5 be better to use MOEAs rather than MMEAs. The poor perfor- mance of MMEAs for multi-objective optimization is mainly due to the ability (3), which prevents MMEAs from directly approximating the Pareto front. This undesirable performance regarding the abilities (1) and (2) is an issue in MMEAs. 
What to learn from MSOPs: An online data repository • (https://github.com/mikeagn/CEC2013) that provides results of optimizers on the CEC2013 problem suite [48] is available for MSOPs. This repository makes the comparison of optimizers easy, facilitating constructive algorithm development. A simi- lar data repository is needed for studies of MMOPs. The number of maintainable individuals in the popula- tion/archive strongly depends on the population/archive size. However, it is usually impossible to know the number of equivalent Pareto optimal solutions of an MMOP a priori. The same issue can be found in MSOPs. To address this issue, the latest optimizers (e.g., dADE [49] and RS-CMSA [50]) have an unbounded archive that maintains solutions found during the search process. Unlike modern optimizers for MSOPs, Table I shows that only three MMEAs have such a mechanism. The adaptive population sizing mechanisms in omni-aiNet, PQ,(cid:15)-MOEA, and MOEA/D-AD are advantageous. A general strategy of using an unbounded (external) archive could im- prove the performance of MMEAs. IV. MULTI-MODAL MULTI-OBJECTIVE TEST PROBLEMS 2 and f2(y) = (y1 This section describes test problems for benchmarking MMEAs. Unlike multi-objective test problems (e.g., the DTLZ [51] test suite), multi-modal multi-objective test problems were explicitly designed such that they have multiple equiv- alent Pareto optimal solution subsets. The two-objective and two-variable SYM-PART1 [16] is one of the most represen- tative test problems for benchmarking MMEAs: f1(y) = (y1 +a)2 +y2 2. Here, y1 and y2 are t1(c+2a) translated values of x1 and x2 as follows: y1 = x1 and y2 = x2 t2b. In SYM-PART1, a controls the region of Pareto optimal solutions, and b and c specify the positions of the Pareto optimal solution subsets. The so-called tile identifiers t1 and t2 are randomly selected from 1, 0, 1 . } Fig. 3(a) shows the shape of the Pareto optimal solutions of SYM-PART1 with a = 1, b = 10, and c = 8. As shown in Fig. 3(a), the equivalent Pareto optimal solution subsets are on nine lines in SYM-PART1. a)2 +y2 {− − − − the Superspheres problem [52], Other test problems include the Two-On-One [11] problem, the Omni-test problem [9], the SYM-PART2 and SYM-PART3 problems [16], the EBN problem [53], the two SSUF problems [24], and the Polygon problems [54]. Fig. 3 also shows the distribution of their Pareto optimal solutions. Since there are an infinite number of Pareto optimal solutions in the EBN problem, we do not show them. Source codes of the ten problems can be downloaded from the supplementary website (https://sites.google.com/view/emmo/). In Omni-test, equivalent Pareto optimal solution subsets are regularly located. SYM-PART2 is a rotated version of SYM- PART1. SYM-PART3 is a transformed version of SYM- PART2 using a distortion operation. The Superspheres prob- lem with D = 2 has six equivalent Pareto optimal solution 6 TABLE II: Properties of multi-modal multi-objective test problems, where M , D, and P denote the number of objectives, design variables, and equivalent Pareto optimal solution subsets, respectively. If a problem has irregularity, the shapes of its multiple equivalent Pareto optimal solution subsets differ from each other. 
(a) SYM-PART1 (b) SYM-PART2 (c) SYM-PART3 Test problems SYM-PART problems [16] Two-On-One problem [11] Omni-test problem [9] Superspheres problem [52] EBN problem [53] M 2 2 2 2 2 Polygon problems [54] Any (d) Two-On-One (e) Omni-test (f) Superspheres MMF suite [21] HPS suite [57] SSUF problems [24] 2 2 2 Irregularity (cid:88) D 2 2 Any Any Any 2 2 2 P 9 2 3D Unknown ∞ Any 2 2 or 4 Any Any (g) SSUF1 (h) SSUF3 (i) Polygon Fig. 3: Distribution of the Pareto optimal solutions for the eight problems. Only x1 and x2 are shown on Omni-test. subsets. However, the number of its P is unknown for D > 2. EBN can be considered as a real-coded version of the so-called binary one-zero max problem. All solutions in the solution space are Pareto optimal solutions. SSUF1 and SSUF3 are extensions of the UF problems [55] to MMOPs. There are two symmetrical Pareto optimal solution subsets in SSUF1 and SSUF3. Polygon is an extension of the distance minimization problems [56] to MMOPs, where P equivalent Pareto optimal solution subsets are inside of P regular M -sided polygons. In addition, the eight MMF problems are presented in [21]. Similar to SSUF1 and SSUF3, the MMF problems are derived from the idea of designing a problem that has multiple equiv- alent Pareto optimal solution subsets by mirroring the original one. A bottom-up framework for generating scalable test problems with any D is proposed in [57]. P equivalent Pareto optimal solution subsets are in P hyper-rectangular located in the solution space similar to the SYM-PART problems. While the first k variables play the role of “position” parameters in the solution space, the other D k variables represent “distance” parameters. The six HPS problem instances were constructed using this framework in [57]. − If a given problem has the multi-modal fitness landscape, it may have multiple non-Pareto fronts whose shapes are similar to the true Pareto front. Such a problem (e.g., ZDT4 [58]) is referred to as a multi-frontal test problem [59]. If the δ value (defined in Subsection II-2) is sufficiently large, a multi-frontal test problem can be regarded as a multi-modal multi-objective test problem. In fact, ZDT4 was used in [19] as a test problem. The Kursawe problem [60] is a multi-modal and nonseparable test problem with a disconnected Pareto front. The Kursawe problem has two fronts in the objective space similar to multi- frontal problems. Thus, the Kursawe problem can be used as a multi-modal multi-objective test problem. 1) Open issues: Table II summarizes the properties of multi-modal multi-objective test problems reviewed here. In Table II, P of Omni-test adheres to [22]. Table II indicates that scalable test problems do not exist, in terms of M , D, and P . Although the SYM-PART problems have some desirable properties (e.g., their adjustable and straightforward Pareto optimal solution shapes), M , D, and P are constant in these problems. Only Polygon is scalable in M . While most test problems have only two design variables, Omni-test and HPS are scalable in D. Unfortunately, P increases exponentially with increased D in Omni-test due to the combinatorial nature of variables. Although the idea of designing scalable SYM-PART and Polygon problems to D is presented in [61], [62], they have similar issues to Omni-test. Although the HPS problems do not have such an issue, it is questionable whether there exists a real-world problem with design variables affecting only the distance between the objective vectors and the Pareto front. 
Only SYM- PART3 has irregularity. Since the shapes of the Pareto optimal solution subsets may be different from each other in real-world problems, we believe that test problems with the irregularity are necessary to evaluate the performance of MMEAs. The performance of an MMEA with an absolutely defined niching radius (e.g., DNEA) is likely to be overestimated in test problems without irregularity. In addition, the relation between synthetic test problems and real-world problems has not been discussed. The idea of designing a Polygon problem based on a real-world map is presented in [63]. However, this does not mean that such a Polygon problem is an actual real-world problem. What to learn from MSOPs: Some construction methods • for multi-modal single-objective test problems are available, such as the software framework proposed in [64], the con- struction method for various problems [65], and Ahrari and Deb’s method [66]. Borrowing ideas from such sophisticated construction methods is a promising way to address the above-mentioned issues of multi-modal multi-objective test −15015x1−15015x2−15015x1−15015x2−8−4048x1−15015x2−2−1012x1−2−1012x20123456x10123456x20π/4π/2x1012345x2123x1−101x20246810x1×10−105101520x2×10−10246810x10246810x2 problems. In [64], R¨onkk¨onen et al. present eight desirable properties for multi-modal single-objective problem generators such as scalability in D, control of the number of global and local optima, and regular and irregular distributions of optima. These eight properties can be a useful guideline for designing multi-modal multi-objective problem generators. V. PERFORMANCE INDICATORS FOR MMEAS Performance indicators play an important role in quanti- tatively evaluating the performance of MOEAs as well as MMEAs. Since performance indicators for MOEAs consider only the distribution of objective vectors (e.g., the hypervol- ume, GD, and IGD indicators [38], [47]), they cannot be used to assess the ability of MMEAs to find multiple equivalent Pareto optimal solutions. For this reason, some indicators have been specially designed for MMEAs. Performance indicators for MMEAs can be classified into two categories: simple extensions of existing performance indicators for MOEAs and specific indicators based on the distributions of solutions. IGDX [4], [44] is a representative example of the first approach. The IGD and IGDX indicators are given as follows: 7 TABLE III: Properties of performance indicators for MMEAs (convergence to Pareto optimal solution subsets, diversity, uniformity, spread, the use of reference solution sets, and possibility to compare solution sets with different sizes). Indicators GDX [4] IGDX [4], [44] Hausdorff distance [4] CR [21] PSP [21] Pairwise distance [20] CS [16] SPS [16] Solow-Polasky [39] PSV [57] Conv. (cid:88) Div. Unif. Spr. Dif. Ref. (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) IGD(A) = IGDX(A) = 1 |A∗| 1 |A∗|   (cid:88) z∈A∗  (cid:88)  z∈A∗ ED(cid:0)f (x), f (z)(cid:1)(cid:111) (cid:110) min x∈A   , ED(cid:0)x, z(cid:1)(cid:111) (cid:110) min x∈A   , (3) (4) where A is a set of solutions obtained by an MMEA and A∗ is a set of reference solutions in the Pareto optimal solution set. 
ED(x1, x2) denotes the Euclidean distance between x1 and x2. While A with a small IGD value is a good approximation of the Pareto front, A with a small IGDX approximates Pareto optimal solutions well. Other indicators in the first category include GDX [4], the Hausdorff distance indicator [67] in the solution space [4], CR [21], and PSP [21]. GDX is a GD indicator in the solution space similar to IGDX. CR is an alternative version of the maximum spread [38] to measure the spread of A. PSP is a combination of IGDX and CR. Performance indicators in the second category include the mean of the pairwise distance between two solutions [20], CS [16], SPS [16], the Solow-Polasky diversity measure [39] used in [22], [40], and PSV [57]. CS is the number of Pareto optimal solution subsets covered by at least one individual. SPS is the standard deviation of the number of solutions close to each Pareto optimal solution subset. PSV is the percentage of the volume of A in the volume of A∗ in the solution space. 1) Open issues: Table III shows the properties of perfor- mance indicators for MMEAs reviewed in this section, where the properties are assessed based on the description of each indicator. While the properties of the performance indicators for MOEAs have been examined (e.g., [38], [67]), those for MMEAs have not been well analyzed. Performance indicators for MMEAs should be able to evaluate the three abilities (1)–(3) explained in Section III. Although IGDX is frequently used, it should be noted that IGDX does not evaluate the distribution of solutions in the objective space. Fig. 4 shows the distribution of two solu- tion sets A1 and A2 for SYM-PART1 in the solution and (a) A1 in the solution space (b) A2 in the solution space (c) A1 in the objective space (d) A2 in the objective space Fig. 4: Comparison of solution sets A1 and A2 for SYM-PART1. | | A2 and A1 | objective spaces, where are 27. While the | solutions in A1 are evenly distributed on one of the nine Pareto optimal solution subsets, the solutions in A2 are evenly distributed on all of them. Although A1 has 27 objective vectors that cover the Pareto front, A2 has only 3 equivalent objective vectors. The IGDX and IGD values of A1 and A2 are as follows: IGDX(A1) = 15.92, IGDX(A2) = 0.25, IGD(A1) = 0.06, and IGD(A2) = 0.81. We used 5 000 Pareto optimal solutions for A∗. Although A2 has a worse distribution in the objective space than A1, IGDX(A2) is significantly better than IGDX(A1). As demonstrated here, IGDX can evaluate the abilities (1) and (3) but cannot evaluate the ability (2) to find diverse solutions in the objective space. Since the other indicators in Table III do not take into account the distribution of objective vectors similar to IGDX, they are likely to have the same undesirable property. For a fair performance comparison, it is desirable to use the indicators −15015x1−15015x2−15015x1−15015x201234f101234f201234f101234f2 for MOEAs (e.g., hypervolume and IGD) in addition to the indicators for MMEAs in Table III. What to learn from MSOPs: It is desirable that the indicators • for multi-modal single-objective optimizers evaluate a solution set without the knowledge of the fitness landscape such as the positions of the optima and the objective values of the optima [68]. The same is true for indicators for MMEAs. Table III shows that most indicators (e.g., IGDX) require A∗. Since A∗ is usually unavailable in real-world problems, it is desirable that indicators for MMEAs evaluate A without A∗. 
Since the archive size in modern multi-modal single- objective optimizers is unbounded in order to store a number of local optima [10], most indicators in this field can handle solution sets with different sizes (e.g., the peak ratio and the success rate [48]). For the same reason, it is desirable that indicators for MMEAs evaluate solution sets with different sizes in a fair manner. However, it is difficult to directly use indicators for multi-modal single-objective optimizers to evaluate MMEAs. VI. CONCLUSION The contributions of this paper are threefold. The first contribution is that we reviewed studies in this field in terms of definitions of MMOPs, MMEAs, test problems, and perfor- mance indicators. It was difficult to survey the existing studies of MMOPs for the reasons described in Section I. Our review helps to elucidate the current progress on evolutionary multi- modal multi-objective optimization. The second contribution is that we clarified open issues in this field. In contrast to multi-modal single-objective optimization, multi-modal multi- objective optimization has not received much attention despite its practical importance. Thus, some critical issues remain. The third contribution is that we pointed out an issue as- sociated with performance indicators for MMEAs. Reliable performance indicators are necessary for the advancement of MMEAs. We hope that this paper will encourage researchers to work in this research area, which is not well explored. ACKNOWLEDGMENT This work was supported by the Program for Guang- dong Introducing Innovative and Enterpreneurial Teams (Grant No. 2017ZT07X386), Shenzhen Peacock Plan (Grant No. KQTD2016112514355531), the Science and Technol- ogy Innovation Committee Foundation of Shenzhen (Grant No. ZDSYS201703031748284), the Program for Univer- sity Key Laboratory of Guangdong Province (Grant No. 2017KSYS008), and National Natural Science Foundation of China (Grant No. 61876075). REFERENCES [1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons, 2001. [2] K. Miettinen, Nonlinear Multiobjective Optimization. Springer, 1998. [3] M. Sebag, N. Tarrisson, O. Teytaud, J. Lef`evre, and S. Baillet, “A Multi-Objective Multi-Modal Optimization Approach for Mining Stable Spatio-Temporal Patterns,” in IJCAI, 2005, pp. 859–864. [4] O. Sch¨utze, M. Vasile, and C. A. C. Coello, “Computing the Set of Epsilon-Efficient Solutions in Multiobjective Space Mission Design,” JACIC, vol. 8, no. 3, pp. 53–70, 2011. 8 [5] T. Hiroyasu, S. Nakayama, and M. Miki, “Comparison study of SPEA2+, SPEA2, and NSGA-II in diesel engine emissions and fuel economy problem,” in IEEE CEC, 2005, pp. 236–242. [6] M. Preuss, C. Kausch, C. Bouvy, and F. Henrich, “Decision Space Diversity Can Be Essential for Solving Multiobjective Real-World Problems,” in MCDM, 2008, pp. 367–377. [7] F. Kudo, T. Yoshikawa, and T. Furuhashi, “A study on analysis of design variables in Pareto solutions for conceptual design optimization problem of hybrid rocket engine,” in IEEE CEC, 2011, pp. 2558–2562. [8] J. Togelius, M. Preuss, and G. N. Yannakakis, “Towards multiobjective procedural map generation,” in PCGames, 2010. [9] K. Deb and S. Tiwari, “Omni-optimizer: A generic evolutionary algo- rithm for single and multi-objective optimization,” EJOR, vol. 185, no. 3, pp. 1062–1087, 2008. [10] X. Li, M. G. Epitropakis, K. Deb, and A. P. Engelbrecht, “Seeking Multiple Solutions: An Updated Survey on Niching Methods and Their Applications,” IEEE TEVC, vol. 21, no. 4, pp. 
518–538, 2017. [11] M. Preuss, B. Naujoks, and G. Rudolph, “Pareto Set and EMOA Behavior for Simple Multimodal Multiobjective Functions,” in PPSN, 2006, pp. 513–522. [12] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE TEVC, vol. 6, no. 2, pp. 182–197, 2002. [13] E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the Strength Pareto Evolutionary Algorithm,” ETHZ, Tech. Rep., 2001. [14] K. Deb and S. Tiwari, “Omni-optimizer: A Procedure for Single and Multi-objective Optimization,” in EMO, 2005, pp. 47–61. [15] M. Kim, T. Hiroyasu, M. Miki, and S. Watanabe, “SPEA2+: Improving the Performance of the Strength Pareto Evolutionary Algorithm 2,” in PPSN, 2004, pp. 742–751. [16] G. Rudolph, B. Naujoks, and M. Preuss, “Capabilities of EMOA to Detect and Preserve Equivalent Pareto Subsets,” in EMO, 2007, pp. 36– 50. [17] G. Rudolph and M. Preuss, “A multiobjective approach for finding equiv- alent inverse images of pareto-optimal objective vectors,” in MCDM, 2009, pp. 74–79. [18] G. P. Coelho and F. J. V. Zuben, “omni-aiNet: An Immune-Inspired Approach for Omni Optimization,” in ICARIS, 2006, pp. 294–308. [19] E. M. Zechman, M. H. G., and M. E. Shafiee, “An evolutionary algorithm approach to generate distinct sets of non-dominated solutions for wicked problems,” Eng. Appl. of AI, vol. 26, no. 5-6, pp. 1442–1457, 2013. [20] O. M. Shir, M. Preuss, B. Naujoks, and M. T. M. Emmerich, “Enhancing Decision Space Diversity in Evolutionary Multiobjective Algorithms,” in EMO, 2009, pp. 95–109. [21] C. Yue, B. Qu, and J. Liang, “A Multi-objective Particle Swarm Optimizer Using Ring Topology for Solving Multimodal Multi-objective Problems,” IEEE TEVC, 2018 (in press). [22] T. Ulrich, J. Bader, and L. Thiele, “Defining and Optimizing Indicator- Based Diversity Measures in Multiobjective Search,” in PPSN, 2010, pp. 707–717. [23] Y. Liu, H. Ishibuchi, Y. Nojima, N. Masuyama, and K. Shang, “A Double-Niched Evolutionary Algorithm and Its Behavior on Polygon- Based Problems,” in PPSN, 2018, pp. 262–273. [24] J. J. Liang, C. T. Yue, and B. Y. Qu, “Multimodal multi-objective optimization: A preliminary study,” in IEEE CEC, 2016, pp. 2454–2461. [25] O. Kramer and H. Danielsiek, “DBSCAN-based multi-objective niching to approximate equivalent pareto-subsets,” in GECCO, 2010, pp. 503– 510. [26] M. Ester, H. Kriegel, J. Sander, and X. Xu, “A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise,” in KDD, 1996, pp. 226–231. [27] O. Kramer and P. Koch, “Rake Selection: A Novel Evolutionary Multi- Objective Optimization Algorithm,” in KI, 2009, pp. 177–184. [28] K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints,” IEEE TEVC, vol. 18, no. 4, pp. 577–601, 2014. [29] V. Krmicek and M. Sebag, “Functional Brain Imaging with Multi- objective Multi-modal Evolutionary Optimization,” in PPSN, 2006, pp. 382–391. [30] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining Conver- gence and Diversity in Evolutionary Multiobjective Optimization,” Evol. Comput., vol. 10, no. 3, pp. 263–282, 2002. [31] N. Hansen and A. Ostermeier, “Completely derandomized self- adaptation in evolution strategies,” Evol. Comput., vol. 9, no. 2, pp. 159–195, 2001. 9 [58] E. Zitzler, K. Deb, and L. Thiele, “Comparison of Multiobjective Evolutionary Algorithms: Empirical Results,” Evol. Comput., vol. 8, no. 2, pp. 
173–195, 2000. [Online]. Available: http://dx.doi.org/10. 1162/106365600568202 [59] S. Huband, P. Hingston, L. Barone, and R. L. While, “A review of multiobjective test problems and a scalable test problem toolkit,” IEEE TEVC, vol. 10, no. 5, pp. 477–506, 2006. [60] F. Kursawe, “A Variant of Evolution Strategies for Vector Optimization,” in PPSN, 1990, pp. 193–197. [61] V. L. Huang, A. K. Qin, K. Deb, E. Zitzler, P. N. Suganthan, J. J. Liang, M. Preuss, and S. Huband, “Problem Definitions for Performance Assessment on Multi-objective Optimization Algorithms,” NTU, Tech. Rep., 2007. [62] H. Ishibuchi, M. Yamane, N. Akedo, and Y. Nojima, “Many-objective and many-variable test problems for visual examination of multiobjective search,” in IEEE CEC, 2013, pp. 1491–1498. [63] H. Ishibuchi, N. Akedo, and Y. Nojima, “A many-objective test problem for visually examining diversity maintenance behavior in a decision space,” in GECCO, 2011, pp. 649–656. [64] J. R¨onkk¨onen, X. Li, V. Kyrki, and J. Lampinen, “A framework for generating tunable test functions for multimodal optimization,” Soft Comput., vol. 15, no. 9, pp. 1689–1706, 2011. [65] B. Y. Qu, J. J. Liang, Z. Y. Wang, Q. Chen, and P. N. Suganthan, “Novel benchmark functions for continuous multimodal optimization with comparative results,” SWEVO, vol. 26, pp. 23–34, 2016. [66] A. Ahrari and K. Deb, “A Novel Class of Test Problems for Performance Evaluation of Niching Methods,” IEEE TEVC, vol. 22, no. 6, pp. 909– 919, 2018. [67] O. Sch¨utze, X. Esquivel, A. Lara, and C. A. C. Coello, “Using the Averaged Hausdorff Distance as a Performance Measure in Evolutionary Multiobjective Optimization,” IEEE TEVC, vol. 16, no. 4, pp. 504–522, 2012. [68] J. Mwaura, A. P. Engelbrecht, and F. V. Nepocumeno, “Performance measures for niching algorithms,” in IEEE CEC, 2016, pp. 4775–4784. [32] D. Dasgupta, S. Yu, and F. Ni˜no, “Recent Advances in Artificial Immune Systems: Models and Applications,” Appl. Soft Comput., vol. 11, no. 2, pp. 1574–1587, 2011. [33] G. P. Coelho and F. J. V. Zuben, “A Concentration-Based Artificial Immune Network for Multi-objective Optimization,” in EMO, 2011, pp. 343–357. [34] Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm based on decomposition,” IEEE TEVC, vol. 11, no. 6, pp. 712–731, 2007. [35] C. Hu and H. Ishibuchi, “Incorporation of a decision space diversity maintenance mechanism into MOEA/D for multi-modal multi-objective optimization,” in GECCO (Companion), 2018, pp. 1898–1901. [36] R. Tanabe and H. Ishibuchi, “A Decomposition-Based Evolutionary Algorithm for Multi-modal Multi-objective Optimization,” in PPSN, 2018, pp. 249–261. [37] E. Zitzler, L. Thiele, and J. Bader, “On Set-Based Multiobjective Optimization,” IEEE TEVC, vol. 14, no. 1, pp. 58–79, 2010. [38] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. da Fon- seca, “Performance assessment of multiobjective optimizers: an analysis and review,” IEEE TEVC, vol. 7, no. 2, pp. 117–132, 2003. [39] A. R. Solow and S. Polasky, “Measuring biological diversity,” Environ. Ecol. Stat., vol. 1, no. 2, pp. 95–103, 1994. [40] H. Ishibuchi, M. Yamane, N. Akedo, and Y. Nojima, “Two-objective solution set optimization to maximize hypervolume and decision space diversity in multiobjective optimization,” in SCIS, 2012, pp. 1871–1876. [41] A. Toffolo and E. Benini, “Genetic Diversity as an Objective in Multi- Objective Evolutionary Algorithms,” Evol. Comput., vol. 11, no. 2, pp. 151–167, 2003. [42] T. Robiˇc and B. 
Filipiˇc, “DEMO: differential evolution for multiobjective optimization,” in EMO, 2005, pp. 520–533. [43] T. Ulrich, J. Bader, and E. Zitzler, “Integrating decision space diversity into hypervolume-based multiobjective search,” in GECCO, 2010, pp. 455–462. [44] A. Zhou, Q. Zhang, and Y. Jin, “Approximating the Set of Pareto- Optimal Solutions in Both the Decision and Objective Spaces by an Estimation of Distribution Algorithm,” IEEE TEVC, vol. 13, no. 5, pp. 1167–1189, 2009. [45] H. Xia, J. Zhuang, and D. Yu, “Combining Crowding Estimation in Objective and Decision Space With Multiple Selection and Search Strategies for Multi-Objective Evolutionary Optimization,” IEEE Trans. Cyber., vol. 44, no. 3, pp. 378–393, 2014. [46] J. C. Castillo, C. Segura, A. H. Aguirre, G. Miranda, and C. Le´on, “A multi-objective decomposition-based evolutionary algorithm with enhanced variable space diversity control,” in GECCO (Companion), 2017, pp. 1565–1571. [47] C. A. C. Coello and M. R. Sierra, “A Study of the Parallelization of a Coevolutionary Multi-objective Evolutionary Algorithm,” in MICAI, 2004, pp. 688–697. [48] X. Li, A. Engelbrecht, and M. G. Epitropakis, “Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization,” RMIT Univ., Tech. Rep., 2013. [49] M. G. Epitropakis, X. Li, and E. K. Burke, “A dynamic archive niching differential evolution algorithm for multimodal optimization,” in IEEE CEC, 2013, pp. 79–86. [50] A. Ahrari, K. Deb, and M. Preuss, “Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations,” Evol. Comput., vol. 25, no. 3, pp. 439–471, 2017. [51] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable Test Prob- lems for Evolutionary Multi-Objective Optimization,” in Evolutionary Multiobjective Optimization. Theoretical Advances and Applications. Springer, 2005, pp. 105–145. [52] M. T. M. Emmerich and A. H. Deutz, “Test problems based on lam´e superspheres,” in EMO, 2006, pp. 922–936. [53] N. Beume, B. Naujoks, and M. T. M. Emmerich, “SMS-EMOA: multiobjective selection based on dominated hypervolume,” EJOR, vol. 181, no. 3, pp. 1653–1669, 2007. [54] H. Ishibuchi, Y. Hitotsuyanagi, N. Tsukamoto, and Y. Nojima, “Many- Objective Test Problems to Visually Examine the Behavior of Multiob- jective Evolution in a Decision Space,” in PPSN, 2010, pp. 91–100. [55] Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari, “Multiobjective optimization Test Instances for the CEC 2009 Special Session and Competition,” Univ. of Essex, Tech. Rep., 2008. [56] M. K¨oppen and K. Yoshida, “Substitute Distance Assignments in NSGA- II for Handling Many-objective Optimization Problems,” in EMO, 2007, pp. 727–741. [57] B. Zhang, K. Shafi, and H. A. Abbass, “On Benchmark Problems and Metrics for Decision Space Performance Analysis in Multi-Objective Optimization,” IJCIA, vol. 16, no. 1, pp. 1–18, 2017.
ai_researcher
2
Mining_Reasons_For_And_Against_Vaccination_From_Unstructured_Data_Using_Nichesourcing_and_AI_Data_Augmentation.pdf
Mining Reasons For And Against Vaccination From Unstructured Data Using Nichesourcing and AI Data Augmentation Damián Ariel Furman1,2, Juan Junqueras1, Z. Burçe Gümü¸slü3, Edgar Altszyler1,4, Joaquin Navajas5, Ophelia Deroy3, Justin Sulik3, 1 Universidad de Buenos Aires, 2 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), 3 Ludwig-Maximilians-Universität München, 4 Quantit, 5 Universidad Torcuato Di Tella Correspondence: [email protected] 4 2 0 2 n u J 8 2 ] L C . s c [ 1 v 1 5 9 9 1 . 6 0 4 2 : v i X r a Abstract We present Reasons For and Against Vaccina- tion (RFAV), a dataset for predicting reasons for and against vaccination, and scientific au- thorities used to justify them, annotated through nichesourcing and augmented using GPT4 and GPT3.5-Turbo. We show how it is possible to mine these reasons in non-structured text, un- der different task definitions, despite the high level of subjectivity involved and explore the impact of artificially augmented data using in- context learning with GPT4 and GPT3.5-Turbo. We publish the dataset and the trained models1 along with the annotation manual used to train annotators and define the task2. 1 Introduction Over the last decades there had been an increase of anti-vaccine propaganda and parents deciding not to vaccinate their children, which have caused outbreaks of diseases that had been previously con- sidered eliminated(Tafuri et al., 2014). The massive development of Internet and communication tech- nologies has provided a mean to facilitate informa- tion about vaccines and vaccination campaigns, but also a mean to spread misinformation(Kata, 2010). In this scenario, the development of technologies for automatically recognising what is being said about vaccines can help to rapidly identify new misinformation campaigns and elaborate informed counter-narratives to mitigate the risk they pose. In this work we present RFAV (Reasons For and Against Vaccination), a dataset with reasons for and against vaccination labeled through nichesourcing and expanded using GPT4 and GPT3.5, on web- sites downloaded from different sources, in English and Spanish. We also trained different language models using this dataset to automatically identify reasons obtaining promising results. Since this task 1https://huggingface.co/argmining-vaccines 2https://github.com/ArgMiningVaccination/RFAV- Dataset is highly subjective, we include an analysis of dif- ficulties of the annotation process and assess the capabilities of generative LLMs for data augmenta- tion in token classification tasks. 2 Previous Work Larson et al. (2022) defines Vaccine Hesitancy as "a state of indecision and uncertainty about vacci- nation before a decision is made (that) represents a time of vulnerability". Wilson and Wiysonge (2020) presented a thorough study concluding that "there is a significant relationship between organi- zation on social media and public doubts of vaccine safety". In this sense, automatic tools "have poten- tial to counter vaccine hesitancy"(Larson and Lin, 2024), as they can help analyze massive online con- tent. Skeppstedt et al. (2018) used topic models to manually code representative arguments about vaccines. (Qorib et al., 2023) applied sentiment analysis for analyzing twitter user’s stances about Covid-19 vaccines and reviewed other 14 studies that performed the same task. 
Torsi and Morante (2018) analyzed three annotation schemes for iden- tifying argument components using a corpus of structured essays and news about vaccination and found that to achieve acceptable IAA they needed to use a simple scheme with only one component that was not strictly argumentative on itself. We follow a similar approach. 3 Corpus creation To identify relevant web documents, we gener- ated a list of keywords related to vaccination, in- cluding complementary/alternative medicine top- ics as these are associated with vaccine hesitance (Browne et al., 2015). We used SERAPI to conduct Google and Bing searches with those keywords, retrieving URLs for the top 150 hits per search. As this was a scattergun approach, we next sought to boost the proportion of relevant pages. We consid- 1 ered that any web domain reached by at least 10 unique keywords from our list was likely relevant, so we conducted additional SERPAPI searches fo- cusing on those domains, retrieving up to 40 ad- ditional URLS per search per domain. Using the Trafilatura python package (Barbaresi, 2021) we parsed the scraped HTML text for each URL, filter- ing out documents with fewer than 100 words. We used the TextDescriptives python package (Hansen et al., 2023) to excise low-quality sections of text and the Presidio Analyzer to sanitize personal iden- tifying information. This yielded a total of 136934 documents in English and 94361 documents in Spanish. We further filtered these documents using a new list of keywords to preserve only those that were relevant to our purpose of annotation. After filtering, 94398 English documents (69% of the corpus) and 66257 Spanish documents (70% of the corpus) remained. be explicit if it can be inferred that the authority is being cited to provide credibility. More detailed descriptions of the categories de- fined and decisions about annotations with exam- ples showing typical cases can be found in the An- notation Manual. In order to assess how the different categories of stances affect both human and machine perfor- mance, we define three tasks related to identifica- tion of Reasons: A - a per-word binary classifica- tion indicating if a word is part of a reason or not; B - a per-word classification using six categories: 0 for words not belonging to a reason and 1 to 5 to indicate stances; C - a per-word classification using four categories (0 to 3) consisting of a compressed version of the stances that doesn’t account for the Weak vs Strong distinction. Considering also the task of predicting Scientific Authorities, this yields 4 different tasks. 3.1 Defining the task 3.2 Data annotation All documents were labeled with Reasons for or against vaccination and with Scientific Authorities that might be used to support either a pro or an anti-vaccine stance within the document. We define a Reason to be anything that can po- tentially be of interest to a person considering vac- cination. They are not necessarily argumentative, though all arguments will be considered reasons. Each example may have zero-to-many reasons and each reason will be also labeled with a ’Stance’ value using a Likert scale ranging from 1 to 5, defin- ing the stance that the text has towards vaccination in a broad sense, according to the following de- scriptions: 1. Strongly against vaccination 2. Weakly against vaccination 3. Ambiguous stance or undetermined 4. Weakly supporting vaccination 5. 
Strongly supporting vaccination Strong stances differ from Weak ones because they present themselves as conclusive and make their stance explicit. Weak stances, though relevant when considering vaccination, appear less conclu- sive and don’t have an explicit posture. We define a Scientific Authority to be any men- tion or invocation of scientists, publications, sci- entific, medical or governmental institutions used to provide credibility for potential reasons in the example (either for or against). The link between reasons and scientific authorities does not have to We took a random sample of 1000 documents in English and 1000 examples in Spanish. We re- moved non-ascii characters and truncated the text to 4000 words, avoiding leaving unfinished sen- tences when possible. Annotation was performed through nichesourcing by six psychology and phi- losophy advanced college students divided in two teams for English and Spanish. Nichesourcing is "a specific form of outsourcing that harnesses the computational efforts from niche groups of experts rather than the ‘faceless crowd’"(Boer et al., 2012). Annotators were asked to carefully review the an- notation manual and take a 2 hours course where vaccination-relevant concepts were explained and annotation criteria and examples were discussed. Each of them then, labeled 400 examples: 100 were common to the three annotators in the same team while the other 300 were exclusive to each individual. This resulted in a total of 1000 exam- ples labeled for each language, with 100 of those labeled three times used for calculating agreement. Annotation was done using the brat annotation tool (Stenetorp et al., 2012) in three stages. On the first and second stage all members of each team annotated 10 and 30 examples respectively from the common batch and did a pair-review discussing the cases where most disagreement arose. Criteria adopted on these stages was added to the annotation manual. On the third stage, all annotators from each team annotated the last 60 examples from the common batch and then the other 300 examples 2 ENGLISH Reason Compressed stance Stance Scientific Authority SPANISH Reason Compressed stance Stance Scientific Authority R1 0.50 0.40 0.36 0.41 R1 0.54 0.54 0.36 -0.002 R2 0.49 0.45 0.38 0.20 R2 0.50 0.46 0.39 0.16 R3 0.49 0.43 0.36 0.51 R3 0.48 0.47 0.40 0.31 All 0.49 0.44 0.36 0.45 All 0.49 0.47 0.39 0.25 Table 1: Cohen’s Kappa agreement for English and Spanish. Table shows the average of the agreement between each possible pair of annotators from the three annotators for each language, divided between each round of annotation (1 to 3) and considering all three rounds from their individual batch of examples. 3.2.1 Agreement Agreement was calculated, for each language, us- ing Cohen’s κ(Cohen, 1960) between all possible pairs of annotators among the three members of the same language team. The reported score is the average of the three agreement values calculated for each combination of the three annotators. Agreement was calculated in a per-word basis ac- cording to the four tasks defined in section 3.1. For “Reason” and “Scientific Authority”, agreement is calculated using a binary classification while for “Stance” and “Compressed Stance” is calculated for a multi-label classification with 6 and 4 classes respectively. Table 1 shows the agreement scores per compo- nent and per annotation round. 
Based on Cohen’s interpretation, Reason, Com- pressed Stances and Scientific Authority reach a moderate agreement, while Stance shows a fair agreement. For Spanish, Reason and Compressed Stances show a moderate agreement while Stance and Scientific Authority show a Fair agreement (be- ing Stance, very close to moderate). The different values on each round of annotation show that even though the amount of examples was increased pro- gressively, the level of agreement remains except for the case of Scientific Authority, where agree- ment improves with each round. This lead us to think that pair-reviews helped the annotators reach a better criteria. Considering that annotation was performed on unstructured documents from different sources not necessarily vaccine-related, we consider this level 3 of agreement to be satisfactory. Agreement is still on the same range of interpretation than (Poudyal et al., 2020), who achieved a Kappa agreement of 0.58 labeling arguments in a corpus of ECHR (Eu- ropean Court of Human Rights) decisions, consid- ering they worked on more argumentatively struc- tured examples. (Furman et al., 2023) labeled argu- mentative components as a binary classification ob- taining an agreement score that ranges from 0.52 to 0.64 depending on the category. Torsi and Morante (2018) report 57% annotator’s match ratio on claim detection on debates about vaccination, using a metric ranging from 0 to 1 instead of Cohen’s κ that ranges from -1 to 1. 3.2.2 Data Statistics Figure 1 shows the distribution of words inside recognized reasons that were labeled for each class of Stance. Reasons supporting vaccination (either Strongly or Weakly) constitute 71.59% of the total amount of Reasons labeled on the English dataset and 81,94% on the Spanish dataset, while Reasons against vaccination are 20.76% for English and 13.57% for Spanish. Reasons Strongly Against vaccination are specially scarce in both Spanish and English. Analyzing the data, we found that many of the documents that seemed to have been scraped from alternative medicine sources didn’t mentioned vac- cination and were filtered using the keywords as de- scribed in 3. We manually reviewed the 100 exam- ples used for agreement calculation and found that most documents from these sources that mentioned vaccines avoided taking explicit stance. Some ex- ample of reasons against vaccination found are ad- vertise possible secondary effects (sometimes sell- ing treatment), not enforcing vaccination during Covid pandemic and, from a scientific perspective, also narrowing scope of vaccination campaigns. 3.3 Data augmentation using GPT4 and GPT3.5Turbo We used OpenAI’s GPT4 to annotate 1000 new examples in each language and GPT3.5Turbo to annotate 2900 and 2400 new examples in English and Spanish respectively with Reasons and their Stances, spending U$S600 on GPT4 and U$S65 on GPT3.5-Turbo. We instructed the model to add [Be- the beginning of a gin:Reason:*Stance*] at reason and [End:Reason] at the end, where *Stance* stands for a value ranging from 1 to SP Words labeled Examples EN EN 24% 26% 8.3% 9.8% 12% 24% 24% 14% 44.2% 59.3% 14% 10% SP GPT4 GPT3.5 Human Table 2: Percentage of examples with no Reason labeled (left) and percentage of words that formed part of a Reason (right) in English and Spanish for annotations using GPT4, GPT3.5 and nichesourcing centage of words that are labeled as being part of a Reason, for English and Spanish and for corpus annotated through GPT4, GPT3.5-Turbo and nich- esourcing (humans). 
Though values are similar for GPT4 and GPT3.5-Turbo, it can be seen that hu- man annotators labeled proportionally almost half the amount of reasons. Figure 2 shows the dis- tribution of words inside recognized reasons that were labeled for each class of Stance, for GPT4 and GPT3.5-Turbo and for English and Spanish respec- tively. For GPT4, reasons supporting vaccination in the English dataset (either Strongly or Weakly) constitute 67.33% of the total amount of Reasons while Reasons against vaccination are 19.96%. In the Spanish dataset, reasons supporting vaccination labeled by GPT4 are 68.42% while reasons against vaccination are 22.48%. For GPT3.5-Turbo, reasons supporting vaccina- tion in the English dataset constitute 47.08% of the total amount of Reasons while Reasons against vaccination are 10.60%. In the Spanish dataset, reasons supporting vaccination labeled by GPT3.5- Turbo are 40.25% while reasons against vaccina- tion are 10.69%. In all cases, the proportion of Reasons labeled as "Strong Against" and specially the proportion of Reasons labeled as "Neutral" is much higher comparing to human annotators. In particular, for GPT3.5-Turbo, the Neutral class constitutes the ma- jority class by a significant percentage (42.3% for English and 49.1% for Spanish), while the "Strong Support" that constitutes the majority class in all other datasets is greatly diminished in comparison. We manually reviewed 20 examples that were not labeled with reasons by humans and found that GPT4 usually predicted sentences with a positive stance towards medical or scientific procedures in general as a "Support" reason and sentences with a positive stance towards Alternative Medicine re- lated concepts as "Against" disregarding if they Figure 1: Distribution of labeled words per annotation class on English and Spanish expert annotated dataset 5. The prompt also includes descriptions of the components to be annotated and instructions taken from the annotation manual, providing the model with similar information than human annotators. It also contains a three-shot learner with three labeled examples manually selected to contain reasons with diversity of stances. While most examples annotated this way re- spected the proposed format, we found 11 cases in English and 6 cases in Spanish where the end token [End:Reason] was added before any start to- ken. These cases were discarded and replaced with new ones. Though the prompt instructed the model not to modify in any sense the original example, we no- ticed that GPT4 and GPT3.5Turbo sometimes intro- duced some minor changes like adding punctuation symbols at the beginning or at the end of the exam- ple, correction of orthography mistakes or syntactic errors or abruptly ending generation though not all words from the original example were processed. We considered that these cases didn’t constitute a significant change over the original example and found that the result of the annotation could be used without much problems for training models on all proposed tasks. 3.3.1 Data Statistics Table 2 shows the percentage of the examples that have no Reason labeled on them and also the per- 4 were referring to vaccination, while GPT3.5-Turbo usually labeled them as Neutral. Apart from that, we found many annotations that seemed to be rea- sonable but that differed with the criteria taken by the human annotator. 
4 Experiments

Five pre-trained models (two in English, two in Spanish and one multilingual) were fine-tuned using the English portion, the Spanish portion, and both portions of the corpus respectively, to automate the tasks defined in section 3.1. Datasets were partitioned randomly into three parts for training, development and testing, respecting a proportion of 80%, 10% and 10% respectively. We explored three different learning rate values (2e-05, 1e-06 and 2e-06) and kept the model that had the best F1 score on the development partition. We present a description of the models used:

RoBERTa (Liu et al., 2019) is a transformer English language model based on BERT (Devlin et al., 2019) that established a new state of the art for 4 out of 9 GLUE tasks and matched the state of the art on 2 others.

Longformer (Beltagy et al., 2020) is a transformer-based English language model specially designed for processing long documents. It is initialized using the weights of RoBERTa and then further pre-trained on a corpus of 100K long documents to induce learning of long-range dependencies.

XLM-Roberta (Conneau et al., 2020) is a transformer model based on the RoBERTa architecture but trained with 2.5TB of data covering 100 different languages.

BETO (Cañete et al., 2020) is a transformer based on BERT but trained from scratch on a large compilation of unannotated Spanish corpora from 15 different sources.

SpanBERTa (Tran, 2020) is a model developed by SkimAI based on RoBERTa's architecture but trained from scratch with 18GB of Spanish data from a large corpus compiled from different sources.

4.1 Training models with human annotated data

4.1.1 Evaluation

For each model, we report F1, Precision and Recall scores over predictions on the test dataset. For the two multi-label tasks of Stance recognition we also report F1 scores per category. Scores must be interpreted considering the subjective nature of the task, so for the sake of comparison we took the examples labeled by the three annotators (used to calculate agreement) and calculated F1 scores of all possible pairs, considering one to be the ground truth and the other a human predictor trying to mimic the first annotator (and vice versa). We use this score as an indicative measure of a machine predictor's performance compared to a human, per category. We report the average of the six F1 scores calculated, as well as the worst and the best scores, for English and Spanish.
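This pairwise human baseline is mechanical to reproduce; a minimal sketch, where the token-level binary labels (1 marking a word inside a Reason) are an assumption about the representation:

```python
# Sketch of the pairwise human baseline of section 4.1.1: each
# ordered pair of annotators is scored once, with one side as gold
# and the other as prediction, and the six F1 values are summarised.
from itertools import permutations
from sklearn.metrics import f1_score

annotations = {  # hypothetical token-level labels per annotator
    "A": [0, 1, 1, 0, 1, 0],
    "B": [0, 1, 0, 0, 1, 0],
    "C": [1, 1, 1, 0, 0, 0],
}

scores = [
    f1_score(annotations[gold], annotations[pred])
    for gold, pred in permutations(annotations, 2)  # 6 ordered pairs
]
print(f"avg={sum(scores)/len(scores):.2f} "
      f"best={max(scores):.2f} worst={min(scores):.2f}")
```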
4.1.2 Results

Table 3 shows the results obtained by all the classifiers trained for the task of Automatic Recognition of Reasons. For all models except BETO, performance was close to or even slightly above that of the human annotators. Longformer performed best, with an F1 score of 0.64. BETO performed more than 10 points below the other Spanish model, SpanBERTa.

Model                  F1     Pr     Rec
Roberta (EN)           0.56   0.69   0.47
Longformer (EN)        0.64   0.58   0.72
XLM-Roberta (Multi)    0.59   0.66   0.53
SpanBERTa (SP)         0.58   0.47   0.77
BETO (SP)              0.47   0.50   0.44
Avg Human English      0.56   0.56   0.56
Best Human English     0.58   0.64   0.53
Worst Human English    0.52   0.50   0.54
Avg Human Spanish      0.53   0.54   0.54
Best Human Spanish     0.57   0.71   0.47
Worst Human Spanish    0.49   0.46   0.54

Table 3: F1, Precision and Recall scores of different models for the task of predicting reasons within an example.

Table 4 shows the results obtained by all the classifiers trained for the task of Automatic Recognition of cites of Scientific Authority. The RoBERTa classifier obtained a 0.43 F1 score, above average human performance. The Longformer classifier obtained a much lower score, demonstrating that long-range dependencies are not important for this task. For Spanish, human performance is much lower, which corresponds to the lower agreement values shown in Table 1, so both models are above it. In this case, BETO obtained the higher score.

Model                  F1     Pr     Rec
Roberta (EN)           0.43   0.38   0.51
Longformer (EN)        0.29   0.68   0.19
XLM-Roberta (Multi)    0.25   0.49   0.18
SpanBERTa (SP)         0.27   0.46   0.20
BETO (SP)              0.36   0.38   0.33
Avg Human English      0.42   0.50   0.50
Best Human English     0.45   0.70   0.30
Worst Human English    0.38   0.25   0.83
Avg Human Spanish      0.25   0.28   0.28
Best Human Spanish     0.40   0.53   0.32
Worst Human Spanish    0.17   0.32   0.12

Table 4: F1, Precision and Recall scores of different models for the task of predicting cites of Scientific Authority.

Table 5 shows the results obtained by all the classifiers trained on the task of Automatic Recognition of Stances, predicting both whether a word belongs to a reason and its Stance value. This is a difficult task because it involves classification using six labels with a highly unbalanced distribution (see Figure 1). Table 7 shows the F1 score per class for each model. English models achieve an acceptable performance at recognizing Support stances, while they show no performance at all for Strong Against and Neutral, the two classes that were least frequent in the dataset. Only Longformer showed a better performance for Weak Against.

Table 6 shows the results obtained by all the classifiers trained on the task of Automatic Recognition of Compressed Stances. Results for all models rise between .11 and .14, a significant improvement compared to the non-compressed version. This leads us to interpret that a great amount of the "mistakes" considered in the scoring of the models were due to difficulties at recognizing Strong vs Weak stances, but not at recognizing Against vs Pro stances. Unlike in the previous experiments, here the best model's performance (Longformer for English and SpanBERTa for Spanish) is .08 and .07 below human performance, respectively. Table 8 shows the F1 scores per class for this task. Spanish and multilingual models still show no performance for the Against class. Roberta, however, improved its performance significantly compared to both Against classes considered separately. The Pro class shows an improvement compared to both Pro classes from the task of Stance recognition, yielding a good performance, close to that of the binary task of recognizing reasons. A manual examination of examples labeled by humans and by the automatic models can be found in appendix section B.1.
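The compressed variant of the task is a deterministic relabeling of the five stance values onto three classes; a sketch following the class names of Tables 7 and 8, with the exact label strings as assumptions:

```python
# Collapse the five stance labels into the three compressed classes
# used for Table 8: both Against grades -> Against, both Support
# grades -> Pro, Neutral stays Neutral. "O" marks tokens outside any
# Reason (an assumption about the tagging scheme).
COMPRESS = {
    "StrongAgainst": "Against", "WeakAgainst": "Against",
    "Neutral": "Neutral",
    "WeakSupport": "Pro", "StrongSupport": "Pro",
    "O": "O",
}

def compress_labels(labels):
    return [COMPRESS[label] for label in labels]

print(compress_labels(["O", "WeakSupport", "StrongSupport", "WeakAgainst"]))
# ['O', 'Pro', 'Pro', 'Against']
```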
Model                  F1     Pr     Rec
Roberta (EN)           0.28   0.33   0.28
Longformer (EN)        0.31   0.35   0.30
XLM-Roberta (Multi)    0.20   0.26   0.19
SpanBERTa (SP)         0.26   0.26   0.26
BETO (SP)              0.24   0.29   0.23
Average Human English  0.36   0.38   0.38
Best Human English     0.54   0.53   0.54
Worst Human English    0.22   0.21   0.29
Average Human Spanish  0.31   0.33   0.33
Best Human Spanish     0.32   0.40   0.32
Worst Human Spanish    0.28   0.31   0.26

Table 5: F1, Precision and Recall scores of different models for the task of predicting stances.

Model                  F1     Pr     Rec
Roberta (EN)           0.43   0.48   0.41
Longformer (EN)        0.43   0.48   0.40
XLM-Roberta (Multi)    0.36   0.35   0.38
SpanBERTa (SP)         0.36   0.34   0.39
BETO (SP)              0.35   0.35   0.34
Average Human English  0.51   0.51   0.51
Best Human English     0.54   0.53   0.56
Worst Human English    0.48   0.49   0.47
Average Human Spanish  0.43   0.44   0.44
Best Human Spanish     0.45   0.49   0.42
Worst Human Spanish    0.41   0.45   0.39

Table 6: F1, Precision and Recall scores of different models for the task of predicting a reduced set of stances (three instead of five).

                     Against       Neu    Support
Model                Str    Wk            Wk     Str
Roberta (EN)         .0     .05    .0     .26    .45
Longformer (EN)      .0     .27    .0     .20    .46
XLM-Roberta (Multi)  .0     .0     .0     .14    .14
SpanBERTa (SP)       .0     .0     .0     .31    .31
BETO (SP)            .0     .0     .0     .21    .33

Table 7: F1 scores for the task of detecting stances, per class: Strong Against, Weak Against, Neutral, Weak Support and Strong Support.

Model                Against   Neutral   Pro
Roberta (EN)         .23       .0        .56
Longformer (EN)      .27       .0        .52
XLM-Roberta (Multi)  .0        .0        .54
SpanBERTa (SP)       .01       .0        .50
BETO (SP)            .00       .0        .45

Table 8: F1 scores per class for the task of detecting a compressed version of stances.

                Hum + GPT4           All
Model           F1    Pr    Rec      F1    Pr    Rec
RoBERTa         .45   .70   .33      .31   .71   .20
Longformer      .39   .78   .26      .19   .83   .11
XLM-Roberta     .52   .54   .50      .10   .78   .05
SpanBERTa       .48   .51   .46      .03   .73   .02
BETO            .43   .61   .33      .20   .83   .11

Table 9: Results of models trained with both Human + GPT4 and Human + GPT4 + GPT3.5 (All corpora) for predicting Reasons.

                Hum + GPT4           All
Model           F1    Pr    Rec      F1    Pr    Rec
RoBERTa         .27   .28   .29      .21   .23   .22
Longformer      .22   .21   .23      .15   .14   .17
XLM-Roberta     .27   .26   .28      .18   .24   .18
SpanBERTa       .23   .22   .24      .21   .26   .22
BETO            .20   .34   .19      .22   .48   .20

Table 10: Results of models trained with both Human + GPT4 and Human + GPT4 + GPT3.5 (All corpora) for predicting Stances.

4.2 Training using augmented data

Table 9 shows the results of predictions of Reasons made by models trained by combining the dataset labeled through nichesourcing with the GPT4-annotated dataset alone, and with both the GPT4- and GPT3.5-Turbo-annotated datasets. Models were tested against the same test partition used for the experiments in section 4.1.2. It can be seen that by combining the human-annotated training partition with these datasets, the overall performance decreased: the more data we use for training, the worse the results we get. The models whose performance decreased the most are those that had a better performance when training only with the human-annotated corpus. Tables 10 and 11 show the results of training models with the same combination of datasets for predicting Stances and Compressed Stances respectively. Again, we observe a decrease in the models' performance, but a much smaller one than when analysing Reasons.
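The Hum + GPT4 and All training configurations above amount to concatenating the training partitions before fine-tuning; a sketch, where the example contents and the shuffling are illustrative assumptions:

```python
import random

# Hypothetical corpora: each example is a (tokens, tags) pair already
# mapped to a common tagging scheme; contents here are placeholders.
human_train = [(["vaccines", "work"], ["Reason", "Reason"])]
gpt4_train = [(["get", "vaccinated"], ["Reason", "Reason"])]
gpt35_train = [(["some", "text"], ["O", "O"])]

def combine(*corpora, seed=42):
    """Concatenate training partitions, as in the Hum + GPT4 and All
    configurations of Tables 9-11; the shuffle is an assumption."""
    merged = [example for corpus in corpora for example in corpus]
    random.Random(seed).shuffle(merged)
    return merged

hum_gpt4 = combine(human_train, gpt4_train)
all_data = combine(human_train, gpt4_train, gpt35_train)
print(len(hum_gpt4), len(all_data))  # 2 3
```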
                Hum + GPT4           All
Model           F1    Pr    Rec      F1    Pr    Rec
RoBERTa         .42   .49   .39      .32   .59   .32
Longformer      .30   .36   .29      .36   .52   .34
XLM-Roberta     .38   .43   .36      .23   .40   .25
SpanBERTa       .38   .40   .37      .31   .37   .29
BETO            .35   .52   .33      .35   .58   .32

Table 11: Results of models trained with both Human + GPT4 and Human + GPT4 + GPT3.5 (All corpora) for predicting Compressed Stances.

Model           Hum      Hum + GPT4   All
RoBERTa         12.5%    12%          5.1%
Longformer      8.6%     21.5%        2.4%
XLM-Roberta     22.6%    3.9%         1.2%
SpanBERTa       6.1%     11.5%        0.3%
BETO            14.6%    7.2%         1.8%

Table 12: Percentage of words labeled by each predictor as part of a Reason, for predictors trained with Human, Human + GPT4 and Human + GPT4 + GPT3.5-Turbo annotated data; that is, the percentage of true positives + false positives over the whole dataset.

In order to gain insights for analyzing these results, we evaluated the performance of GPT4 and GPT3.5-Turbo against the test dataset using the human annotations as the gold standard. Out of 100 examples, 71 in English and 70 in Spanish were unchanged after adjusting the output with a postprocessing script that removes possible additions by GPT; the rest of the examples were discarded. Tables 13 and 14 show F1, Precision and Recall scores for Automatic Recognition of Reasons, Stances and Compressed Stances for annotations done with GPT4, and compare those values to the ones obtained by the best model and by the human evaluation from section 4.1.2.

Component    F1     Pr     Rec    Best F1   Hum F1
Reasons      0.43   0.44   0.44   0.64      0.56
Stances      0.26   0.26   0.33   0.31      0.36
Compressed   0.39   0.40   0.40   0.43      0.51

Table 13: Performance of GPT4 on the English test dataset for detecting reasons, stances and compressed stances, compared with the best model and human F1 scores for reference.

Component    F1     Pr     Rec    Best F1   Hum F1
Reasons      0.40   0.44   0.44   0.58      0.53
Stances      0.23   0.27   0.27   0.26      0.31
Compressed   0.35   0.34   0.38   0.36      0.43

Table 14: Performance of GPT4 on the Spanish test dataset for detecting reasons, stances and compressed stances, compared with the best model and human F1 scores for reference.

These results seem to suggest that GPT models with few-shot learners don't perform as well as smaller open-source models fine-tuned with high-quality data labeled by experts, or, at least, that they are not able to absorb the subjective criteria defined through the annotation process only by prompting and in-context learning.

Table 12 shows the percentage of words that were labeled as being part of a Reason by each model; that is, the percentage of the annotated data that is either a true or a false positive. We can observe that the more data is used for training, the more conservative the model trained with that data becomes when predicting on the test dataset. This may seem contradictory at first inspection, given that the augmented data have almost twice as many positive labels as the human-annotated examples (see section 3.3.1). Our hypothesis is that the combination of datasets with different annotation criteria negatively affects the models' predictive capacity, making them more conservative.
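The quantity in Table 12 is the fraction of tokens a model tags as part of a Reason, regardless of correctness; a sketch:

```python
def pct_words_labeled(predictions):
    """Percentage of tokens predicted to be inside a Reason
    (true positives + false positives over all tokens), the quantity
    reported in Table 12. `predictions` is a list of per-example
    binary tag sequences (1 = inside a Reason), an assumed format."""
    total = sum(len(tags) for tags in predictions)
    positive = sum(sum(tags) for tags in predictions)
    return 100.0 * positive / total if total else 0.0

print(pct_words_labeled([[0, 1, 1, 0], [0, 0, 1, 0]]))  # 37.5
```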
Despite the highly subjective nature of the task, we achieved an acceptable IAA thanks to an iterative annotation process in which annotation criteria and examples were discussed. The annotation manual registering this process is also released. Experiments show that the annotation process can be reproduced automatically with satisfactory results, considering the level of subjectivity of the corresponding task as measured using Cohen's Kappa and the F1 scores of all combinations of annotators, with some room for improvement on the tasks of detecting Stances, particularly for the Against and Neutral classes. When augmenting the human-annotated corpus using annotations performed by GPT4 and GPT3.5-Turbo, performance decreased, especially for the task of automatically identifying Reasons. Manual inspection of the augmented data revealed that the annotations made by the GPT models were not senseless, but rather followed a different criteria than the human experts, tending to consider a wider range of subjects to be vaccine-related (leading them to annotate approximately 80% more examples and twice the amount of words). We conclude that GPT models were not able to reproduce the annotation criteria of the human annotators only by incorporating a reduced version of the annotation manual and three examples in the prompt.

6 Limitations

In the following section we acknowledge some limitations found in our work. Experiment results show that the data imbalance of the annotated corpus directly affects the predictive capabilities of the models. Results from Tables 5 and 7 show that performance for the majority class ("Support") is much higher, while performance for the "Against" or "Neutral" classes is lower. This is related to the fact that these categories are scarce in the annotated dataset, as can be observed in Figure 1. This data imbalance is a reflection of the proportion of online content supporting and attacking vaccination, the first being much more common than the second. Therefore, the only way to augment the sample of minority classes without artificially altering the distribution of classes is to label more examples, which is costly. This limits a possible use of the tool: to automatically recognize what is being said against vaccination in order to help elaborate adequate responses. More work needs to be done in order to improve model performance on the minority classes.

Our strategy of data augmentation using generative models like GPT4 and GPT3.5-Turbo was based on providing them with a summary of the annotation manual within the prompt and using in-context learning to make them learn the annotation criteria. However, the annotation produced by these models followed a different criteria than the human annotators, tending to consider a wider range of statements to be vaccine-related, and therefore producing a different distribution of classes, which can be observed in Figure 2. While GPT4 tended to consider any statement that was science-related to be of the Pro class, and any statement related to alternative medicine to be of the Against class, GPT3.5 tended to annotate a lot of vaccine-unrelated content as a Neutral Reason. We believe that this difference in annotation criteria made the models fine-tuned on this data combined with the human annotations become even more conservative in their labelling, especially over the minority classes, thus achieving lower results.

7 Ethical Considerations

Though this tool is intended to be used to fight misinformation campaigns causing vaccine hesitancy and, possibly, outbreaks of preventable diseases, it could also be used as a tool to mine Reasons supporting vaccination in order to orient misinformation campaigns to target the most commonly used reasons supporting vaccination. We acknowledge this possible misuse of our tool, but we also reason that contrasting arguments, facts and information should, in the end, help people take more informed and rational decisions. Though one of our goals is to fight misinformation to help prevent outbreaks of preventable diseases, we also want to acknowledge that not all reasons against vaccination are necessarily misinformation. Figure 15 in the appendix shows a reason against the vaccination of immunosuppressed patients against COVID-19 based on the lack of testing and on the possibility of waiting, given that there were few cases in that country at that time. We found a significant amount of examples like this one, where reasons for not getting vaccinated were presented not against vaccination in general, but against a particular vaccine or vaccination campaign, and were presented with a scientific basis. The dataset, along with the trained models presented in this work, is meant to help automatically identify what is being said about vaccines and vaccination. It must be used with caution and critical thinking.

References

A. Barbaresi. 2021. Trafilatura: A web scraping library and command-line tool for text discovery and extraction.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.
Victor Boer, Michiel Hildebrand, Lora Aroyo, Pieter De Leenheer, Chris Dijkshoorn, Binyam Tesfa, and Guus Schreiber. 2012. Nichesourcing: Harnessing the power of crowds of experts. Volume 7603, pages 16–20.
M. Browne, P. Thomson, M. J. Rockloff, and G. Pennycook. 2015. Going against the herd: psychological and cultural factors underlying the 'vaccination confidence gap'.
José Cañete, Gabriel Chaperon, Rodrigo Fuentes, Jou-Hui Ho, Hojin Kang, and Jorge Pérez. 2020. Spanish pre-trained BERT model and evaluation data. In PML4DC at ICLR 2020.
J. Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. Preprint, arXiv:1911.02116.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Damián Furman, Pablo Torres, José A. Rodríguez, Diego Letzen, Vanina Martínez, and Laura Alonso Alemany. 2023. Which argumentative aspects of hate speech in social media can be reliably identified? Preprint, arXiv:2306.02978.
L. Hansen, L. R. Olsen, and K. Enevoldsen. 2023. TextDescriptives: A Python package for calculating a large variety of metrics from text.
Anna Kata. 2010. A postmodern Pandora's box: Anti-vaccination misinformation on the internet. Vaccine, 28(7):1709–1716.
Heidi J. Larson, Emmanuela Gakidou, and Christopher J.L. Murray. 2022. The vaccine-hesitant moment. New England Journal of Medicine, 387(1):58–65.
Heidi J. Larson and Leesa Lin. 2024. Generative artificial intelligence can have a role in combating vaccine hesitancy. BMJ, 384.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint.
Prakash Poudyal, Jaromir Savelka, Aagje Ieven, Marie Francine Moens, Teresa Goncalves, and Paulo Quaresma. 2020. ECHR: Legal corpus for argument mining. In Proceedings of the 7th Workshop on Argument Mining, pages 67–75, Online. Association for Computational Linguistics.
Miftahul Qorib, Timothy Oladunni, Max Denis, Esther Ososanya, and Paul Cotae. 2023. Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on a Covid-19 vaccination Twitter dataset. Expert Systems with Applications, 212:118715.
Maria Skeppstedt, Andreas Kerren, and Manfred Stede. 2018. Vaccine hesitancy in discussion forums: Computer-assisted argument mining with topic models. Studies in Health Technology and Informatics, 247:366–370.
Pontus Stenetorp, Sampo Pyysalo, Goran Topić, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsujii. 2012. brat: a web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102–107, Avignon, France. Association for Computational Linguistics.
S. Tafuri, M.S. Gallone, M.G. Cappelli, D. Martinelli, R. Prato, and C. Germinario. 2014. Addressing the anti-vaccination movement and the role of HCWs. Vaccine, 32(38):4860–4865. Vaccine-preventable Diseases and Vaccinations Among Health-care Workers.
Benedetta Torsi and Roser Morante. 2018. Annotating claims in the vaccination debate. In Proceedings of the 5th Workshop on Argument Mining, pages 47–56, Brussels, Belgium. Association for Computational Linguistics.
Chris Tran. 2020. SpanBERTa. https://github.com/chriskhanhtran/spanish-bert.
Steven Lloyd Wilson and Charles Wiysonge. 2020. Social media and vaccine hesitancy. BMJ Global Health, 5(10).
A Prompt used for data augmentation

Figure 3 shows the template used for generating the prompts. While the template was the same for both languages, the examples used for in-context learning were selected to match the language of the example being annotated.

Extract reasons either supporting or opposing vaccination, and link them to the corresponding stance values.
• A Reason potentially or hypothetically appeals to someone considering vaccination.
• They must be something relevant to someone hypothetically considering getting or not getting vaccinated.
• Examples can have zero or many reasons.
• Reasons have a number indicating their stance towards vaccination.
• The token [Reason:begin:1] indicates a reason that is strongly against vaccination.
• The token [Reason:begin:2] indicates a reason that is weakly against vaccination; this means that it highlights negative aspects associated with vaccination without explicitly taking a stance against it.
• The token [Reason:begin:3] indicates a reason that has a neutral stance towards vaccination, or whose stance cannot be inferred.
• The token [Reason:begin:4] indicates a reason weakly supporting vaccination. This means that it provides positive aspects of vaccination (like "they are free" or "they are accessible") without explicitly taking a stance.
• The token [Reason:begin:5] indicates a reason strongly supporting vaccination. It associates vaccines explicitly with good qualities and positive concepts.
• Do not remove the links from the original non-annotated text; keep them in plain text, respecting the original format.
• An identified reason should be marked with the special tokens "[Reason:begin:stanceValue]" before the first word of the reason and "[Reason:end]" at the end of the reason.
• stanceValue is an integer ranging from 1 to 5.
• The output should contain exactly the same text as the input, only adding the special tokens when appropriate.

Figure 3: Template used for generating prompts for annotation using GPT4 and GPT3.5. The final version of the prompt included three non-annotated examples linked to their correspondent annotations.
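Assembling the few-shot prompt from this template is mechanical; a sketch, where the instruction text, the in-context pairs, and the API call details are placeholders rather than the authors' actual code:

```python
# Sketch: build the annotation prompt from the Figure 3 template
# plus three in-context examples. TEMPLATE stands for the full
# instruction text above; the example pairs are hypothetical.
TEMPLATE = "Extract reasons either supporting or opposing vaccination..."

FEW_SHOT = [
    ("raw example 1", "annotated example 1"),
    ("raw example 2", "annotated example 2"),
    ("raw example 3", "annotated example 3"),
]

def build_messages(document: str):
    messages = [{"role": "system", "content": TEMPLATE}]
    for raw, annotated in FEW_SHOT:
        messages.append({"role": "user", "content": raw})
        messages.append({"role": "assistant", "content": annotated})
    messages.append({"role": "user", "content": document})
    return messages

# e.g. client.chat.completions.create(model="gpt-4",
#                                     messages=build_messages(doc))
```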
B Example Appendix

B.1 Manual examination of predicted examples

We randomly selected 3 examples from the test set and generated predictions for each of them using GPT4 and the best-performing models from each category. Figures 4, 5, 6, 7, 8 and 9 show Example 105 from the test dataset as annotated by a human annotator, by a finetuned Longformer predicting exclusively Reasons, by a finetuned Longformer predicting Reasons with their Stance, by a finetuned Longformer predicting Reasons with the Compressed Stances, by a finetuned Roberta predicting Scientific Authorities, and by GPT4, respectively. It can be observed that there is general agreement about the paragraph starting with "We are ready for a healthier tomorrow...", with the exception of the model predicting Stances, which left the first part of it unannotated. The human annotator considered the first two sentences to be valid reasons, and GPT4 also considered the following paragraph, but none of the pretrained models labeled them. The Roberta model predicting Scientific Authorities labeled "Western Wisconsin Health", though the human annotator didn't.

Figures 10, 11, 12 and 13 show Example 114 from the test dataset as annotated, likewise, by a human annotator, by a model predicting exclusively Reasons, by a model predicting Reasons with their Stance, and by a model predicting Scientific Authorities, respectively. In this case, the model predicting Scientific Authorities matched the human annotation exactly. For the model predicting Reasons, there are minor discrepancies regarding whether the phrases starting with "Learn more about..." should be considered a reason or not, but it mostly matches the human annotation. When inspecting the Stance predictions we observe a higher level of discrepancy regarding what is a reason or not, and also about the Stance value, though the difference is only between Strong and Weak Support. This example was not labeled by GPT4 because it produced a result that was not correctly labeled (see section 4.2).

In both examples we found that the Longformer model predicting Stances doesn't label the full extent of a sentence, something that was clearly stated in the annotation protocol and that prevailed in most human annotations and also, mostly, in the models predicting only Reasons.

Figures 16, 17, 18, 19, 20 and 21 show Example 784 from the test dataset as annotated by a human annotator, by a finetuned Longformer predicting exclusively Reasons, by a finetuned Longformer predicting Reasons with their Stance, by a finetuned Longformer predicting Reasons with the Compressed Stances, by a finetuned Roberta predicting Scientific Authorities, and by GPT4, respectively.
The model predicting only Reasons shows a high level of matching with the human annotations, though it predicted one extra sentence in the first paragraph. The paragraph starting with "Currently, Sanofi's..." was only partially labeled by the human annotator, although the annotation manual clearly states that, whenever possible, whole sentences should be labeled, just like the Longformer model did. The model predicting Stances didn't make any predictions and thus found no Reasons. However, the model predicting the compressed version of the Stances did find a Reason, which partially matched one of the Reasons labeled by the human annotator. No Scientific Authority was labeled in this example, nor did the model predict one, so in that sense they match. Lastly, GPT4 labeled the second sentence and not the first, unlike the human annotator. It labeled the whole paragraph starting with "Currently...", just as the annotation manual says, and also labeled the last paragraph, matching the annotation made by the human expert.

Lastly, Figure 14 shows Example 160 from the test dataset, which was manually selected because it has a clear stance against vaccination. In this case, no finetuned model predicted any Reasons or Scientific Authorities. GPT4's prediction, on the other hand, matched the human annotator's exactly.

Figure 4: Example 105 from the test dataset labeled through nichesourcing with Reasons, Stances and Scientific Authorities
Figure 5: Example 105 from the test dataset labeled by our finetuned Longformer only with Reasons
Figure 6: Example 105 from the test dataset labeled by our finetuned Longformer with Reasons and their Stance
Figure 7: Example 105 from the test dataset labeled by our finetuned Longformer with Reasons and the Compressed version of Stances
Figure 8: Example 105 from the test dataset labeled by our finetuned Roberta-base with Scientific Authorities
Figure 9: Example 105 from the test dataset labeled by GPT4
Figure 10: Example 114 from the test dataset labeled through nichesourcing with Reasons, Stances and Scientific Authorities
Figure 11: Example 114 from the test dataset labeled by our finetuned Longformer for the task of detecting Reasons
Figure 12: Example 114 from the test dataset labeled by our finetuned Longformer for the task of detecting Reasons and their Stance
Figure 13: Example 114 from the test dataset labeled by our finetuned Roberta-base for the task of detecting Scientific Authorities
Figure 14: Example 160 from the test dataset labeled by a human annotator, showing a Reason with a Strong stance Against vaccination
Figure 15: Example annotated by a human annotator, showing a weak stance against vaccination based on the scientific debate about immunosuppressed patients
Figure 16: Example 784 from the test dataset labeled by a human annotator
Figure 17: Example 784 from the test dataset labeled by a finetuned Longformer for the task of detecting Reasons
Figure 18: Example 784 from the test dataset labeled by a finetuned Longformer for the task of detecting Reasons and their Stance
Figure 19: Example 784 from the test dataset labeled by a finetuned Longformer for the task of detecting Reasons and the Compressed version of Stances
Figure 20: Example 784 from the test dataset labeled by a finetuned Roberta model for the task of detecting Scientific Authorities
Figure 21: Example 784 from the test dataset labeled by GPT4
ai_researcher
1
Supporting_Interdisciplinary_Research_with_Cards-based_Workshops_-_A_Case_Study_on_Participatory_Planning_for_Mountain_Pastoralism.pdf
HCI Support Card: Creating and Using a Support Card for Education in Human-Computer Interaction

Lesandro Ponciano
Pontifical Catholic University of Minas Gerais
Belo Horizonte, Minas Gerais, Brazil
[email protected]

ABSTRACT
Support cards summarise a set of core information about a subject. The periodic table of chemical elements and the mathematical tables are well-known examples of support cards for didactic purposes. Technology professionals also use support cards for recalling information such as syntactic details of programming languages or harmonic colour palettes for designing user interfaces. While support cards have proved useful in many contexts, little is known about their didactic use in the Human-Computer Interaction (HCI) field. To fill this gap, this study proposes and evaluates a process for creating and using an HCI support card. The process considers the interdisciplinary nature of the field, covering the syllabus, curriculum, textbooks, and students' perceptions of HCI topics. The evaluation is based on case studies of creating and using a card during a semester in two undergraduate courses: Software Engineering and Information Systems. Results show that a support card can help students follow the lessons and remember and integrate the different topics studied in the classroom. The card guides the students in building their cognitive maps, mind maps, and concept maps for studying human-computer interaction. It fosters students' curiosity and permanent engagement with HCI topics. The card's usefulness goes beyond the HCI classroom: it is also used by students in their professional activities and in other academic disciplines, fostering an interdisciplinary application of HCI topics.

KEYWORDS
Teaching, Learning, HCI education, UX education, Support card

1 INTRODUCTION
Support cards are guides that organise a set of information about a subject in a way that facilitates and speeds up the recall of a given topic. They are widely used as an educational resource [1, 3, 7, 9]. In basic education, the "Math reference card" is an example of a support card that organises the arithmetic operations of addition, subtraction, multiplication, and division [1]. Support cards are used for education in many areas, such as paediatric education [3], physics education [9], and chemistry education [7]. Professionals working with technology also use support cards in their daily activities. For example, support cards are used to recall activities of a software development process [15], remember syntactic details of a given programming language [24], and consult colour palettes used in the design of user interfaces [26]. Therefore, the multidisciplinary and practical use of support cards is well reported in the literature.

Permission to reproduce or distribute, in whole or in part, material extracted from this work, verbatim, adapted or remixed, as well as the creation or production from the content of such work, is granted without fee for non-commercial use, provided that the original work is properly credited. IHC 2019 - Workshop on HCI Education (WEIHC'19), October 21-25, 2019, Vitória, Brazil. In Extended Proceedings of the 18th Brazilian Symposium on Human Factors in Computing Systems. Porto Alegre: SBC. © 2019 by the author(s), in accordance with the terms of the Creative Commons Attribution-NonCommercial 4.0 International Public License (CC BY-NC 4.0).
While support cards have proved useful in many contexts, little is known about their use and effectiveness in the teaching-learning process in the area of Human-Computer Interaction (HCI). This area integrates an interdisciplinary body of theories, recommendations for design (guidelines), heuristics, and evaluation methods that must be remembered or constantly consulted during the learning process, and during some systems' design and evaluation activities. This kind of information must be known and understood by the students, but not necessarily memorised. Inspired by their use in other disciplines, this study investigates the role that support cards can play in this context.

This study discusses the use of support cards for didactic purposes in HCI teaching and learning. A process for creating and using a support card in the HCI discipline is proposed. The process considers the interdisciplinary nature of HCI, covering the syllabus, curriculum, textbooks and students' perceptions of HCI topics. The evaluation is based on case studies of creating and using a card during a semester in HCI classes that are part of two undergraduate courses: Software Engineering and Information Systems. In doing so, three research questions are answered: (1) how to draw a support card to be used as a didactic resource in the HCI classroom? (2) what utilities do students perceive in using the card? (3) how useful is the card beyond the HCI classroom?

Results from observations and questionnaires answered by 37 students show that the card can be an important didactic resource for HCI students. The card helps students follow the classes and integrate the topics of the discipline. It adds to the many other pedagogical resources used in the classroom [11, 20]. The card also has interdisciplinary application. Students used the card in other classes that involve the design and evaluation of interactive systems, such as Software Development Laboratory, Software Testing, and Completion of Course Work. The card was also useful in professional activities, such as the Supervised Internship.

2 BACKGROUND AND RELATED WORK
In this section, we analyse the relevant literature and related work about support cards and HCI education.

2.1 The Multidisciplinary Use of Support Cards
Support cards summarise a set of core information about a subject and are a quick way of accessing information about a subject or task. The term "support card" is sometimes used as a synonym for the terms "quick reference card" and "pocket card". All these terms refer to an instrument that can be easily handled and that allows the person to find a topic quickly. Another commonly used term is "cheat sheet", which is a kind of support card often made by the student for unauthorised consultation during an exam. In physics education, students are encouraged to create their own support cards, putting down anything they think will be useful, such as notes, formulas, and constants [9]. Differently, the card addressed in this study is a didactic resource made by a teacher to support students during their study and practice of interactive systems' design and evaluation. This meaning and this use of support cards are similar in other disciplines, such as the "Math reference card" [1] used in basic education, and the "Periodic Table of Chemical Elements"¹ [7] used in chemistry education. In some disciplines, the support card can also be used in exams and activities in the classroom [9].

¹ In chemistry education, the "Periodic Table of Chemical Elements" is a support card with a visual systematic arrangement of the chemical elements ordered by atomic number, electron configuration, and recurring chemical properties.
Thus, students did not worry about memorising everything; they just checked their cards. This form of work is much closer to professional situations, where students must solve problems but usually have reference material available for consultation. Support card utilities can go beyond the process of absorbing knowledge to include hands-on activities. For example, a portable support card can improve paediatric resident education in comprehensive care for children nearing the end of life [3]. The card may be a convenient, simple, and useful instrument to be used in these practical contexts.

2.2 HCI Education
Students' competence in Human-Computer Interaction involves the classic model of knowledge, skills and attitudes (KSA) [2, 14]. Knowledge is the condition of being aware of something, retaining and processing information. To gain knowledge about HCI, students must learn about concepts, methods, and theories that guide the creation of strategies, guidelines, and recommendations for design. Skill, in turn, is knowing how to do something, performing activities and tasks on time and precisely. Applying heuristics and guidelines in designing a specific system is an example of an HCI skill. Finally, attitude is forming a new or different viewpoint or belief about a subject. Changing and adapting HCI guidelines for different contexts are attitudes.

Gaining knowledge, skills and attitudes in the HCI area is challenging for many students [12, 27]. Studies have been conducted to find strategies to make this learning process easier. Three types of strategies focused on skills can be highlighted [17, 27]: (1) leading students to review and discuss stories describing project situations (history reviews); (2) conducting controlled studies in which students are led to solve practical problems (problem-solving cases); (3) engaging students in situations where they must make decisions and justify their choices (decision-making cases). Students also must have attitude, thinking critically about the design, development, and evaluation of interactive systems [8, 17]. Strategies for achieving this end in the classroom include, for example: structured debates about human values [20]; lessons that integrate theories, drawings and music with the active participation of the students [10]; and the institutionalisation of interdisciplinary activities [6].

These strategies presuppose that students have at least a basic knowledge of the key topics in the area. Knowledge is a building block for ability and attitude. Gaining knowledge is synonymous with retaining information, usually through lectures and reading textbooks. For the new generations of students, who have been born with information and communication technologies readily available [5, 13, 18], memorising information may not be necessary, especially if it is readily available for consultation. In this paper, we propose the use of the support card as a didactic consultation instrument to help students gain knowledge about the core HCI topics.

3 PROCESS OF CREATING AND USING A SUPPORT CARD
This section presents a process for creating and using a support card for HCI education. The process focuses on four main questions that one must answer in creating and using a card: What are the main requirements of the support card?
How to define the topics to be covered on the card? How to organise the topics on the card? And how to use the card in the classroom? In the following paragraphs, we discuss our approach to addressing these questions.

We set out four main requirements for an HCI support card: (1) the card must be comprehensive of the content of the syllabus; (2) the card must preserve the interdisciplinary nature of the area; (3) the card must be useful to students in the classroom and in homework; and (4) the card should be easy to handle and read. The card must be comprehensive so that it can be used throughout the lessons and allows the student to connect the studied topics. The card should not be only technical; instead, it should highlight knowledge coming from other areas, preserving and contextualising interdisciplinary topics. Finally, to be used effectively, the card must provide value to students, such as serving as a quick reference to core HCI topics, an aid to the recall of some concept or method, and a study guide for exams. To be easy to read and handle, the card can be provided to students as images that can be viewed on a smartphone or tablet, or printed on two-sided paper laminated with plastic material on both sides.

It is impossible to cover all syllabus topics on a one-page card, so some topic prioritisation must be done. At a macro level, the card should cover the major topics defined in the discipline's syllabus, textbooks, and curriculum. At a micro level, the card should cover topics that are harder to remember, and topics that serve as anchors for other topics. The card works as a support instrument for the student's memory, and topics included on the card serve as triggers for other topics that are not included on the card. Therefore, interdisciplinary topics are especially important to be covered on the card.

The organisation of the topics on the card must be intuitive for students so that they can use the card effectively. This can be done by using the card sorting technique. This technique is well known in the HCI area since it is used in designing systems' information architecture [28]. As part of creating a support card, this technique can be used to ask students to organise topics of the discipline into groups that make sense to them and label those groups, forming categories of HCI topics (as exemplified in Figure 1). The card sorting outcomes inform how students would cluster and label HCI topics. Of course, this must be done when students have already studied the topics. Results from the card sorting dynamics are insights for the information architecture of the next versions of the support card, which will be used by future students of the discipline. Thus, the card is renewed with each HCI course.

Figure 1: Card sorting sessions. Students are asked to cluster and categorise HCI topics in a way that makes sense to them.

The support card is made available to students after a few classes, when some topics have already been taught to them. For example, the card can be released after the first 8 classes in an HCI course consisting of 68 classes. The idea is that, based on what has already been studied, students can see how the studied topics are summarised on the card. The card should be used in the classroom, and an association between the subject of the class and the topics on the card should be made. For example, the card may contain a topic (represented by a word or picture) associated with each set of slides or each book chapter. Teachers can highlight where each subject of the class appears on the card. In hands-on activities, such as Nielsen's Heuristic Evaluation of a system [19], students can consult the support card to remember the heuristics. Teachers can also stimulate a creative use of the card, such as asking students to tell a story by articulating HCI concepts shown on the card.
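The grouping data collected in the card sorting sessions described above can also be summarised computationally before drawing the next card's information architecture; a sketch using hierarchical clustering, where the topic names, session data, and clustering choices are illustrative rather than the study's:

```python
# Sketch: derive topic clusters from card-sorting sessions by
# hierarchical clustering on co-occurrence. Each session is the set
# of groups one student formed; all names here are illustrative.
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

topics = ["usability", "affordance", "heuristics", "SUS", "semiotics"]
sessions = [
    [{"usability", "affordance"}, {"heuristics", "SUS"}, {"semiotics"}],
    [{"usability", "affordance", "semiotics"}, {"heuristics", "SUS"}],
]

index = {t: i for i, t in enumerate(topics)}
co = np.zeros((len(topics), len(topics)))
for session in sessions:
    for group in session:
        for a, b in combinations(group, 2):
            co[index[a], index[b]] += 1
            co[index[b], index[a]] += 1

distance = len(sessions) - co  # rarely co-grouped = far apart
np.fill_diagonal(distance, 0)
condensed = distance[np.triu_indices(len(topics), k=1)]
tree = linkage(condensed, method="average")
print(fcluster(tree, t=2, criterion="maxclust"))  # cluster id per topic
```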
4 EVALUATION
Following the approach described in the previous section, we created a support card and used it in two courses. In this section, we first present the card. Then, we discuss the method employed to evaluate the card and its use. Finally, we discuss the results.

4.1 The Support Card
Figure 2 shows an overview of the card's layout. The card has two faces: face A and face B. Topics on face A are more conceptual and theoretical than those on face B. Face A brings together HCI concepts and theories. Face B brings together topics directly used when designing or evaluating interactive systems, such as ergonomic guidelines, golden rules, heuristics, and guidelines for icons. The card covers four major groups of topics: Basic Concepts, including interface, affordance, usability, and communicability; Theoretical Approaches, including Activity Theory, Action Theory, Colour Theory, and Semiotic Engineering; Design Process, including Usability Engineering and Scenario-based Design; and Evaluation Methods, including Heuristic Evaluation, the System Usability Scale, and Semiotic Inspection. The material presented on the card is based on the textbooks by Barbosa and Silva [4] and by Rogers, Sharp and Preece [22], which are the main textbooks used in the courses in which the card was made. We also considered topics indicated in the ACM SIGCHI Curricula for Human-Computer Interaction [14].

Figure 2: Overview of the layout of the HCI Support Card: (a) face A, with a predominance of conceptual and theoretical HCI topics; (b) face B, with a predominance of practical HCI topics. A full-size, high-resolution image can be downloaded at www.lsd.ufcg.edu.br/~lesandrop/cartaoIHC.
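As an illustration of the kind of operational knowledge condensed on face B, the System Usability Scale mentioned above reduces to a fixed scoring rule; a sketch of the standard computation with a made-up response vector:

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5; odd-numbered items
    contribute (rating - 1), even-numbered items (5 - rating); the
    sum is scaled by 2.5 to a 0-100 score."""
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = items 1, 3, ...
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```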
4.2 Research Methods
This study follows a case study approach [23]. The case studies were conducted in the Software Engineering and Information Systems undergraduate courses, which are taught on two campuses of the Pontifical Catholic University of Minas Gerais (PUC Minas). The card was created at the end of the second half of 2018 using insights from the card sorting dynamics, and it was used in classes during the first semester of 2019 in the Information Systems and Software Engineering undergraduate courses. It was made available to the students as images in the Portable Network Graphics (PNG) format, so students could use it printed or in digital form. The card is fully readable when printed in A4 size. In digital format, the image size can be enlarged or reduced according to the student's preferences.

The Software Engineering and Information Systems undergraduate courses are composed of 8 periods, each period being equivalent to one semester. The HCI discipline is currently located in the fifth period of the Information Systems course and in the fourth period of the Software Engineering course. In both courses, the HCI discipline has the same syllabus, comprises a total of 68 classes of 50 minutes each, and has the Requirements Engineering discipline as a prerequisite.

The evaluation was made through the teacher's class notes and a questionnaire. The questions of the questionnaire are shown in Table 1. The questionnaire was answered by students after using the card in the classroom for 4 months. Students' participation was informed, voluntary, and anonymous. Altogether, 37 students answered the questionnaire: 17 students from the Software Engineering course (hereafter identified with codes ranging from SE-P1 to SE-P17) and 20 students from the Information Systems course (hereafter identified with codes ranging from IS-P1 to IS-P20).

Table 1: Questions used to assess students' perception about the use of the support card in the HCI classroom.

Item of evaluation | Question | Answer format/options
Engagement with the card | Do you estimate that you consulted the card how many times throughout the HCI discipline? | (a) None; (b) 1 to 5 times; (c) 6 to 10 times; (d) 10 to 15 times; (e) More than 15 times
Engagement with the card | In what format did you use the card? | (a) digital format (e.g., smartphone, tablet, computer); (b) physical format (e.g., printed on paper)
Usefulness in the HCI discipline | Would you recommend the card to other HCI students? | Yes/No answer
Usefulness in the HCI discipline | Do you think the card should continue to be made available to students and used in the next classes of the HCI discipline? | Yes/No answer, with an open-ended justification
Usefulness in the HCI discipline | How much do you agree with the statement "The card summarises the main topics studied in the HCI discipline"? | Five-point Likert scale answer [16]
Usefulness beyond the HCI discipline | Was the card useful for you in any professional activity? (such as in the company in which you work, in an internship, or something equivalent) | Yes/No answer, with an open-ended justification
Usefulness beyond the HCI discipline | Has the card been helpful to you in any discipline other than the HCI discipline? | Yes/No answer, with an open-ended justification
Usefulness beyond the HCI discipline | Do you think the card will be of any use to you after the completion of the HCI discipline? | Yes/No answer, with an open-ended justification
Suggestion of improvement | Is there anything about HCI that you do not have on the card and that you believe should be added to the card? If yes, give us an example. | Open-ended response
Suggestion of improvement | Do you have anything else to tell us about the card or about its use in the classroom? If yes, please write below. | Open-ended response

4.3 Results
The usefulness of the support card in HCI education. Students used and recommended the card. A total of 12 (33%) students used the card in physical format (e.g., printed on paper), 20 (53%) students used the card in digital format, and 5 (14%) students used it in both formats. A total of 11 (30%) students used the card 10 or more times throughout the semester. All 37 students (100%) answered that they would recommend the card to other HCI students. Altogether, 36 students (97%) suggested that the support card should continue to be used in the HCI discipline. Of these, 29 (78%) students provided some justification for their suggestion. Some of these justifications are detailed below.

• Students explain that the support card allows for a quick search and exploration of the topics of the discipline. For example, students declare that: "The card helps a lot to absorb the contents of the discipline, besides allowing consulting the content quickly" (IS-P3); "It assists in quick consultation of rules, evaluation methods, etc. in the process of evaluation and construction of the system, serving a bit like a mental map for the exam" (IS-P15); "The card serves as a basic guide to remember the topics" (SE-P7); and "It helps me in connecting the idea to the name; sometimes I remember the content, but I forget the name, the card helps a lot" (SE-P16).

• Students explain that the support card makes it easy to study and organise the topics of the discipline. For example, students declare that: "The card helps at the time of the studies making it easy to recall the contents" (IS-P14); "It allows us to remember the content and even to better summarise the contents at the time of studying, highlighting the main subjects within HCI discipline" (IS-P9); "It is useful to organise content in my mind" (SE-P1); and "The card is great for disciplines like HCI - full of details, rules and keywords" (SE-P6).
From the teacher's perspective, one effect of using the support card in the classroom is that it arouses curiosity and allows for a permanent engagement with HCI topics. At the beginning of the semester, when the card is made available, students explore the entire card and are curious about the topics on it (what is this? when will we study this?). During the semester, when handling the card in the classroom, the student sees, reads, and thinks about small elements of the whole discipline. This constant contact produces a permanent engagement with the HCI topics.

The usefulness of the support card beyond the HCI classroom (in other undergraduate disciplines or professional activities). Students indicate that they used the card in professional activities. For example, the IS-P9 student says: "When I am developing a user interface, I try to remember the concepts of HCI, such as gestalt principles, and the card helps me to remember". Other justifications are associated with knowledge gained in the classes, for example: "I showed some errors when developing the ERP [Enterprise Resource Planning] where I work" (IS-P12); "In a development team, at the company, we discuss about aesthetic scenarios" (IS-P14); and "[I consider the topics] in the use of colours for an application" (IS-P11).

Students also used the card to remember and apply HCI topics in other disciplines of the course, such as the following: Interdisciplinary Software Work IV (SE-P3 and SE-P6); Software Development Laboratory (SE-P6); Completion of Course Work (SE-P10 and SE-P16); and Software Testing (IS-P7). The student IS-P7 explains: "the card is useful in Software Testing because it has concepts that help in understanding the point of view of a user". The card summarises some evaluation methods, such as Nielsen's Heuristic Evaluation and the System Usability Scale (SUS), making it easier to associate HCI topics with software testing in general. The use of the card by students in other disciplines can go beyond the semester in which they are studying HCI: "At the end of the course, the slide is rarely consulted, but the card can be" (IS-P8).

Suggestions of improvements to the card. Two students provided suggestions about the layout of the card: "I think a point to change would be the subjects being in order of teaching in the classroom" (SE-P3), and "Should have a pocket version" (IS-P12). This type of suggestion is important to understand how the student would like the card to be, how to improve the card in future versions, and how to avoid misunderstandings about the card's purpose.

Limitations.
Our proposal and results have limitations that should be highlighted. Although the card has been used in classes of two different courses (Information Systems and Software Engineering), it is unknown whether students' perception would be similar in HCI disciplines taught as part of other courses, such as Computer Engineering, Art, Digital Games Development, and Computer Science. As the support card is an auxiliary resource of optional use, it may be used only by those students who find its use beneficial.

5 DISCUSSIONS AND CONCLUSIONS
Support cards are a pedagogical resource widely used in many disciplines. In this work, we report the experience of creating and using a support card in HCI classes in the Information Systems and Software Engineering courses. We discuss the main requirements of the support card, how to define the topics to be covered on the card, how to organise the topics on the card, and how to use the card in the classroom. Our results show the importance of the support card as a didactic resource in HCI education. It helps the students (1) to follow the lessons, (2) to integrate the different topics of the course, and (3) to make an interdisciplinary application of the studied topics easier. The card works as one of the inputs for students to build their cognitive maps, mind maps, and concept maps when studying human-computer interaction. It stimulates students' curiosity and engagement with HCI topics. The use of the card goes beyond the HCI discipline; it is also used by students in their professional activities and in other academic disciplines.

One cannot expect a single and definitive support card for the HCI field. HCI is a dynamic field, so the card should be continually adapted to follow such dynamics and the needs of each course. Thus, the card is a dynamic instrument, changing with the context of the course in which it is used. The process described in this paper can be followed to create and adapt a support card to each context. To be effective, the support card should also be a "student-oriented" card, being continually adapted to students' needs. Feedback from students should be continually obtained and considered so that the card remains effective over time. Finally, the card is complementary; it adds to the many other pedagogical resources used in HCI education.

We suggest the following questions to be investigated in future work. What is the most effective method for establishing which syllabus topics should be highlighted on the card? Can interaction features make the digital use of the card even more engaging? It is also relevant to create a card totally focused on the practical work of designing and evaluating interactive systems; doing so will require a less pedagogical and more market-oriented approach. Future work can also investigate the use of support cards beyond the HCI teaching-learning process. A theoretical and general understanding of the use of this type of artefact is desired. It may be useful in many situations where a human being performs tasks that require remembering or processing an amount of information that goes beyond the limits of the cognitive system [25], such as some human intelligence or human computation tasks [21]. We hope this work will lead to new studies in this direction.

REFERENCES
[1] Ann Anderson. 1995. Creative use of worksheets: Lessons my daughter taught me. Teaching Children Mathematics 2, 2 (1995), 72–80.
Integrating knowledge, skills and attitudes: Conceptualising learning processes towards vocational competence. Educational Research Review 6, 2 (2011), 125 – 134. [3] Emily M Balkin, Katherine Ort, Robert Goldsby, Jessica Duvall, and Cynthia D Kim. 2017. Pocket reference card improves pediatric resident comfort in caring for children at end of life. Journal of palliative medicine 20, 4 (2017), 409–414. [4] Simone Barbosa and Bruno Silva. 2010. Interação Humano-Computador. Elsevier Brasil, Rio de Janeiro, RJ, Brazil. [5] Clodis Boscarioli, Luciana Zaina, Sílvia Bim, Simone Barbosa, and Milene Silveira. 2016. HCI Education in Brazil from the Results of the Workshop on Teaching of HCI. In 15th Brazilian Symposium on Human Factors in Computing Systems. ACM, USA, 52:1–52:4. [6] Nathalino P. Britto, Maria Elizabeth S. Furtado, and Rafaela P. L. Cardoso. 2018. Uma Estratégia para Institucionalização de Iniciativas para Interdisciplinaridade de IHC aplicada ao Ensino de Programação. In Anais Estendidos do XVII Simpósio Brasileiro sobre Fatores Humanos em Sistemas Computacionais, Workshop sobre Educação em IHC. SBC, Porto Alegre, RS, Brazil, 1–6. [7] Heinz Cassebaum and George B. Kauffman. 1971. The Periodic System of the Chemical Elements: The Search for Its Discoverer. Isis 62, 3 (1971), 314–327. [8] Elizabeth Churchill, Anne Bowser, and Jennifer Preece. 2013. Teaching and learning human-computer interaction: past, present, and future. Interactions 20, 2 (2013), 44–53. [9] David I Cone. 2003. Benefits of a “Cheat Sheet”. The Physics Teacher 41, 9 (2003), 509–510. [10] Elton José da Silva and Hugo Eduardo Ziviani. 2018. Desenho e Música no Ensino de IHC: relato de experiência de uma aula sobre conceitos básicos da Engenharia Semiótica. In Anais Estendidos do XVII Simpósio Brasileiro sobre Fatores Humanos em Sistemas Computacionais, Workshop sobre Educação em IHC. SBC, Porto Alegre, RS, Brazil, 1–6. [11] Adriano Luiz de Souza Lima and Fabiane Barreto Vavassori Benitti. 2019. Let’s Talk About Tools and Approaches for Teaching HCI. In Learning and Collaboration Technologies. Designing Learning Experiences, Panayiotis Zaphiris and Andri Ioannou (Eds.). Springer International Publishing, Cham, 155–170. [12] Clive L Dym, Alice M Agogino, Ozgur Eris, Daniel D Frey, and Larry J Leifer. 2005. Engineering design thinking, teaching, and learning. Journal of engineering education 94, 1 (2005), 103–120. [13] Diego Fontdevila. 2017. Tales from an Agile Journey: Designing Curricula for Millennials in Industry and Academia. In 1st International Workshop on Software Engineering Curricula for Millennials. IEEE, USA, 2–2. [14] Thomas T Hewett, Ronald Baecker, Stuart Card, Tom Carey, Jean Gasen, Marilyn Mantei, Gary Perlman, Gary Strong, and William Verplank. 1992. ACM SIGCHI curricula for human-computer interaction. ACM, New York, NY, USA. [15] Michael James. 2010. Scrum reference card. https://cs.anu.edu.au/courses/ comp3120/public_docs/CollabNetScrumReferenceCard.pdf [16] Rensis Likert. 1932. A technique for the measurement of attitudes. Arch Psych 140, 22 (1932), 55. [17] D Scott McCrickard, Christa M Chewar, and Jacob Somervell. 2004. Design, science, and engineering topics?: teaching HCI with a unified method. ACM SIGCSE Bulletin 36, 1 (2004), 31–35. [18] Maria Augusta Vieira Nelson, Rommel Vieira Carneiro, and Marco Rodrigo Costa. 2017. Interdisciplinary Software Projects As an Active Methodology to Practice for the Profession. In 1st International Workshop on Software Engineering Curricula for Millennials. 
IEEE Press, USA, 28–32. [19] Jakob Nielsen and Rolf Molich. 1990. Heuristic Evaluation of User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 249–256. [20] Lesandro Ponciano. 2018. Debate Estruturado: Uma Estratégia Pedagógica para Ensino e Aprendizagem de Valores Humanos em Interação Humano-Computador. In Anais Estendidos do XVII Simpósio Brasileiro sobre Fatores Humanos em Sistemas Computacionais, Workshop sobre Educação em IHC. SBC, RS, Brazil, 1–6. [21] Lesandro Ponciano, Francisco Brasileiro, Nazareno Andrade, and Lívia Sampaio. 2014. Considering human aspects on strategies for designing and managing distributed human computation. Journal of Internet Services and Applications 5, 1 (2014), 10. [22] Yvonne Rogers, Helen Sharp, and Jenny Preece. 2011. Interaction Design: Beyond Human-Computer Interaction (3rd ed.). John Wiley & Sons, New Jersey, US. [23] Per Runeson, Martin Host, Austen Rainer, and Bjorn Regnell. 2012. Case study research in software engineering: Guidelines and examples. John Wiley & Sons, New Jersey, US. [24] Tom Short. 2004. R Reference Card. https://cran.r-project.org/doc/contrib/ Short-refcard.pdf [25] Herbert A Simon. 1972. Theories of bounded rationality. Decision and organization 1, 1 (1972), 161–176. [26] Bob Stein. 2000. Web Design Color Reference Card. https://www.amazon.com/ Design-Color-Reference-Card-Chart/dp/0967826314 [27] Lauren Wilcox, Betsy DiSalvo, Dick Henneman, and Qiaosi Wang. 2019. Design in the HCI Classroom: Setting a Research Agenda. In Proceedings of the 2019 on Designing Interactive Systems Conference (DIS ’19). ACM, NY, USA, 871–883. [28] Jed R. Wood and Larry E. Wood. 2008. Card Sorting: Current Practices and Beyond. J. Usability Studies 4, 1 (Nov. 2008), 1–6.
ai_researcher
1
Collaborative_Safe_Formation_Control_for_Coupled_Multi-Agent_Systems.pdf
COLLABORATIVE SAFE FORMATION CONTROL FOR COUPLED MULTI-AGENT SYSTEMS 4 2 0 2 r p A 2 ] C O . h t a m [ 3 v 6 5 1 1 1 . 1 1 3 2 : v i X r a Brooks A. Butler Elmore Family School of Electrical and Computer Engineering Purdue University [email protected] Chi Ho Leung Elmore Family School of Electrical and Computer Engineering Purdue University [email protected] Philip E. Paré ∗ Elmore Family School of Electrical and Computer Engineering Purdue University [email protected] ABSTRACT The safe control of multi-robot swarms is a challenging and active field of research, where common goals include maintaining group cohesion while simultaneously avoiding obstacles and inter-agent collision. Building off our previously developed theory for distributed collaborative safety-critical control for networked dynamic systems, we propose a distributed algorithm for the formation control of robot swarms given individual agent dynamics, induced formation dynamics, and local neighbor- hood position and velocity information within a defined sensing radius for each agent. Individual safety guarantees for each agent are obtained using rounds of communication between neighbors to restrict unsafe control actions among cooperating agents through safety conditions derived from high-order control barrier functions. We provide conditions under which a swarm is guaranteed to achieve collective safety with respect to multiple obstacles using a modified collaborative safety algorithm. We demonstrate the performance of our distributed algorithm via simulation in a simplified physics-based environment. 1 Introduction Nature has always inspired scientists and engineers to design elegant solutions for real-life problems. One of the nature-inspired ideas in the field of automatic control comes from the observation that collective behavior in nature is often governed by relatively simple interactions among individuals (Strogatz [2004]). The set of collaboration rules introduced by Reynolds [1987] is one of the early attempts in the literature to describe collective formation behavior in the animal kingdom. In more recent years, multi-agent formation problem has received special attention in robotics and automatic control due to its broad range of applications and theoretical challenges. While it is impossible to exhaustively categorize every formation control-related research, we can organize them in terms of the fundamental ideas behind the control schemes (Beard et al. [2001], Reynolds [1987]), sensing capability and interaction topology of the formation controller (Oh et al. [2015]), and the formation control-induced problems of interest such as the consensus reaching problem (Ren and Cao [2010]). ∗This work was partially funded by Purdue’s Elmore Center for Uncrewed Aircraft Systems and the National Science Foundation, grant NSF-ECCS #2238388. Collaborative Safe Formation Control Some generalizations of the formation control-induced problems also find their application in other multi-agent cyber- physical systems outside the robotic community. Some examples of critical multi-agent model applications include the mitigation of epidemic-spreading processes (Paré et al. [2020], Butler and Paré [2023a]), smart grid management (Tuballa and Abundo [2016]), and uncrewed aerial drone swarms (Tahir et al. [2019]). 
Since many of these multi-agent cyber-physical systems have become ubiquitous in modern society, effective and safe operation in multi-agent systems is crucial, as disruptions in these interconnected systems can potentially have far-reaching societal and economic consequences. Theoretical frameworks and techniques from the study of safety-critical control are natural solutions to the problem of collaborating safety requirements in the multi-agent formation problem. Foundational work on safety critical control can be traced back to the 1940s (Nagumo [1942], Blanchini [1999]). Recently, the introduction and refinement of control barrier functions (CBFs) (Ames et al. [2016, 2019]) has induced new excitement in the field of safety-critical control. Since their introduction, control barrier functions have been used in numerous applications to provide safety guarantees in various dynamic system models (Chung et al. [2018], Wang et al. [2017]). Moreover, multiple recent studies have reported CBFs’ practicality and theoretical soundness in solving the multi-agent obstacle avoidance problem (Wang et al. [2017], Santillo and Jankovic [2021], Jankovic and Santillo [2021]). In this paper, we extend the work in Butler and Paré [2023b] to design a non-intrusive collaborative safety filter for formation control with online obstacle avoidance guarantees. The problem formulation and analysis are performed under the formality of CBFs. The collaborative safety filter is realized by a novel communication algorithm wherein agents share their maximum safety capability within their neighborhood in the formation. The maximum safety capability is computed from each agents’ local distance-based sensor data, and therefore, is flexible for a wide range of real-life implementation scenarios. We show in simulation that rounds of communication between agents terminate in finite time with consensus on the desired collaboratively safe control action if the underlying centralized constraint optimization problem is feasible. The proofs for all lemmas and theorems presented in this work can be found in the full version of this paper at Butler et al. [2024]. We organize the remainder of this paper as follows. We introduce some preliminaries for networked models and safety-critical control in Section 2, then formally define the safe formation control problem in Section 3. We then present our proposed method for safe formation control through active collaboration via communication in Section 4 and illustrate these results with a simplified two-dimensional formation control example in Section 5. 1.1 Notation Let |C| denote the cardinality of the set C. R and N are the set of real numbers and positive integers, respectively. Let C r denote the set of functions r-times continuously differentiable in all arguments. We define ∥ · ∥2 and ∥ · ∥∞ to be the two-norm and infinity norm of a given vector argument, respectively. We notate 0 and 1 to be vectors of all zeros and all ones, respectively, of the appropriate size given by context and [v]k to be the kth element of vector v. A monotonically increasing continuous function α : R+ → R+ with α(0) = 0 is termed as class-K. We define [n] ⊂ N to be a set of indices {1, 2, . . . , n}. We define the Lie derivative of the function h : RN → R with respect to the vector field generated by f : RN → RN as Lf h(x) = ∂h(x) ∂x f (x). 
(1) We define higher-order Lie derivatives with respect to the same vector field f with a recursive formula (Röbenack [2008]), where k > 1, as Lk f h(x) = ∂Lk−1 f h(x) ∂x f (x). (2) We compute the Lie derivative of h along the vector field generated by f and then along the vector field generated by g as ∂ ∂x (cid:18) ∂h(x) ∂x (cid:19) f (x) g(x). (3) LgLf h(x) = 2 Preliminaries We define a networked system using a graph G = (V, E), where V is the set of n = |V| nodes, E ⊆ V × V is the set of edges. Let Ni be the set of all neighbors with an edge connection to node i ∈ [n], where Ni = {j ∈ [n] \ {i} : (i, j) ∈ E}. (4) We further define xi to be the state vector for agent i ∈ [n], xNi to be the concatenated states of all neighbors to agent i, i.e. xNi = (xj, ∀j ∈ Ni), and x to be the full state of the networked system. 2 Collaborative Safe Formation Control Recall the definition of high-order barrier functions (HOBF) (Xiao and Belta [2019, 2021]), where we define a series of functions in the following form ψ0 i (x) := hi(x) i (x) := ˙ψ0 ψ1 ... i (x) := ˙ψk−1 ψk i i (x) + α1 i (ψ0 i (x)) (x)) (5) (x) + αr i (ψk−1 i i (·), . . . , αk where α1 corresponding series of sets i (·), α1 i (·) denote class-K functions of their argument. These functions provide definitions for the i (x) ≥ 0} i (x) ≥ 0} i := {x ∈ RN : ψ0 C1 i := {x ∈ RN : ψ1 C2 ... i := {x ∈ RN : ψk−1 Ck i (x) ≥ 0} (6) which yield the following definition. Definition 1. Let C1 node i ∈ [n] if hi ∈ C k and there exist differentiable class-K functions α1 x ∈ (cid:84)k i , . . . , Ck i , C2 i be defined by (5) and (6). We have that hi is a node-level barrier function (NBF) for i (x) ≥ 0 for all i such that ψk i , . . . , αk i , α2 r=1 Cr i . This definition leads naturally to the following lemma (which is a direct result of Theorem 4 in Xiao and Belta [2019]). Lemma 1. If hi is an NBF, then (cid:84)k i is forward invariant. r=1 Cr 3 Safe Formation Control Problem In this section, we define a general version of the safe formation control problem with respect to applying a safety filter to control actions that affect individual agent behavior governed by assumed formation dynamics. For the sake of notational brevity, we use x, the full state of the network, and (xi, xNi), the concatenated states of agents in the neighborhood centered on agent i ∈ [n], interchangeably moving forward. Consider the first-order dynamics for a single agent i ˙xi = fi(xi) + gi(xi)ui (7) where ui ∈ Ui ⊂ RMi is some form of affine acceleration controller for agent i. Let uf i (xi, xNi) be a distributed feedback control law that induces some formation behavior. We can treat these formation dynamics as part of the natural dynamics of the system where uf i (xi, xNi) is modified by some safety filter control law as ˙xi = fi(xi) + gi(xi)(uf i (xi, xNi) − us i ) i is a modification to the formation control signal to ensure agent safety. We can then rewrite the dynamics in ˙xi = ¯fi(xi, xNi) + ¯gi(xi)us i ¯fi(xi, xNi) = fi(xi) + gi(xi)uf i (xi, xNi ) ¯gi(xi) = −gi(xi). (8) (9) (10) where us (7) as where and We assume each agent has positional safety constraints with respect to a given obstacle o ∈ Oi(t), where Oi(t) is the set of identifiers for obstacles within the sensing range of agent i at time t. Note that other agents within the sensing range of agent i at time t will also be included in Oi(t), which does not change the computation of the first-order safety condition. 
However, if agents in Oi(t) are also in Ni, then the expression for the second-order safety condition for inter-agent collision avoidance with respect to the defined formation dynamics, which will be explained in further detail in Section 4, must also incorporate partial derivatives with respect to xj for j ∈ Ni. For convenience, we drop the notation of time dependence on Oi moving forward. We define the set of viable safety filter control actions as i ∈ Ui : uf i (x) = {us U s i (x) − us i ∈ Ui}. (11) 3 Collaborative Safe Formation Control In this paper, we assume safety conditions for each agent are defined with respect to the relative position of agents to obstacles. Therefore, since control is implemented through acceleration, we construct a higher-order barrier function for each agent i with respect to a given obstacle o as follows ϕ0 i,o(xi, xo) = hi(xi, xo) i,o(xi, xo) = ˙ϕ0 ϕ1 i,o(xi, xo) + α0 i (ϕ0 i,o(xi, xo)) where xo is the state of obstacle o ∈ Oi. These functions then define the corresponding safety constraint sets i,o := {(xi, xo) ∈ RNi × RNo : ϕ0 C1 i,o := {(xi, xo) ∈ RNi × RNo : ϕ1 C2 i,o(xi, xo) ≥ 0} i,o(xi, xo) ≥ 0}. (12) (13) Given the definition of these constraint sets, we can define an agent-level control barrier function and subsequent forward invariant properties as follows. Definition 2. We have hi,o(xi, xo) is an agent-level control barrier function (aCBF) if for all (xi, xo) ∈ C1 i ∈ U s and t ∈ T there exists a class-K function α1 i (x) such that i,o ∩ C2 i,o i and us ˙ϕ1 i,o(x, xo, uf i (x), us i ) + α1 i (ϕ1 i,o(xi, xo)) ≥ 0. (14) We see that (14) characterizes the first-order safety condition for agent i with respect to obstacle o since the acceleration control input appears in the second derivative of hi,o, which is computed in ˙ϕ1 i,o. This barrier function definition naturally leads to the following result on agent-level safety. Lemma 2. If hi,o(xi, xo) is an aCBF, then C1 i,o is forward invariant for all t ∈ T . i,o ∩ C2 Proof. If hi,o is an aCBF, then ∃us i ∈ U s i such that ˙ϕ1 i,o(x, xo, uf i (x), us i ) + α1 i (ϕ1 i,o(xi, xo)) ≥ 0 for all (xi, xo) ∈ C1 ˙ϕ1 i,o(x, xo, uf Lemma 1, it naturally follows that is C1 i,o ∩ C2 i (x), us i ) ≥ 0. Therefore, if (xi(t0), xo(t0)) ∈ C2 i,o ∩ C2 i,o forward invariant. Thus, as ϕ1 i,o. i,o(xi, xo) approaches zero there will be some us i such that i,o is forward invariant for all t ∈ T . By i,o then C2 With agent-level control barrier functions defined, we are now prepared to state our formal problem for this work, which is defined in our notation as follows: min i ∈U s us i (x) 1 2 i (xi, xNi ) − us i (cid:13) (cid:13) (cid:13) 2 2 (cid:13) (cid:13)uf (cid:13) (cid:16) s.t. ˙ϕ1 i,o x, xo, uf i , us ∀i ∈ [n], ∀o ∈ Oi. i (cid:17) + α1 i (cid:0)ϕ1 i,o(xi, xo)(cid:1) ≥ 0 (15) In words, we aim to provide a control policy that minimally alters the prescribed distributed formation control signal such that the defined safety conditions for obstacle avoidance are satisfied for all agents in the formation. We present our solution to this problem in the following sections, where in Section 4 we define a barrier function candidate based on the relative positions of agents to obstacles and leverage previous work in Butler and Paré [2023b] to define a second-order safety condition that includes the effect of neighbor’s formation dynamics on agent safety. 
We then modify the collaborative safety algorithm from Butler and Paré [2023b] in Section 4.1 to account for communicating safety needs with multiple safety constraints and provide a method for computing the maximum agent safety capability in Section 4.2, culminating in a modified collaborative safety algorithm for formation control in Section 4.3. We then demonstrate our collaborative safety algorithm in simulation on a distributed formation controller for two-dimensional agents in Section 5. 4 Safe Formation Control with Collaboration We now present a method by which each agent can communicate safety needs to its neighboring agents to achieve collective safety in a distributed manner. We define a relative position safety constraint for each agent with respect to a 4 Collaborative Safe Formation Control given obstacle as follows. Let pi and po be the position of agent i ∈ [n] and obstacle o ∈ Oi, respectively. We define a position based safety constraint as (16) where ri,o ∈ R is the minimum distance agent i should maintain from obstacle o. Assuming control inputs on the acceleration of agent i, we use the second-order barrier functions candidate from (12) to define the first derivative safety condition hi,o(xi, xo) = ∥pi − po∥2 2 − r2 i,o ˙ϕ1 i,o(x, xo, us i ) = L ¯fiϕ1 i,o(x, xo) + L¯giϕ1 i,o(xi, xo)us i . If we define the next high-order barrier function as i ) = ˙ϕ1 i,o(x, xo, us ϕ2 i,o(x, xo, uf i , us i ) + α1 i (ϕ1 i,o(xi, xo)) (17) (18) and Φi,o(x, xo, us (19) we begin to see neighbor dynamics and the subsequent effect of neighbor control actions in the higher-order derivative expressions. A more detailed discussion on the derivation of (19) may be found in Butler and Paré [2023b]; however, for our purposes, we separate (19) into terms that are affected by neighbor control and those that are not affected by neighbor control as follows i )), i (ϕ2 i,o(x, xo, us i,o(x, xo, us i , us Ni i , us Ni ) + α2 ) = ˙ϕ2 Φi,o(x, xo, us i , us Ni ) = (cid:88) j∈Ni aij,o(x, xo)us j + ci,o(x, xo, us i ) (20) where aij,o(x, xo) = L¯gj L ¯fiϕ1 (21) is the effect that modified control actions us j taken by agent j ∈ Ni have on the formation dynamics and the subsequent safety condition of agent i with respect to obstacle o ∈ Oi and ci,o(x, xo, us i ) collects all other terms including those that are affected by its own control actions us i . Note that if neighbors in Ni also implement control through acceleration inputs then it is possible for aij,o = 0Mi since control inputs for neighbors do not appear until the next order barrier function. In this case, we can circumvent the need to compute unnecessary derivatives by having agents communicate safety needs in terms of velocity constraints, which may be used to approximate acceleration constraints locally for each agent. We will give an example of how this approximation may be done in practice in Section 5. i,o(x, xo) To compute ci,o more explicitly, we make the following assumption, Assumption 1. Let α1 i (z) := α1 i (z) := α2 i z and α2 i z where z ∈ RNi and α1 i , α2 i ∈ R>0 and define βi = α1 i + α2 i . This assumption yields the full expression of ci,o as ci,o(x, xo, us i ) = (cid:88) L ¯fj L ¯fi ϕ1 i,o(x, xo) + L2 ¯fi i,o(x, xo) + α1 ϕ1 i α2 i ϕ1 i,o(xi, xo) + βiL ¯fi ϕ1 i,o(xi, xo) + L¯gi ϕ1 i,o(xi, xo) ˙us i j∈Ni + us⊤ i L2 ¯gi ϕ1 i,o(x, xo)us i + βiL¯gi ϕ1 i,o(xi, xo)us i + (cid:104) L ¯fi L¯gi ϕ1 i,o(xi, xo)⊤ + L¯gi L ¯fi ϕ1 (cid:105) i,o(xi, xo) us i . 
(22) i ) < 0, then agent i is incapable of remaining safe given us i ) ≥ 0, then agent i is capable of remaining safe given us We may interpret (22) as the total safety capability of agent i with respect to avoidance of obstacle o ∈ Oi, where if ci,o(x, xo, us i (assuming no negative action effects of neighbors). Conversely, if ci,o(x, xo, us i and will require assistance from its neighbor’s actions. Note that in the case where xo = xj for some j ∈ Ni, the computation of (19) must also incorporate the dependence of xj in ∥pi − pj∥ when evaluating the Lie derivatives in both (21) and (22). Given our definition of a subsequent higher-order barrier function in (18), we define another safety constraint set as i s.t. ˙ϕ1 which collects all states where agent i is capable of maintaining its first-order safety condition under the influence of its induced formation dynamics. Given these definitions, we are prepared to define a collaborative control barrier function as follows. Definition 3. Let C1 barrier function (CCBF) for node i ∈ [n] if hi,o ∈ C 3 and ∀(xi, xo) ∈ C1 (us i,o be defined by (13) and (23). We have that hi,o is a collaborative control i,o and ∀t ∈ T there exists i,o :=(cid:8)(xi, xo) ∈ RNi × RNo : ∃us C3 i,o(xi, xo)(cid:1) ≥ 0(cid:9) i,o(x, xo, uf i,o, and C3 i ) + α1 i i,o ∩ C3 i,o ∩ C2 i ∈ U s such that i,o, C2 ) ∈ U s i , us (cid:0)ϕ1 (23) i , us (24) Ni Lemma 3. Given a distributed multi-agent system defined by (8) and constraint sets defined by (13) and (23), (cid:84) ) ≥ 0, ∀o ∈ Oi. Φi,o(x, xo, us i,o is forward invariant ∀t ∈ T if hi,o is a CCBF for all o ∈ Oi. i,o ∩ C2 C1 i,o ∩ C3 i × U s Ni i , us Ni o∈Oi 5 Collaborative Safe Formation Control i , us Ni Proof. The results of this lemma are a direct extension of Theorem 2 in Butler and Paré [2023b], where if hi,o is a CCBF for a given obstacle o ∈ Oi then ∃(us ) ∈ U s i appears in both i , us ) and ϕ2 Φi,o(x, xo, us i , us ) ≥ 0 Ni Ni i ∈ U s for some us i , then ϕ2 i,o, then for all xi, xNi, xo i where ϕ2 i ∈ U s and us ) ≥ 0. Thus, we have i,o ∩ C2 that ϕ2 i,o ∩ C3 i,o ∩ C2 i,o(x, xo, us i,o. Therefore, we have that C1 i,o is forward invariant. Further, since these same arguments hold ∀o ∈ Oi, it directly follows that (cid:84) i ), we must show that if (xi, xo) ∈ C1 i,o ∩ C2 i ) ≥ 0 also. If (24) holds for all (xi, xo) ∈ C1 i,o(x, xo, us i ) ≥ 0, ∀(xi, xo) ∈ C1 i,o ∩ C3 i,o ∩ C2 i,o ∩ C2 C1 i,o and Φi,o(x, xo, us i,o ∩ C3 i , us Ni ∈ U s Ni i,o, which implies ϕ1 i,o ∩ C3 i,o ∩ C2 i,o(x, xo, us i (xi, xo) ≥ 0, ∀(xi, xo) ∈ C1 such that (24) holds. Since us i,o(x, xo, us i,o(x, xo, us i ) = 0, there exists us Ni i,o is forward invariant. such that ˙ϕ2 i × U s Ni i,o ∩ C3 i,o ∩ C3 o∈Oi With set invariance defined with respect to neighbor influence, we can leverage these properties to construct an algorithm to implement collaborative safety through rounds of communication between neighbors. 4.1 Multi-Agent Collaboration Through Communication In this section, we introduce the collaborative safety algorithm, modified from our previous work in Butler and Paré [2023b]. The major additional contribution to the algorithm in this work is the additional handling of multiple safety constraints from each agent, which requires a new definition of maximum safety capability with respect to multiple safety conditions. For the formation control problem scenario, we make the following assumptions. Assumption 2. Let us i (t) be piecewise constant ∀t ∈ T . 
This assumption includes zero-hold controllers that implement control decisions in a bang-bang fashion, allowing us to set ˙ui = 0 in the analysis. i,o(xi, xo) = 0Mi×Mi, ∀o ∈ Oi. ϕ1 Assumption 3. Let L2 ¯gi In words, we assume that the control exerted by agent i does not have a dynamic relationship with its ability to exert control (e.g., the robot’s movement is implemented identically no matter its position in a defined coordinate system). Since each agent may be actively avoiding multiple obstacles, we may compute the vector describing the second-order safety condition with respect to each obstacle under Assumptions 1-3 as follows   aij|Ni|,o1(x, xo1 ) ...   aij|Ni|,oK (x, xoK ) us j1 ... j|Ni| us   (cid:125)   L ¯fiL¯giϕ1⊤ io1   +   L ¯fiL¯gi ϕ1⊤ ioK (cid:124) + L¯giL ¯fiϕ1 io1 ... + L¯giL ¯fiϕ1 ioK (cid:123)(cid:122) Bi + βiL¯giϕ1 io1 + βiL¯giϕ1 ioK us i    (cid:125) Φi = aij1,o1 (x, xo1) ... aij1,oK (x, xoK ) · · · . . . · · · (cid:123)(cid:122) Ai L ¯fj L ¯fiϕ1 io1  (cid:80) j∈Ni    (cid:124) +   (cid:124) (cid:80) j∈Ni L ¯fj L ¯fi ϕ1 ioK + L2 ¯fi + L2 ¯fi io1 ϕ1 ... ϕ1 ioK (cid:123)(cid:122) qi + α1 i α2 i ϕ1 io1 + α1 i α2 i ϕ1 ioK + βiL ¯fiϕ1 io1 + βiL ¯fiϕ1 ioK    (cid:125) (25) where K = |Oi(t)| is the number of obstacles within the sensing range of agent i at time t. Note our early remark that other agents in the formation within the sensing range of agent i will also be included in this vector to account for inter-agent collision avoidance. Further, note that the length of this vector is time-varying according to |Oi(t)|. We can express (25) more compactly as where Ai ∈ RK×MNi , with MNi = (cid:80) result on its relationship to the problem stated in (15). Lemma 4. Under Assumptions 1-3, any set of agent control inputs us j∈Ni i ∈ U s i , ∀i ∈ [n] that satisfies Φi(x, xo, ∀o ∈ Oi) = Aius Ni (26) Mj, Bi ∈ RK×Mi, and qi ∈ RK. Under (25), we have the following i + qi + Bius from (26) are also a solution to Φi(x, xo, us i , us Ni , ∀o ∈ Oi) ≥ 0; ∀i ∈ [n] ˙ϕ1 i,o(x, xo, uf i , us i ) + α1 i from (15). (cid:0)ϕ1 i,o(xi, xo)(cid:1) ≥ 0; ∀i ∈ [n], ∀o ∈ Oi 6 (27) (28) Collaborative Safe Formation Control i,o(x, xo, us i ∈ U s Proof. By the proof of Lemma 3, i ) = i , i,o(xi, xo)(cid:1) ≥ 0 also. Thus, since (27) implies that Φi,o(x, xo, us ˙ϕ1 i,o(x, xo, uf i , us i ) + α1 ) ≥ 0, ∀o ∈ Oi, then i Ni by Assumptions 1-3, which simplify the expression of (20) by selecting scalar class-K functions by Assumption 1, setting L¯giϕ1 i = 0 by Assumption 2, and setting us⊤ i = 0, ∀o ∈ Oi by Assumption 3, the set of agent control inputs that satisfy (27) must also satisfy (28). ) ≥ 0 for some us if Φi,o(x, xo, us i,o(x, xo)us ϕ1 i,o(xi, xo) ˙us then ϕ2 i , us Ni i L2 ¯gi i , us (cid:0)ϕ1 We now describe the collaborative safety algorithm and how it may be used to communicate safety needs to neighboring agents in the formation control problem. See Butler and Paré [2023b] for a more detailed discussion on the construction of the collaborative safety algorithm with respect to a single safety condition for each agent. The central idea of this algorithm involves rounds of communication between agents, where each round of communication between agents, centered on an agent i ∈ [n], involves the following steps: 1. Receive (send) requests from (to) neighbors in Ni 2. Process requests and determine needed compromises 3. Send (receive) adjustments to (from) neighboring nodes in Ni. 
s i ⊆ U s The end result of this algorithm will be some set of constrained allowable filtered actions for each agent U i , where any safe action selected from this set will also be safe for all neighbors in Ni. In order to determine what requests should be made of neighbors, each agent must compute its maximum safety capability with respect to the second-order safety condition as defined by (19). However, since the safety capability of agent i with respect to multiple obstacles is represented as a vector, rather than a scalar value for a single condition (Butler and Paré [2023b]), we must carefully define the maximum safety capability for agents in the context of formation control with multiple obstacles. Therefore, in the following section, we define a method for determining the vector of maximum capability for agent i with respect to multiple obstacles and how this information may be used to communicate its safety needs to neighbors. 4.2 Maximum Capability Given Multiple Obstacles To define the maximum capability of an agent i with respect to multiple obstacles, we begin by making the following assumption. Assumption 4. Let U s i be a non-empty convex set which is defined by U s i ∈ RMi : Gius i − li ≤ 0}. i = {us In order to determine the “safest" action agent i may take given multiple obstacles, we want to choose the action us maximizes the minimum entry of the vector Bius problem: i that i from (26), which is defined by the following max-min optimization max i ∈U s us i min 1≤k≤|Oi| [Bius i ]k. (29) This problem characterizes the optimal control strategy u∗ i that attempts to satisfy the safety constraint (20) imposed on agent i for each obstacle o ∈ Oi that is at most risk of being violated (or being violated the worst). We can reduce (29) to a linear programming problem: min ξi s.t. d⊤ξi (cid:20)0 Gi 1 −Bi (cid:21) ξi − (cid:21) (cid:20)li 0 ≤ 0, (30) where d⊤ = (cid:2)−1 0⊤ Mi strategy u∗ (cid:3), ξ⊤ i = (cid:2)γi u⊤ i (cid:3), and γi ∈ R is a scalar that captures the performance of the optimal i . The next proposition formally characterizes the equivalency of Problem (29) and Problem (30). Proposition 1. Given Assumptions 1-4, the optimal solution of (29): u∗ i = arg max us i ∈U s i γ∗ i = max i ∈U s us i min 1≤k≤|Oi| [Bius i ]k min 1≤k≤|Oi| [Bius i ]k exists if and only if there exists an optimal solution in (30), ξ(∗) i = (cid:104) γ(∗) i ⊤(cid:105)⊤ u(∗) i 7 , and γ(∗) i = γ∗ i , u(∗) i = u∗ i . Collaborative Safe Formation Control Proof. We first notice that the following two optimization problems are equivalent: (cid:26) maxus s.t. i min1≤k≤|Oi| [Bius Gius i ]k i − li ≤ 0, maxus s.t. i ,γi    γi Gius γi ≤ [Bius i − li ≤ 0 i ]k ∀1 ≤ k ≤ |Oi|, (31) (32) i ]k with an achievable lower bound γi on each [Ai(xi)us i ]k ∀1 ≤ k ≤ |Oi|. Furthermore, we can show that (29) is equivalent to (31) by rewriting us by substituting min1≤k≤|Oi| [Ai(xi)us [Ai(xi)us as an optimization constraint, and similarly, we can show that (30) is equivalent to (32) by setting ξi = (cid:2)γi us⊤ d⊤ = (cid:2)−1 0⊤ Mi the proof. i ]k, that is, γi ≤ i ∈ U s i explicitly (cid:3)⊤ , (cid:3), and realizing that arg max γi = arg min −γi. The transitivity of equivalence relations concludes i Thus, we have a method for computing a vector that represents the maximum capability of agent i with respect to multiple obstacles Oi. 
If γi is negative, then agent i will make a request to its neighboring agents that will limit their s j to those that will satisfy [Φi]k ≥ 0, ∀k ∈ Oi, assuming agent i takes the action u∗ control actions U i . In the following section, we describe how our modified collaborative safety algorithm incorporates this capability vector at a high level. 4.3 Collective Safety Through Distributed Collaboration Given our addition to the collaborative safety algorithm from Butler and Paré [2023b] to incorporate multiple safety constraints, the computation steps and convergence properties of our algorithm remain largely unchanged in Algorithm 1 due to the fact that communication of multiple safety constraints from one neighbor is equivalent to multiple neighbors communicating a single constraint in the computation of control restrictions. Algorithm 1 Modified Collaborative Safety 1: ¯cij ← 0, ∀j ∈ Ni 2: U 3: repeat 4: 5: s i ← U s i ¯ci ← Compute maximum capability by solving (30) δi ← ¯ci − (cid:80) ¯cij, U s i ← Perform SPRU (Butler and Paré [2023b]) 6: 7: until ¯cij remains constant and [δi]k∈[|Oi|] ≥ 0 j∈Ni ¯cij We denote SPRU as an abbreviation of (S)end/receive requests, (P)rocess requests, (R)ecieve/send adjustments, (U)pdate constraints w.r.t. adjustments as detailed in Butler and Paré [2023b]. We yield the following result on the collective safety of a formation under the modified collaborative safety algorithm. Theorem 1. Let Assumptions 1-4 hold for all i ∈ [n]. If Algorithm 1 is convergent and U then (15) yields (cid:84) i,o forward invariant during t ∈ T for all i ∈ [n]. i,o ∩ C2 C1 o∈Oi s i (x(t)) ̸= ∅, ∀i ∈ [n], ∀t ∈ T , Proof. If Algorithm 1 is convergent for all t ∈ T and there exists a non-empty U Lemmas 3 and 4 we have any action taken by any agent from these constrained control sets must render (cid:84) C2 i,o ∩ C3 have by Lemma 2 that (cid:84) i,o forward invariant for all i ∈ [n]. Thus by applying control constraints U i,o is also forward invariant during t ∈ T for all i ∈ [n]. s i (x(t)) for all i ∈ [n], then by C1 i,o ∩ s i (x(t)) to (15) for each agent, we i,o ∩ C2 C1 o∈Oi o∈Oi In words, we have that if Algorithm 1 always terminates with a feasible set of safe actions for all agents, then Theorem 1 guarantees that using (15) to choose safe actions for individual agents renders all agents safe for all time. Note that (15) filters the agent’s actions according to their individual safety constraints, where the set of allowable actions s U i is given by Algorithm 1. It should also be noted that it is not guaranteed for the modified collaborative safety algorithm to converge in the case where there are conflicting requests (either between neighbors or between multiple safety conditions) which may be possible in obstacle-dense environments. Thus, providing conditions under which the collaborative safety algorithm remains convergent under conflicting requests in an important direction for future work. 8 (33) (34) Collaborative Safe Formation Control 5 Application Example We now illustrate the application of our collaborative safety algorithm to the safe cooperative formation control of a simplified two-dimensional agent system and simulate a multi-obstacle avoidance scenario. 
5.1 Virtual Mass-Spring Formation Model Consider a two-dimensional multi-agent system with distributed formation control dynamics defined by a virtual mass-spring model, with xi = [p⃗x i , p⃗y i , v⃗x i , v⃗y i ]⊤  ˙xi = v⃗x i v⃗y i 0 0        +  0 0   1 0    0 0 0 1 (cid:16) uf i (x) − us i (cid:17) where uf i (x) = (cid:35) (cid:34) uf⃗x i uf⃗y i = 1 mi (cid:34)(cid:80) (cid:80) j∈Ni j∈Ni kijsij sin θij − bijv⃗x i kijsij cos θij − bijv⃗y i (cid:35) describes the desired formation behavior of the system, where agents behave as if coupled by mass-less springs with kij and bij being the spring and dampening constants for the virtual spring from agent j to agent i, respectively, and sij = Lij − Rij denoting the stretch length of a given spring connection with resting length Rij and Lij = ∥pi − pj∥2 being the current length of the spring. We compute the ⃗x and ⃗y components of the stretched spring as sin θij = i − p⃗x p⃗x j Lij , cos θij = i − p⃗y p⃗y Lij j . Thus, our induced coupling model then becomes ¯fi(x) =     , ¯gi =     v⃗x i v⃗y i uf⃗x i uf⃗y i       0 0 0 0 −1 0 0 −1 with the first-order safety condition for a given obstacle using the barrier function candidate (16) computed as ϕ1 i,o(xi, xo) = 2 (cid:104) i (p⃗x v⃗x i − p⃗x o ) + v⃗y i (p⃗y (cid:105) i − p⃗y o) + α0 i (hi,o(xi, xo)) which yields the Lie derivatives of the safety condition with respect to the formation dynamics as o ) + uf⃗y i,o(x, xo) = 2v⃗x o)) + uf⃗x o )) + v⃗y i + α0 i − p⃗x i − p⃗x i + α0 i − p⃗y i (p⃗x i (v⃗y i (v⃗x i (p⃗y L ¯fiϕ1 i (p⃗x i (p⃗y i − p⃗y o) and L¯gi ϕ1 i,o(xi, xo) = 2 (cid:2)p⃗x i − p⃗x o p⃗y i − p⃗y o (cid:3) . (35) (36) (37) (38) It should be noted that given this mass-spring network formation control law, when computing the effect of control by agent j on the safety conditions of agent i yields L¯gj L ¯fiϕ1 i,o(xi, xo) = 0Mj (39) since the control input of agent j does not appear until the next derivative of Φi. In order to avoid unnecessary computations of additional partial derivatives, each agent computes the effect of neighboring control as if neighbors directly control their velocities, i.e.,   , ∀j ∈ Ni. (40) ¯gj =     0 −1 0 −1 0 0 0 0 9 Collaborative Safe Formation Control FIGURE 1: The trajectories of a 3-agent formation through an obstacle field, where a leader agent (blue) is given a constant control signal directing it straight through the field. Each agent implements safety filtering according to Algorithm 1 and (15) to avoid obstacles while maintaining a formation behavior, according to (33) and (34). This assumption is non-physical since it would require infinite acceleration for neighbors to achieve such a discontinuous instantaneous jump in velocity. However, if we assume a finite time interval τ > 0 during which our acceleration controller might achieve such a change in velocity, we can approximate the necessary acceleration constraints during that time. In other words, since these terms are used to communicate action limitations on neighbors, we may approximate acceleration limits over a given time interval by simply dividing the velocity constraints by the appropriate time window length. Therefore, if the velocity constraints communicated are then we may compute acceleration constraints for a given time interval τ > 0 as U v = {u ∈ RM : Gu + l ≤ 0}, (cid:26) U a = u ∈ RM : (cid:27) Gu + l ≤ 0 . 
1 τ (41) (42) In real-world applications, this reliance on a known time interval may cause challenges when accounting for communi- cation delays and inconsistent processing and actuation time intervals. 5.2 Simulations We construct an example of multi-obstacle avoidance for a fully connected 3-agent formation where the parameters of the virtual mass-spring system are mi = 0.5, ri,o = 1, Kij = 3, Rij = 3, and bij = 1 for all i ∈ [n], j ∈ Nj, and o ∈ O. Further, we set control magnitude limits for each agent i as Ui = {ui ∈ R2 : ∥ui∥∞ ≤ 15}. We then apply a constant control signal to a single agent which leads the formation through an obstacle field, where the initial and final positions of each agent and their respective trajectories through the obstacle field are shown in Figure 1. Each agent uses the modified collaborative safety algorithm described in Algorithm 1 to communicate its safety needs and accommodate safety requests to and from neighbors, respectively. Each agent then implements a first-order safety filter s on their control actions as described by (15) while incorporating the control constraints U i computed using Algorithm 1. We plot the safety filter control signal including the constant leader control signal for agent 0 in Figure 2, which shows the safety filter control signal us i for both the ⃗x and ⃗y components over time. For a video of this simulation, see https://youtube.com/shorts/aRki-Mbna3w. To view our simulation code, see Butler [2023]. Note that the follower agents in this simulation end up switching positions in the formation as a consequence of the induced spring dynamics for the system. However, it should also be noted that the simplistic nature of the formation dynamics in this example may lead to restricted behavior when encountering obstacles. Therefore, replacing the mass-spring network with more sophisticated formation control protocols will induce smarter formation behavior such as better group path planning, formation coordination, etc. The advantages of the proposed safety filtering algorithm can be leveraged as long as the additional formation control protocols are piece-wise differentiable with respect to xi for all i ∈ [n]. 6 Conclusions In this paper, we have presented a method for applying safety-filtered control to arbitrarily distributed formation control algorithms through active communication of safety needs between neighboring agents in formation. We have modified 10 Collaborative Safe Formation Control ⃗x , s u i ⃗y , s u i FIGURE 2: The safety-filtered control signals for each agent in the ⃗x component (left) and ⃗y component (right) of us i , which are computed using Algorithm 1 and (15), during the traversal of the formation through the obstacle field shown in Figure 1. Note that a constant control signal is given to agent 0 (blue), which is included in the modified control signal. a collaborative safety algorithm from our previous work Butler and Paré [2023b] to account for the communication and processing of multiple safety conditions and shown that, if the algorithm is convergent for all agents, then the formation is guaranteed to remain safe. Directions for future work include an analysis of the convergence for the modified collaborative safety algorithm under conflicting safety requests, as well as incorporating robustness to sources of uncertainty in our conditions for safety guarantees. 
Further, it should be noted that we make no assumptions about the real-time computation and communication of safety requests between neighbors and that the computation load for each agent increases as more obstacles are added to the environment, including other neighboring agents. Therefore, to bring this safety-filtering algorithm to real-time applications, in future work we must consider several real-world challenges in the implementation of active collaboration between communicating agents. References Steven Strogatz. Sync: The Emerging Science of Spontaneous Order. Penguin UK, 2004. Craig W Reynolds. Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, pages 25–34, 1987. Randal W Beard, Jonathan Lawton, and Fred Y Hadaegh. A coordination architecture for spacecraft formation control. IEEE Transactions on Control Systems Technology, 9(6):777–790, 2001. Kwang-Kyo Oh, Myoung-Chul Park, and Hyo-Sung Ahn. A survey of multi-agent formation control. Automatica, 53: 424–440, 2015. Wei Ren and Yongcan Cao. Distributed Coordination of Multi-agent Networks: Emergent Problems, Models, and Issues. Springer Science & Business Media, 2010. Philip E Paré, Carolyn L Beck, and Tamer Ba¸sar. Modeling, estimation, and analysis of epidemics over networks: An overview. Annual Reviews in Control, 50:345–360, 2020. Brooks A. Butler and Philip E. Paré. Optimal safety-critical control of epidemics. IEEE Control Systems Letters, 7: 1819–1824, 2023a. doi:10.1109/LCSYS.2023.3280116. Maria Lorena Tuballa and Michael Lochinvar Abundo. A review of the development of smart grid technologies. Renewable and Sustainable Energy Reviews, 59:710–725, 2016. Anam Tahir, Jari Böling, Mohammad-Hashem Haghbayan, Hannu T Toivonen, and Juha Plosila. Swarms of unmanned aerial vehicles—A survey. Journal of Industrial Information Integration, 16:100106, 2019. Mitio Nagumo. Über die lage der integralkurven gewöhnlicher differentialgleichungen. Proceedings of the Physico-Mathematical Society of Japan. 3rd Series, 24:551–559, 1942. Franco Blanchini. Set invariance in control. Automatica, 35(11):1747–1767, 1999. Aaron D Ames, Xiangru Xu, Jessy W Grizzle, and Paulo Tabuada. Control barrier function based quadratic programs for safety critical systems. IEEE Transactions on Automatic Control, 62(8):3861–3876, 2016. Aaron D Ames, Samuel Coogan, Magnus Egerstedt, Gennaro Notomista, Koushil Sreenath, and Paulo Tabuada. Control barrier functions: Theory and applications. In Proceedings of the 2019 18th European Control Conference (ECC), pages 3420–3431. IEEE, 2019. 11 Collaborative Safe Formation Control Soon-Jo Chung, Aditya Avinash Paranjape, Philip Dames, Shaojie Shen, and Vijay Kumar. A survey on aerial swarm robotics. IEEE Transactions on Robotics, 34(4):837–855, 2018. Li Wang, Aaron D Ames, and Magnus Egerstedt. Safety barrier certificates for collisions-free multirobot systems. IEEE Transactions on Robotics, 33(3):661–674, 2017. Mario Santillo and Mrdjan Jankovic. Collision free navigation with interacting, non-communicating obstacles. In Proceedings of the 2021 American Control Conference (ACC), pages 1637–1643. IEEE, 2021. Mrdjan Jankovic and Mario Santillo. Collision avoidance and liveness of multi-agent systems with CBF-based controllers. In Proceedings of the 2021 60th IEEE Conference on Decision and Control (CDC), pages 6822–6828. IEEE, 2021. Brooks A Butler and Philip E Paré. 
Distributed collaborative safety-critical control for networked dynamic systems. arXiv preprint arXiv:2310.03289, 2023b. Brooks A. Butler, Chi Ho Leung, and Philip E. Paré. Collaborative safe formation control for coupled multi-agent systems. arXiv preprint arXiv:2311.11156, 2024. Klaus Röbenack. Computation of multiple Lie derivatives by algorithmic differentiation. Journal of Computational and Applied Mathematics, 213(2):454–464, 2008. Wei Xiao and Calin Belta. Control barrier functions for systems with high relative degree. In Proceedings of the 58th Conference on Decision and Control (CDC), pages 474–479, 2019. Wei Xiao and Calin Belta. High-order control barrier functions. IEEE Transactions on Automatic Control, 67(7): 3655–3662, 2021. Brooks A. Butler. Collaborative safe formation control. https://github.com/brooksbutler/ safe-formation-control, 2023. 12
ai_researcher
2
Expert-level_protocol_translation_for_self-driving_labs.pdf
HoME: Hierarchy of Multi-Gate Experts for Multi-Task Learning at Kuaishou Xu Wang Kuaishou Technology [email protected] Jiangxia Cao Kuaishou Technology [email protected] Zhiyi Fu Kuaishou Technology [email protected] Kun Gai Unaffiliated [email protected] Guorui Zhou Kuaishou Technology [email protected] 4 2 0 2 g u A 0 1 ] R I . s c [ 1 v 0 3 4 5 0 . 8 0 4 2 : v i X r a ABSTRACT In this paper, we present the practical problems and the lessons learned at short-video services from Kuaishou. In industry, a widely- used multi-task framework is the Mixture-of-Experts (MoE) para- digm, which always introduces some shared and specific experts for each task and then uses gate networks to measure related experts’ contributions. Although the MoE achieves remarkable improve- ments, we still observe three anomalies that seriously affect model performances in our iteration: (1) Expert Collapse: We found that experts’ output distributions are significantly different, and some ex- perts have over 90% zero activations with ReLU, making it hard for gate networks to assign fair weights to balance experts. (2) Expert Degradation: Ideally, the shared-expert aims to provide predic- tive information for all tasks simultaneously. Nevertheless, we find that some shared-experts are occupied by only one task, which indicates that shared-experts lost their ability but degenerated into some specific-experts. (3) Expert Underfitting: In our services, we have dozens of behavior tasks that need to be predicted, but we find that some data-sparse prediction tasks tend to ignore their specific-experts and assign large weights to shared-experts. The reason might be that the shared-experts can perceive more gradient updates and knowledge from dense tasks, while specific-experts easily fall into underfitting due to their sparse behaviors. Motivated by those observations, we propose HoME to achieve a simple, efficient and balanced MoE system for multi-task learn- ing. Specifically, we conduct three insightful modifications: (1) Expert normalization&Swish mechanism to align expert out- put distributions and avoid expert collapse. (2) Hierarchy mask mechanism to enhance sharing efficiency between tasks to reduce occupancy issues and away from expert degradation. (3) Feature- gate&Self-gate mechanisms to ensure each expert could obtain appropriate gradient to maximize its effectiveness. To our knowl- edge, this paper is the first work to focus on improving multi-task MoE system stability, and we conduct extensive offline&online (average improves 0.52% GAUC offline & 0.954% play-time per Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. Conference’17, July 2017, Washington, DC, USA © 2024 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. . . $15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn Figure 1: Typical multi-task behaviors at Kuaishou. user online) experiments and ablation analyses to demonstrate our HoME effectiveness. 
HoME has been deployed on Kuaishou’s short-video services, serving 400 million users daily. CCS CONCEPTS • Information systems → Recommender systems. KEYWORDS Multitask Learning; Short-Video Recommendation; Ranking ACM Reference Format: Xu Wang, Jiangxia Cao, Zhiyi Fu, Kun Gai, and Guorui Zhou. 2024. HoME: Hierarchy of Multi-Gate Experts for Multi-Task Learning at Kuaishou. In Proceedings of ACM Conference (Conference’17). ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn 1 INTRODUCTION Short-video applications like Tiktok and Kuaishou have grown rapidly in recent years; unlike other platforms, users always have clear intentions, i.e., search keywords on Google and buy clothes/food at Amazon, while Kuaishou almost plays an entertainment role without any users’ concept inputs. As shown in Figure 1, when using Kuaishou, users usually watch multiple automatically played short-videos by simply swiping up and down on the screen, and sometimes leave some interactions, e.g., Long-view, Comment, etc. The proportion of implicit feedback [14, 34] is much greater than other scenarios. Therefore, the only reason why Kuaishou could grow to a large application with 400 million users worldwide, is that our system can provide personalized and interesting short videos, giving users a satisfactory experience. To this end, utilizing the Conference’17, July 2017, Washington, DC, USA Xu Wang, Jiangxia Cao, Zhiyi Fu, Kun Gai, and Guorui Zhou Figure 2: Illustration of a naive MMoE and the expert collapse issue occurring in practice. As shown in (b), expert6 always assigned the biggest gate value, over 0.98 in most cases, by all tasks. We also noticed that expert6 outputs much more smaller and sparser activation values than other experts, as shown in (c). Those phenomena indicate that in the real data-streaming scenario, MMoE is unstable and easy to collapse, which obstacles fair comparisons among experts and impacts model performance. rare but multifarious behavior cues left by users to capture their interests accurately is the fundamental task. Generally, the common wisdom always forms such learning process as a multi-task learning paradigm [20, 32, 38], to build a model that could output multiple estimated probabilities of different user interactions simultaneously and supervise this model by real user behavior logs. As a typical multi-task solution, the idea of MoE is widely used in industry to implement parameter soft-sharing. The most famous method is Multi-gate Mixture-of-Experts (MMoE [23]), which con- sists of two major components (as shown in Figure 2(a)): Expert Networks – a group of expert networks (e.g., MLP with ReLU) for modeling the input features and implicit high-level feature cross- ing as multiple representations, and Gate Networks – task-specific gate networks (e.g., MLP with Softmax) for estimating different experts’ importance to fuse their outputs for corresponding tasks. Recently, several works have extended the expert networks to en- hance MMoE system ability by introducing the task-specific ex- perts (e.g., CGC [33]) or stacking more experts layers (e.g., PLE [33], AdaTT [19]). At Kuaishou, our former online multi-task module is equipped by MMoE [23], which remarkably improves our A/B test metrics compared to the baseline. However, after launching the MMoE, we have tried several different changes to the multi-task modeling module in past years. 
But all ended in failure, including upgrading to two or more expert layers, extending more shared- experts, introducing extra specific-experts, and so on. Consequently, we started in-depth analyses to identify the potential reasons that might prevent our iterating. Unsurprisingly, we discovered three anomalies that seriously affect multi-task performances. Expert Collapse: We first checked the gate output situation of MMoE and showed major tasks’ gate weight assigned to 6 shared- experts in Figure 2(b). It is noticeable that all gates assigned larger weights to the shared-expert 6 and almost ignored other shared- experts. Thus, we next checked the output value distribution of the shared experts and observed their significant differences. As shown in Figure 2(c), the mean and variance of experts 1∼5 are at a similar level, but expert 6 is 100x smaller in terms of the mean value. Such inconsistent output distributions result in the gate network making it difficult to assign fair weights to balance different experts, which further leads the experts at different numerical levels to be mutually exclusive. Moreover, we also found that expert output has too many 0 activations (i.e., over 90% of output), causing its average derivatives to be small and parameters insufficiently trained. Expert Degradation: After we fixed the above serious expert collapse issue, we successfully upgraded our multi-task module to a shared-specific MoE variant, CGC. As a result, we are curi- ous whether the gating weights can get the expected results that all task gate networks could assign perceivable scores for shared- experts and their specific-experts to achieve an equilibrium status. Unfortunately, we found another unexpected expert degradation phenomenon (as shown in Figure 3). Here, we show the average scores of the gating mechanisms for some major towers, and we observe that the shared-expert hardly contributes to all tasks but degrades to a specific-expert only belongs to few tasks. Therefore, such observation reveals that it is difficult for the architecture of naive shared and specific experts to converge to the ideal status. Expert Underfitting: After we further fixed the expert degrada- tion and enhanced the efficiency of shared-experts for all tasks, we found some specific-experts are assigned a small gate value so that corresponding tasks only rely on the shared knowledge, making less use of specific parameters. Actually, our model needs to predict dozens of different tasks simultaneously, and their densities (i.e., positive sample rate) also vary greatly, while dense tasks can be 100x larger than sparse tasks, e.g., Click v.s. Collect. Compared to shared- experts that could receive multiple gradient updates from multiple dense tasks, specific-experts easily fall into underfitting, further leading the sparse task to rely more on shared-experts but ignoring their specific-experts and making specific parameters wasted. As shown in Figure 4, the task 6 gate network assigns a large value to the shared-experts but overlooks its specific-experts. To address these anomalies and improve MoE paradigm model stability, we propose a simple, efficient and balanced neural net- work architecture for multi-task learning: Hierarchy of Multi-gate Experts model, termed as HoME. 
Specifically, we provide insightful and in-depth solutions from three perspectives: value distribution alignment for fair expert weights, a hierarchical meta expert structure to re-assemble tasks, and gate networks to enhance sparse-task experts and deep multi-layer MMoE training:

Figure 3: Expert degradation issue in CGC, where the two shared experts are almost monopolized by task2 and task7, respectively, working in a specific style.

Figure 4: Expert underfitting issue, where task1 and task6 almost rely on shared experts only and ignore their own specific expert, making less use of the specific expert network.

Expert normalization&Swish mechanism: To balance the variance of expert outputs and avoid expert collapse, we first introduce the normalization [1, 16] operation for each expert to project their outputs to approximately normal distributions, i.e., expert output distribution ≈ N(0, I). However, under this setting, we found that performing normalization directly also leads to too many zeros after the ReLU function. The reason might be that the mean value of the normalized expert output is close to 0, so half of the outputs will be less than 0 and then activated as 0 under ReLU. To alleviate this zero-derivative gradient phenomenon, we use the Swish [28] function to replace the ReLU function, to improve the utilization of parameters and speed up the training process. With the normalization and Swish setting, all experts' outputs can be aligned to a similar numerical magnitude, which helps our gate networks assign comparable weights.

Hierarchy mask mechanism: To reduce expert occupancy issues and avoid expert degradation (also called the task-conflict seesaw issue [5, 31, 33]), in this paper we present a simple-yet-effective cascading hierarchy mask mechanism to alleviate such conflict. Specifically, we insert a pre-order meta expert network to group different tasks, extending the standardized MoE system. As shown in Figure 1, our short-video behavior tasks can be manually divided into two meta categories according to their prior relevance: (1) passive watching-time tasks, e.g., Long-view; (2) proactive interaction tasks, e.g., Comment. Therefore, we can pre-model coarse-grained meta-category experts and then support each task with the following idea: each task should have not only fully-shared global experts, but also partial-shared in-category experts.

Feature-gate and Self-gate mechanisms: To enhance the training of our sparse-task experts, we present two gate mechanisms to ensure they can obtain appropriate gradients to maximize their effectiveness: the feature-gate and self-gate mechanisms. Considering that same-layer experts always share the same input features while different experts receive different gradients, the same feature input may lead to a potential risk of gradient conflicts in optimizing multiple experts' parameters. To this end, we first present the feature-gate mechanism to privatize flexible expert inputs to protect sparse-task expert training. Besides, the latest MMoE efforts show that deeper stacked expert networks [19, 33] can bring more powerful prediction ability. However, in our experiments, we find that the original gate network easily dilutes the gradient layer by layer, which is unfriendly for sparse-task expert training.
To ensure the top layers' gradients can be effectively passed to the bottom layers, and to stabilize the training of deeper MMoE systems, we further devise the self-gate mechanism to residually connect adjacent related experts.

The main contributions of our work are as follows:
• We deeply analyze the expert issues of the current MoE system and propose our milestone work HoME. To the best of our knowledge, this paper is the first to focus on enhancing multi-task MoE system stability, which will shed light for other researchers to explore a more robust multi-task MoE system.
• We conduct extensive offline and online experiments at Kuaishou's short-video service. The offline experiments show that all prediction tasks get significant improvements, and the online experiments obtain 0.636% and 0.735% play-time improvements on the Kuaishou and Kuaishou-Lite applications.
• Our HoME has been widely deployed on various services at Kuaishou, supporting 400 million active users daily.

2 RELATED WORKS
In this section, we briefly review the evolution trajectory of multi-task learning, which plays a more and more important role in empowering models to perceive multiple signals in various research fields, including recommender systems [2, 9, 37], natural language processing [6, 8, 30], computer vision [12, 18, 26] and ubiquitous computing [13, 22]. In the early years, several works utilized the hard expert-sharing architecture, with multiple task-specific towers fed by the same expert output, to achieve the simplest multi-task learning systems, including the shared-bottom [3] and mixture-of-experts (MoE [17]) models. Later, the cross-stitch [25] network and sluice [29] network were proposed to build a deep expert information fusion network that generates task-specific inputs, achieving soft expert knowledge sharing. Besides the complex vertical deep expert crossing, horizontal expert weight estimation is another way to customize task-specific tower inputs: the more recently proposed Multi-gate Mixture-of-Experts (MMoE) [23] uses a multi-gate mechanism to assign different weights to different experts in order to balance different tasks. With the wave of neural-network-based recommender systems, MMoE variants also play a significant role in improving model capability and accuracy. The pioneering work is from the YouTube ranking system [40], which utilizes several shared experts through different gating networks to model real user-item interactions. To alleviate the task-conflict seesaw [5, 31, 33] problem, the MMoE variants CGC [33] and PLE [33] not only utilize shared-experts but also insert additional specific-experts for more flexible expert sharing. Based on the shared/specific idea, many MMoE variants have been proposed: MSSM [11] extends the PLE approach by employing a field-level and cell-level feature-selection mechanism to determine the importance of input features automatically; AdaTT [19] leverages an adaptive fusion gate mechanism on PLE to model complex task relationships between specific-experts and shared-experts; STAR [31] adopts a star topology with one shared expert network and several specific networks to fuse expert parameters; MoLA [43] borrows the low-rank fine-tuning technique from LLMs and devises lightweight low-rank specific-expert adapters to replace complex specific-experts.

3 METHODOLOGY
In this section, we introduce the components of our model, HoME.
We first review how the MoE system works in an industrial-scale RecSys, covering feature engineering, the details of the MoE neural networks, and how prediction scores are assembled for ranking. Afterward, we present our solutions to the three problems: the expert normalization&Swish mechanism to overcome the expert collapse issue, the hierarchy mask mechanism to alleviate the expert degradation issue, and two kinds of gate mechanisms for the expert underfitting issue.

3.1 Preliminary: Multi-Task Learning for Industrial Recommender System
The industrial recommender system follows a two-stage design: (1) generation of hundreds of item candidates [35, 36] and (2) ranking of those candidates [7, 24, 40] to select dozens of top items for users. Since the goals of these two stages are distinct, the techniques used are also completely different: the generation process focuses on user-side feature modeling and coarse item sampling, while the ranking process focuses on user-item feature fusion and fine-grained fitting of multiple user interactions. Therefore, the multi-task learning model is always employed in the ranking process, to estimate the probabilities of various interactions for a specific user-item pair. For brevity, the model-generated probabilities always have a short name (xtr), e.g., click probability as ctr, effective-view probability as evtr, like probability as ltr, comment probability as cmtr, and so on.

3.1.1 Label&Feature. Formally, such a ranking learning process is always organized as multiple binary classifications, and each user-item training sample contains two types of information – the supervised labels and the input features:
• Supervised Signals: the real labels of this user-item watch experience, e.g., click $y_{ctr} \in \{0, 1\}$, effective view $y_{evtr} \in \{0, 1\}$, like $y_{ltr} \in \{0, 1\}$, comment $y_{cmtr} \in \{0, 1\}$ and other labels.
• Feature Inputs: the MoE input aims to describe the status of the user and the item from multiple perspectives, and can be roughly divided into four classes: (1) ID and category features, for which we use a straightforward lookup operator to get their embeddings, e.g., user ID, item ID, tag ID, is-active-user, is-follow-author, scenario ID and others; (2) statistics features, which need devised bucketing strategies to discretize them and assign them an ID, e.g., the number of watched short-videos in the last month, short-video viewing time in the past month, and others; (3) sequential features to reflect users' short-term and long-term interests, which are usually modeled by one-stage or two-stage attention mechanisms, e.g., DIN [42], DIEN [41], SIM [27], TWIN [4]; (4) pre-trained multi-modal embeddings, such as text embeddings [10], ASR embeddings [39], video embeddings [21], etc.
Combining all of them, we can obtain the multi-task training samples (e.g., labels are $\{y_{ctr}, y_{evtr}, \ldots\}$, inputs are $\mathbf{v} = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n]$), where $n$ indicates the total feature number.
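As a purely illustrative sketch of how one such training sample might look in code, the snippet below assembles the binary label set and the concatenated feature input $\mathbf{v}$; every field name, feature group and dimension here is a hypothetical stand-in for the four feature classes described above.

```python
import numpy as np

labels = {"ctr": 1, "evtr": 1, "ltr": 0, "cmtr": 0}   # binary supervised signals

id_feats   = np.random.randn(3, 8)   # e.g., user/item/tag ID embeddings (lookup)
stat_feats = np.random.randn(2, 8)   # bucketized statistics features
seq_feat   = np.random.randn(1, 8)   # attention-pooled behavior-sequence interest
mm_feat    = np.random.randn(1, 8)   # pre-trained multi-modal embedding

# v = [v1, v2, ..., vn]: concatenate the n feature embeddings into one input vector
v = np.concatenate([id_feats, stat_feats, seq_feat, mm_feat]).reshape(-1)
assert v.shape == (7 * 8,)
```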
3.1.2 Mixture-of-Experts for XTR prediction. Given the training user-item sample labels $y_{ctr}, y_{evtr}, \ldots$ and features $\mathbf{v}$, we next utilize the multi-task module to make predictions. Specifically, we show the details of the widely-used shared/specific-paradigm MoE variant CGC as follows:

$$\hat{y}_{ctr} = \mathrm{Tower}_{ctr}\big(\mathrm{Sum}\big(\mathrm{Gate}_{ctr}(\mathbf{v}),\ \{\mathrm{Experts}_{\{shared,\,ctr\}}(\mathbf{v})\}\big)\big),$$
$$\hat{y}_{evtr} = \mathrm{Tower}_{evtr}\big(\mathrm{Sum}\big(\mathrm{Gate}_{evtr}(\mathbf{v}),\ \{\mathrm{Experts}_{\{shared,\,evtr\}}(\mathbf{v})\}\big)\big),$$
$$\hat{y}_{ltr} = \mathrm{Tower}_{ltr}\big(\mathrm{Sum}\big(\mathrm{Gate}_{ltr}(\mathbf{v}),\ \{\mathrm{Experts}_{\{shared,\,ltr\}}(\mathbf{v})\}\big)\big), \qquad (1)$$
$$\text{where}\quad \mathrm{Tower}(\cdot) = \mathrm{Sigmoid}\big(\mathrm{MLP\_T}(\cdot)\big),\quad \mathrm{Experts}(\cdot) = \mathrm{ReLU}\big(\mathrm{MLP\_E}(\cdot)\big),\quad \mathrm{Gate}(\cdot) = \mathrm{Softmax}\big(\mathrm{MLP\_G}(\cdot)\big),$$

where $\mathrm{Expert}_{shared}: \mathbb{R}^{|\mathbf{v}|} \to \mathbb{R}^{D}$ and $\mathrm{Expert}_{xtr}: \mathbb{R}^{|\mathbf{v}|} \to \mathbb{R}^{D}$ are the ReLU-activated shared and specific expert networks respectively, $\mathrm{Gate}_{xtr}: \mathbb{R}^{|\mathbf{v}|} \to \mathbb{R}^{N}$ is the Softmax-activated gate network for the corresponding task, $N$ is the number of related shared and specific experts, $\mathrm{Sum}$ aggregates the $N$ expert outputs according to the gate-generated weights, and $\mathrm{Tower}_{xtr}: \mathbb{R}^{D} \to \mathbb{R}$ is the Sigmoid-activated task-specific network that measures the corresponding interaction probability $\hat{y}$.

After obtaining all the estimated scores $\hat{y}_{ctr}, \ldots$ and ground-truth labels $y_{ctr}, \ldots$, we directly minimize the binary cross-entropy classification loss to train the multi-task learning model:

$$\mathcal{L} = -\sum_{xtr \in \{ctr, \ldots\}} \big(y_{xtr}\log(\hat{y}_{xtr}) + (1 - y_{xtr})\log(1 - \hat{y}_{xtr})\big) \qquad (2)$$

In online serving, a common operation is to devise a controllable, complex equation to combine the XTRs into one ranking score:

$$\mathrm{ranking\_score} = \alpha\cdot\hat{y}_{ctr} + \beta\cdot\hat{y}_{evtr} + \gamma\cdot\hat{y}_{cmtr} + \ldots \qquad (3)$$

where $\alpha, \beta, \gamma$ are hyper-parameters. In fact, Eq.(3) is very complicated, involving many strategies in industrial RecSys; we only show a naive case here.

Figure 5: The HoME and other MoE-style multi-task learning architectures. In HoME, tasks are divided into groups based on their relatedness and modeled as fully-shared or partial-shared meta-representations in the first layer, then refined as specific task representations in the second layer. HoME further introduces two specially designed modules: Feature-gate to alleviate task conflicts at the input level, and Self-gate to ensure that each task makes the most of specific experts. Best viewed in color.

In the following section, we focus on the multi-task learning procedure in Eq.(1) to improve its stability.
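The following is a minimal PyTorch sketch of the CGC formulation in Eq.(1), together with the multi-task loss of Eq.(2) and a naive ranking combination in the spirit of Eq.(3); the task set, expert counts, fusion weights and dimensions are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class CGC(nn.Module):
    """Sketch of the shared/specific CGC layer of Eq.(1); sizes are illustrative."""
    def __init__(self, input_dim, d, n_shared, tasks):
        super().__init__()
        mlp = lambda: nn.Sequential(nn.Linear(input_dim, d), nn.ReLU())
        self.shared = nn.ModuleList([mlp() for _ in range(n_shared)])
        self.specific = nn.ModuleDict({t: mlp() for t in tasks})
        # each task's gate scores its shared + specific experts (N = n_shared + 1)
        self.gates = nn.ModuleDict({
            t: nn.Sequential(nn.Linear(input_dim, n_shared + 1), nn.Softmax(dim=-1))
            for t in tasks})
        self.towers = nn.ModuleDict({
            t: nn.Sequential(nn.Linear(d, 1), nn.Sigmoid()) for t in tasks})

    def forward(self, v):
        shared = [e(v) for e in self.shared]
        preds = {}
        for t in self.towers:
            experts = torch.stack(shared + [self.specific[t](v)], dim=1)
            w = self.gates[t](v).unsqueeze(-1)          # gate-generated weights
            preds[t] = self.towers[t]((w * experts).sum(dim=1)).squeeze(-1)
        return preds

model = CGC(64, 32, n_shared=2, tasks=["ctr", "evtr", "ltr"])
preds = model(torch.randn(8, 64))
labels = {t: torch.randint(0, 2, (8,)).float() for t in preds}
# Eq.(2): summed binary cross-entropy over all tasks
loss = sum(nn.functional.binary_cross_entropy(preds[t], labels[t]) for t in preds)
# Eq.(3): a naive online ranking score as a weighted combination of XTRs
score = 0.4 * preds["ctr"] + 0.4 * preds["evtr"] + 0.2 * preds["ltr"]
```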
3.2 Expert Normalization&Swish Mechanism
Although the vanilla MMoE system in Eq.(1) achieves remarkable improvements, it still suffers from the serious expert collapse problem. Denoting the representations generated by the experts' MLP_E functions as $\{\mathbf{z}_{shared}, \mathbf{z}_{ctr}, \mathbf{z}_{evtr}, \ldots\}$, we found that their means and variances are significantly different. Inspired by Transformers, where the normalization operator is one of the vital techniques for successfully training very deep neural networks, we introduce batch normalization [16] for each expert to support our HoME in generating comparable outputs $\mathbf{z}_{norm} \in \mathbb{R}^{D}$:

$$\mathbf{z}_{norm} = \mathrm{Batch\_Normalization}(\mathbf{z}) = \boldsymbol{\gamma}\,\frac{\mathbf{z} - \boldsymbol{\mu}}{\sqrt{\boldsymbol{\delta}^2 + \boldsymbol{\epsilon}}} + \boldsymbol{\beta},\quad \text{where}\ \boldsymbol{\mu} = \mathrm{Batch\_Mean}(\mathbf{z}),\ \boldsymbol{\delta}^2 = \mathrm{Batch\_Mean}\big((\mathbf{z} - \boldsymbol{\mu})^2\big), \qquad (4)$$

where $\mathbf{z}$ is an arbitrary expert's MLP_E output, $\boldsymbol{\gamma} \in \mathbb{R}^{D}$ and $\boldsymbol{\beta} \in \mathbb{R}^{D}$ are trainable scale and bias parameters to adjust the distribution, $\boldsymbol{\epsilon} \in \mathbb{R}^{D}$ is a very small factor to avoid division-by-0 errors, and $\boldsymbol{\mu} \in \mathbb{R}^{D}$, $\boldsymbol{\delta}^2 \in \mathbb{R}^{D}$ are the mean and variance of the same expert's outputs over the current batch. After the expert normalization, the distribution of $\mathbf{z}_{norm}$ is close to the normal distribution $\mathcal{N}(0, \mathbf{I})$. As a result, half of the $\mathbf{z}_{norm}$ values will be less than 0 and then activated as 0 under ReLU, causing their derivatives and gradients to be 0 and hindering model convergence. Thus, we use the Swish function to replace the ReLU in Eq.(1) to obtain our HoME Expert:

$$\mathrm{HoME\_Expert}(\cdot) = \mathrm{Swish}\Big(\mathrm{Batch\_Normalization}\big(\mathrm{MLP\_E}(\cdot)\big)\Big), \qquad (5)$$

where HoME_Expert(·) is the final structure used in our HoME. Under the normalization and Swish setting, the outputs of all experts can be aligned to a similar numerical magnitude, which helps our gate networks assign comparable weights. For brevity, in the following sections we still use Expert(·) to represent our HoME_Expert(·).

3.3 Hierarchy Mask Mechanism
For the expert degradation, there is a series of works introducing novel specific-expert and shared-expert architectures to alleviate task conflicts. However, following the specific and shared paradigm, we found that the problem of shared-expert degradation still occurs. We argue that it can be beneficial to consider the prior task relevance: as shown in Figure 1, our prediction tasks can be divided into two categories, e.g., proactive interaction tasks (e.g., Like, Comment, etc.) and passive watching-time tasks (e.g., Effective-view, Long-view, etc.). In this section, we propose a simple-yet-effective cascading hierarchy mask mechanism to model the prior inductive bias between tasks. Specifically, we insert a pre-order meta expert network to group different tasks, here producing three kinds of meta-task knowledge to support our two categories of tasks:

$$\mathbf{z}^{inter}_{meta} = \mathrm{Sum}\big(\mathrm{Gate}^{inter}_{meta}(\mathbf{v}),\ \{\mathrm{Experts}^{\{shared,\,inter\}}_{meta}(\mathbf{v})\}\big),$$
$$\mathbf{z}^{watch}_{meta} = \mathrm{Sum}\big(\mathrm{Gate}^{watch}_{meta}(\mathbf{v}),\ \{\mathrm{Experts}^{\{shared,\,watch\}}_{meta}(\mathbf{v})\}\big),$$
$$\mathbf{z}^{shared}_{meta} = \mathrm{Sum}\big(\mathrm{Gate}^{shared}_{meta}(\mathbf{v}),\ \{\mathrm{Experts}^{\{shared,\,inter,\,watch\}}_{meta}(\mathbf{v})\}\big), \qquad (6)$$

where $\mathbf{z}^{inter}_{meta}, \mathbf{z}^{watch}_{meta}, \mathbf{z}^{shared}_{meta}$ are coarse macro-level meta representations that extract: (1) in-category interaction knowledge, (2) in-category watch-time knowledge and (3) shared knowledge.

After obtaining these meta representations, we next focus on multi-task prediction according to the corresponding category meta knowledge and the shared meta knowledge. Specifically, we utilize the meta knowledge to build three types of experts: (1) the globally shared experts for all tasks, based on $\mathbf{z}^{shared}_{meta}$; (2) the locally shared experts for in-category tasks, based on $\mathbf{z}^{inter}_{meta}$ or $\mathbf{z}^{watch}_{meta}$; (3) the specific experts for each task, based on $\mathbf{z}^{inter}_{meta}$ or $\mathbf{z}^{watch}_{meta}$. For the task-specific gate networks, we directly use the concatenation of the shared meta knowledge $\mathbf{z}^{shared}_{meta}$ and the corresponding category meta knowledge to generate the expert weights. Here, we take the Click and Effective-view interactions as examples:

$$\hat{y}_{ctr} = \mathrm{Tower}_{ctr}\Big(\mathrm{Sum}\big(\mathrm{Gate}_{ctr}(\mathbf{z}^{inter}_{meta} \oplus \mathbf{z}^{shared}_{meta}),\ \{\mathrm{Experts}_{shared}(\mathbf{z}^{shared}_{meta}),\ \mathrm{Experts}_{\{inter,\,ctr\}}(\mathbf{z}^{inter}_{meta})\}\big)\Big),$$
$$\hat{y}_{evtr} = \mathrm{Tower}_{evtr}\Big(\mathrm{Sum}\big(\mathrm{Gate}_{evtr}(\mathbf{z}^{watch}_{meta} \oplus \mathbf{z}^{shared}_{meta}),\ \{\mathrm{Experts}_{shared}(\mathbf{z}^{shared}_{meta}),\ \mathrm{Experts}_{\{watch,\,evtr\}}(\mathbf{z}^{watch}_{meta})\}\big)\Big), \qquad (7)$$

where $\oplus$ denotes the concatenation operator, $\mathrm{Experts}_{shared}$ are the experts shared by all tasks, and $\mathrm{Experts}_{inter}$, $\mathrm{Experts}_{watch}$ are the experts shared by in-category tasks.
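To make the hierarchy concrete, below is a minimal sketch of Eqs.(5)-(7): each expert is Swish(Batch_Normalization(MLP_E(·))) (PyTorch's nn.SiLU is the Swish activation), the first layer produces the three meta representations of Eq.(6), and a task head fuses its category and shared meta inputs as in Eq.(7). For brevity, each group holds a single expert and the locally shared in-category experts of the second layer are omitted; all sizes are illustrative.

```python
import torch
import torch.nn as nn

def home_expert(in_dim, d):
    # Eq.(5): Swish(BatchNorm(MLP(x))); nn.SiLU is PyTorch's Swish
    return nn.Sequential(nn.Linear(in_dim, d), nn.BatchNorm1d(d), nn.SiLU())

class MetaLayer(nn.Module):
    """First HoME layer (Eq.(6)): three meta representations from grouped experts."""
    def __init__(self, in_dim, d):
        super().__init__()
        self.e_shared = home_expert(in_dim, d)
        self.e_inter = home_expert(in_dim, d)
        self.e_watch = home_expert(in_dim, d)
        self.g = nn.ModuleDict({k: nn.Sequential(nn.Linear(in_dim, n), nn.Softmax(dim=-1))
                                for k, n in [("inter", 2), ("watch", 2), ("shared", 3)]})

    def forward(self, v):
        s, i, w = self.e_shared(v), self.e_inter(v), self.e_watch(v)
        def mix(key, experts):
            gates = self.g[key](v).unsqueeze(-1)
            return (gates * torch.stack(experts, dim=1)).sum(dim=1)
        return {"inter": mix("inter", [s, i]),
                "watch": mix("watch", [s, w]),
                "shared": mix("shared", [s, i, w])}

class TaskHead(nn.Module):
    """Second layer + tower (Eq.(7)) for one task in a given category."""
    def __init__(self, d):
        super().__init__()
        self.e_shared = home_expert(d, d)      # fed with z_meta^shared
        self.e_specific = home_expert(d, d)    # fed with the category meta input
        self.gate = nn.Sequential(nn.Linear(2 * d, 2), nn.Softmax(dim=-1))
        self.tower = nn.Sequential(nn.Linear(d, 1), nn.Sigmoid())

    def forward(self, z_cat, z_shared):
        experts = torch.stack([self.e_shared(z_shared), self.e_specific(z_cat)], dim=1)
        w = self.gate(torch.cat([z_cat, z_shared], dim=-1)).unsqueeze(-1)
        return self.tower((w * experts).sum(dim=1)).squeeze(-1)

meta, ctr_head = MetaLayer(64, 32), TaskHead(32)
z = meta(torch.randn(8, 64))
y_ctr = ctr_head(z["inter"], z["shared"])   # Click as an interaction-category task
```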
It is worth noting that the meta abstraction in HoME's first layer is the main architectural difference from PLE, and it is based on our observation of the real multi-task recommendation scenario at Kuaishou (see Figure 5). Based on the prior-semantics-divided meta expert network of our HoME, we can avoid conflicts between tasks as much as possible and maximize the sharing efficiency among tasks.

3.4 Feature-gate&Self-gate Mechanisms
For the expert underfitting, we find that the gate-generated weights of some data-sparse tasks tend to ignore their specific experts and instead assign large gate weights to the shared experts. The reason might be that our model needs to predict 20+ different tasks simultaneously, but the density of dense tasks can be 100x larger than that of sparse tasks. To enhance our sparse-task expert training, we present two gate mechanisms to ensure they can obtain appropriate gradients to maximize their effectiveness: the feature-gate and self-gate mechanisms.

For the feature-gate, the purpose is to generate different representations of the input features for different task experts, to alleviate the potential gradient conflicts when all experts share the same input features. Formally, the feature-gate aims to extract the importance of each input feature element, e.g., $\mathrm{Fea\_Gate}: \mathbb{R}^{|\mathbf{v}|} \to \mathbb{R}^{|\mathbf{v}|}$ if the input is $\mathbf{v}$. However, in industrial recommender systems, $\mathbf{v}$ is always a high-dimensional vector, e.g., $|\mathbf{v}| > 3000+$; thereby, it is expensive to introduce such large matrices for the meta experts. Inspired by the LLM efficiency tuning technique LoRA [15], we also introduce two small matrices to approximate a large matrix when generating the element importance:

$$\mathrm{Fea\_LoRA}(\mathbf{v}, d) = 2 \times \mathrm{Sigmoid}\big(\mathbf{v}(\mathbf{B}\mathbf{A})\big),\quad \text{where}\ \mathbf{B} \in \mathbb{R}^{|\mathbf{v}| \times d},\ \mathbf{A} \in \mathbb{R}^{d \times |\mathbf{v}|},\ \mathbf{B}\mathbf{A} \in \mathbb{R}^{|\mathbf{v}| \times |\mathbf{v}|}. \qquad (8)$$

Note that we apply a 2× operator after the Sigmoid function, which aims to achieve a flexible zoom-in or zoom-out operator. Indeed, the Fea_LoRA function is an effective way to generate privatized expert inputs. In our iteration, we find it can be further enhanced with the multi-task idea, i.e., introducing more Fea_LoRAs to generate feature importance from multiple aspects as our Fea_Gate:

$$\mathrm{Fea\_Gate}(\mathbf{v}) = \mathrm{Sum}\big(\mathrm{Gate}_{fea}(\mathbf{v}),\ \{\mathrm{Fea\_LoRA}_{\{1,2,\ldots,L\}}(\mathbf{v}, |\mathbf{v}|/L)\}\big), \qquad (9)$$

where $L$ is a hyper-parameter to control the number of Fea_LoRAs, and $\mathrm{Gate}_{fea}: \mathbb{R}^{|\mathbf{v}|} \to \mathbb{R}^{L}$ is utilized to generate weights to balance the importance of the different Fea_LoRAs. Note that we need to choose an $L$ that divides the input length $|\mathbf{v}|$ to determine the dimension of each Fea_LoRA. Therefore, our expert input can be obtained as follows (here we show the input of the first-layer meta shared experts, $\mathbf{v}^{shared}_{meta}$):

$$\mathbf{v}^{shared}_{meta} = \mathbf{v} \odot \mathrm{Fea\_Gate}^{shared}_{meta}(\mathbf{v}), \qquad (10)$$

where $\odot$ denotes the element-wise product. In this way, different experts have their own feature spaces, which reduces the risk of gradient conflicts and protects the sparse tasks.

Besides, the latest MoE efforts show that deeper stacked expert networks can bring more powerful prediction ability [19, 33]. Unfortunately, in our experiments, we find that the original gate network easily dilutes the gradient layer by layer, especially for sparse-task expert training. In addition to the expert-input-level Fea_Gate, we also add a residual-idea-based self-gate at the expert-output level, to ensure that the top layers' gradients can be effectively passed to the bottom layers. Specifically, the Self_Gate only focuses on the output of its own group's experts. Take the shared meta experts' output as an example:

$$\mathbf{z}^{shared}_{meta,self} = \mathrm{Sum}\big(\mathrm{Self\_Gate}^{shared}_{meta}(\mathbf{v}),\ \{\mathrm{Experts}_{shared}(\mathbf{v})\}\big),\quad \text{where}\ \mathrm{Self\_Gate}(\cdot) = \begin{cases} \mathrm{Sigmoid}\big(\mathrm{MLP\_G}(\cdot)\big) & \text{if only 1 Expert} \\ \mathrm{Softmax}\big(\mathrm{MLP\_G}(\cdot)\big) & \text{otherwise} \end{cases} \qquad (11)$$

where $\mathrm{Self\_Gate}: \mathbb{R}^{|\mathbf{v}|} \to \mathbb{R}^{K}$, $K$ is the number of related experts, and its activation function is Sigmoid if there is only 1 expert, otherwise Softmax. Analogously, $\mathbf{z}^{inter}_{meta,self}$ and $\mathbf{z}^{watch}_{meta,self}$ can be obtained in the same way; we then add the corresponding representations (e.g., $\mathbf{z}^{inter}_{meta} + \mathbf{z}^{inter}_{meta,self}$) to support the next layer. See Figure 5 for fine-grained HoME details.
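A minimal sketch of Eqs.(8)-(11) follows: FeaGate mixes L low-rank Fea_LoRA masks and applies the result element-wise to the input, while SelfGate re-weights a group's own expert outputs (Sigmoid for a single expert, Softmax otherwise) so that the result can be added residually to the next layer's input. Shapes, initializations and the usage lines are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeaGate(nn.Module):
    """Eqs.(8)-(10): L low-rank Fea_LoRA branches mixed by a small gate,
    producing an element-wise importance mask over the input features."""
    def __init__(self, v_dim, L):
        super().__init__()
        assert v_dim % L == 0, "L must divide the input length |v|"
        d = v_dim // L
        self.B = nn.ParameterList([nn.Parameter(torch.randn(v_dim, d) * 0.01)
                                   for _ in range(L)])
        self.A = nn.ParameterList([nn.Parameter(torch.randn(d, v_dim) * 0.01)
                                   for _ in range(L)])
        self.gate = nn.Sequential(nn.Linear(v_dim, L), nn.Softmax(dim=-1))

    def forward(self, v):
        # Eq.(8): Fea_LoRA_l(v) = 2 * sigmoid(v (B_l A_l)), computed as (v @ B_l) @ A_l
        loras = torch.stack([2 * torch.sigmoid((v @ B) @ A)
                             for B, A in zip(self.B, self.A)], dim=1)  # [batch, L, |v|]
        w = self.gate(v).unsqueeze(-1)                                 # Eq.(9)
        mask = (w * loras).sum(dim=1)
        return v * mask                                                # Eq.(10)

class SelfGate(nn.Module):
    """Eq.(11): residual gate over a group's own K experts; sigmoid when K == 1."""
    def __init__(self, v_dim, K):
        super().__init__()
        self.K = K
        self.mlp = nn.Linear(v_dim, K)

    def forward(self, v, expert_outs):   # expert_outs: [batch, K, D]
        logits = self.mlp(v)
        w = torch.sigmoid(logits) if self.K == 1 else torch.softmax(logits, dim=-1)
        return (w.unsqueeze(-1) * expert_outs).sum(dim=1)  # added residually to z_meta

v_priv = FeaGate(v_dim=64, L=2)(torch.randn(8, 64))        # privatized expert input
z_self = SelfGate(v_dim=64, K=1)(torch.randn(8, 64), torch.randn(8, 1, 32))
```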
4 EXPERIMENTS
In this section, we first compare HoME with several widely-used multi-task learning approaches in offline settings. We then construct several model variants with our modifications to verify the effectiveness of HoME. We also test the robustness of HoME to hyper-parameters, i.e., the number of experts and the number of feature-gate LoRAs. Furthermore, we provide our model's expert-network gate weight graph to show that HoME is a well-balanced system. Finally, we push our HoME to the online A/B test to verify how much benefit HoME can contribute to Kuaishou.

4.1 Experiments Setup
We conduct experiments on our short-video data-streaming, which is the largest recommendation scenario at Kuaishou, including over 400 million users and 50 billion logs every day. For a fair comparison, we only change the multi-task learning module in Eq.(1) and keep all other modules the same. Specifically, we implement the MMoE [23], CGC [33], PLE [33] and AdaTT [19] model variants as baselines. For the evaluation, we use the widely-used ranking metrics AUC and GAUC [42] to reflect the model's predictive ability. Specifically, in our short-video service, GAUC is the most important offline metric. Its main idea is to calculate each user's AUC separately and then aggregate all users' AUCs with weights:

$$\mathrm{GAUC} = \sum_{u} w_u\,\mathrm{AUC}_u,\quad \text{where}\ w_u = \frac{\#\mathrm{logs}_u}{\sum_{i} \#\mathrm{logs}_i}, \qquad (12)$$

where $w_u$ denotes the user's ratio of logs.
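For reference, a small sketch of Eq.(12) is shown below; it uses the rank-sum formulation of per-user AUC, skips users whose labels are all positive or all negative (their AUC is undefined), and ignores tie handling for brevity.

```python
import numpy as np
from collections import defaultdict

def gauc(user_ids, labels, scores):
    """Eq.(12): per-user AUC, weighted by each user's share of logs."""
    by_user = defaultdict(list)
    for u, y, s in zip(user_ids, labels, scores):
        by_user[u].append((y, s))
    num, den = 0.0, 0
    for samples in by_user.values():
        y = np.array([t[0] for t in samples])
        s = np.array([t[1] for t in samples])
        if y.min() == y.max():
            continue                       # AUC undefined for one-class users
        order = s.argsort()                # AUC via the rank-sum formulation
        ranks = np.empty_like(order, dtype=float)
        ranks[order] = np.arange(1, len(s) + 1)
        n_pos, n_neg = y.sum(), (1 - y).sum()
        auc_u = (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
        num += len(samples) * auc_u        # weight w_u is proportional to #logs_u
        den += len(samples)
    return num / den

users  = [1, 1, 1, 2, 2, 2]
labels = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.4, 0.8, 0.3]
print(gauc(users, labels, scores))         # -> 1.0, both users ranked perfectly
```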
4.2 Offline Experiments
The main experiment results are shown in Table 1. Note that improvements of 0.03%~0.05% in AUC or GAUC in the offline evaluation are significant enough to bring substantial online revenue to our business.

Table 1: Offline results (%) (AUC and GAUC) on Short-Video services at Kuaishou.

| Model | Effective-view AUC/GAUC | Long-view AUC/GAUC | Comment AUC/GAUC | Like AUC/GAUC | Follow AUC/GAUC | Collect AUC/GAUC | Forward AUC/GAUC | Click AUC/GAUC | #Params |
|---|---|---|---|---|---|---|---|---|---|
| MMoE | 77.56/71.90 | 82.91/77.04 | 73.43/69.38 | 96.94/84.85 | 92.44/78.55 | 92.85/80.12 | 92.44/76.75 | 95.42/84.30 | 224.70Mil |
| MMoE* | 77.66/72.03 | 82.98/77.15 | 73.66/69.62 | 96.96/84.97 | 92.49/78.68 | 92.91/80.27 | 92.53/76.93 | 95.50/84.51 | 224.85Mil |
| CGC* w/o shared | 77.72/72.10 | 83.03/77.21 | 73.84/69.85 | 97.02/85.11 | 92.51/78.76 | 93.03/80.51 | 92.67/77.12 | 95.57/84.70 | 279.43Mil |
| CGC* | 77.72/72.11 | 83.04/77.23 | 73.88/69.89 | 97.02/85.12 | 92.51/78.78 | 93.03/80.53 | 92.68/77.16 | 95.59/84.76 | 325.83Mil |
| PLE* | 77.74/72.14 | 83.04/77.24 | 73.92/69.92 | 97.02/85.15 | 92.54/78.82 | 93.05/80.57 | 92.70/77.22 | 95.61/84.80 | 351.29Mil |
| AdaTT* | 77.76/72.16 | 83.07/77.27 | 73.84/69.83 | 97.01/85.12 | 92.53/78.79 | 92.98/80.45 | 92.70/77.18 | 95.59/84.73 | 305.01Mil |
| HoME | 77.87/72.34 | 83.19/77.42 | 73.95/69.98 | 97.03/85.23 | 92.61/79.03 | 93.12/80.77 | 92.76/77.42 | 95.64/84.87 | 292.24Mil |
| Improve over MMoE | +0.31/+0.44 | +0.28/+0.38 | +0.52/+0.60 | +0.09/+0.38 | +0.17/+0.48 | +0.27/+0.65 | +0.32/+0.67 | +0.22/+0.57 | - |
| HoME w/o fg2 | 77.85/72.30 | 83.16/77.39 | 73.94/69.95 | 97.02/85.19 | 92.60/78.99 | 93.10/80.71 | 92.74/77.34 | 95.62/84.83 | 268.18Mil |
| HoME w/o fg | 77.78/72.22 | 83.11/77.32 | 73.89/69.89 | 97.01/85.15 | 92.58/78.91 | 93.06/80.62 | 92.70/77.24 | 95.60/84.77 | 204.51Mil |
| HoME w/o fg-sg | 77.77/72.19 | 83.09/77.29 | 73.83/69.83 | 97.02/85.14 | 92.58/78.90 | 93.05/80.60 | 92.70/77.22 | 95.60/84.75 | 202.38Mil |
| HoME w/o fg-sg-mask | 77.63/72.01 | 82.96/77.13 | 73.68/69.65 | 96.98/84.98 | 92.47/78.61 | 92.95/80.35 | 92.54/76.97 | 95.51/84.52 | 202.70Mil |

For a fair comparison, baselines marked with '*' are equipped with our HoME_Expert as the base expert network. CGC* w/o shared removes the shared experts and all gate networks of CGC*. For HoME, the 'w/o fg2' and 'w/o fg' variants ignore the second-layer feature-gates and all feature-gates respectively; the 'w/o sg' variant ignores all self-gates; the 'w/o mask' variant keeps the HoME architecture but all experts are shared. Best/runner-up results are marked bold/underlined.

We first show the effectiveness of the HoME_Expert upon MMoE, i.e., MMoE*. Then we compare HoME with the improved baselines, all equipped with HoME_Expert, such as 'CGC* w/o shared', the variant of CGC that ignores the shared experts and all gate networks. Moreover, we also implement ablation variants of our HoME: the 'w/o fg2' and 'w/o fg' variants ignore the second-layer feature-gates and all feature-gates respectively; the 'w/o sg' variant ignores all self-gates; the 'w/o mask' variant keeps the HoME architecture but all experts are shared. We have the following observations:

• (1) MMoE* largely outperforms the naive MMoE method, which indicates that our expert normalization&Swish mechanism can overcome the expert collapse issue, balance the expert outputs and encourage the expert networks to take due responsibility. (2) 'CGC* w/o shared' can be seen as a Shared-bottom with one specific-expert per task. MMoE* is weaker than this trivial 'CGC* w/o shared' solution equipped with more parameters (24% bigger than MMoE* in our experiment), which indicates that MMoE systems are fragile and can easily degrade in real large-scale streaming-data scenarios. (3) Compared to 'CGC* w/o shared', CGC* does not show significant improvement, which indicates that the shared-experts of CGC* degenerate into additional specific-experts.
(4) Compared to MMoE*, PLE* and AdaTT* achieve better performance, which indicates that after solving expert collapse, stacking multiple expert layers and increasing model parameters is a promising way to unleash the potential of multi-task modules. (5) HoME shows statistical improvements over the other strong baselines in all tasks while introducing fewer parameters and achieving the best results, which indicates that our modifications enhance multi-task MoE system stability and maximize expert efficiency.

• (1) For our HoME ablations, the 'w/o fg-sg-mask' variant shows comparable performance with MMoE*, while the 'w/o fg-sg' variant achieves significant improvements across all tasks, i.e., AUC +0.15% in most cases, which demonstrates that our hierarchy mask mechanism is a powerful and low-resource strategy to alleviate the expert degradation issue without introducing large additional parameters. (2) The 'w/o fg' variant reaches better and steadier improvements than the 'w/o fg-sg' variant, which indicates that adding the residual connection between different layers' experts is helpful for training the experts. (3) Comparing HoME with the 'w/o fg2' variant, we find that the second-layer feature-gates can enhance the model's ability, but the first-layer feature-gates show more robust and greater improvements. The reason might be that the first layer takes the raw input as its information source and feeds the coarse meta layer, so its task gradient conflict problem is more serious than in the second, fine-grained layer.

4.3 Discussion of Hyper-Parameter Sensitivity
This section explores the hyper-parameter sensitivity with respect to the number of experts and the number of feature-gate LoRAs, to investigate the robustness of HoME. For the expert number, we conduct experiments under the 'HoME w/o fg' variant, since the first-layer feature-gate is an expensive parameter-consuming operator. From Table 2, we can observe a HoME scaling-law phenomenon: merely by introducing more experts, the prediction accuracy steadily improves with the increase in the number of parameters. Such a phenomenon also demonstrates that our HoME is a balanced MoE system, which can unleash the ability of all experts. For the feature LoRA number, we conduct experiments under the variant 'HoME w/o fg2', which only involves the first-layer feature-gate while showing significant improvements in Table 1. Specifically, in our implementation, a larger number of LoRAs only reduces the hidden dimension of each LoRA without adding additional parameters, which may decrease the ability of a single LoRA. From Table 3, we can observe that the variant with two LoRAs shows the best results, which indicates that there exists a bottleneck in balancing the LoRA number against the single-LoRA modeling ability to provide more incremental information.
Table 2: Hyper-parameter sensitivity of the expert networks with respect to the number of experts.

| Variant | Expert Number | Effective-view AUC/GAUC | Long-view AUC/GAUC | Comment AUC/GAUC | Like AUC/GAUC | Follow AUC/GAUC | Collect AUC/GAUC | Forward AUC/GAUC | Click AUC/GAUC | #Parameter |
|---|---|---|---|---|---|---|---|---|---|---|
| HoME w/o fg | 1 | 77.78/72.22 | 83.11/77.32 | 73.89/69.89 | 97.01/85.15 | 92.58/78.91 | 93.06/80.62 | 92.70/77.24 | 95.60/84.77 | 204.51Mil |
| HoME w/o fg | 2 | 77.81/72.26 | 83.13/77.36 | 73.90/69.92 | 97.02/85.18 | 92.59/78.94 | 93.08/80.68 | 92.72/77.28 | 95.62/84.80 | 243.28Mil |
| HoME w/o fg | 3 | 77.83/72.29 | 83.15/77.37 | 73.94/69.93 | 97.03/85.19 | 92.60/78.97 | 93.10/80.73 | 92.74/77.33 | 95.63/84.85 | 282.04Mil |
| HoME w/o fg | 4 | 77.85/72.31 | 83.17/77.40 | 73.96/69.95 | 97.03/85.21 | 92.61/78.99 | 93.11/80.77 | 92.76/77.38 | 95.66/84.89 | 320.81Mil |

Table 3: Hyper-parameter sensitivity of the feature-gate with respect to the number of LoRAs.

| Variant | LoRA Number | Effective-view AUC/GAUC | Long-view AUC/GAUC | Comment AUC/GAUC | Like AUC/GAUC | Follow AUC/GAUC | Collect AUC/GAUC | Forward AUC/GAUC | Click AUC/GAUC | #Parameter |
|---|---|---|---|---|---|---|---|---|---|---|
| HoME w/o fg2 | 1 | 77.83/72.27 | 83.14/77.36 | 73.89/69.92 | 97.00/85.17 | 92.58/78.95 | 93.09/80.69 | 92.72/77.31 | 95.61/84.80 | 268Mil |
| HoME w/o fg2 | 2 | 77.85/72.30 | 83.16/77.39 | 73.94/69.95 | 97.02/85.19 | 92.60/78.99 | 93.10/80.71 | 92.74/77.34 | 95.62/84.83 | 268Mil |
| HoME w/o fg2 | 4 | 77.84/72.29 | 83.16/77.37 | 73.91/69.94 | 97.01/85.18 | 92.59/78.97 | 93.11/80.71 | 92.73/77.34 | 95.61/84.80 | 268Mil |
| HoME w/o fg2 | 6 | 77.84/72.28 | 83.15/77.39 | 73.91/69.94 | 97.00/85.17 | 92.59/78.96 | 93.10/80.71 | 92.73/77.32 | 95.61/84.82 | 268Mil |

Table 4: Online A/B testing results of Short-Video services at Kuaishou.

| Application | Group | Average Play-time | Play-time | Video View | Click | Like | Comment | Collect | Forward | Follow |
|---|---|---|---|---|---|---|---|---|---|---|
| Kuaishou Single Page | Young People | +0.770% | +1.041% | +0.547% | - | +1.036% | +2.124% | +2.048% | +2.390% | +2.741% |
| Kuaishou Single Page | Total | +0.311% | +0.636% | +0.059% | - | +0.601% | +1.966% | +0.548% | +2.008% | +1.351% |
| Kuaishou Lite Single Page | Young People | +0.512% | +0.729% | -0.215% | - | +0.198% | +1.533% | +1.049% | +5.241% | +1.910% |
| Kuaishou Lite Single Page | Total | +0.474% | +0.735% | -0.173% | - | +0.192% | +1.726% | +0.856% | +2.245% | +1.366% |
| Kuaishou Double Page | Young People | +0.311% | +0.645% | +0.498% | - | +1.175% | +3.244% | +1.209% | +0.717% | +0.882% |
| Kuaishou Double Page | Total | +0.169% | +1.283% | +0.882% | +0.945% | +0.483% | +0.495% | +1.678% | +0.795% | +0.911% |

4.4 Discussion of HoME Situation
Figure 6 gives the expert output distributions and the gate weight flow graph of our HoME. From it, we can observe that HoME achieves a balanced gate-weight equilibrium: (1) According to the heatmap of the feature-gate (64 randomly visualized dimensions), we can conclude that our feature-gate achieves a flexible element-wise feature selection for each expert. (2) All shared and specific expert outputs are aligned at a similar numerical magnitude. Further, we find that the meta-shared-expert distributions differ from the specific-expert distributions, which indicates that the shared knowledge tends to be encoded by the meta networks while the difference knowledge is pushed to be encoded by the specific experts. (3) All experts play their expected roles; the shared and specific experts all contribute perceivable weights.

4.5 Online A/B Test
In this section, we also push HoME to be an online ranking model serving three short-video scenarios: Kuaishou Single/Double Page (in Figure 1) and Kuaishou Lite Single Page. In our service, the main metrics are the watching-time metrics, e.g., (average) play-time, which reflects the total amount of time users spend on Kuaishou; we also show the video-view metric, which measures the total number of short-videos users watched. The online A/B test results for the Young and Total user groups are shown in Table 4. Actually, an improvement of about 0.1% in play-time is a statistically significant modification at Kuaishou. Our proposed HoME achieves very significant average play-time improvements of +0.311%, +0.474% and +0.169% for all users in the three scenarios respectively, which is the most remarkable modification in the past year.
In addition, we can observe that HoME achieves significant business gains on all interaction metrics, e.g., Click, Like, Comment and others, which reveals that HoME can converge the multi-task system to a more balanced equilibrium state without the seesaw phenomenon. Moreover, we find that the increase is larger for sparse behavior tasks, which indicates that our HoME enables all shared or specific experts to obtain appropriate gradients and maximize their effectiveness.

5 CONCLUSIONS
In this paper, we focus on solving the practical problems of multi-task learning methods and share the lessons we learned from the Kuaishou short-video service, which is one of the world's largest recommendation scenarios. We first point out that the existing widely-used multi-task family, i.e., the gated Mixture-of-Experts, is prone to several serious problems that limit the model's expected ability. From the expert outputs, we find the expert collapse problem, where experts' output distributions are significantly different. From the shared-expert learning, we observe the expert degradation problem, where some shared experts serve only one task. From the specific-expert learning, we notice the expert underfitting problem, where the specific-experts of some sparse tasks contribute almost no information. To overcome them, we propose three insightful improvements: (1) the expert normalization&Swish mechanism to align expert output distributions; (2) the hierarchy mask mechanism to regularize the relationships between tasks and maximize shared-expert efficiency; (3) the feature-gate and self-gate mechanisms to privatize more flexible expert inputs and to connect adjacent related experts, ensuring that all experts obtain appropriate gradients. Furthermore, via extensive offline and online experiments on one of the world's largest short-video platforms, Kuaishou, we showed that HoME leads to substantial improvements compared to other widely-used multi-task methods. Our HoME has been widely deployed in various online models at Kuaishou, supporting several services for 400 million active users daily.

Figure 6: The feature-gate heatmap, expert output distributions and gate weight flow of our HoME.

6 BIOGRAPHY
Xu Wang is currently a researcher at Kuaishou Technology (KStar Talent Program), Beijing, China. He received his M.S. degree from Harbin Institute of Technology, Shenzhen, China. His main research interests include recommendation systems and multi-task learning.
Jiangxia Cao is currently a researcher at Kuaishou Technology (KStar Talent Program), Beijing, China. He received his Ph.D. degree from the Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China. His research focuses on industrial recommenders and low-resource large models. He has published over 20 papers in top-tier international conferences and journals including SIGIR, WSDM, CIKM, ACL and so on.
Zhiyi Fu is currently a researcher at Kuaishou Technology (KStar Talent Program), Beijing, China. He received his M.S. and B.S. degrees from Peking University, Beijing, China. His main research interests include user long-term interest modeling and multi-task learning.

REFERENCES
[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer Normalization. arXiv (2016).
[2] Trapit Bansal, David Belanger, and Andrew McCallum. 2016. Ask the GRU: Multi-task Learning for Deep Text Recommendations. In ACM Conference on Recommender Systems (RecSys).
[3] Rich Caruana. 1997. Multitask Learning. Machine Learning (1997).
[4] Jianxin Chang, Chenbin Zhang, Zhiyi Fu, Xiaoxue Zang, Lin Guan, Jing Lu, Yiqun Hui, Dewei Leng, Yanan Niu, Yang Song, and Kun Gai. 2023. TWIN: TWo-stage Interest Network for Lifelong User Behavior Modeling in CTR Prediction at Kuaishou. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD).
[5] Jianxin Chang, Chenbin Zhang, Yiqun Hui, Dewei Leng, Yanan Niu, Yang Song, and Kun Gai. 2023. PEPNet: Parameter and Embedding Personalized Network for Infusing with Personalized Prior Information. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD).
[6] Ronan Collobert and Jason Weston. 2008. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. In International Conference on Machine Learning (ICML).
[7] Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep Neural Networks for YouTube Recommendations. In ACM Conference on Recommender Systems (RecSys).
[8] Damai Dai, Chengqi Deng, Chenggang Zhao, RX Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y Wu, et al. 2024. DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models. arXiv (2024).
[9] James Davidson, Benjamin Liebald, Junning Liu, Palash Nandy, Taylor Van Vleet, Ullas Gargi, Sujoy Gupta, Yu He, Mike Lambert, Blake Livingston, and Dasarathi Sampath. 2010. The YouTube Video Recommendation System. In ACM Conference on Recommender Systems (RecSys).
[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv (2019).
[11] Ke Ding, Xin Dong, Yong He, Lei Cheng, Chilin Fu, Zhaoxin Huan, Hai Li, Tan Yan, Liang Zhang, Xiaolu Zhang, et al. 2021. MSSM: A Multiple-level Sparse Sharing Model for Efficient Multi-Task Learning. In International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR).
[12] Jianping Fan, Tianyi Zhao, Zhenzhong Kuang, Yu Zheng, Ji Zhang, Jun Yu, and Jinye Peng. 2017. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition. IEEE Transactions on Image Processing (TIP) (2017).
[13] Joumana Ghosn and Yoshua Bengio. 1996. Multi-Task Learning for Stock Selection. In Advances in Neural Information Processing Systems (NeurIPS).
[14] Shansan Gong and Kenny Q Zhu. 2022. Positive, Negative and Neutral: Modeling Implicit Feedback in Session-based News Recommendation. In International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR).
[15] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-Rank Adaptation of Large Language Models. arXiv (2021).
[16] Sergey Ioffe and Christian Szegedy. 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In International Conference on Machine Learning (ICML).
[17] Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. 1991. Adaptive Mixtures of Local Experts. Neural Computation (1991).
[18] Alex Kendall, Yarin Gal, and Roberto Cipolla. 2018. Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics. In IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR).
[19] Danwei Li, Zhengyu Zhang, Siyang Yuan, Mingze Gao, Weilin Zhang, Chaofei Yang, Xi Liu, and Jiyan Yang. 2023. AdaTT: Adaptive Task-to-Task Fusion Network for Multitask Learning in Recommendations. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD).
[20] Qi Liu, Zhilong Zhou, Gangwei Jiang, Tiezheng Ge, and Defu Lian. 2023. Deep Task-specific Bottom Representation Network for Multi-Task Recommendation. In ACM International Conference on Information and Knowledge Management (CIKM).
[21] Yixin Liu, Kai Zhang, Yuan Li, Zhiling Yan, Chujie Gao, Ruoxi Chen, Zhengqing Yuan, Yue Huang, Hanchi Sun, Jianfeng Gao, et al. 2024. Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models. arXiv (2024).
[22] Xiao Lu, Yaonan Wang, Xuanyu Zhou, Zhenjun Zhang, and Zhigang Ling. 2017. Traffic Sign Recognition via Multi-Modal Tree-Structure Embedded Multi-Task Learning. IEEE Transactions on Intelligent Transportation Systems (TITS) (2017).
[23] Jiaqi Ma, Zhe Zhao, Xinyang Yi, Jilin Chen, Lichan Hong, and Ed H Chi. 2018. Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD).
[24] Xiao Ma, Liqin Zhao, Guan Huang, Zhi Wang, Zelin Hu, Xiaoqiang Zhu, and Kun Gai. 2018. Entire Space Multi-Task Model: An Effective Approach for Estimating Post-Click Conversion Rate. In International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR).
[25] Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. 2016. Cross-Stitch Networks for Multi-Task Learning. In IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR).
[26] Duy-Kien Nguyen and Takayuki Okatani. 2019. Multi-Task Learning of Hierarchical Vision-Language Representation. In IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR).
[27] Qi Pi, Guorui Zhou, Yujing Zhang, Zhe Wang, Lejian Ren, Ying Fan, Xiaoqiang Zhu, and Kun Gai. 2020. Search-based User Interest Modeling with Lifelong Sequential Behavior Data for Click-Through Rate Prediction. In ACM International Conference on Information and Knowledge Management (CIKM).
[28] Prajit Ramachandran, Barret Zoph, and Quoc V Le. 2017. Searching for Activation Functions. arXiv (2017).
[29] Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. 2017. Sluice Networks: Learning What to Share Between Loosely Related Tasks. arXiv (2017).
[30] Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2019. A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks. In AAAI Conference on Artificial Intelligence (AAAI).
[31] Xiang-Rong Sheng, Liqin Zhao, Guorui Zhou, Xinyao Ding, Binding Dai, Qiang Luo, Siran Yang, Jingshan Lv, Chi Zhang, Hongbo Deng, and Xiaoqiang Zhu. 2021. One Model to Serve All: Star Topology Adaptive Recommender for Multi-Domain CTR Prediction. In ACM International Conference on Information and Knowledge Management (CIKM).
[32] Liangcai Su, Junwei Pan, Ximei Wang, Xi Xiao, Shijie Quan, Xihua Chen, and Jie Jiang. 2024. STEM: Unleashing the Power of Embeddings for Multi-task Recommendation. In AAAI Conference on Artificial Intelligence (AAAI).
[33] Hongyan Tang, Junning Liu, Ming Zhao, and Xudong Gong. 2020. Progressive Layered Extraction (PLE): A Novel Multi-Task Learning (MTL) Model for Personalized Recommendations. In ACM Conference on Recommender Systems (RecSys).
[34] Ruobing Xie, Cheng Ling, Yalong Wang, Rui Wang, Feng Xia, and Leyu Lin. 2021. Deep Feedback Network for Recommendation. In International Joint Conference on Artificial Intelligence (IJCAI).
[35] Jing Yan, Liu Jiang, Jianfei Cui, Zhichen Zhao, Xingyan Bin, Feng Zhang, and Zuotao Liu. 2024. Trinity: Syncretizing Multi-/Long-tail/Long-term Interests All in One. arXiv (2024).
[36] Jiaqi Zhai, Lucy Liao, Xing Liu, Yueming Wang, Rui Li, Xuan Cao, Leon Gao, Zhaojie Gong, Fangda Gu, Michael He, et al. 2024. Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations. arXiv (2024).
[37] Yijie Zhang, Yuanchen Bei, Hao Chen, Qijie Shen, Zheng Yuan, Huan Gong, Senzhang Wang, Feiran Huang, and Xiao Huang. 2024. Multi-Behavior Collaborative Filtering with Partial Order Graph Convolutional Networks. arXiv (2024).
[38] Yu Zhang and Qiang Yang. 2022. A Survey on Multi-Task Learning. IEEE Transactions on Knowledge and Data Engineering (TKDE) (2022).
[39] Ziqiang Zhang, Sanyuan Chen, Long Zhou, Yu Wu, Shuo Ren, Shujie Liu, Zhuoyuan Yao, Xun Gong, Lirong Dai, Jinyu Li, and Furu Wei. 2024. SpeechLM: Enhanced Speech Pre-Training With Unpaired Textual Data. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP) (2024).
[40] Zhe Zhao, Lichan Hong, Li Wei, Jilin Chen, Aniruddh Nath, Shawn Andrews, Aditee Kumthekar, Maheswaran Sathiamoorthy, Xinyang Yi, and Ed Chi. 2019. Recommending What Video to Watch Next: A Multitask Ranking System. In ACM Conference on Recommender Systems (RecSys).
[41] Guorui Zhou, Na Mou, Ying Fan, Qi Pi, Weijie Bian, Chang Zhou, Xiaoqiang Zhu, and Kun Gai. 2019. Deep Interest Evolution Network for Click-Through Rate Prediction. In AAAI Conference on Artificial Intelligence (AAAI).
[42] Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep Interest Network for Click-Through Rate Prediction. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD).
[43] Yuhang Zhou, Zihua Zhao, Haolin Li, Siyuan Du, Jiangchao Yao, Ya Zhang, and Yanfeng Wang. 2024. Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters. arXiv (2024).
ai_researcher
4
MatPilot_an_LLM-enabled_AI_Materials_Scientist_under_the_Framework_of_Human-Machine_Collaboration.pdf
MatPilot: an LLM-enabled AI Materials Scientist under the Framework of Human-Machine Collaboration

Ziqi Ni1, Yahao Li1, Kaijia Hu1, Kunyuan Han1, Ming Xu2, Xingyu Chen1, Fengqi Liu1, Yicong Ye1,*, Shuxin Bai1
1College of Aerospace Science and Engineering, National University of Defense Technology, Changsha, 410073, China
2College of Intelligence Science and Technology, National University of Defense Technology, Changsha, 410073, China
* Corresponding author. E-mail: [email protected] (Ziqi Ni), [email protected] (Yicong Ye).

Abstract
The rapid evolution of artificial intelligence, particularly large language models, presents unprecedented opportunities for materials science research. We proposed and developed an AI materials scientist named MatPilot, which has shown encouraging abilities in the discovery of new materials. The core strength of MatPilot is its natural-language-interactive human-machine collaboration, which augments the research capabilities of human scientist teams through a multi-agent system. MatPilot integrates the unique cognitive abilities, extensive accumulated experience, and ongoing curiosity of human beings with the AI agents' capabilities of advanced abstraction, complex knowledge storage and high-dimensional information processing. It can generate scientific hypotheses and experimental schemes, and employ predictive models and optimization algorithms to drive an automated experimental platform. It turns out that our system demonstrates capabilities for efficient validation, continuous learning, and iterative optimization.

Keywords
AI materials scientist; large language models; autonomous experimentation platform

1 Human-machine collaboration framework
In recent years, artificial intelligence (AI) has driven a revolutionary transformation in materials science. Data-driven approaches have significantly advanced materials research, leading to notable improvements in predicting material properties [1-4], optimizing compositions and experimental conditions [5-10], and discovering new materials [11-15]. However, this data-driven paradigm has inherent limitations: it tends to follow a linear logic, constructing a semi-closed mechanical system that relies heavily on human input. It also faces significant challenges in developing interpretable models, enhancing data-learning efficiency, and elucidating the complex structure-property relationships of materials. More critically, current AI methods applied to materials tend to overemphasize correlations while neglecting causal relationships. In many instances, simple causal inference guided by domain expertise can provide deeper insights than analyzing massive datasets. Without incorporating common-sense reasoning and deep domain knowledge, AI systems that rely solely on statistical analysis struggle to reach the level of human intelligence needed to discover complex scientific theories.

The emergence of large language models (LLMs) such as ChatGPT has opened the door to human-machine collaboration through natural language interaction. By enabling communication in natural language, humans can interact with AI agents to exchange specialized knowledge in materials science.
When researchers' intuition and experience are systematically integrated, AI models can continuously learn and think through feedback. Engaging in scientific research through human-machine collaboration, where "humans are in the loop", allows the computational advantages of machines to be combined with the intuitive judgment capabilities of human beings, breaking through the limitations of purely data-driven methods. Fig. 1 depicts the human-machine collaboration framework implemented in our MatPilot, showcasing the collaboration between researchers and AI in a materials laboratory.

Fig.1 Human-machine collaboration framework implemented in our MatPilot.

2 MatPilot system architecture
Our group has developed MatPilot, an LLM-enabled AI materials scientist designed to enhance human-machine collaboration. It combines the strengths of human intuition with AI efficiency to enable a more systematic and effective exploration of materials science. MatPilot consists of a cognition module and an execution module (Fig.2), enhancing materials researchers' ability to think and to perform experiments efficiently. The cognition module, analogous to the human brain, is responsible for processing information, analyzing data, and making decisions. Meanwhile, the execution module resembles the body, tasked with performing the practical actions necessary for experimental procedures. Together, these two modules form a cohesive system in which thinking and action are interlinked, enabling researchers to conceptualize, strategize, and implement their ideas in practice. This integration of cognitive processing with physical execution establishes a research platform that supports both thinking capabilities and practical outcomes. By bridging the gap between abstract reasoning and tangible implementation, MatPilot facilitates a comprehensive approach to navigating the complexities of materials research.

MatPilot is designed as a tool to augment human creativity rather than replace it. The cognition and execution modules are intended to function autonomously while human researchers remain at the core of the scientific endeavor: their creativity, critical thinking, and intuition are irreplaceable drivers of innovation and discovery. MatPilot handles repetitive tasks, analyzes complex data patterns, and streamlines experimental workflows, allowing researchers to focus on more nuanced and conceptual work. Researchers can dedicate their time to hypothesis formation, interpretation of results, and refining research directions, with MatPilot deeply involved in and informing the decision-making process. In this way, MatPilot acts as an intelligent collaborator, enhancing the researcher's ability for deep thinking and precise execution, while always keeping the human mind as the ultimate source of insight and direction.

Fig.2 MatPilot's architecture (with energy storage ceramics as an example).
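As a purely illustrative sketch of this cognition-execution interplay, the snippet below expresses the propose-execute-learn loop in code; the class names, method signatures and placeholder scheme/measurement values are our own assumptions rather than MatPilot's actual implementation.

```python
class CognitionModule:
    """Stand-in for the 'brain': proposes schemes and learns from feedback."""
    def propose(self, knowledge):
        # Generate a hypothesis and an experimental scheme from current knowledge.
        return {"composition": "BaTiO3-based", "sintering_temp_C": 1200}

    def update(self, knowledge, scheme, result):
        # Fold the experimental feedback back into the knowledge state.
        return knowledge + [(scheme, result)]

class ExecutionModule:
    """Stand-in for the 'body': drives the automated experimental platform."""
    def run(self, scheme):
        # Dispensing, milling, sintering, testing would happen here.
        return {"measured_property": 42.0}   # placeholder measurement

knowledge = []
cognition, execution = CognitionModule(), ExecutionModule()
for _ in range(3):                  # iterative propose -> execute -> learn loop
    scheme = cognition.propose(knowledge)
    result = execution.run(scheme)
    knowledge = cognition.update(knowledge, scheme, result)
```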
3 Cognition module
The cognition module of MatPilot integrates knowledge acquisition and innovation generation as its core functions. The knowledge acquisition function ensures that MatPilot continuously gathers the latest insights in materials science, while the innovation generation function enables it to propose novel research ideas and experimental designs. By combining these capabilities, MatPilot can efficiently understand existing research findings and assist in generating new insights, thereby providing substantial support to researchers.

3.1 Knowledge acquisition
We envision MatPilot evolving into an AI expert in materials science, with deep domain knowledge to effectively support researchers. To achieve this, we applied the retrieval-augmented generation (RAG) approach, enabling a large language model to acquire specialized materials science knowledge through retrieval from a high-quality knowledge base. The quality of this knowledge base is crucial for the effectiveness of the RAG method. Fig.3 illustrates the workflow for constructing such a high-quality knowledge base, which involves four main steps:

a. Literature screening. Initially, we filter a collection of core literature to identify the most relevant and central research findings, ensuring that the sources of information are of high quality and reliability.
b. Data extraction. Subsequently, we extract high-quality, structured data from the selected literature, focusing particularly on experimental procedures and key performance data. This step ensures that the extracted information is directly applicable to the model's reasoning and analysis tasks.
c. Knowledge distillation. Next, we condense complex scientific knowledge into manageable core concepts, enabling the model to efficiently grasp the essence of intricate problems and enhancing processing efficiency.
d. Knowledge graph construction. Finally, we construct a knowledge graph that elucidates the relationships between materials, processing methods, and performance attributes. This graph serves as a foundation for subsequent knowledge inference and relational analysis.

This approach offers two significant advantages. First, the system is no longer restricted to the static knowledge acquired during initial training; instead, it can continuously update its knowledge base to reflect the latest scientific advancements. Second, it can integrate multiple types of information (e.g., tabular data, distilled text, and relational graphs) and select the most suitable retrieval strategy depending on the specific nature of the query, thereby significantly enhancing retrieval efficiency and reasoning capabilities.

Fig.3 Workflow for constructing a high-quality knowledge base.
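To illustrate the retrieval step of the RAG approach, the sketch below embeds knowledge-base entries, retrieves the most similar ones by cosine similarity, and assembles an LLM prompt; the hash-based embed function, the example documents and the prompt format are hypothetical placeholders for a real embedding model and the curated knowledge base described above.

```python
import numpy as np

def embed(text):
    """Placeholder embedding; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

knowledge_base = [
    "BaTiO3-based ceramic sintered at 1250 C shows high energy density ...",
    "Ball milling for 12 h reduces particle size and improves densification ...",
]
kb_vecs = np.stack([embed(d) for d in knowledge_base])

def retrieve(query, k=1):
    sims = kb_vecs @ embed(query)           # cosine similarity on unit vectors
    return [knowledge_base[i] for i in sims.argsort()[::-1][:k]]

query = "How does milling time affect densification?"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"  # sent to the LLM
```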
3.2 Innovation generation
MatPilot's creative capability is founded on structural intelligence theory, which posits that innovation emerges from the synergy between divergent and convergent thinking. Divergent thinking enables the system to broadly explore the problem space and generate diverse solutions, while convergent thinking focuses on distilling these ideas into concrete, feasible proposals. Although LLMs excel at standard task processing, they often lack the imagination necessary for substantive innovation. To overcome this limitation, we have developed an innovation generation framework based on multi-agent and human-machine collaboration, as shown in Fig.4.

Fig.4 Multi-agent and human-machine debate collaboration framework for innovation generation.

In terms of multi-agent collaboration, the system incorporates three specialized types of agents: exploration agents, evaluation agents, and integration agents. Exploration agents are responsible for divergent thinking, generating diverse research directions through interdisciplinary knowledge association and heuristic reasoning. Evaluation agents focus on feasibility analysis, conducting comprehensive assessments across dimensions such as technical complexity, resource requirements, and expected outcomes. Integration agents coordinate the perspectives of the different agents, synthesizing disparate innovative elements into coherent research proposals. This multi-agent architecture emulates the collaboration of human research teams through continuous interactive dialogue and adaptive adjustment, significantly boosting the system's capacity for innovation.

The human-machine collaboration aspect establishes a bidirectional interactive mechanism. Human experts contribute domain knowledge, research experience, and strategic guidance, infusing the system's innovation process with high-level professional insights. In turn, the system harnesses its powerful data processing and analytical capabilities to rapidly generate multidimensional research directions for expert consideration. This process creates a positive feedback loop: expert feedback helps the system continuously optimize its innovation strategies, while the system's multifaceted analysis provides experts with novel research perspectives. This collaborative model ensures both the scientific validity and the feasibility of innovative proposals. MatPilot can not only generate innovative research directions but also design practical experimental protocols, thus playing a substantive creative role in materials science research. This framework enables the system to be an intelligent collaborative partner capable of research conceptualization and experimental design.
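A minimal sketch of such an exploration-evaluation-integration loop is given below; the llm function is a placeholder standing in for a real LLM call, and the prompts, round counts and role names are illustrative assumptions rather than MatPilot's actual protocol.

```python
def llm(role, prompt):
    """Placeholder for a real LLM API call with a role-specific system prompt."""
    return f"[{role} response to: {prompt[:40]}...]"

def innovation_round(topic, n_ideas=3, n_rounds=2):
    # Exploration agents: divergent thinking, several candidate directions
    proposals = [llm("explorer", f"Propose a research idea on {topic}")
                 for _ in range(n_ideas)]
    for _ in range(n_rounds):
        # Evaluation agents: feasibility critique of each candidate
        reviews = [llm("evaluator", f"Assess feasibility of: {p}") for p in proposals]
        # Explorers revise their proposals in light of the critiques (debate loop)
        proposals = [llm("explorer", f"Revise {p} given {r}")
                     for p, r in zip(proposals, reviews)]
    # Integration agent: convergent thinking, one coherent research proposal
    return llm("integrator", "Synthesize: " + " | ".join(proposals))

proposal = innovation_round("energy storage ceramics")
```

In practice, human experts would review the integrated proposal and feed their comments back as additional evaluator input, closing the human-machine loop described above.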
By integrating automated workstations at every feasible point, the module reduces the need for manual intervention, ensuring consistency, precision, and accuracy across all critical stages. This strategic automation addresses key bottlenecks of the traditional process, especially maintaining consistency and efficiency in experiments. The introduction of automation through MatPilot fundamentally reimagines the ceramic preparation process. Instead of a sequence of disconnected, laborious manual tasks, the workflow is transformed into an integrated, streamlined operation. This automation not only minimizes idle times between steps but also optimizes resource allocation, ensuring effective utilization of machinery, materials, and human oversight. The precision and consistency introduced by automation lead to highly reproducible and reliable experimental results, a crucial factor in materials research where even minor inconsistencies can lead to divergent outcomes. By automating critical stages, MatPilot also enhances safety in the laboratory setting. Manual handling of ceramic powders and sintered materials often involves risks such as exposure to fine particulates or repetitive strain from laborious actions like ball milling. Automated systems mitigate these risks by taking over the most hazardous and repetitive parts of the process, allowing researchers to focus on higher-level tasks such as experimental design and analysis.

Fig.5 Automation workflow: (a) Dispensing; (b) Ball Milling; (c) Sintering; (d) Molding; (e) DMS; (f) DHM.

4.2 Integration of embodied intelligence in autonomous experiments

The laboratory environment was initially designed for human researchers, requiring substantial modifications to automate experimental processes. However, automating each piece of equipment to ensure compatibility with robotic arms would be both impractical and excessively costly. Moreover, traditional automation methods lack the precision, adaptability, and real-time feedback needed for complex tasks requiring high flexibility. This limitation is primarily due to the absence of the necessary adaptive response capabilities and intelligent feedback mechanisms. For instance, during the preparation of ceramics, researchers rely on subjective judgment to evaluate particle quality and detect residuals when sieving powders. This reliance on human intuition poses challenges for adjusting the amplitude or frequency of robotic arm movements, which requires real-time perception and decision-making. Such subjective evaluations hinder the possibility of fully automating these processes, as conventional automation systems struggle to handle these nuanced complexities effectively.

Embodied intelligence emphasizes the ability of automation systems to adapt flexibly in uncertain environments through enhanced perception and interaction capabilities. Recent advancements such as ALOHA [21] and ReKep [22] offer promising solutions to these challenges. Mobile ALOHA performs a variety of complex household tasks autonomously or via remote operation, using imitation learning. ReKep leverages a vision-language model, GPT-4o, and relational keypoint constraints to enable robots to operate in complex environments, such as pouring tea. The similarity between these household operations and our experimental tasks gives us great confidence in the possibility of achieving autonomous experiments in materials research. We plan to implement embodied intelligence technologies over the next 1-2 years. Fig.6 shows some of the actions we are currently developing. This technology will enable us to design more intelligent systems that can dynamically adjust operations based on real-time feedback, thereby enhancing both the efficiency of experiments and the reliability of results.

Fig.6 Actions empowered by embodied intelligence technologies for autonomous experiments: (a) Pouring; (b) Scraping; (c) Shaking.
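For the sieving example above, the adaptive behavior we are targeting can be summarized as a closed perception-action loop. The sketch below is a conceptual outline under stated assumptions: the thresholds, the camera/arm interfaces, and the residual-scoring model are all hypothetical.

```python
# Conceptual closed-loop control for sieving: a vision-based residual score
# drives the shaking amplitude of the robot arm. All numbers and interfaces
# are illustrative assumptions, not a validated controller.
def residual_score(image) -> float:
    """Fraction of powder still retained on the sieve (0 = clean, 1 = blocked).
    Stub: a trained perception model would go here."""
    return 0.0

def sieve_until_clean(camera, arm, max_steps: int = 50) -> bool:
    for _ in range(max_steps):
        score = residual_score(camera.capture())
        if score < 0.05:                       # sieve essentially clean: stop
            return True
        # Proportional adjustment: more residue -> stronger shaking,
        # clipped to an assumed safe amplitude range [0.1, 1.0].
        amplitude = min(1.0, max(0.1, 0.5 + 0.5 * score))
        arm.shake(amplitude=amplitude)
    return False                               # escalate to a human researcher
```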
5 Discussion and outlook

MatPilot implements a highly efficient iterative optimization, wherein the cognition module and execution module collaborate continuously to generate hypotheses, conduct experiments, and integrate feedback. This process follows the concepts of evolutionary optimization: each iteration systematically refines experimental parameters based on empirical outcomes, building on the cumulative knowledge accrued from prior iterations. Such iterative refinement facilitates the verification of scientific hypotheses and drives the ongoing enhancement of experimental strategies. By leveraging the insights from each experiment, MatPilot accelerates materials discovery by directing research efforts toward the most promising areas, optimizing resource allocation, and minimizing time spent on less rewarding experiments.

MatPilot serves as a materials research copilot, providing specialized support while ensuring that researchers maintain full control over their investigative journey. This means that each discovery remains deeply connected to the human intellect behind it, keeping scientific exploration an endeavor of curiosity, creativity, and profound understanding, qualities that only the human spirit can truly provide. The driving force behind scientific progress remains humanity's innate curiosity about the unknown and the relentless pursuit of truth. While MatPilot amplifies researchers' capabilities, it is ultimately the researchers who impart purpose and significance to every question, every experiment, and every breakthrough.

The rapid progression of AI is fundamentally reshaping the technological landscape, offering unprecedented opportunities in materials science. Motivated by this vision, we proposed and developed MatPilot, a platform that integrates AI into research workflows. We believe that human-machine collaboration will soon become an essential and natural part of scientific research. This will enable researchers to leverage AI for increasingly complex and demanding tasks, thereby maximizing efficiency, precision, and creativity in the research process. The synergy between human ingenuity and AI-powered exploration is creating unprecedented opportunities for innovation, meeting the growing needs of materials discovery and leading materials science into a new era.

References
1. Moosavi, S. M. et al. A data-science approach to predict the heat capacity of nanoporous materials. Nat. Mater. 21, 1419-1425 (2022).
2. Kang, Y. & Kim, J. ChatMOF: an artificial intelligence system for predicting and generating metal-organic frameworks using large language models. Nat. Commun. 15, 4705 (2024).
3. Ross, J. et al. Large-scale chemical language representations capture molecular structure and properties. Nat. Mach. Intell. 4, 1256-1264 (2022).
4. Burés, J. & Larrosa, I. Organic reaction mechanism classification using machine learning. Nature 613, 689-695 (2023).
5. Ruiz Euler, H.-C. et al. A deep-learning approach to realizing functionality in nanoelectronic devices. Nat. Nanotechnol. 15, 992-998 (2020).
6. Moon, J. et al. Active learning guides discovery of a champion four-metal perovskite oxide for oxygen evolution electrocatalysis. Nat. Mater. 23, 108-115 (2024).
7. Rao, Z. et al. Machine learning-enabled high-entropy alloy discovery. Science 378, 78-85 (2022).
8. Wang, J. Y. et al. Identifying general reaction conditions by bandit optimization. Nature 626, 1025-1033 (2024).
9. Su, J. et al. Intelligent synthesis of magnetic nanographenes via chemist-intuited atomic robotic probe. Nat. Synth. 3, 466-476 (2024).
10. Kanarik, K. J. et al. Human-machine collaboration for improving semiconductor process development. Nature 616, 707-711 (2023).
11. Merchant, A. et al. Scaling deep learning for materials discovery. Nature 624, 80-85 (2023).
12. Wu, Z. et al. Leveraging language model for advanced multiproperty molecular optimization via prompt engineering. Nat. Mach. Intell. (2024) doi:10.1038/s42256-024-00916-5.
13. Burés, J. & Larrosa, I. Organic reaction mechanism classification using machine learning. Nature 613, 689-695 (2023).
14. Zeni, C. et al. MatterGen: a generative model for inorganic materials design. Preprint at https://doi.org/10.48550/ARXIV.2312.03687 (2023).
15. Witman, M. D., Goyal, A., Ogitsu, T., McDaniel, A. H. & Lany, S. Defect graph neural networks for materials discovery in high-temperature clean-energy applications. Nat. Comput. Sci. 3, 675-686 (2023).
16. Burger, B. et al. A mobile robotic chemist. Nature 583, 237-241 (2020).
17. Ha, T. et al. AI-driven robotic chemist for autonomous synthesis of organic molecules. Sci. Adv. 9, eadj0461 (2023).
18. Szymanski, N. J. et al. An autonomous laboratory for the accelerated synthesis of novel materials. Nature 624, 86-91 (2023).
19. M. Bran, A. et al. Augmenting large language models with chemistry tools. Nat. Mach. Intell. 6, 525-535 (2024).
20. Darvish, K. et al. ORGANA: A Robotic Assistant for Automated Chemistry Experimentation and Characterization. Preprint at http://arxiv.org/abs/2401.06949 (2024).
21. Fu, Z., Zhao, T. Z. & Finn, C. Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation. Preprint at https://doi.org/10.48550/ARXIV.2401.02117 (2024).
22. Huang, W., Wang, C., Li, Y., Zhang, R. & Fei-Fei, L. ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation. Preprint at https://doi.org/10.48550/ARXIV.2409.01652 (2024).
ai_researcher
1
Turning_Your_Research_Idea_into_a_Proposal_Worth_Funding.pdf
Rethinking Resource Allocation in Science

Johan Bollen* 1,4, Stephen Carpenter 2, Jane Lubchenco 3, and Marten Scheffer* 4
1: School of Informatics, Computing, and Engineering, Indiana University, Bloomington IN
2: Center for Limnology, University of Wisconsin-Madison
3: Department of Integrative Biology, Oregon State University
4: WU Environmental Sciences, Wageningen University, Wageningen, The Netherlands
*: To whom correspondence should be addressed; E-mail: [email protected], [email protected]

US funding agencies alone distribute a yearly total of roughly $65 billion, largely through the process of proposal peer review: scientists compete for project funding by submitting grant proposals which are evaluated by selected panels of peer reviewers. Similar funding systems are in place in most advanced democracies. However, in spite of its venerable history, proposal peer review is increasingly struggling to cope with the growing mismatch between the demand for and the supply of research funding.

A costly system. The most conspicuous problem with the current system is the cost associated with the time spent on writing, processing, and reviewing project proposals. For instance, it is estimated that European researchers collectively spent about €1.4 billion worth of time to submit unsuccessful applications to the Horizon 2020 program, a sizable proportion of the €5.5 billion distributed. Meanwhile, Australian researchers are estimated to collectively spend three centuries a year writing, submitting, and reviewing project proposals (1). Of course, this time is not entirely lost. Writing and reviewing proposals helps to articulate one's vision, but is that worth the extraordinary amount of time spent? Does the system allocate science funding in the most effective manner, so that society receives the greatest possible return on investment? Intuitively, one would think so, but the capacity of peer review to sort out the most productive proposals is in fact surprisingly low. For instance, an analysis of 102,740 funded NIH grants found almost no relationship between review scores and the resulting scientific output (2).

So, should we simply skip the proposal submission and review machinery? We could, for example, give all tenured researchers an equal share of the available funding. An analysis of Natural Sciences and Engineering Research Council of Canada (NSERC) statistics shows that preparing a grant application costs approximately $40,000 (Canadian). This is more expensive than simply giving every qualified investigator a direct baseline discovery grant of $30,000 (3). On the other hand, not all scientific work is equal. Some scientists conduct research that is more promising, and some efforts inherently require greater resources. Awarding the same amount of baseline funding to every researcher is therefore not an optimal strategy. Furthermore, an equal distribution may not meet societal or programmatic needs. Could we redesign our funding system in a way that avoids both the excessive costs and the low accuracy of the current system, while ensuring that the fundamental needs of society are met?

Here, we suggest a redesign based on two simple starting points:

A) Fund people instead of projects. This principle obviates the need for project proposal writing, reviewing, and management. At the same time, it likely improves reliability because it is based on the evaluation of the comprehensive merits of individual scientists rather than a single project proposal.
Indeed, a comparative study suggests that a person-based funding system results in more high-quality scientific output (4).

B) Leverage the wisdom of the crowd. There is strong evidence that large groups can collectively make better decisions than small teams of specialists, as long as decisions are made independently and the groups are sufficiently diverse (5).

To see how a system based on these two principles may work for fund allocation, consider the following two-step procedure:

1. Every participating scientist receives an equal portion of all available funding as their base starting budget.

2. Each participant anonymously donates a fixed percentage (say 50%) of their funding to other, non-affiliated scientists. This is repeated each funding round.

It is important to note that each scientist must distribute a percentage of everything they received in the previous round, i.e., the base funding plus what they previously received from other scientists. For example, suppose that a scientist receives the base amount of $50,000 and received $150,000 from other researchers. The total received is $200,000, of which 50%, i.e., $100,000, needs to be donated to other scientists. The scientist retains a total of $100,000. Since every scientist participates, funding circulates through the community, converging over time to funding levels that all scientists have collectively, yet independently, determined. Importantly, scientists who receive more funding than others also become the more significant funders in the system. This self-organized weighting resembles the mathematical technique of power iteration used to converge on stationary probability distributions of web page relevancy (6). To speed up the process, the initial manual funding selections can be algorithmically carried forward until a convergence criterion is reached. Another option is to have a two-phase donation process: a first round is followed by the publication of funding numbers and subsequently a second donation round. Regardless of the specific implementation details, the system converges on a distribution of funding that reflects all information in the scientific community with a minimum investment of time and effort.
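The dynamics of this procedure are easy to simulate. The toy sketch below (our illustration, with randomly generated donation preferences standing in for scientists' actual independent choices) iterates the donate-and-retain rule and reports the resulting funding inequality for several donation fractions.

```python
# Toy simulation of the two-step procedure: equal base budgets, then repeated
# anonymous donation of a fixed fraction of everything received. The donation
# preference matrix W is random here; in reality each row would encode one
# scientist's independent choices.
import numpy as np

def simulate(n=1000, base=50_000.0, fraction=0.5, rounds=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.lognormal(sigma=2.0, size=(n, n))   # skewed, heterogeneous preferences
    np.fill_diagonal(W, 0.0)                    # no self-donations
    W /= W.sum(axis=1, keepdims=True)           # row i: how scientist i splits gifts
    funds = np.full(n, base)
    for _ in range(rounds):                     # power-iteration-like redistribution
        funds = (1 - fraction) * funds + fraction * (funds @ W)
    return funds

def gini(x):
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

for f in (0.1, 0.5, 0.9):
    print(f"donation fraction {f}: Gini = {gini(simulate(fraction=f)):.3f}")
```

As discussed under Opportunities below, the mandatory donation fraction acts as a tuning knob: small fractions keep the distribution close to the egalitarian baseline, while large fractions let the crowd's collective preferences dominate.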
Challenges

While the basic principle is simple and transparent, its practical implementation requires some additional considerations. First of all, we have to decide who can participate in this system. For example, the system could involve everyone with an academic appointment at an accredited institution. Second, as in the current proposal peer review system, conflicts of interest must be vigorously prevented. A well-designed automated approach may effectively eliminate most problems. For instance, co-authorship and shared affiliations can be automatically detected from scientific information databases. Also, algorithms may efficiently detect fraudulent reciprocal donation loops or cartels, which should be forbidden and penalized. Funding agencies will naturally play a central role in the development, application, and refinement of the proposed system. For instance, Self-Organized Fund Allocation (SOFA) could be set up to run within specific domains, subdomains, or even smaller topic areas (e.g., chemistry, environmental chemistry, or marine chemistry). This allows funding agencies and policy makers to set budgets according to programmatic priorities. Stable funding for expensive infrastructure and long-term contracts could continue to be allocated by the existing funding system. However, staying closer to the new approach, researchers could also be allowed to put up large common projects or infrastructures as "super-nodes" for funding in SOFA. It may also be convenient to provide some generic options such as "redistribute my funding equally to all female scientists" or "scientists younger than 30 years old". These and other elaborations may further ensure a reliable and balanced system. Clearly, a cautious approach is needed, requiring a transparent multidisciplinary team effort that involves the funding agencies in designing, monitoring, and evaluating pilot projects that pave the way for a larger-scale implementation.

Opportunities

While there are obvious challenges and uncertainties in implementing such a novel approach, there are also opportunities that go beyond solving the excessive overhead and unreliability of the current system. There are at least four commonly recognized issues that can be addressed in one stroke:

1) Systematic biases with regard to ethnicity or gender can be objectively measured and mitigated. For instance, a bias against funding women may be corrected by raising the funding to each female scientist by a fixed percentage.

2) Excessive inequality in funding can be controlled by tuning the mandatory donation fraction. Simulations suggest that a 50% donation fraction results in funding inequality that approximates that of the current system (6). By contrast, a very small donation fraction will result in a highly egalitarian distribution, since scientists simply retain most of their base funding (6).

3) Newcomers always receive the guaranteed base fund, with no obligation to spend excessive time applying for project funding. A reduced mandatory donation fraction for early-career scientists could strengthen their position even further.

4) The ivory tower effect could be reduced by letting a percentage (say 10%) of the funds be distributed by the public, allowing for transparent input with respect to societally desirable research directions. This would in addition stimulate researchers to communicate their ideas to the public.

Risks, barriers, and bridges

The proposed system would immediately save billions of dollars that are now spent on proposal submission and reviewing. While it has the potential to solve a range of broadly felt issues with our present system of science funding, it remains impossible to foresee all consequences. A donation system may lead to higher well-being among researchers (7) than the present competition-oriented model, but at the same time the crowd-based aspect will reward those who most openly communicate their work and plans, encouraging "salesmanship" at the cost of thoughtfulness. A central challenge will be to ensure that the system remains responsive to societal needs. Will the wisdom of the crowd converge to priorities and objectives that meet societal needs? Our proposal includes the ability of policy-makers to direct funding to particular domains and constituencies. Clearly, funding agencies will remain uniquely positioned to provide guidance and know-how for bridging societal and scientific objectives. Government program managers would continue to be highly engaged in the process, but their role would shift toward designing useful classification structures, for instance defining sub-domains, and managing crowd-based assessments within those domains (rather than the laborious task of evaluating scientific excellence).
Instead of directing funds to scientists, the agencies would work collaboratively with scientists and decision-makers to leverage shared resources in support of both scientific excellence and programmatic obligations. Fortunately, implementation is not an "all or nothing" matter. One could run small-scale trials with fractions of the national research budget alongside the existing system. This might in fact soon be realized in The Netherlands, where the Dutch parliament approved a motion directing the national science funding agencies to experiment with new models of funding allocation. Such tests afford the opportunity to conduct repeated cycles of evaluation that can inform gradual improvement of the system as it is being scaled up.

The funding model we propose may seem a potentially disruptive innovation. However, society can no longer afford to lose billions in a complex and costly machinery with unclear performance. The present system has served us well for over half a century. It may now be perceived as 'tested-and-proven', but we have come to a point where incremental adjustments seem unlikely to repair its broadly recognized shortcomings. The situation we face may be an example of how scaling up can sometimes lead to fundamentally unsustainable overhead, as observed in systems ranging from businesses (8) to societies (9). A carefully planned experiment with a Self-Organized Fund Allocation system may provide a bridge to more efficient and reliable alternatives.

Acknowledgements: This manuscript formed the basis of a 2018 Nature Worldview (10) editorial. The authors express their gratitude to Kate Coronges and Alessandro Vespignani of the Network Science Institute, Northeastern University, Boston MA, for their tremendously constructive feedback and comments that helped us to significantly improve this manuscript. We also want to thank the organizers and participants of the national workshop on increasing grant submission pressure ("Aanvraagdruk") and improving NWO grant request procedures that was organized by the Netherlands Organization for Scientific Research (NWO) in April 2017 (https://www.nwo.nl/beleid/nwo+werkconferenties+2017/nationale+werkconferentie).

Figure 1. Schematic comparison of the current fund allocation model with Self-Organized Fund Allocation.

1. D. L. Herbert, A. G. Barnett, N. Graves, Funding: Australia's grant system wastes time. Nature 495, 314 (2013).
2. F. C. Fang, A. Bowen, A. Casadevall, NIH peer review percentile scores are poorly predictive of grant productivity. eLife 5, e13323 (2016).
3. R. Gordon, B. J. Poulin, Cost of the NSERC science grant peer review system exceeds the cost of giving every qualified researcher a baseline grant. Accountability in Research 16, 13-40 (2009).
4. P. Azoulay, J. S. G. Zivin, G. Manso, Incentives and Creativity: Evidence from the Academic Life Sciences. National Bureau of Economic Research Working Paper Series No. 15466 (2009).
5. A. W. Woolley, C. F. Chabris, A. Pentland, N. Hashmi, T. W. Malone, Evidence for a collective intelligence factor in the performance of human groups. Science 330, 686-688 (2010).
6. J. Bollen, D. Crandall, D. Junk, Y. Ding, K. Börner, From funding agencies to scientific agency: collective allocation of science funding as an alternative to peer review (2014).
7. S. F. Brosnan, F. B. M. de Waal, Monkeys reject unequal pay. Nature 425, 297-299 (2003).
8. T. R. Zenger, Explaining organizational diseconomies of scale in R&D: Agency problems and the allocation of engineering talent, ideas, and effort by firm size. Management Science 40, 708-729 (1994).
9. J. Tainter, The Collapse of Complex Societies (Cambridge University Press, Cambridge, UK, 1988), pp. 250.
10. J. Bollen, Who would you share your funding with? Nature 560, 143 (2018).
ai_researcher
5
BI-CST_Behavioral_Science-based_Creativity_Support_Tool_for_Overcoming_Design_Fixation.pdf
arXiv:1105.0294v1 [math.NT] 2 May 2011

On bi-unitary harmonic numbers

József Sándor
Babeş-Bolyai University
Faculty of Mathematics and Informatics
Str. Kogălniceanu 1
400084 Cluj-Napoca, Romania

Abstract

The aim of this paper is twofold. First we give a short survey of the existing results on various notions of harmonic numbers; and then we make a preliminary study of bi-unitary harmonic numbers.

1 Introduction

In 1948 O. Ore [11] considered numbers $n$ whose divisors have integral harmonic mean
$$H(n) = \frac{r}{\frac{1}{d_1} + \cdots + \frac{1}{d_r}},$$
where $1 = d_1 < d_2 < \cdots < d_r = n$ are all the divisors of $n$. Since
$$\sum \frac{1}{d} = \frac{1}{n} \sum \frac{n}{d} = \frac{1}{n}\,\sigma(n)$$
and $r = d(n)$ (with $\sigma(n)$ and $d(n)$ denoting the sum, resp. number of divisors of $n$), clearly $H(n) = \frac{n\,d(n)}{\sigma(n)}$, so $H(n)$ is an integer iff
$$\sigma(n) \mid n\,d(n). \quad (1)$$

C. Pomerance [12] called a number $n$ with property (1) a harmonic number. Ore proved that if $n$ is perfect (i.e. $\sigma(n) = 2n$), then it is harmonic. Indeed, if $n$ is perfect then $H(n) = d(n)/2$, and $2 \mid d(n)$ is always true, since $n$ is not a perfect square. Ore proved also that if $n$ is harmonic, then $\omega(n) \geq 2$, and Pomerance showed that the only harmonic numbers with two distinct prime factors are the even perfect numbers.

In 1963 M. V. Subbarao [14] called the number $n$ a balanced number if
$$\frac{\sigma(n)}{d(n)} = \frac{n}{2},$$
and proved that $n = 6$ is the single balanced number. Now, remark that a balanced number satisfies $H(n) = 2$, so it is a particular harmonic number.

M. Garcia [4] extended the list of harmonic numbers to include all 45 which are $< 10^7$, and found more than 200 larger ones. The least one, apart from 1 and the perfect numbers, is 140. All 130 harmonic numbers up to $2 \cdot 10^9$ are listed by G. L. Cohen [1]; and R. M. Sorli (see Cohen and Sorli [2]) has continued the list to $10^{10}$. Ore conjectured that every harmonic number is even, but this probably is very difficult. Indeed, this result, if true, would imply that there are no odd perfect numbers. See also W. H. Mills [9], who proved that if there exists an odd harmonic number $n$, then $n$ has a prime-power factor greater than $10^7$.

In 1998 G. L. Cohen and M. Deng [3] introduced a generalization of harmonic numbers. Let $k \geq 1$ be an integer and let $\sigma_k(n)$ be the sum of $k$th powers of the divisors of $n$. Then $n$ is called $k$-harmonic if
$$\sigma_k(n) \mid n^k d(n). \quad (2)$$
They proved that for $k > 1$ there is no $k$-harmonic number in the range $1 < n \leq 10^{10}$.

A divisor $d$ of $n$ is called a unitary divisor if $(d, n/d) = 1$. Let $\sigma^*(n)$ and $d^*(n)$ denote the sum, resp. number of unitary divisors of $n$. M. V. Subbarao and L. J. Warren [15] introduced the unitary perfect numbers $n$ satisfying $\sigma^*(n) = 2n$. They found the first four unitary perfect numbers, while the fifth one was discovered by Ch. Wall [18]. A number $n$ is called unitary harmonic if
$$\sigma^*(n) \mid n\,d^*(n), \quad (3)$$
a concept introduced by K. Nageswara Rao [10], who showed that if $n$ is unitary perfect, then it is also unitary harmonic. P. Hagis and G. Lord [6] proved that if $H^*(x)$ is the counting function of these numbers, then for $\varepsilon > 0$ and large $x$ one has
$$H^*(x) < 2.2\,x^{1/2} \cdot 2^{(1+\varepsilon)\log x/\log\log x}.$$
The same result was obtained in 1957 by H.-J. Kanold [8] for the counting function of harmonic numbers. Wall [19] showed that there are 23 unitary harmonic numbers $n$ with $\omega(n) \leq 4$, and claimed that there are 43 unitary harmonic numbers $n \leq 10^6$. However, Hagis and Lord [6] have shown this with 45 in place of 43. Recently, T. Goto and S. Shibata [5] have determined all harmonic numbers satisfying $H(n) \leq 300$. According to the Referee, Goto and K. Okeya have extended the study to $H(n) \leq 1200$. For infinitary harmonic numbers, related to the concept of an "infinitary divisor", see Hagis and Cohen [7]. For many results involving these topics, see also Chapter I of the author's recent book [13].
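Condition (1) is easy to check numerically. The short script below (a brute-force illustration added for the reader, not part of Ore's argument) recovers exactly the values quoted above: besides 1 and the perfect numbers 6, 28 and 496, the least harmonic number below 500 is 140.

```python
# n is harmonic iff sigma(n) | n * d(n)  -- condition (1).
def sigma_and_d(n):
    s = c = 0
    for k in range(1, n + 1):
        if n % k == 0:
            s += k
            c += 1
    return s, c

def is_harmonic(n):
    s, c = sigma_and_d(n)
    return (n * c) % s == 0

print([n for n in range(1, 500) if is_harmonic(n)])  # [1, 6, 28, 140, 270, 496]
```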
2 Bi-unitary divisors and bi-unitary harmonic numbers

A divisor $d$ of $n$ is called a bi-unitary divisor if the greatest common unitary divisor of $d$ and $n/d$ is 1. Let $\sigma^{**}(n)$ be the sum of bi-unitary divisors of $n$. Wall [17] called a number $n$ bi-unitary perfect if $\sigma^{**}(n) = 2n$, and proved that there are only three bi-unitary perfect numbers, namely 6, 60 and 90. It is not difficult to verify that $\sigma^{**}$ is multiplicative, and for prime powers $p^\alpha$,
$$\sigma^{**}(p^\alpha) = \sigma(p^\alpha) = \frac{p^{\alpha+1} - 1}{p - 1}, \quad \text{if } \alpha \text{ is odd;} \qquad \sigma^{**}(p^\alpha) = \frac{p^{\alpha+1} - 1}{p - 1} - p^{\alpha/2}, \quad \text{if } \alpha \text{ is even.} \quad (4)$$

Let $d^{**}(n)$ be the number of bi-unitary divisors of $n$. It is also known (see D. Suryanarayana [16]) that if $n = p_1^{a_1} \cdots p_r^{a_r}$ is the prime factorization of $n > 1$, then
$$d^{**}(n) = \Big(\prod_{a_i \text{ even}} a_i\Big)\Big(\prod_{a_i \text{ odd}} (a_i + 1)\Big). \quad (5)$$

We now introduce the main notion and results of this paper.

Definition. The number $n$ is called bi-unitary harmonic if
$$\sigma^{**}(n) \mid n\,d^{**}(n). \quad (6)$$

Theorem 1. Let $k \geq 1$ be an integer and suppose that $n$ is bi-unitary $k$-perfect, i.e. $\sigma^{**}(n) = kn$. Then $n$ is bi-unitary harmonic iff
$$k \mid d^{**}(n). \quad (7)$$
Particularly, 6, 60, 90 are bi-unitary harmonic numbers.

Proof. (7) is a consequence of Definition (6) and the definition of bi-unitary $k$-perfect numbers. Remark that for $k = 2$, by relation (5), (7) is always true. By Wall's result on bi-unitary perfect numbers it follows that 6, 60, 90 are also bi-unitary harmonic numbers.

Let $\omega(n)$ denote the number of distinct prime factors of $n$.

Corollary. If $\omega(n) \geq 2$ and $n$ is bi-unitary 4-perfect, then $n$ is a bi-unitary harmonic number.

Proof. Remark that for $\omega(n) \geq 2$, by (5), $4 \mid d^{**}(n)$, so by (7) the result follows.

Theorem 2. Let $n = p_1^{a_1} \cdots p_r^{a_r} > 1$ be the prime factorization of $n$, and suppose that all $a_i$ ($i = 1, \dots, r$) are odd. Then $n$ is bi-unitary harmonic iff it is harmonic.

Proof. Since all $a_i$ ($i = 1, \dots, r$) are odd, by (5),
$$d^{**}(n) = \prod_{a_i \text{ odd}} (a_i + 1) = d(n),$$
and by (4), $\sigma^{**}(n) = \prod \sigma^{**}(p^{a_i}) = \prod \sigma(p^{a_i}) = \sigma(n)$. Thus (6) is true if and only if (1) is true.

Corollary. 1) Besides 1, the only squarefree bi-unitary harmonic number is 6. This follows by a result of Ore [11] on harmonic numbers.
2) If $n$ is an odd bi-unitary harmonic number with all $a_i$ odd, then $n$ has a component exceeding $10^7$. If $n$ is even, then $\omega(n) \geq 3$. This follows by the result of Mills stated in the Introduction, as well as by the fact that if $\omega(n) = 2$, then $n$, being harmonic, must be a perfect number. Every even perfect number has the form $2^{2k}p$, where $p$ is an odd prime. Since $2k$ is even, this leads to a contradiction.

Remark. By Theorem 2, new bi-unitary harmonic numbers can be found. For example, $n = 2 \cdot 3^3 \cdot 5 = 270$, $n = 2^5 \cdot 3 \cdot 7 = 672$, $n = 2^5 \cdot 3^3 \cdot 5 \cdot 7 = 30240$, $n = 2^3 \cdot 3^3 \cdot 5^3 \cdot 7 \cdot 13 = 2457000$; see [2] for a list of harmonic seeds and harmonic numbers.

A computer program, on the other hand, may be applied for a search of bi-unitary harmonic numbers. For example, there are 50 such numbers $n \leq 10^6$, but the search could be extended to $10^9$ (see the Table at the end of the paper), etc.
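Such a search is a direct implementation of (4), (5) and (6). The sketch below (an illustration only, using naive trial-division factorization) reproduces the first entries of the Table.

```python
# Bi-unitary harmonic search: n is bi-unitary harmonic iff
# sigma**(n) | n * d**(n), with sigma** and d** given by (4) and (5).
def factorize(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def sigma_bi(n):
    s = 1
    for p, a in factorize(n).items():
        t = (p ** (a + 1) - 1) // (p - 1)   # sigma(p^a)
        if a % 2 == 0:
            t -= p ** (a // 2)              # drop the middle divisor p^(a/2)
        s *= t
    return s

def d_bi(n):
    d = 1
    for _, a in factorize(n).items():
        d *= a if a % 2 == 0 else a + 1
    return d

print([n for n in range(1, 1000) if n * d_bi(n) % sigma_bi(n) == 0])
# -> [1, 6, 45, 60, 90, 270, 420, 630, 672]
```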
Clearly p2a is relatively prime to 1 + p + · · · + pa−1 + pa+1 + · · · + p2a, so we must have (1 + p + p2 + · · · + pa−1 + pa+1 + · · · + p2a)|(2a) But this is impossible, since the first term contains a number of 2a terms, each (excepting 1) greater than 1, so 1 + p + · · · + pa−1 + pa+1 + · · · + p2a > 2a. Therefore (8) follows. Corollary. If n is bi-unitary harmonic number, then ω(n) ≥ 2. Indeed, by Theorem 3, n cannot be of the form p2a. On the other hand, if n = p2a+1 (p prime), then it is harmonic, contradicting Ore’s result that ω(n) ≥ 2. If there are odd, as well as even exponents, the following particular result holds true: Theorem 4. There are no bi-unitary harmonic numbers of the form p3 · q2 (p, q distinct primes). Proof. If n = p2a+1q2b, then d∗∗(n) = 2b(2a + 2) = 4b(a + 1), σ∗∗(n) = (1 + p + · · · + p2a+1)(1 + q + · · · + qb−1 + qb+1 + · · · + q2b), 6 so n is bi-unitary harmonic iff (1+p+· · ·+p2a+1)(1+q+· · ·+qb−1+qb+1+· · ·+q2b)|4b(a+1)p2a+1q2b (9) For a = 1, b = 1 this becomes (1 + p + p2 + p3)(1 + q2)|8p3q2 (10) Since (1 + p + p2 + p3, p3) = 1 and (1 + q2, q2) = 1, it follows that (1 + p + p2 + p3)|8q2 and (1 + q2)|8p3 (11) If p = 2, it follows (1 + q2)|64, which is impossible for all q. Similarly, if q = 2, then 5|8p3, so p = 5, and then the first relation of (11) is impossible. Thus, p, q ≥ 3. Remark that 1 + p + p2 + p3 = (1 + p)(1 + p2) and that (11) implies that (1 + p) and (1 + p2) can have only two distinct prime factors, namely: 2 and q. Let 1 + p = kq, i.e. p = kq − 1. Then p2+1 = k2q2−2kq+2 is divisible by q only if q = 2. But this is impossible. Clearly k is even, k = 2s, so p2 + 1 = 2(2s2q2 − 2sq + 1) which cannot be a power of 2, since 2s2q2 − 2sq + 1 is odd. Thus (1 + p)(1 + p2) can have also other prime factors than 2 and q, contradicting (11). Remark. It can be proved similarly that there are no bi-unitary harmonic numbers of the form pq4 or p3 · q4, and that the only one of the form pq2 is 5 · 32. Finally, we state the following result: Theorem 5. Let n be of the form and let r s n = p2ai+1 i q2bj j , i=1 Y s j=1 Y n1 = p2ai+1 i r i=1 Y j=1 Y qbj −1 j and n2 = s qbj j j=1 Y 7 (pi and qj are distinct primes; ai, bj positive integers). Suppose that n1 is a harmonic number, while n2 a unitary harmonic number. Then n = n1n2 is a bi-unitary harmonic number. Proof. This follows from the fact that for the numbers n given above one has the identity H ∗∗(n) = H(n1)H ∗(n2), (12) where H, H ∗, H ∗∗ are the corresponding harmonic means (e.g. H ∗∗(n) = nd∗∗(n)/σ∗∗(n)). Identity (12) can be proved by using the definitions and the results (e.g. relation (5)) for the above functions. Final remarks. By examining the Table with all bi-unitary har- monic numbers up to 109, we can remark that there are in to- tal 211 such numbers in this range. The first 12 bi-unitary har- monic numbers are all harmonic, or unitary harmonic; the first num- ber without this property is n = 9072. There are only 5 bi- unitary harmonic numbers up to 109 which are powerful, namely n = 3307500, 9922500, 23152500, 138915000, 555660000. From these, only n = 9922500 = 22 · 34 · 54 · 72 is a perfect square. It is interesting to note that, the existence of perfect squares in the set of harmonic or unitary harmonic numbers, is an open question up to now. 3 Related numbers As we have seen, the harmonic means of divisors, unitary divisors, and bi-unitary divisors are given explicitly by H(n) = nd(n) σ(n) , H ∗(n) = nd∗(n) σ∗(n) , H ∗∗(n) = nd∗∗(n) σ∗∗(n) . 
In what follows, the harmonic, unitary harmonic, resp. bi-unitary harmonic numbers will be called simply as H, H ∗, resp. H ∗∗-numbers. 8 This will be motivated also by the introduction of the following six new fractions and related numbers: H1(n) = nd(n) σ∗(n) , H2(n) = nd∗(n) σ(n) , H3(n) = H4(n) = nd∗∗(n) σ(n) , H5(n) = nd∗(n) σ∗∗(n) , H6(n) = nd(n) σ∗∗(n) , nd∗∗(n) σ∗(n) . When H1(n) is an integer, we will say that n is a H1 number, etc. By remarking that d∗∗(n) is always divisible by d∗(n), and that if n has the form n = pε1 r , where εi ∈ {1, 2} (i = 1, r), pi distinct primes, then d∗∗(n) = d∗(n), σ∗∗(n) = σ∗(n), we can state the following result: 1 . . . pεr Theorem 6. In all cases, H ∗∗(n) = H5(n)k1(n), H4(n) = H2(n)k2(n), H6(n) = H ∗(n)k3(n), where k1(n), k2(n), k3(n) are integers. r , then H3(n) = H1(n), H5(n) = H ∗(n), If n has the form n = pε1 H6(n) = H ∗∗(n) = H ∗(n), H4(n) = H2(n). 1 . . . pεr If n has the form pa1 1 . . . par r with all ai odd (i = 1, r), then H5(n) = H2(n), H6(n) = H1(n), H3(n) = H4(n) = H(n). Corollary. In all cases, a H2-number is also a H4-number; a H ∗- number is also a H6-number; a H5-number is also a H ∗∗-number. If n = r , then n is a H ∗∗-number iff it is a H ∗, and a H6-number; n is H5- pε1 1 . . . pεr number, iff it is a H ∗-number, and n = H2-number iff n = H4-number. If all ai are odd, then the notions of H2 and H5-numbers; H, H3, H4- numbers; resp. H1 and H6-numbers coincide. Remark. Since in Wall [19] there are stated all H ∗-numbers with ω(n) ≤ 4, we can say from the Table 1 of that paper, that the only H ∗∗ (i.e. bi-unitary harmonic)-number of the form p2q is 32 · 5 = 45; - of the form p2qr are 22 · 3 · 5 = 60 and 32 · 2 · 5 = 90; - of the form p2q2r is 52 · 72 · 13 = 15925; - of the form p2qrs are 22 · 3 · 5 · 7 = 420, 32 · 2 · 5 · 7 = 630; - of the form p2q2rs is 22 · 52 · 7 · 13 = 9100; - of the form 9 p2q2r2s is 32 · 52 · 132 · 17 = 646425. (Here p, q, r, s are distinct primes). These complement some results of Theorem 4. Clearly, the deeper study of all of the above numbers cannot be done in this paper (but there are some results under preparation). We state only the following result: Theorem 7. If n is a perfect number, then n is a H2 and H4-number, too. If n > 1 is a H2-number, then it cannot be a geometric number (i.e. a perfect square). Proof. Let σ(n) = 2n. Then, as d∗(n) = 2ω(n), and 2|d∗∗(n), clearly H2(n) and H4(n) will be integers. It is well known that σ(n) is odd if n is a perfect square (i.e., n = m2). Then, if H2(n) is an integer, then clearly σ(n) divides n, and this is possible only for n = 1. Remarks. 1) The similar problem in the case of H-numbers, i.e. if they are or not geometric, is a difficult open question (see e.g. [2]). 2) A number n > 1 is called friendly number (or Duffinian number), see e.g. [13], if (n, σ(n)) = 1. Clearly, if n is friendly, then n cannot be H2- or H4-number. Indeed, in this case one must have σ(n)|d∗(n), or σ(n)|d∗∗(n), but this is impossible for n > 1, since σ(n) > d∗∗(n) ≥ d∗(n) for n > 1. A similar result holds true for the H-numbers. Acknowledgements. The author is indebted to the Referee as well as to Professor G. L. Cohen, for pointing out new references; for correc- tions in a former version of the paper; and for many suggestions which considerably improved the presentation of the paper. He also thanks Pro- fessor R. M. Sorli for providing him a list of bi-unitary harmonic numbers up to 109 (see the attached Table). References [1] G. L. 
[1] G. L. Cohen, Numbers whose positive divisors have small integral harmonic mean, Math. Comp. 66(1997), 883-891.
[2] G. L. Cohen and R. M. Sorli, Harmonic seeds, Fib. Quart. 36(1998), 386-390.
[3] G. L. Cohen and M. Deng, On a generalization of Ore's harmonic numbers, Nieuw Arch. Wiskunde 16(1998), no. 3, 161-172.
[4] M. Garcia, On numbers with integral harmonic mean, Amer. Math. Monthly 61(1954), 89-96.
[5] T. Goto and S. Shibata, All numbers whose positive divisors have integral harmonic mean up to 300, Math. Comp. 73(2004), 475-491.
[6] P. Hagis and G. Lord, Unitary harmonic numbers, Proc. Amer. Math. Soc. 51(1975), 1-7.
[7] P. Hagis and G. L. Cohen, Infinitary harmonic numbers, Bull. Austral. Math. Soc. 41(1990), 151-158.
[8] H.-J. Kanold, Über das harmonische Mittel der Teiler einer natürlichen Zahl, Math. Ann. 13(1957), 371-374.
[9] W. H. Mills, On a conjecture of Ore, Proc. Number Theory Conf., Boulder, CO, 1972, 142-146.
[10] K. Nageswara Rao, On some unitary divisor functions, Scripta Math. 28(1967), 347-351.
[11] O. Ore, On the averages of the divisors of a number, Amer. Math. Monthly 55(1948), 615-619.
[12] C. Pomerance, On a problem of Ore: Harmonic numbers, Abstract 709-A5, Notices Amer. Math. Soc. 20(1973), A-648.
[13] J. Sándor, Handbook of number theory, II (in coop. with B. Crstici), Springer Verlag, 2004.
[14] M. V. Subbarao, Problem E1558, Amer. Math. Monthly 70(1963), 92; solution in 70(1963), 1009-1010.
[15] M. V. Subbarao and L. J. Warren, Unitary perfect numbers, Canad. Math. Bull. 9(1966), 147-153.
[16] D. Suryanarayana, The number of bi-unitary divisors of an integer, Lecture Notes in Math., vol. 251, 1972, 273-278.
[17] Ch. Wall, Bi-unitary perfect numbers, Proc. Amer. Math. Soc. 33(1972), no. 1, 39-42.
[18] Ch. R. Wall, The fifth unitary perfect number, Canad. Math. Bull. 18(1975), 115-122.
[19] Ch. Wall, Unitary harmonic numbers, Fib. Quart. 21(1983), 18-25.

AMS Subject classification: 11A25, 11A99, 11N37.
All bi-unitary harmonic numbers less than 10^9

n | H**(n)
1 | 1
6 = 2 · 3 | 2
45 = 3^2 · 5 | 3
60 = 2^2 · 3 · 5 | 4
90 = 2 · 3^2 · 5 | 4
270 = 2 · 3^3 · 5 | 6
420 = 2^2 · 3 · 5 · 7 | 7
630 = 2 · 3^2 · 5 · 7 | 7
672 = 2^5 · 3 · 7 | 8
2970 = 2 · 3^3 · 5 · 11 | 11
5460 = 2^2 · 3 · 5 · 7 · 13 | 13
8190 = 2 · 3^2 · 5 · 7 · 13 | 13
9072* = 2^4 · 3^4 · 7 | 12
9100 = 2^2 · 5^2 · 7 · 13 | 10
10080 = 2^5 · 3^2 · 5 · 7 | 16
15925 = 5^2 · 7^2 · 13 | 7
22680 = 2^3 · 3^4 · 5 · 7 | 18
22848 = 2^6 · 3 · 7 · 17 | 16
27300 = 2^2 · 3 · 5^2 · 7 · 13 | 15
30240 = 2^5 · 3^3 · 5 · 7 | 24
40950 = 2 · 3^2 · 5^2 · 7 · 13 | 15
45360 = 2^4 · 3^4 · 5 · 7 | 20
54600 = 2^3 · 3 · 5^2 · 7 · 13 | 20
81900 = 2^2 · 3^2 · 5^2 · 7 · 13 | 18
95550 = 2 · 3 · 5^2 · 7^2 · 13 | 14
99792 = 2^4 · 3^4 · 7 · 11 | 22
136500 = 2^2 · 3 · 5^3 · 7 · 13 | 25
163800 = 2^3 · 3^2 · 5^2 · 7 · 13 | 24
172900 = 2^2 · 5^2 · 7 · 13 · 19 | 19
204750 = 2 · 3^2 · 5^3 · 7 · 13 | 25
208656 = 2^4 · 3^4 · 7 · 23 | 23
245700 = 2^2 · 3^3 · 5^2 · 7 · 13 | 27
249480 = 2^3 · 3^4 · 5 · 7 · 11 | 33
312480 = 2^5 · 3^2 · 5 · 7 · 31 | 31
332640 = 2^5 · 3^3 · 5 · 7 · 11 | 44
342720 = 2^6 · 3^2 · 5 · 7 · 17 | 32
385560 = 2^3 · 3^4 · 5 · 7 · 17 | 34
409500 = 2^2 · 3^2 · 5^3 · 7 · 13 | 30
472500 = 2^2 · 3^3 · 5^4 · 7 | 25
491400 = 2^3 · 3^3 · 5^2 · 7 · 13 | 36
646425 = 3^2 · 5^2 · 13^2 · 17 | 13
695520 = 2^5 · 3^3 · 5 · 7 · 23 | 46
708288 = 2^6 · 3 · 7 · 17 · 31 | 31
716625 = 3^2 · 5^3 · 7^2 · 13 | 21
791700 = 2^2 · 3 · 5^2 · 7 · 13 · 29 | 29
819000 = 2^3 · 3^2 · 5^3 · 7 · 13 | 40
861840 = 2^4 · 3^4 · 5 · 7 · 19 | 38
900900 = 2^2 · 3^2 · 5^2 · 7 · 11 · 13 | 33
955500 = 2^2 · 3 · 5^3 · 7^2 · 13 | 28
982800 = 2^4 · 3^3 · 5^2 · 7 · 13 | 40
1028160 = 2^6 · 3^3 · 5 · 7 · 17 | 48
1037400 = 2^3 · 3 · 5^2 · 7 · 13 · 19 | 38
1187550 = 2 · 3^2 · 5^2 · 7 · 13 · 29 | 29
1228500 = 2^2 · 3^3 · 5^3 · 7 · 13 | 45
1392300 = 2^2 · 3^2 · 5^2 · 7 · 13 · 17 | 34
1421280 = 2^5 · 3^3 · 5 · 7 · 47 | 47
1433250 = 2 · 3^2 · 5^3 · 7^2 · 13 | 28
1528800 = 2^5 · 3 · 5^2 · 7^2 · 13 | 32
1571328 = 2^9 · 3^2 · 11 · 31 | 32
1801800 = 2^3 · 3^2 · 5^2 · 7 · 11 · 13 | 44
2457000 = 2^3 · 3^3 · 5^3 · 7 · 13 | 60
2579850 = 2 · 3^4 · 5^2 · 7^2 · 13 | 27
2888704 = 2^10 · 7 · 13 · 31 | 32
3307500* = 2^2 · 3^3 · 5^4 · 7^2 | 28
3767400 = 2^3 · 3^2 · 5^2 · 7 · 13 · 23 | 46
3878550 = 2 · 3^3 · 5^2 · 13^2 · 17 | 26
4176900 = 2^2 · 3^3 · 5^2 · 7 · 13 · 17 | 51
4291056 = 2^4 · 3^4 · 7 · 11 · 43 | 43
4299750 = 2 · 3^3 · 5^3 · 7^2 · 13 | 42
4504500 = 2^2 · 3^2 · 5^3 · 7 · 11 · 13 | 55
4713984 = 2^9 · 3^3 · 11 · 31 | 48
4961250 = 2 · 3^4 · 5^4 · 7^2 | 25
5405400 = 2^3 · 3^3 · 5^2 · 7 · 11 · 13 | 66
6168960 = 2^7 · 3^4 · 5 · 7 · 17 | 64
6397300 = 2^2 · 5^2 · 7 · 13 · 19 · 37 | 37
6688500 = 2^2 · 3 · 5^3 · 7^3 · 13 | 49
7698600 = 2^3 · 3^2 · 5^2 · 7 · 13 · 47 | 47
7780500 = 2^2 · 3^2 · 5^3 · 7 · 13 · 19 | 57
7983360 = 2^8 · 3^4 · 5 · 7 · 11 | 64
8353800 = 2^3 · 3^3 · 5^2 · 7 · 13 · 17 | 68
8666112 = 2^10 · 3 · 7 · 13 · 31 | 48
9922500* = 2^2 · 3^4 · 5^4 · 7^2 | 30
10032750 = 2 · 3^2 · 5^3 · 7^3 · 13 | 49
10624320 = 2^6 · 3^2 · 5 · 7 · 17 · 31 | 62
10701600 = 2^5 · 3 · 5^2 · 7^3 · 13 | 56
10999296 = 2^9 · 3^2 · 7 · 11 · 31 | 56
11302200 = 2^3 · 3^3 · 5^2 · 7 · 13 · 23 | 69
11309760 = 2^6 · 3^3 · 5 · 7 · 11 · 17 | 88
11875500 = 2^2 · 3^2 · 5^3 · 7 · 13 · 29 | 58
12899250 = 2 · 3^4 · 5^3 · 7^2 · 13 | 45
13022100 = 2^2 · 3^3 · 5^2 · 7 · 13 · 53 | 53
14303520 = 2^5 · 3^3 · 5 · 7 · 11 · 43 | 86
15561000 = 2^3 · 3^2 · 5^3 · 7 · 13 · 19 | 76
18673200 = 2^4 · 3^3 · 5^2 · 7 · 13 · 19 | 76
19061280 = 2^5 · 3^2 · 5 · 7 · 31 · 61 | 61
19845000 = 2^3 · 3^4 · 5^4 · 7^2 | 40
20638800 = 2^4 · 3^4 · 5^2 · 7^2 · 13 | 48
20884500 = 2^2 · 3^3 · 5^3 · 7 · 13 · 17 | 85
22932000 = 2^5 · 3^2 · 5^3 · 7^2 · 13 | 64
23152500* = 2^2 · 3^3 · 5^4 · 7^3 | 49
23569920 = 2^9 · 3^3 · 5 · 11 · 31 | 80
23647680 = 2^6 · 3^3 · 5 · 7 · 17 · 23 | 92
24160500 = 2^2 · 3^2 · 5^3 · 7 · 13 · 59 | 59
25798500 = 2^2 · 3^4 · 5^3 · 7^2 · 13 | 54
25832520 = 2^3 · 3^4 · 5 · 7 · 17 · 67 | 67
27027000 = 2^3 · 3^3 · 5^3 · 7 · 11 · 13 | 110
29381625 = 3^2 · 5^3 · 7^2 · 13 · 41 | 41
31872960 = 2^6 · 3^3 · 5 · 7 · 17 · 31 | 93
31888080 = 2^4 · 3^4 · 5 · 7 · 19 · 37 | 74
32997888 = 2^9 · 3^3 · 7 · 11 · 31 | 84
34889400 = 2^3 · 3^3 · 5^2 · 7 · 13 · 71 | 71
35626500 = 2^2 · 3^3 · 5^3 · 7 · 13 · 29 | 87
38383800 = 2^3 · 3 · 5^2 · 7 · 13 · 19 · 37 | 74
38785500 = 2^2 · 3^3 · 5^3 · 13^2 · 17 | 52
42997500 = 2^2 · 3^3 · 5^4 · 7^2 · 13 | 52
43205568 = 2^6 · 3 · 7 · 17 · 31 · 61 | 61
43330560 = 2^10 · 3 · 5 · 7 · 13 · 31 | 80
43857450 = 2 · 3^4 · 5^2 · 7^2 · 13 · 17 | 51
46683000 = 2^3 · 3^3 · 5^3 · 7 · 13 · 19 | 114
47297250 = 2 · 3^3 · 5^3 · 7^2 · 11 · 13 | 77
47392800 = 2^5 · 3 · 5^2 · 7^2 · 13 · 31 | 62
48323520 = 2^6 · 3^3 · 5 · 7 · 17 · 47 | 94
50213520 = 2^4 · 3^7 · 5 · 7 · 41 | 72
51597000 = 2^3 · 3^4 · 5^3 · 7^2 · 13 | 72
51979200 = 2^6 · 3 · 5^2 · 7^2 · 13 · 17 | 64
56511000 = 2^3 · 3^3 · 5^3 · 7 · 13 · 23 | 115
64701000 = 2^3 · 3^2 · 5^3 · 7 · 13 · 79 | 79
68796000 = 2^5 · 3^3 · 5^3 · 7^2 · 13 | 96
71253000 = 2^3 · 3^3 · 5^3 · 7 · 13 · 29 | 116
77477400 = 2^3 · 3^2 · 5^2 · 7 · 11 · 13 · 43 | 86
77641200 = 2^4 · 3^3 · 5^2 · 7 · 13 · 79 | 79
93284100 = 2^2 · 3^2 · 5^2 · 7 · 13 · 17 · 67 | 67
95327232 = 2^10 · 3 · 7 · 11 · 13 · 31 | 88
98993664 = 2^9 · 3^4 · 7 · 11 · 31 | 90
103194000 = 2^4 · 3^4 · 5^3 · 7^2 · 13 | 80
108421632 = 2^9 · 3^3 · 11 · 23 · 31 | 92
109147500 = 2^2 · 3^4 · 5^4 · 7^2 · 11 | 55
109336500 = 2^2 · 3^3 · 5^3 · 7 · 13 · 89 | 89
129991680 = 2^10 · 3^2 · 5 · 7 · 13 · 31 | 96
133660800 = 2^7 · 3^3 · 5^2 · 7 · 13 · 17 | 128
136732050 = 2 · 3^4 · 5^2 · 7^2 · 13 · 53 | 53
138915000* = 2^3 · 3^4 · 5^4 · 7^3 | 70
142990848 = 2^9 · 3^2 · 7 · 11 · 13 · 31 | 104
144471600 = 2^4 · 3^4 · 5^2 · 7^3 · 13 | 84
144963000 = 2^3 · 3^3 · 5^3 · 7 · 13 · 59 | 118
160254000 = 2^5 · 3^2 · 5^3 · 7^3 · 13 | 112
164989440 = 2^9 · 3^3 · 5 · 7 · 11 · 31 | 140
172972800 = 2^8 · 3^3 · 5^2 · 7 · 11 · 13 | 128
176289750 = 2 · 3^3 · 5^3 · 7^2 · 13 · 41 | 82
188527500 = 2^2 · 3^4 · 5^4 · 7^2 · 19 | 57
191237760 = 2^7 · 3^4 · 5 · 7 · 17 · 31 | 124
199320576 = 2^10 · 3 · 7 · 13 · 23 · 31 | 92
219287250 = 2 · 3^4 · 5^3 · 7^2 · 13 · 17 | 85
221557248 = 2^9 · 3^3 · 11 · 31 · 47 | 94
227026800 = 2^4 · 3^4 · 5^2 · 7^2 · 11 · 13 | 88
232432200 = 2^3 · 3^3 · 5^2 · 7 · 11 · 13 · 43 | 129
247484160 = 2^8 · 3^4 · 5 · 7 · 11 · 31 | 124
271498500 = 2^2 · 3^3 · 5^3 · 7 · 13^2 · 17 | 91
283783500 = 2^2 · 3^4 · 5^3 · 7^2 · 11 · 13 | 99
287752500 = 2^2 · 3^4 · 5^4 · 7^2 · 29 | 58
287878500 = 2^2 · 3^2 · 5^3 · 7 · 13 · 19 · 37 | 111
288943200 = 2^5 · 3^4 · 5^2 · 7^3 · 13 | 108
300982500 = 2^2 · 3^3 · 5^4 · 7^3 · 13 | 91
325798200 = 2^3 · 3^4 · 5^2 · 7 · 13^2 · 17 | 78
341775000 = 2^3 · 3^2 · 5^5 · 7^2 · 31 | 70
356879250 = 2 · 3^3 · 5^3 · 7^2 · 13 · 83 | 83
361179000 = 2^3 · 3^4 · 5^3 · 7^3 · 13 | 126
363854400 = 2^6 · 3 · 5^2 · 7^3 · 13 · 17 | 112
374078250 = 2 · 3^4 · 5^3 · 7^2 · 13 · 29 | 87
377055000 = 2^3 · 3^4 · 5^4 · 7^2 · 19 | 76
389975040 = 2^10 · 3^3 · 5 · 7 · 13 · 31 | 144
390957840 = 2^4 · 3^5 · 5 · 7 · 13^2 · 17 | 104
407307264 = 2^10 · 3 · 7 · 13 · 31 · 47 | 94
421866900 = 2^2 · 3^3 · 5^2 · 7 · 13 · 17 · 101 | 101
428972544 = 2^9 · 3^3 · 7 · 11 · 13 · 31 | 156
434397600 = 2^5 · 3^3 · 5^2 · 7 · 13^2 · 17 | 104
438574500 = 2^2 · 3^4 · 5^3 · 7^2 · 13 · 17 | 102
447828480 = 2^9 · 3^3 · 5 · 11 · 19 · 31 | 152
467002900 = 2^2 · 5^2 · 7 · 13 · 19 · 37 · 73 | 73
474692400 = 2^4 · 3^4 · 5^2 · 7^2 · 13 · 23 | 92
481572000 = 2^5 · 3^3 · 5^3 · 7^3 · 13 | 168
486319680 = 2^6 · 3^3 · 5 · 7 · 11 · 17 · 43 | 172
488697300 = 2^2 · 3^5 · 5^2 · 7 · 13^2 · 17 | 81
490990500 = 2^2 · 3^2 · 5^3 · 7 · 11 · 13 · 109 | 109
494968320 = 2^9 · 3^4 · 5 · 7 · 11 · 31 | 150
513513000 = 2^3 · 3^3 · 5^3 · 7 · 11 · 13 · 19 | 209
552348720 = 2^4 · 3^7 · 5 · 7 · 11 · 41 | 132
555660000* = 2^5 · 3^4 · 5^4 · 7^3 | 100
559704600 = 2^3 · 3^3 · 5^2 · 7 · 13 · 17 · 67 | 134
567567000 = 2^3 · 3^4 · 5^3 · 7^2 · 11 · 13 | 132
575757000 = 2^3 · 3^2 · 5^3 · 7 · 13 · 19 · 37 | 148
585427500 = 2^2 · 3^4 · 5^4 · 7^2 · 59 | 59
639802800 = 2^4 · 3^4 · 5^2 · 7^2 · 13 · 31 | 93
648083520 = 2^6 · 3^2 · 5 · 7 · 17 · 31 · 61 | 122
648784500 = 2^2 · 3 · 5^3 · 7^3 · 13 · 97 | 97
690908400 = 2^4 · 3^3 · 5^2 · 7 · 13 · 19 · 37 | 148
708107400 = 2^3 · 3^3 · 5^2 · 7 · 11 · 13 · 131 | 131
710892000 = 2^5 · 3^2 · 5^3 · 7^2 · 13 · 31 | 124
722358000 = 2^4 · 3^4 · 5^3 · 7^3 · 13 | 140
756756000 = 2^5 · 3^3 · 5^3 · 7^2 · 11 · 13 | 176
758951424 = 2^9 · 3^3 · 7 · 11 · 23 · 31 | 161
779688000 = 2^6 · 3^2 · 5^3 · 7^2 · 13 · 17 | 128
783457920 = 2^7 · 3^4 · 5 · 7 · 17 · 127 | 127
823280640 = 2^10 · 3 · 5 · 7 · 13 · 19 · 31 | 152
853629840 = 2^4 · 3^7 · 5 · 7 · 17 · 41 | 136
877149000 = 2^3 · 3^4 · 5^3 · 7^2 · 13 · 17 | 136
879196500 = 2^2 · 3^2 · 5^3 · 7 · 13 · 19 · 113 | 113
970023600 = 2^4 · 3^4 · 5^2 · 7^2 · 13 · 47 | 94
973176750 = 2 · 3^2 · 5^3 · 7^3 · 13 · 97 | 97
977394600 = 2^3 · 3^5 · 5^2 · 7 · 13^2 · 17 | 108
992548080 = 2^4 · 3^8 · 5 · 31 · 61 | 81
ai_researcher
1
Dial-M_A_Masking-based_Framework_for_Dialogue_Evaluation.pdf
Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines

Gabriel Salomon*, Rayson Laroca* and David Menotti*
*Department of Informatics, Federal University of Paraná (UFPR), Curitiba, PR, Brazil
Email: {gsaniceto, rblsantos, menotti}@inf.ufpr.br

arXiv:2005.03106v2 [cs.CV] 8 May 2020

Abstract—Smart meters enable remote and automatic electricity, water and gas consumption reading and are being widely deployed in developed countries. Nonetheless, there is still a huge number of non-smart meters in operation. Image-based Automatic Meter Reading (AMR) focuses on dealing with this type of meter reading. We estimate that the Energy Company of Paraná (Copel), in Brazil, performs more than 850,000 readings of dial meters per month. Those meters are the focus of this work. Our main contributions are: (i) a public real-world dial meter dataset (shared upon request) called UFPR-ADMR; (ii) a deep learning-based recognition baseline on the proposed dataset; and (iii) a detailed error analysis of the main issues present in AMR for dial meters. To the best of our knowledge, this is the first work to introduce deep learning approaches to multi-dial meter reading and to perform experiments on unconstrained images. We achieved a 100.0% F1-score on the dial detection stage with both Faster R-CNN and YOLO, while the recognition rates reached 93.6% for dials and 75.25% for meters using Faster R-CNN (ResNeXt-101).

Index Terms—automatic meter reading, dial meters, pointer-type meters, deep learning, public dataset

I. INTRODUCTION

Measuring residential energy consumption is known to be a laborious task [1]–[3]. Although smart meters are gradually replacing old meters, there are still many old mechanical meters in operation around the world, since their replacement is time-consuming and costly. In many regions, such as remote areas and developing nations, manual on-site readings are still prevalent [4]. Even in developed countries, replacements are still far from complete. For example, at the end of 2018, there were still more than 26 million non-automatic meters in the United States [5].

In the literature, Automatic Meter Reading (AMR) is usually associated with digital and smart meters [6]. In this work, we use this designation exclusively for image-based automatic readings. AMR allows the employees of the service company (electricity/gas/water) or, preferably, the consumers themselves to capture meter images using a mobile device, which is cheaper and more feasible than manual on-site reading, and easier to deploy – in the short/medium term – than the replacement of old meters.

There are two main categories of residential energy meters [7], [8]: (i) analog (with cyclometer and dial displays) and (ii) digital (with electronic display and smart meters), as shown in Fig. 1. This work focuses on dial meters since, although there are numerous dial meters in operation, there are still many open challenges in this context (as detailed further).

Fig. 1. The most common types of energy meters: (a) cyclometer display; (b) dial display; (c) electronic display; (d) smart meter.

The Energy Company of Paraná (Copel) [9] measures electricity consumption in more than 4 million consuming units (i.e., meters) per month in the Brazilian state of Paraná. From the images they provided us (see Section III), we estimate that 21% of those devices are dial meters, resulting in more than 840,000 dial meter readings carried out every month.
Most of the dial meter recognition literature is focused on industrial applications, e.g., pressure meters [10]–[12], voltmeters [13] and ammeters [14]. As the meters are generally fixed and indoors, the image quality is strictly controlled. Although in some cases the conditions are indeed realistic, they are not as unconstrained as in images obtained in outdoor environments, with challenging conditions, e.g., severe lighting conditions (low light, glares, uneven illumination, reflections, etc.), dirt in the region of interest, and images taken at a distance. In addition, most approaches are based on handcrafted features [11], [15], and were evaluated exclusively on private datasets [10], [11], [14]–[16].

Taking into account the above discussions, we introduce a real-world fully-labeled dataset (shared upon request) containing 2,000 meter images, acquired in unconstrained scenarios by Copel employees, with 9,097 individual dials and a well-defined evaluation protocol, to assist the development and assessment of new approaches for this task¹. In addition, we conducted experiments using deep learning models on our dataset images to serve as baselines for future work, investigating problems related to dial meter reading and providing guidance for further research through a detailed quantitative and qualitative error analysis.

¹The UFPR-ADMR dataset is publicly available (but upon request) to the research community at web.inf.ufpr.br/vri/databases/ufpr-admr/.

The remainder of this work is organized as follows. In Section II, we discuss approaches designed for AMR as well as deep learning techniques. The proposed dataset is described in Section III. Section IV presents the evaluated deep learning-based approaches for automatic reading of dial meters, while the results (with a detailed error analysis) are reported in Section V. Lastly, in Section VI, we state the conclusions.

II. RELATED WORK

There are many works in the literature that dealt with AMR. Most of them focus on the recognition of cyclometers and digital meters using Optical Character Recognition (OCR) methods. Recently, deep learning approaches have received great attention in this context [4], [10], [12], [17]–[19]. Dial meter recognition research, on the other hand, is more scarce. Most methods focus on gauges for industrial application [10]–[14], [20], [21]. Although gauges may look similar to energy dial meters, they usually contain only a single dial and one type of dial template, and the image conditions tend to be much more controlled in terms of lighting, dirt, and image quality. In this section, we describe some relevant works on AMR as well as state-of-the-art deep learning approaches for object detection and recognition [22]–[26].

A. Digit-based Meter Reading

Gallo et al. [2] proposed a method that uses a Multilayer Perceptron (MLP) to locate the Region of Interest (ROI) of the meters (also denoted as counter region [1], [2], [4]), Maximally Stable Extremal Regions (MSER) to segment the digits, Histogram of Oriented Gradients (HOG) for feature extraction, and a Support Vector Machine (SVM) for digit recognition.

Nodari and Gallo [27] proposed a method named MultiNOD for gas cyclometer reading. It consists of a neural network tree, sharing and resizing features to perform counter detection and digit segmentation. The digit recognition stage was handled using Tesseract. This approach was later improved in [1], with the addition of a Fourier analysis applied to the segmented image in order to avoid false positives. Finally, an SVM was employed for digit classification.

Tsai et al. [18] employed the Single Shot MultiBox Detector (SSD) [28], a deep learning object detector, to locate the counter region in energy meters. The authors reported an accuracy rate of 100% in their experiments, but did not address the recognition stage.

Yang et al. [19] proposed a Fully Convolutional Sequence Recognition Network (FCSRN) for water meter analog digit reading, with a novel loss function entitled Augmented Loss (AugLoss). AugLoss addresses the "middle-state" that can occur when the digit accumulator is changing from one display digit to the next one, usually outputting the old displayed digit. Their approach outperformed Recurrent Neural Networks (RNN) and attention-based models on the task of sequence recognition, but the experiments were made on controlled images, with cropped and aligned meters.

Gómez et al. [17] introduced a segmentation-free approach to perform meter reading. They trained a Convolutional Neural Network (CNN) to yield readings directly from the input images, without the need to detect the counter region. Although their approach has achieved promising results, the authors used a private dataset in the experiments, and only compared their method with traditional algorithms that rely on handcrafted features, which are easily affected by noise and may not be robust to images acquired in adverse conditions [4].

Laroca et al. [4] designed a two-stage approach for AMR. The Fast-YOLOv2 model [29] was employed for counter detection and three CNN-based models were evaluated in the counter recognition stage. The authors considerably improved their recognition results when balancing the training set in terms of digit classes through data augmentation techniques.

B. Dial Meter Reading

Tang et al. [15] proposed a complete framework for dial energy meter reading based on binarization, line intersection, and morphological operations. Despite being an interesting approach, the dataset used in the experiments was not published, and the images were obtained in a controlled environment.

In [16], the authors also employed handcrafted features for dial recognition. In addition to binarization and line intersection, the counter region was detected using Scale-Invariant Feature Transform (SIFT) features. Their method was evaluated on a private dataset containing only 141 images taken in a controlled environment.

The following approaches dealt only with single-dial meters (commonly known as gauges) and not with energy meters. Although the problems are similar, there is a fundamental difference: a small error in a multi-dial meter can result in a completely wrong measurement (especially if the error occurs in recognizing the most significant dials). Such a fact needs to be taken into account when evaluating recognition methods.
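To make this difference concrete, the snippet below (a toy illustration, not code from any of the surveyed methods) converts per-dial digit predictions into a meter reading and shows how the same single-dial mistake costs 1 or 10,000 units depending on the dial position.

```python
# The impact of a dial error depends on its significance: on a 5-dial meter,
# misreading the most significant dial shifts the reading by 10^4 units.
def reading(digits):          # most significant dial first
    value = 0
    for d in digits:
        value = value * 10 + d
    return value

def reading_error(pred, truth):
    return abs(reading(pred) - reading(truth))

truth = [3, 4, 0, 7, 9]
print(reading_error([3, 4, 0, 7, 8], truth))  # least significant off by 1 -> 1
print(reading_error([2, 4, 0, 7, 9], truth))  # most significant off by 1 -> 10000
```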
Finally, SVM was employed for digit classification. Tsai et. al. [18] employed Single Shot MultiBox Detec- tor (SSD) [28], a deep learning object detector, to locate the counter region in energy meters. The authors reported an accuracy rate of 100% on their experiments, but did not address the recognition stage. 1The UFPR-ADMR dataset is publicly available (but upon request) to the research community at web.inf.ufpr.br/vri/databases/ufpr-admr/. Yang et. al. [19] proposed a Fully Convolutional Se- quence Recognition Network (FCSRN) for water meter analog digit reading, with a novel loss function entitled Augmented Loss (AugLoss). AugLoss addresses the “middle-state” that can occur when the digit accumulator is changing from one display digit to the next one, usually outputting the old displayed digit. Their approach outperformed Recurrent Neural Networks (RNN) and attention-based models on the task of sequence recognition, but the experiments were made in controlled images, with cropped and aligned meters. Gmez et. al. [17] introduced a segmentation-free approach to perform meter reading. They trained a Convolutional Neural Network (CNN) to yield readings directly from the input images, without the need to detect the counter region. Al- though their approach has achieved promising results, the authors used a private dataset in the experiments, and only compared their method with traditional algorithms that rely on handcrafted features, which are easily affected by noise and may not be robust to images acquired in adverse conditions [4]. Laroca et. al. [4] designed a two-stage approach for AMR. The Fast-YOLOv2 model [29] was employed for counter detection and three CNN-based models were evaluated in the counter recognition stage. The authors considerably improved their recognition results when balancing the training set in terms of digit classes through data augmentation techniques. B. Dial Meter Reading Tang et. al. [15] proposed a complete framework for dial energy meter reading based on binarization, line intersection, and morphological operations. Despite being an interesting ap- proach, the dataset used in the experiments was not published, and the images were obtained in a controlled environment. In [16], the authors also employed handcrafted features for dial recognition. In addition to binarization and line intersection, the counter region was detected using Scale- Invariant Feature Transform (SIFT) features. Their method was evaluated on a private dataset containing only 141 images taken in a controlled environment. The following approaches dealt only with single-dial meters (commonly known as gauges) and not with energy meters. Although the problems are similar, there is a fundamental difference: a small error in a multi-dial meter can result in a completely wrong measurement (especially if the error occurs in recognizing the most significant dials). Such a fact needs to be taken into account when evaluating recognition methods. Several approaches explored handcrafted features, such as Hough Transform (HT), in order to locate the dials [11], [13], [20]. The steps in such works are very similar: image binarization on the preprocessing stage, Hough Circle Trans- form (HCT) for dial location, and pointer angle detection using Hough Line Transform (HLT) or similar methods. These approaches generally work well in constrained environments, but may not be suitable for real-world outdoor scenarios with uneven lighting and the presence of noise. 
Mask R-CNN was proposed for pointer recognition in [14], [21]. Fang et al. [14] used it to locate the pointer and reference key points on the gauge's scale marks, while He et al. [21] focused on segmenting the meter dial and pointer. In both works, the angle between the pointer and the dial was explored to retrieve the reading. The datasets used in the experiments were not provided in either work.

Region-based Fully Convolutional Networks (R-FCNs) were used for meter detection in [12]. Although the authors used deep learning for detection, the meter reading was performed with handcrafted methods such as binarization, line detection, and skeleton extraction. Liu et al. [10] evaluated Fast R-CNN, Faster R-CNN, YOLO and SSD for meter detection and concluded that, even though Faster R-CNN outperforms the others, YOLO is the fastest. Nevertheless, the recognition was performed by a handcrafted method (i.e., HT), and the images used have not been made publicly available.

C. Deep Learning Methods

ResNet [22] is one of the recent breakthroughs in deep networks. The introduction of residual blocks enabled deeper network architectures while having fewer parameters than shallower networks such as VGG19 [23]. ResNet also performs better than such networks and converges faster: the residual learning process introduces lower-level features directly to higher abstraction layers, preserving information. ResNet was later upgraded to ResNeXt [30]. The main difference between them is the concept of "cardinality": instead of going deeper, ResNeXt uses a multi-branch architecture (cardinality refers to the number of branches used) to increase the transformations and achieve a higher representation power. ResNet and ResNeXt can be employed for recognition (classification) problems.

In order to detect the dials in each image, object detection deep networks will be explored. Faster R-CNN [24] is a state-of-the-art approach that uses attention mechanisms and the sharing of convolutional features between the Region Proposal Network (RPN) and the detection network (originally VGG16) to enhance speed and accuracy. First, the RPN generates region proposals that may contain known objects; then, the detection network evaluates the boundaries and classifies the objects.

Redmon et al. [25] proposed YOLO (You Only Look Once), an object detector that focuses on an extreme speed/accuracy trade-off by dividing the input image into regions and predicting bounding boxes and probabilities for each region. YOLOv2 [29], an improved version of YOLO, adopts a series of concepts (e.g., anchor boxes, batch normalization, etc.) from existing works, along with novel concepts, to improve YOLO's accuracy while making it faster [31]. Similarly, Redmon and Farhadi [26] introduced YOLOv3 (the latest version of YOLO), which uses various tricks to improve training and increase performance, such as residual blocks, shortcut connections, and upsampling. YOLO-based models have been successfully applied in several research areas [4], [32], [33].

D. Datasets

Most of the referred works do not provide a public dataset that would enable a fair comparison of results. There are a few publicly available meter datasets [4], [19], [27]; however, none of them contain images of pointer-type meters, only digit-based ones. As far as we know, there is no publicly available dataset containing images of dial meters.
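To ground the detectors discussed in Section II-C, the sketch below shows how a dial detector/recognizer could be instantiated with torchvision's Faster R-CNN API. This is not the configuration used in this paper or in the cited works; the class layout (one class per pointed digit plus background) and the helper names are our assumptions for illustration only.

```python
# Illustrative sketch: fine-tunable Faster R-CNN dial detector via torchvision.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_dial_detector(num_digit_classes=10):
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the head: background + one class per pointed digit (0-9).
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_digit_classes + 1)
    return model

def meter_reading(model, image, score_thr=0.5, max_dials=5):
    """Sort detected dials left-to-right and concatenate their digit labels."""
    model.eval()
    with torch.no_grad():
        out = model([image])[0]  # image: 3xHxW float tensor in [0, 1]
    keep = out["scores"] > score_thr
    boxes, labels, scores = out["boxes"][keep], out["labels"][keep], out["scores"][keep]
    top = scores.argsort(descending=True)[:max_dials]   # most confident dials
    order = boxes[top][:, 0].argsort()                  # leftmost = most significant
    return "".join(str(int(lbl) - 1) for lbl in labels[top][order])
```

The left-to-right assembly mirrors the "final reading" step described in Section IV of this paper; everything else (weights, thresholds, backbone) is a placeholder.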
III. THE UFPR-ADMR DATASET

We acquired the meter images from Copel, a company of the Brazilian electricity sector that serves more than 4 million consuming units per month [4], [9]. The images of the meters were obtained at the consuming units by Copel employees using cell phone cameras (note that cell phones of many brands and models were used). All images had already been resized and compressed for storage, resulting in images of 640 × 480 or 480 × 640 pixels (depending on the orientation in which the image was taken).

To create the UFPR-ADMR dataset, we selected 2,000 images in which it was possible for a human to recognize the correct reading of the meter, as the images were acquired in uncontrolled environments and it would not be possible to label the correct reading in many cases. In each image, we manually labeled the position (x, y) of each corner of an irregular quadrilateral that contains all the dials. These corner annotations can be used to rectify the image patch containing the dials. Fig. 2 shows some images selected for the dataset as well as illustrations of the annotations.

Fig. 2. Examples of the images chosen for the dataset. The bottom row shows examples of the annotations provided for each image: in green, the irregular surrounding quadrilateral; in blue, the bounding boxes around the dials; and in red, the maximal ellipse contained in the bounding box. Note that the customer meter identification is blurred to preserve subject privacy.

All meters have 4 or 5 dials: 903 meters (45%) have 4 dials and 1,097 meters (55%) have 5. The values pointed on each dial have an almost uniform distribution of digits, with slightly more 0s than other digits. Information about the dimensions of the meters and dials in the dataset is shown in Table I. Note the great variability in the size of both meters and dials; for example, the smallest dial (20 × 29 pixels) is almost 10 times smaller than the largest one (206 × 201 pixels).

TABLE I
STATISTICS ABOUT THE SIZE OF METERS AND INDIVIDUAL DIALS.

         Min (px) W × H   Max (px) W × H   Mean (px) W × H   Mean Area (px²)
Meters   96 × 37          632 × 336        326 × 121         42,296
Dials    20 × 29          206 × 201        88 × 86           8,328

Fig. 3 illustrates the distribution of digits per dial position. The most prominent bar indicates that the most frequent digit in the first position is 0. Nevertheless, it should be noted that the distribution is not as unbalanced as in datasets with digit-based meter images, such as the UFPR-AMR dataset [4], in which the number of 0s in the first position is equal to the sum of 0s in the other positions. This is probably due to the fact that dial meters stopped being manufactured and deployed decades ago, which implies that each dial may have completed many cycles since installation and may be indicating any value.

Fig. 3. The distribution of digits according to the dial position on the meter. As 45% of the meters in the proposed dataset do not have a 5th dial, the values of the 5th dial were interpolated proportionally for better visualization.

Table II shows the frequency of digits in the UFPR-ADMR dataset. Unlike datasets containing digit-based meters [4], [19], which were manufactured/deployed more recently, the distribution of digits is almost uniform across our dataset.

TABLE II
FREQUENCY DISTRIBUTION OF DIGITS IN THE UFPR-ADMR DATASET.

Digit       0    1    2    3    4    5    6    7    8    9
Frequency   996  913  899  906  929  936  942  872  818  886

A. Challenges

The main challenge of the proposed dataset is the quality of the images.
Low-end cameras, challenging environmental conditions, and high compression are factors that have a high impact on the final image quality. The challenging environmental conditions include reflections, dirt, and broken glass, while low-quality acquisition may result in noisy, blurred, and low-contrast images. Fig. 4 illustrates the main image-quality issues described above.

In addition to the aforementioned quality issues, there are several types of meter templates, and each manufacturer has its own dial model (with variations in the marks) and pointer design. These variations, combined with the image capture angle, make it difficult to determine the exact pointed value. Another challenge arises from the presence of clockwise and counter-clockwise dials – for design purposes, each meter has alternating clock directions – and the direction of the dials may differ depending on the meter model and manufacturer.

B. Evaluation Protocol and Metrics

An evaluation protocol is necessary to enable a fair comparison between different approaches. The dataset was randomly divided into three disjoint subsets: 1,200 images for training (60%), 400 images for validation (20%), and the remaining 400 images for testing (20%). Following recent works in which datasets were introduced [4], [19], [34], the generated subsets are explicitly available along with the UFPR-ADMR dataset.

To assess the recognition, three metrics are proposed: (i) dial recognition rate, (ii) meter recognition rate, and (iii) mean absolute error. As the main task is to correctly recognize the meter reading, which is a sequence of digits, the meter recognition rate consists of the comparison between the predicted sequence (pred_m) and the ground-truth sequence (gt_m), for each of the N meters:

\[ MR_{rate} = \frac{1}{N} \sum_{m=1}^{N} \mathrm{match}(pred_m, gt_m) \quad (1) \]

where

\[ \mathrm{match}(x, y) = \begin{cases} 1, & \text{if } x = y, \\ 0, & \text{if } x \neq y. \end{cases} \]

For the dial recognition rate, we employed the Levenshtein distance (also known as edit distance), a common measure of the distance between two sequences of characters. The Levenshtein distance measures the minimum number of edits (addition, removal, or replacement of characters) required to transform one sequence into the other. The Levenshtein distance is suitable for our evaluation since it can handle small sequence errors that other metrics would treat as large ones. For instance, given a ground-truth sequence a = "1234" and a prediction sequence b = "234", a per-character evaluation metric would consider the error equal to 4, while the Levenshtein distance is equal to 1, as the difference between them is a single digit. The Levenshtein distance between the sequences a and b can be determined using:

\[ lev_{a,b}(i, j) = \begin{cases} \max(i, j), & \text{if } \min(i, j) = 0, \\ lev'_{a,b}(i, j), & \text{otherwise}, \end{cases} \]

where

\[ lev'_{a,b}(i, j) = \min \begin{cases} lev_{a,b}(i-1, j) + 1 \\ lev_{a,b}(i, j-1) + 1 \\ lev_{a,b}(i-1, j-1) + 1_{(a_i \neq b_j)} \end{cases} \]

(a) uneven lighting (b) blur (c) distant capture (d) reflections (e) dirt (f) glare (g) broken glass

Fig. 4. Samples of the challenging scenarios present in the provided images. We selected for the UFPR-ADMR dataset 2,000 images in which it was possible for a human to recognize the correct reading of the meter. We blurred the region containing the consumer unit number in each image due to privacy constraints.
The Levenshtein distance between the prediction and the ground truth is computed and then divided by the length of the longer of the two sequences, which gives the error; subtracting this error from 1 gives the dial recognition rate for each meter. Finally, the mean over all meters yields the overall dial recognition rate:

\[ DR_{rate} = \frac{1}{N} \sum_{m=1}^{N} \left( 1 - \frac{lev_{pred_m, gt_m}(|pred_m|, |gt_m|)}{\max(|pred_m|, |gt_m|)} \right) \quad (2) \]

Considering that the sequence of digits that composes the meter reading is, in fact, an integer number, correctly predicting the last digit in the sequence is not as important as correctly predicting the first one (i.e., the most significant digit). In order to differentiate and penalize errors in the most significant digits, the mean absolute error is simple yet effective. After converting the sequences to integers (pred_m and gt_m become the integers p_m and g_m, respectively), the mean absolute error can be obtained using:

\[ MA_{error} = \frac{1}{N} \sum_{m=1}^{N} |p_m - g_m| \quad (3) \]

IV. EVALUATED APPROACH

We chose two deep networks to evaluate: Faster R-CNN and YOLO. The reason for treating dial meter reading as a detection problem arises from previous successful approaches to AMR using detection networks [4], [14]. Faster R-CNN has presented accurate results in several detection and recognition problems in the literature, while YOLO achieves reasonable results at a high rate of frames per second (FPS), improving the viability of mobile applications. As illustrated in Fig. 5, the proposed pipeline consists of (i) image acquisition, (ii) dial detection and recognition, and (iii) final reading.

Fig. 5. The main steps to perform dial meter reading: image acquisition → dial detection → dial recognition (detection network) → final reading.

A. Dial Detection

We perform dial detection directly on the input images, that is, without first detecting the ROI. According to our experiments, presented in Section V, this approach achieves the highest F-score value. In other words, our recognition results are not significantly influenced by minor errors in the detection stage, making ROI detection avoidable.

B. Dial Recognition

Faster R-CNN is evaluated with the following residual networks as backbones replacing VGG [23]: ResNet-50 [22] (with 50 convolutional layers), ResNet-101 [22] (with 101 convolutional layers), and ResNeXt-101 [30]. According to [22], ResNets outperform VGG and several other networks in classification tasks; therefore, they are used in our experiments.

For the YOLO-based models, we use the classifiers proposed along with the networks in [26], [29]. YOLOv2 uses the Darknet-19 model as its backbone, which has 19 convolutional layers (hence the name) and 5 max-pooling layers. YOLOv3, on the other hand, uses a network called Darknet-53 (with 53 convolutional layers) for feature extraction; Darknet-53 can be seen as a hybrid approach between Darknet-19 and residual networks [26]. We employed both YOLOv2 and YOLOv3 models in our experiments in order to assess their speed/accuracy trade-off for this task.

C. Final Reading

The final reading is generated according to the position of each detected dial in the image (from the leftmost to the rightmost dial). Non-maximum suppression is performed using the Intersection over Union (IoU) metric (IoU > 0.5) and considering a maximum of 5 dials per image, keeping only the dials predicted with the highest confidence in order to avoid false positives.
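Before turning to the results, the following minimal sketch makes the Section III-B metrics concrete. It is a direct transcription of Eqs. (1)–(3); the function and variable names are ours, and predictions/ground truths are assumed to be lists of digit strings.

```python
# A minimal sketch of the evaluation metrics: meter recognition rate,
# Levenshtein-based dial recognition rate, and mean absolute error.
def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def evaluate(preds, gts):
    """preds, gts: lists of digit strings, one per meter."""
    n = len(gts)
    mr = sum(p == g for p, g in zip(preds, gts)) / n                 # Eq. (1)
    dr = sum(1 - levenshtein(p, g) / max(len(p), len(g))
             for p, g in zip(preds, gts)) / n                        # Eq. (2)
    mae = sum(abs(int(p) - int(g)) for p, g in zip(preds, gts)) / n  # Eq. (3)
    return mr, dr, mae

# e.g. evaluate(["234", "9540"], ["1234", "9540"]) -> (0.5, 0.875, 500.0)
```

The usage example mirrors the "1234" vs. "234" case discussed above: the single missing digit costs only 1 edit for the dial rate, but 1,000 units of absolute error, which is exactly why both metrics are reported.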
V. EXPERIMENTAL RESULTS

We evaluated the performance of the models based on YOLO and Faster R-CNN to detect and recognize the dials simultaneously (note that we used pre-trained weights when fine-tuning both networks). We performed our experiments on a machine with a quad-core AMD Opteron 8387 2.8 GHz processor, 64 GB of RAM, and an NVIDIA Titan Xp GPU.

In order to stop the training process and select the best model for testing, we chose the mean Average Precision (mAP) evaluation metric, which has been commonly employed in object detection tasks [24]–[26]. The mAP can be calculated as follows:

\[ mAP = \frac{1}{c} \sum_{i=1}^{c} AP_i \quad (4) \]

where AP_i stands for the average precision value (for recall values from 0 to 1) of the i-th class.

A. Data Augmentation

We generated new images by creating small variations of the training images to increase the generalization power of the networks. Based on preliminary experiments carried out on the validation set, we generated seven times the number of training images (the combined number of original and augmented images was 9,600). The following transformations were randomly chosen for each image: random scaling [−20%, 20%], random translation [−20%, 20%], random rotation [−15°, 15°], and random shear [−12%, 12%]. The values, which are relative to the original size and position of the images, were chosen randomly within the defined intervals.

B. Evaluation

First, we investigate the performance of the models in the dial detection task. The results are listed in Table III. For comparison, a common method proposed in the literature was also evaluated: HCT [11], [20]. The F-score was chosen as the evaluation metric, as it is often used to assess detection tasks. As expected, the deep learning-based methods (i.e., YOLO and Faster R-CNN) outperformed HCT, reaching very high F-score values. HCT did not cope well with the large variations in lighting, contrast, and perspective found in our dataset images.

TABLE III
DIAL DETECTION RESULTS ACHIEVED ON THE UFPR-ADMR DATASET.

Detection Model          Backbone      Prec. (%)  Recall (%)  F-score (%)
Hough Circle Transform   –             53.27      55.28       54.25
Fast-YOLOv3              Darknet       99.94      100.0       99.97
YOLOv3                   Darknet-53    100.0      100.0       100.0
Faster R-CNN             ResNet-50     100.0      99.94       99.97
Faster R-CNN             ResNet-101    100.0      100.0       100.0
Faster R-CNN             ResNeXt-101   100.0      100.0       100.0

We performed the recognition (reading) by combining the recognized digits (from the leftmost to the rightmost) and comparing them with the pointed values, using the metrics described in Section III. The recognition results, as well as the FPS rates obtained, are shown in Table IV.

TABLE IV
RECOGNITION RATE RESULTS OBTAINED ON THE UFPR-ADMR DATASET.

Method          Input Size   FPS   Dial (%)   Meter (%)   Mean Abs. Error
Fast-YOLOv2     416 × 416    244   79.61      42.25       5382.06
Fast-YOLOv2     608 × 608    145   85.24      51.75       3810.34
Fast-YOLOv3     416 × 416    220   83.27      47.75       6098.27
Fast-YOLOv3     608 × 608    120   86.60      54.25       5183.82
YOLOv2          416 × 416    67    91.42      68.00       2615.23
YOLOv2          608 × 608    40    92.51      71.25       1924.98
YOLOv3          416 × 416    35    93.00      73.75       1685.98
YOLOv3          608 × 608    20    93.38      74.75       1591.16
FR-CNN R-50     800 × 800    13    92.56      72.25       1451.81
FR-CNN R-101    800 × 800    11    92.62      71.75       1343.29
FR-CNN X-101    800 × 800    6     93.60      75.25       1591.77

The best-performing method was Faster R-CNN (ResNeXt-101), followed by YOLOv3. Faster R-CNN obtained a 75.25% recognition rate per meter and 93.60% per dial, using 800 × 800-pixel images. After YOLOv3, Faster R-CNN with ResNet-101 performed better than ResNet-50 considering the recognition rate per dial.
Interestingly, ResNet-101 presented a lower hit rate considering the recognition at meter level. This is explained by the fact that the ResNet-101 errors were better distributed across the images, while ResNet-50 concentrated its errors in fewer images.

The fastest method was Fast-YOLOv2, using 416 × 416 images, achieving 244 FPS. Although YOLOv3 did not surpass Faster R-CNN (ResNeXt-101) in recognition rates, its FPS rate was more than three times higher (20 FPS versus 6 FPS). Considering that the recognition rates achieved by YOLOv3 were not far behind, this model showed a promising trade-off between accuracy and speed.

The best method regarding mean absolute error was Faster R-CNN (ResNet-101), with an error of 1343.29. This means that the method's errors occurred less frequently (or were smaller) on the most significant digits. Table V confirms this statement, as Faster R-CNN (ResNet-101) had fewer errors in the most significant dial (the leftmost).

TABLE V
DISTRIBUTION OF ERRORS BY DIAL POSITION (FREQUENCY, %).

Dial Position     1       2       3       4       5
YOLOv3            25.84   15.44   16.94   25.99   15.79
FR-CNN (R-101)    20.70   18.66   23.91   19.83   16.91
FR-CNN (X-101)    23.50   18.55   22.25   18.04   17.66

Fig. 6 presents some correct prediction results. Note that the Levenshtein distance between every correct prediction and its respective ground-truth annotation always equals 0.

Fig. 6. Ground-truth and prediction examples of correctly recognized meters, with their respective Levenshtein distances.

C. Error Analysis

The most common errors in the presented approach are caused by:

• Symmetry: as there are clockwise and counter-clockwise dials, when the digits are blurred the method cannot differentiate the direction and thus may output the mirrored value of the real prediction.
• Neighbor value: the most common error. Variables such as angle, lighting, shadows, and occlusion (when the pointer is in front of the dial scale mark) can hinder the reading of a dial. Even among the authors, there were some disagreements regarding the correct pointed value in such situations.
• Severe lighting conditions/Dirt: shadows, glares, reflections, and dirt may confuse the networks, especially in low-contrast images, where those artifacts may stand out more than the pointer, fooling the network into taking an artifact for a pointer border and producing an incorrect prediction.
• Rotation: rotated images are harder to predict, as the pointed value is not in the usual position. The predictions may be assigned to neighboring digits that would be at the current angle of the pointer if the image were not rotated.

To illustrate all of the aforementioned causes of errors, some samples are presented in Fig. 7. Table VI summarizes the errors and their frequency for the two best methods: YOLOv3 and Faster R-CNN (ResNeXt-101). Note that most errors are caused by the neighbor-value issue, which occurs when the pointer is in front of a mark, making it hard to determine whether the pointed value is the one after or before the mark.

Fig. 7. Ground-truth and prediction examples with their respective Levenshtein distances. The errors are marked in red and include: a) neighbor values, b) severe lighting conditions, c) neighbor value (second dial) and symmetry (first dial); and d) rotation.

TABLE VI
TYPE AND FREQUENCY OF ERRORS OBTAINED ON EVALUATION.

Type of Error                YOLOv3   FR-CNN (X-101)
Symmetry                     2%       3%
Neighbor value               82%      85%
Lighting conditions / Dirt   14%      9%
Rotation                     2%       3%
VI. CONCLUSIONS

Image-based AMR is a faster and less laborious solution than manual on-site reading, and it is easier to deploy than the replacement of old meters. In this work, we presented the issues and challenges regarding the automatic reading of dial meters, since there are many open challenges in this context.

We introduced a public real-world dataset (shared upon request), called UFPR-ADMR, for automatic dial meter reading, which includes 2,000 fully annotated images acquired on site by employees of one of the largest companies of the Brazilian electricity sector [9]. As far as we know, this is the first public dataset containing images of dial meters. The proposed dataset contains a well-defined evaluation protocol, which enables a fair comparison of different methods in future works.

Considering that the imaging scenario is challenging in most cases, the deep networks Faster R-CNN and YOLO achieved promising results. This straightforward approach, without ROI detection or image preprocessing, simplified the traditional AMR pipeline [1], [2], [4], reducing the number of steps required to obtain the dial meter readings.

There is still much room for improvement, such as new methods to address the boundary issues between the markers, which should resolve most of the errors. In addition, a new loss function that penalizes errors on the leftmost dials should help to reduce the absolute error (minimizing the absolute error is of paramount importance to the service company).

ACKNOWLEDGMENTS

This work was supported by grants from the National Council for Scientific and Technological Development (CNPq) (Grants #313423/2017-2 and #428333/2016-8) and the Coordination for the Improvement of Higher Education Personnel (CAPES) (Social Demand Program). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. We also thank the Energy Company of Paraná (Copel) for providing the images for the UFPR-ADMR dataset.

REFERENCES

[1] M. Vanetti, I. Gallo, and A. Nodari, "Gas meter reading from real world images using a multi-net system," Pattern Recognition Letters, vol. 34, no. 5, pp. 519–526, 2013.
[2] I. Gallo, A. Zamberletti, and L. Noce, "Robust angle invariant GAS meter reading," in International Conference on Digital Image Computing: Techniques and Applications, Nov 2015, pp. 1–7.
[3] C. Li, Y. Su, R. Yuan, D. Chu, and J. Zhu, "Light-weight spliced convolution network-based automatic water meter reading in smart city," IEEE Access, vol. 7, pp. 174 359–174 367, 2019.
[4] R. Laroca, V. Barroso, M. A. Diniz, G. R. Gonçalves, W. R. Schwartz, and D. Menotti, "Convolutional neural networks for automatic meter reading," Journal of Electronic Imaging, vol. 28, no. 1, p. 013023, 2019.
[5] U.S. Energy Information Administration. (2019) Electric power annual 2018. [Online]. Available: https://www.eia.gov/electricity/annual/pdf/epa.pdf
[6] Y. Kabalci, "A survey on smart metering and smart grid communication," Renewable and Sustainable Energy Reviews, vol. 57, pp. 302–318, 2016.
[7] Ausgrid. (2020) Types of meters. [Online]. Available: https://www.ausgrid.com.au/Your-energy-use/Meters/Type-of-meters
[8] Callmepower. (2020) Types of electricity meters. [Online]. Available: https://callmepower.com/useful-information/electricity-meter-types
[9] Copel. (2020) Energy Company of Paraná. [Online].
Available: http://www.copel.com/hpcopel/english/
[10] Y. Liu, J. Liu, and Y. Ke, "A detection and recognition system of pointer meters in substations based on computer vision," Measurement, vol. 152, p. 107333, 2020.
[11] W. Zheng, H. Yin, A. Wang, P. Fu, and B. Liu, "Development of an automatic reading method and software for pointer instruments," in International Conference on Electronics Instrumentation Information Systems, June 2017, pp. 1–6.
[12] Y. Huang, X. Dai, and Q. Meng, "An automatic detection and recognition method for pointer-type meters in natural gas stations," in Chinese Control Conference, July 2019, pp. 7866–7871.
[13] H. Jiale, L. En, T. Bingjie, and L. Ming, "Reading recognition method of analog measuring instruments based on improved Hough transform," in International Conference on Electronic Measurement Instruments, vol. 3, Aug 2011, pp. 337–340.
[14] Y. Fang, Y. Dai, G. He, and D. Qi, "A mask RCNN based automatic reading method for pointer meter," in Chinese Control Conference, July 2019, pp. 8466–8471.
[15] Y. Tang, C. Ten, C. Wang, and G. Parker, "Extraction of energy information from analog meters using image processing," IEEE Transactions on Smart Grid, vol. 6, no. 4, pp. 2032–2040, July 2015.
[16] R. Ocampo-Vega et al., "Image processing for automatic reading of electro-mechanical utility meters," in Mexican International Conference on Artificial Intelligence, Nov 2013, pp. 164–170.
[17] L. Gómez, M. Rusiñol, and D. Karatzas, "Cutting Sayre's knot: Reading scene text without segmentation. Application to utility meters," in IAPR International Workshop on Document Analysis Systems, 2018, pp. 97–102.
[18] C. Tsai, T. D. Shou, S. Chen, and J. Hsieh, "Use SSD to detect the digital region in electricity meter," in International Conference on Machine Learning and Cybernetics (ICMLC), July 2019, pp. 1–7.
[19] F. Yang, L. Jin, S. Lai, X. Gao, and Z. Li, "Fully convolutional sequence recognition network for water meter number reading," IEEE Access, vol. 7, pp. 11 679–11 687, 2019.
[20] L. Zhang et al., "Pointer-type meter automatic reading from complex environment based on visual saliency," in International Conference on Wavelet Analysis and Pattern Recognition, July 2016, pp. 264–269.
[21] P. He, L. Zuo, C. Zhang, and Z. Zhang, "A value recognition algorithm for pointer meter based on improved Mask-RCNN," in International Conference on Information Science and Technology, 2019, pp. 108–113.
[22] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016, pp. 770–778.
[23] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in International Conference on Learning Representations (ICLR), 2015, pp. 1–12.
[24] S. Ren et al., "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017.
[25] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016, pp. 779–788.
[26] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[27] A. Nodari and I. Gallo, "A multi-neural network approach to image detection and segmentation of gas meter counter," in IAPR Conference on Machine Vision Applications (MVA), 2011, pp. 239–242.
[28] W. Liu et al., "SSD: Single shot multibox detector," in European Conference on Computer Vision (ECCV), 2016, pp. 21–37.
[29] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, pp. 6517–6525.
[30] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, "Aggregated residual transformations for deep neural networks," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[31] L. Liu et al., "Deep learning for generic object detection: A survey," International Journal of Computer Vision, Oct 2019.
[32] R. Laroca, L. A. Zanlorensi, G. R. Gonçalves, E. Todt, W. R. Schwartz, and D. Menotti, "An efficient and layout-independent automatic license plate recognition system based on the YOLO detector," arXiv preprint arXiv:1909.01754, pp. 1–14, 2019.
[33] E. Severo, R. Laroca, C. S. Bezerra, L. A. Zanlorensi, D. Weingaertner, G. Moreira, and D. Menotti, "A benchmark for iris location and a deep learning detector evaluation," in International Joint Conference on Neural Networks (IJCNN), July 2018, pp. 1–7.
[34] R. Laroca, E. Severo, L. A. Zanlorensi, L. S. Oliveira, G. R. Gonçalves, W. R. Schwartz, and D. Menotti, "A robust real-time automatic license plate recognition based on the YOLO detector," in International Joint Conference on Neural Networks (IJCNN), July 2018, pp. 1–10.
Please cite the published version (WebSci '20). DOI: 10.1145/3394231.3397894

Roots of Trumpism: Homophily and Social Feedback in Donald Trump Support on Reddit

Joan Massachs, Universitat Politècnica de Catalunya, Spain — [email protected]
Corrado Monti, ISI Foundation, Italy — [email protected]
Gianmarco De Francisci Morales, ISI Foundation, Italy — [email protected]
Francesco Bonchi, ISI Foundation, Italy; Eurecat, Spain — [email protected]

ABSTRACT
We study the emergence of support for Donald Trump in Reddit's political discussion. With almost 800k subscribers, "r/The_Donald" is one of the largest communities on Reddit, and one of the main hubs for Trump supporters. It was created in 2015, shortly after Donald Trump began his presidential campaign. By using only data from 2012, we predict the likelihood of being a supporter of Donald Trump in 2016, the year of the last US presidential elections. To characterize the behavior of Trump supporters, we draw from three different sociological hypotheses: homophily, social influence, and social feedback. We operationalize each hypothesis as a set of features for each user, and train classifiers to predict their participation in r/The_Donald. We find that homophily-based and social feedback-based features are the most predictive signals. Conversely, we do not observe a strong impact of social influence mechanisms. We also perform an introspection of the best-performing model to build a "persona" of the typical supporter of Donald Trump on Reddit. We find evidence that the most prominent traits include a predominance of masculine interests, a conservative and libertarian political leaning, and links with politically incorrect and conspiratorial content.

CCS CONCEPTS
• Applied computing → Sociology; • Information systems → Web mining; • Computing methodologies → Machine learning.

ACM Reference Format:
Joan Massachs, Corrado Monti, Gianmarco De Francisci Morales, and Francesco Bonchi. 2020. Roots of Trumpism: Homophily and Social Feedback in Donald Trump Support on Reddit. In 12th ACM Conference on Web Science (WebSci '20), July 6–10, 2020, Southampton, United Kingdom. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3394231.3397894

1 INTRODUCTION
The emergence and success of Donald Trump during the 2016 US presidential elections caught many pundits by surprise.¹ The reasons behind such an upset have been the subject of intense debate: they have been traced back to a resurgence of authoritarian populism, to the socio-economic context of the US in a globalized world, and even to his raw appeal as an anti-establishment and divisive candidate, just to name a few [1, 6, 9, 29, 33].
While understanding the precise causes of Trump's success might be impossible, the unprecedented data available via the Web and social media gives us an opportunity to at least understand his supporters. Indeed, the goal of this work is to study the emergence of support for Donald Trump in Reddit's political discussion. Donald Trump's campaign relied heavily on social media, and Reddit was a fundamental platform for its success [14]. Moreover, Reddit allows us to study this emergence in a broader perspective, by identifying which factors anticipate Trump support years in advance.

Reddit is a social news aggregation website; in 2012, it attracted 46 million unique visitors; in 2016, it was the seventh most visited website in the United States, with more than 200 million visitors.² Its users use pseudonyms, and their posts and comments are publicly available. Reddit is also commonly used to discuss news and political topics. These features make it a promising venue for social research. Moreover, one of the largest online communities of Donald Trump supporters is the Reddit community r/The_Donald. Although this community was born only in 2015, thanks to the availability of historical Reddit data over the years, we can frame our investigation as a prediction task.

Thus, our methodology in this work is the following. First, we build a computational focus group [18] of 44 924 politically active users on Reddit, who engaged in political discussion both in 2012 and in 2016. Then, we divide our focus group into two classes: those who participate in r/The_Donald in 2016 and those who do not. Participation in r/The_Donald is a valid proxy to study Donald Trump support, as the rules of this subreddit explicitly state that the community is for "Trump Supporters Only", and that dissenting users will be removed. Based on this proxy, we identify 7083 (15.8%) users with a significant presence in that community. Therefore, we frame our question as a binary prediction task: given the features of a user in 2012, can we predict whether they will participate in r/The_Donald in 2016?

¹https://www.forbes.com/sites/stevedenning/2016/11/13/the-five-whys-of-the-trump-surprise
²http://web.archive.org/web/20121231152526/http://www.reddit.com/about/ and http://web.archive.org/web/20161213123205/https://www.alexa.com/topsites/countries/US

For our purpose, we define a set of features by drawing from existing sociological theories of opinion formation. In particular, our features capture three social mechanisms: influence, conformity, and homophily. Each mechanism is the product of a different type of interaction between a user and their environment. First, we consider direct communications—a user paying attention to a comment. This interaction might lead to attitude change through persuasion or reactance; in general, we speak of (direct) influence. Determining whether online interactions on social media can cause one to reconsider their views has attracted considerable attention [8] and several concerns [19]. The second type of interaction we consider is social feedback. It might lead to attitude change via conformity [5], since users might wish to match the perceived norm of their communities. The opposite can also happen: anti-conformity [36] can lead users to defy the perceived norms they experience.
We operationalize social feedback as the score received by a user in a particular community. Finally, we consider indirect interactions: common interests, proximity, social groups. These might explain common attitudes via homophily [24]. We observe indirect interactions as participation in Reddit communities. These are not necessarily political, and also include hobbies, interests, religions, geographic locations, and even addictions. Distinguishing influence from homophily is a long-standing problem in social network analysis [16].

By aggregating these three sets of features, we build a rich data set regarding our focus group of politically active users. We share this data set, dubbed reddit-politics-12-16, for further investigation on this topic. In this work, we use it to answer the following research questions:

• Can we predict who will support Donald Trump four years in advance?
• Which kind of interaction is most predictive of participation in r/The_Donald?
• What are the main traits of a future Trump supporter on Reddit?

Our best model achieves an F1-score of 35.3%, more than double the random baseline of 15.2%, and an area under the ROC curve of 0.70. We find evidence that homophily is the best predictor among the considered ones, while conformity also plays a noticeable role. We do not observe significant evidence of direct influence. Several interesting traits emerge among those that predict Donald Trump support, which we describe in detail in Section 5. The Trump supporter "persona" has conservative and libertarian views, and participates in politically incorrect and conspiratorial communities. Among their interests, the most important ones are entrepreneurship, guns, and video games. Among the traits most heavily anti-correlated with Trumpism, we find atheism and environmentalism, as well as interests such as cooking and DIY electronics.

2 BACKGROUND AND RELATED WORK
Reddit, as an interesting and publicly available data source, has attracted plenty of attention in recent works. A comprehensive survey was compiled in 2017 by Medvedev et al. [25]. More recently, some works have used Reddit data to study the evolution of specific beliefs and tendencies, as well as the relationship between politics and different Reddit communities. Kane and Luo [13] use LDA to characterize the political tendencies of non-political subreddits; however, the presence of arguments makes their results hard to interpret. Klein et al. [15] characterize Reddit users that joined the r/conspiracy subreddit, as a proxy to study conspiratorial world views. They find that language differs clearly between conspiratorial users and their control group; in particular, they observe differences in the usage of words related to crime, government, and power, while they do not witness meaningful differences in negative or positive emotions. They also analyze which subreddits act as "pathways" to r/conspiracy by building a user-based similarity network between communities. Subscribers of r/conspiracy are over-represented in communities related to pornography, tech culture, and music. As we show in Section 5, we find a significant correlation between r/conspiracy and r/The_Donald. Grover and Mark [12] analyse behaviour patterns in r/altright, finding that its users display warning behaviors such as fixation and in-group identification.

A small number of works explicitly focus on r/The_Donald.
Zannettou et al. [37] study the propagation of memes across multiple alt-right communities in social networks, including Reddit and r/The_Donald. Flores-Saviaga et al. [10] investigate the behaviour of users on r/The_Donald, finding that they often adopt "troll slang", especially when discussing conspiracy theories. They also find that the messages attracting the most engagement are those explaining some political circumstance in detail and calling users to action. In their conclusions, they also note the need for a deeper look at this community by investigating its roots.

Our prediction task can be considered related to stance detection, as we identify the opinion of a pre-determined set of individuals with respect to a specific topic. Usually, however, stance detection involves determining the stance of a short text, typically one in which the author explicitly mentions the stance object. For instance, Mohammad et al. [28] at the SemEval-2016 Task 6 challenge classify the stances of a set of Twitter users on different topics. Interestingly, one of the topics of the challenge is Donald Trump's presidential candidacy, on which the best classifier achieves an F1-score of 0.56 (compared to a constant baseline F1-score of 0.29). The classification performance metrics of our best model are in line with these results. Other examples of political stance detection include the work by Lai et al. [17] on classifying stances on the Italian 2016 referendum, and the one by Taulé et al. [34] on stances on Catalonian independence.

Usually, stance detection methods rely heavily on linguistic features [17, 28, 34] to predict explicit views. However, it is also possible to use homophily to identify significant correlations between political beliefs and other traits. To quote DellaPosta et al. [7], "self-reinforcing dynamics of homophily and influence dramatically amplify even very small elective affinities between lifestyle and ideology". This phenomenon has been studied on Twitter by Garimella and Weber [11], who analyze significant traits of Democrat and Republican Twitter users. Magdy et al. [22] employ a mix of these features to predict Islamophobic views on Twitter before they are expressed. Network features alone are able to achieve a precision of 79% on this task, thus confirming the importance of homophily in predicting unspoken views.

3 DATA
We take our data set from Reddit [2]. Reddit is organized in topical communities, called subreddits. Users can post in these subreddits, and comment on other posts and comments, thus creating a tree structure for the overall discussion. We call a message a generic piece of user-generated content, when the distinction between post and comment is not relevant. In addition, users can also upvote a message to show approval, appreciation, or agreement (and their opposites with a downvote). The score of a message is the number of positive votes minus the number of negative votes it has received.³

To define our focus group, we first need to define the set of subreddits we wish to consider. Since we are interested in political discussion, we choose r/politics, the largest political subreddit, as our seed.
We then pick the 50 subreddits most similar to r/politics according to cosine similarity over a vector representation of the subreddits based on latent semantic analysis, which captures subreddits whose user base is similar to the seed one.⁴ Considering these political subreddits, let the set of active users be those that have written at least 10 comments in 2012 and 10 comments in 2016 in any of these subreddits. This set contains 44 924 users, and constitutes our computational focus group [18]. In addition, let the popular subreddits be the top 1000 subreddits with the most comments.

Let us now focus on the task at hand. We wish to predict which users will support Trump in 2016, the year Trump was elected president of the United States, by looking only at data from 2012, the year of the previous presidential elections.

Class label. We use participation in r/The_Donald in 2016 to infer the class label of politically active Reddit users. It is worth mentioning that in 2012 the subreddit r/The_Donald did not exist yet, so we have no notion of Trump supporters in 2012. However, simply taking all users who commented in r/The_Donald is too loose and noisy as an operational definition. As a first approximation, we define a user to be a Trump supporter if they have at least 4 comments on r/The_Donald and the sum of their scores is at least 4. This corresponds to 7427 users; however, we note that 1200 of those users have also posted on the subreddit devoted to the other presidential candidate, Hillary Clinton (r/hillaryclinton). Therefore, in order to take into account the general political activity of a user, we consider a user a Trump supporter in 2016 if they have at least 4 more comments in r/The_Donald than in r/hillaryclinton, and the sum of their scores (both positive and negative) on r/The_Donald is at least 4 points higher than the one on r/hillaryclinton. This definition allows us to have a data set with limited class imbalance while maximizing the confidence in the label attribution. With this method, we discard 344 users (4.6% of our first set) that, according to this definition, are not clearly supporting Trump in 2016. Finally, in our focus group of 44 924 users, 7083 (15.8%) are labeled as Trump supporters and 37 841 (84.2%) are labeled as non Trump supporters. This labeling is what we adopt in all of our analysis.

Direct influence. We say that an active user u interacts with the political subreddit r when u answers a message, in any popular subreddit, made by another user v who has posted in the subreddit r in 2012. This notion of direct influence captures the idea that u interacts with v, who is a user belonging to the community r, and is therefore possibly exposed to the attitudes of that community, irrespective of where the interaction takes place. We opt for this notion of influence to avoid the extreme sparsity that would come from considering user-to-user interactions. Furthermore, we consider an interaction conflictual when one of the two messages has a score of at least 10 and the other one has a score of at most −10. This definition captures the notion that the two attitudes expressed in the messages differ, and that the interaction possibly represents a conflict.

³https://www.reddithelp.com/en/categories/reddit-101/reddit-basics/how-posts-or-comments-score-determined
⁴https://www.shorttails.io/interactive-map-of-reddit-and-subreddit-similarity-calculator
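To make these definitions concrete, the hedged sketch below shows one way the interaction features could be computed. The input schema (reply pairs with their scores, and a 2012 participation map) is our assumption about how the Reddit dump might be organized; it is not the authors' actual code or data layout.

```python
# A hedged sketch of the direct-influence features defined above.
from collections import Counter, defaultdict

def is_conflictual(score_a: int, score_b: int) -> bool:
    """One message scores at least +10 while the other scores at most -10."""
    return (score_a >= 10 and score_b <= -10) or (score_a <= -10 and score_b >= 10)

def interaction_counts(replies, participants_2012):
    """replies: iterable of (user_u, user_v, score_u, score_v), one per reply
    of u to a message by v in any popular subreddit.
    participants_2012: dict mapping each user to the set of political
    subreddits they posted in during 2012 (assumed precomputed)."""
    total = defaultdict(Counter)     # user -> subreddit -> # interactions
    conflict = defaultdict(Counter)  # user -> subreddit -> # conflictual ones
    for u, v, score_u, score_v in replies:
        # u "interacts with" every political subreddit v belonged to in 2012.
        for r in participants_2012.get(v, ()):
            total[u][r] += 1
            if is_conflictual(score_u, score_v):
                conflict[u][r] += 1
    return total, conflict
```

From these counts, the per-subreddit interaction fractions used as features in the next paragraphs follow directly (e.g., conflict[u][r] / total[u][r] gives the conflictual share for user u and subreddit r).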
For each active user and political subreddit, we compute how many times the user has interacted with the subreddit, and how many of these interactions are conflictual. Social feedback. We consider the scores received by an active user u on a political subreddit r in 2012 as a proxy for the social feedback given by r to u. The positive and negative scores are con- sidered separately, as forms of positive and negative reinforcement, respectively. We use average scores to normalize the score across different levels of user activity. The higher the average positive score of a user, the better received their attitude is in the given community. Conversely, the average negative score shows how much a given community disapproves of the attitude of a given user. Homophily. Users may have similar behavior –support Trump– because they already have similar characteristics and interests. We capture this notion by looking at the participation of an active user u to a popular subreddit r . Users with similar interests are likely to belong to the same communities, which is a form of homophily. We experimented with both numerical (number of comments) and binary versions of these features, and found the results to be similar. Given that the latter version is simpler to interpret, henceforth we report results for the binary feature. Therefore, our final data set contains the following features for each user: Participation: • The feature r part. is true when the user participates in subreddit r , i.e., they have written a comment on r . Score: • The feature r pos. s. is the average of the positive scores of the comments by the user in subreddit r . • The feature r neg. s. is the average of the negative scores of the comments by the user in subreddit r . Interaction: • The feature num. i. is the total number of direct interactions that the user has had. • The feature r dist. i. is the fraction of direct interactions that the user has had with users participating to the subreddit r . • The feature r pos. i. is the fraction of non-conflictual direct interactions with users participating to the subreddit r among the direct interactions with users participating to r . Please cite the published version (WebSci ’20). DOI: 10.1145/3394231.3397894 WebSci ’20, July 6–10, 2020, Southampton, United Kingdom J. Massachs, C. Monti, G. De Francisci Morales, F. Bonchi This data set is the main artifact resulting from our research. We believe it is of independent value for research in computational social science, and thus make it available to the community.5 For both scores and positive interactions, if the user does not have comments in the subreddit r , and thus the features would be undefined, the value of the feature is taken as the population average. This way, the classification algorithm cannot distinguish an average score value from a non-participating user. In other words, this imputation method removes the participation information from the features, with the aim of disentangling homophily from social feedback and direct influence. In addition, we extract two other sets of interpretable baseline features grounded in text mining: Sentiment: The feature r polarity is the average polarity of the titles of the posts by the user in a political subreddit r . We compute the polarity by using TextBlob.6 Bag of words: The feature x bag is the tf-idf weight of the word x in the titles of the posts in political subreddits the user has authored. 
Moreover, we create two derived feature sets: bisected scores and bisected interactions. These features are based on the score and interaction features, by dividing the subreddits in two sets. The grouping is defined depending on whether the fraction of Trump- supporting users in 2016 is above or below average for the given subreddit. Let us indicate these two sets of subreddits with T and N , respectively. Given that this feature grouping uses the label information, we do not use them to investigate their predictive power. Rather, we leverage them to gain insights on which features are correlated with Trump support. For the bisected scores, rather than having a positive and negative value for each subreddit, we have only four values: average positive and negative scores for each of the two groups of subreddits. Similarly, for bisected interactions, the interactions of a user are summarized in three values: (i) the fraction of direct interactions that the user has had with users participating in a subreddit in T , (ii) the fraction of non-conflictual direct interactions with users participating in a subreddit in T , and (iii) the fraction of non-conflictual direct interactions with users participating in a subreddit in N . 4 METHODS For each feature set described in the previous section, we train dif- ferent classification algorithms to predict which users will become Trump supporters in 2016. In addition, we also test the possible combinations between participation, score, and interaction features. Before training each classification algorithm, we preprocess the data and perform feature selection to avoid overfitting and to ob- tain more parsimonious and interpretable models. In particular, we perform the following preprocessing steps: (i) remove sparse fea- tures, (ii) standardize numerical values, (iii) select only significantly correlated features, and (iv) remove multicollinearity. In the first step, we remove features that are defined for fewer than 500 users (out of 44 924 total); for the participation feature set, we use a stricter rule and remove subreddits with fewer than 250 5https://github.com/JoanMG/reddit-data 6https://textblob.readthedocs.io Table 1: For each algorithm and for each feature set, we re- port the F1-score (%) and its standard deviation σ over the 5- fold cross-validation. The three algorithms used are logistic regression, decision tree, and random forest. For a detailed description of each feature set, see Section 5.1. LR DT F1 (%) σ F1 (%) Participation Score Interaction Sentiment Bag of words Score (bisected) Int. (bisected) Int. + Part. Int. + Score Part. + Score Int. + Part. + Score 34.8 29.5 26.7 7.3 25.9 29.0 25.4 34.7 30.4 35.3 35.5 0.7 1.2 0.7 0.8 1.0 0.8 0.6 1.3 1.0 0.9 1.2 31.8 31.0 26.3 16.4 13.1 29.5 26.9 31.5 30.6 32.3 32.0 RF F1 (%) 33.7 33.7 25.5 10.7 23.1 29.8 24.6 33.8 33.6 35.0 35.2 σ 0.7 1.0 0.6 13.1 0.6 0.9 1.5 0.8 0.9 0.6 0.8 σ 0.5 1.7 1.0 13.4 10.7 0.8 0.8 0.7 1.8 0.7 0.7 Random baseline F1: 15.2% users in our group; for the bag-of-words feature set, we remove the words that are used by fewer than 45 users (0.1% of our focus group). In the standardization step, we shift and rescale each numerical feature so that it has zero mean and unit variance. For feature se- lection, we remove all features that are not significantly correlated (p < 0.05) with the target variable, according to Pearson correla- tion. 
Finally, to remove multicollinearity, we iteratively remove the most significantly collinear features through a greedy approach for backward feature elimination; we measure collinearity by means of variance inflation factor (VIF). After feature selection, we train the following machine learning algorithms: logistic regression, decision tree, and random forest. For each one, all the measures reported are obtained through 5- fold cross-validation. We optimize the hyper-parameter of each classification algorithm by using nested cross-validation, so as not overfit the model selection stage. We report the average F1 measure and the standard deviation across the 5 folds for the best model (according to the nested cross-validation). 5 RESULTS In this section, we present our experimental results and provide answers to our original research questions. Firstly, we measure and discuss the prediction accuracy of each feature set, to determine how well we can predict Trump support and which kind of interaction is the most predictive. Secondly, we analyze the most predictive features, to outline the main traits that distinguish future Trump supporters on Reddit. 5.1 Prediction accuracy Our results for each feature set and classifier are summed up in Table 1. First, note that logistic regression outperforms the other two algorithms in most cases, although there are some exceptions Please cite the published version (WebSci ’20). DOI: 10.1145/3394231.3397894 Roots of Trumpism WebSci ’20, July 6–10, 2020, Southampton, United Kingdom Table 2: For each of the most predictive feature sets, we re- port precision, recall, F1-score, and area under ROC curve. We only report the performance obtained by the best algo- rithm between logistic regression and random forest. All classifiers are 5-fold cross-validated and use information from 2012 to predict Trump support in 2016. Precision Recall F1 AUC Participation Score Interaction Part. + Score 0.25 0.24 0.18 0.27 0.56 0.60 0.52 0.56 0.34 0.33 0.26 0.35 0.68 0.67 0.55 0.70 at the end of this section, and by analyzing which are the most important features in Section 5.2. Language. Finally, linguistic features perform quite poorly. Sen- timent, with an F1-score of 16.4% ± 13.4 is as predictive as the random baseline, and any classifier more complex than a decision tree ends up overfitting. In other words, we do not observe any cor- relation between the tone of writing and the likelihood of becoming a Trump supporter. The bag-of-words features perform better, but with 25.9%±1.0 of F1-score they are much worse than participation, and still worse than interaction. This result suggests that simple language models are worse predictors of Trumpism than common social groups. Combined features. Now, we measure the predictive power of pairs of feature sets used together: participation and scores, partici- pation and interactions, and interactions and scores. Results show that, first, adding the interaction feature set to any other one does not improve their predicting power. The results for participation and interactions are the same as those for participation, and for in- teractions and scores are also the same as those of scores only. These results strengthen our conclusion that direct online interactions on Reddit are not a decisive factor in determining who becomes a Trump supporter four years later. Instead, when we combine par- ticipation and scores, results improve slightly compared to the best of the two. 
5 RESULTS

In this section, we present our experimental results and provide answers to our original research questions. First, we measure and discuss the prediction accuracy of each feature set, to determine how well we can predict Trump support and which kind of interaction is the most predictive. Second, we analyze the most predictive features, to outline the main traits that distinguish future Trump supporters on Reddit.

5.1 Prediction accuracy

Our results for each feature set and classifier are summarized in Table 1. First, note that logistic regression outperforms the other two algorithms in most cases, although there are some exceptions (score-based, sentiment, and bisected interaction features) that we discuss in the following paragraphs. We now compare the predictive power of each feature set by looking at the F1-score achieved by the best classifier.

Homophily. Participation is the best-performing feature set among the basic ones; it achieves an average F1-score of 34.8% ± 0.7. This result suggests that homophily is the most powerful predictor of Donald Trump support among the considered ones: the role of shared social groups outranks, in predictive power, direct online interactions, social feedback, bag-of-words, and sentiment-based features. This result confirms the importance of homophily as a determinant of social behavior [7]. We show which specific topical groups are most predictive of Trump support in Section 5.2.

Social feedback. Reddit scores obtain an F1-score of 33.7% ± 1.0, almost as high as participation. We remark that, in order to disentangle participation and scores as much as possible, we take the population average score for the subreddits a given user did not participate in. Therefore, such high predictive power suggests a relevant role for social feedback and conformity: individuals who were positively or negatively welcomed by certain communities land on r/The_Donald four years later. We look at which community's feedback has this effect in Section 5.2. The independence of scores and participation is confirmed by the increase in F1-score when using both feature sets together, as we show at the end of this section. While for the other feature sets the best classifier is logistic regression, for score-related features random forest has a better outcome. Since random forest is a non-linear classifier, its advantage suggests a non-linear relationship between Reddit scores and the likelihood of supporting Donald Trump.

Direct influence. The effect of interactions, with an F1-score of 26.7% ± 0.7, is much lower than that of scores and participation. By using a class-proportional random baseline, we obtain an F1-score of 15.2% (close to 15.8%, the proportion of Trump supporters). Direct interactions are therefore still a better predictor than random. We investigate in depth the correlations discovered on direct interactions by using the bisected interaction feature set at the end of this section, and by analyzing which are the most important features in Section 5.2.
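To make the baseline concrete: a class-proportional random classifier predicts the positive class with probability equal to the base rate p, independently of the true label, so precision and recall both converge to p and the expected F1 is p as well. A quick simulation illustrates this (our own sketch; the 15.8% base rate and the 44,924-user group size come from the text):

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
p, n = 0.158, 44_924              # base rate and focus-group size from the text
y_true = rng.random(n) < p
y_pred = rng.random(n) < p        # class-proportional random guessing
print(f1_score(y_true, y_pred))   # ~0.158, i.e., close to the reported 15.2%
```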
Language. Finally, linguistic features perform quite poorly. Sentiment, with an F1-score of 16.4% ± 13.4, is only as predictive as the random baseline, and any classifier more complex than a decision tree ends up overfitting. In other words, we do not observe any correlation between the tone of writing and the likelihood of becoming a Trump supporter. The bag-of-words features perform better, but with an F1-score of 25.9% ± 1.0 they are much worse than participation, and still worse than interaction. This result suggests that simple language models are worse predictors of Trumpism than common social groups.

Combined features. Now, we measure the predictive power of pairs of feature sets used together: participation and scores, participation and interactions, and interactions and scores. Results show that, first, adding the interaction feature set to any other one does not improve its predictive power. The results for participation and interactions are the same as those for participation alone, and those for interactions and scores are the same as those for scores alone. These results strengthen our conclusion that direct online interactions on Reddit are not a decisive factor in determining who becomes a Trump supporter four years later. Instead, when we combine participation and scores, results improve slightly compared to the best of the two. This fact suggests that these two feature sets provide partially orthogonal signals. The most important signals we find are therefore homophily and social feedback, while we find only limited effects of social influence. Combining participation and scores thus constitutes our best social-features-based classifier.

We analyze in detail the performance of this last model in predicting Trump support four years in advance. This model obtains a precision of 27% and a recall of 56%. Recall that the fraction of Trump supporters in our focus group is 15.8%. By taking the probability assigned by the best classifier to each user, we obtain a score indicating the propensity of a Reddit user to become a Trump supporter. We evaluate the predictive power of this propensity score with a ROC curve in Figure 1. The area under the ROC curve for this model is 0.70. We report these results, along with those of the models for participation, scores, and interactions taken individually, in Table 2.

Figure 1: ROC curves of the most predictive feature sets: participation, scores, direct interaction; and the combination of participation and scores. We only report the performance obtained by the best algorithm among logistic regression and random forest. All classifiers use information from 2012 to predict Trump support in 2016.

Table 2: For each of the most predictive feature sets, we report precision, recall, F1-score, and area under the ROC curve. We only report the performance obtained by the best algorithm between logistic regression and random forest. All classifiers are 5-fold cross-validated and use information from 2012 to predict Trump support in 2016.

Feature set      Precision   Recall   F1     AUC
Participation       0.25      0.56    0.34   0.68
Score               0.24      0.60    0.33   0.67
Interaction         0.18      0.52    0.26   0.55
Part. + Score       0.27      0.56    0.35   0.70

Bisected features. We now turn our attention to the bisected features. Recall that by bisecting we mean dividing the subreddits in a certain feature set (scores or interactions) into two groups, depending on whether a subreddit has a fraction of future Trump supporters larger (T) or smaller (N) than average. As such, these features contain future information, not originally available in 2012, but have a coarser granularity. They allow us to investigate the effect of the influence of (future) Trump-supporting users in contrast with the rest, both for direct influence and social feedback.

First, we measure their results in terms of prediction accuracy, by looking at Table 1. Bisected interactions obtain a performance similar to interactions divided by subreddit. This finding suggests that the effect of social influence is fairly similar across Trump-dominated subreddits. Surprisingly, instead, scores lose predictive power: apparently, the coarser granularity makes the classifier less precise. This result shows that the effect of social feedback from a certain community is not simply a reflection of whether that community will become more or less dominated by Trump supporters; there is a finer-grained structure to it.

We analyze in depth the features of the two bisected models (scores and direct interactions) in order to further characterize which types of interactions anticipate Donald Trump support. Let us first look at the logistic regression coefficients for the features in the bisected interaction feature set. Here we have three features, depending on whether the interaction is conflictual or non-conflictual, and on whether it involves a subreddit with a high or low number of future Trump supporters. Using this kind of future information allows us to look for evidence of a backfire effect.

Table 3: Logistic regression coefficients for predicting Trump support, for all the features in the bisected interaction feature set. We indicate with T the set of subreddits with more Trump supporters than average and with N those with fewer Trump supporters than average.

Feature description                                             β
Interactions with users participating in T                      0.076163
Non-conflictual interactions with users participating in T     -0.005322
Non-conflictual interactions with users participating in N     -0.029029

Table 3 shows that having any direct interaction with future Trump-dominated subreddits is predictive of Trump support. In addition, conflictual interactions (irrespective of the target) are correlated with Trump support, as shown by the negative coefficients for non-conflictual interactions. This finding is a manifestation of quarreling behavior in Trump supporters online, more than of a backfire effect. This interpretation is consistent with previous analyses [26] and supported by the results we show in the next paragraph.

Second, we analyze the results for the bisected scores feature set. In this feature set, we divide the subreddits into two groups, according to the number of future Trump supporters. Therefore, considering positive and negative scores, we have four features.
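A pandas sketch of how these four bisected-score features can be assembled; the per-user, per-subreddit score table and its column names are our own illustrative schema, not the authors' code.

```python
import pandas as pd

def bisected_scores(scores: pd.DataFrame, trumpist: set) -> pd.DataFrame:
    """scores: one row per (user, subreddit) with that user's average positive
    and negative comment scores there; trumpist: the set T of subreddits whose
    fraction of future Trump supporters is above average."""
    grp = scores["subreddit"].map(lambda s: "T" if s in trumpist else "N")
    out = (scores.assign(group=grp)
                 .groupby(["user", "group"])[["pos_score", "neg_score"]]
                 .mean()
                 .unstack("group"))           # -> four columns: pos/neg x T/N
    out.columns = [f"{m}_{g}" for m, g in out.columns]
    return out
```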
Since the best classifier for this feature set is random forest, we use SHAP, a state-of-the-art algorithm for explaining feature contributions in random forest models [20, 21]. SHAP values can be interpreted similarly to the β coefficients of logistic regression. Each point represents a user; thus, for each feature, the figure shows the distribution of SHAP values across the data set. Horizontally wider distributions indicate a larger absolute impact of the feature on the overall classification, while the color of each point (blue to red) encodes the feature value (low or high). A feature whose high values correspond to positive SHAP values (to the right) is positively correlated with Trumpism; conversely, a feature whose high values correspond to negative SHAP values (to the left) is negatively correlated with Trump support.

Figure 2: SHAP values for all the features in the bisected scores feature set. We indicate with pos. the features obtained from positive scores and with neg. those from negative scores. For each feature, red indicates the highest values and blue the lowest. On the right, we have the feature values most associated with Trump support.

We report the SHAP values in Figure 2. The results are quite insightful: negative scores in subreddits with a higher-than-average future presence of Trump supporters are associated with future Trump support. It would appear, therefore, that the defiance of social group norms that anticipates Trump support is present also in the communities more aligned with Trumpism. This is consistent with other findings of "trolling" behavior from Trump supporters [26].
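A minimal sketch of how such a summary (beeswarm) plot can be produced with the shap library, the reference implementation of [20, 21]; X and y stand for the bisected-score feature matrix and labels, and the forest's hyper-parameters are illustrative.

```python
import shap
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)   # exact, fast SHAP values for tree models
sv = explainer.shap_values(X)
# Older shap releases return one array per class; newer ones a single 3-D array.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
shap.summary_plot(sv_pos, X)            # beeswarm plot, as in Figure 2
```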
5.2 Predictive traits

In this section, we investigate the importance of each feature for our models, in order to answer our last research question: which traits anticipated the development of Donald Trump support? To do so, we perform an in-depth feature analysis for the most successful models: bag-of-words, participation, scores, interactions, and the combined model.

As seen in the last section, the best classification algorithm is in general logistic regression; for the score feature sets, random forests achieve similar or better performance, possibly because of their non-linearity. Therefore, in our investigation of feature importance, we analyze random forest features when scores are involved, and logistic regression features otherwise. Thanks to the normalization described in Section 4, for logistic regression we can simply look at the coefficients obtained by each feature. For random forests, instead, we again employ SHAP, an algorithm to explain the output of ensemble tree models [20, 21].

Language features. The first model we investigate is the bag-of-words model. The model tries to capture statistical differences in the usage of words by Trump supporters. Table 4 reports the most discriminative words. In general, these features are not easily interpretable, but we can discern some noticeable patterns. Trump supporters in 2012 were more likely to use the words liberal and libertarian. We can surmise that the former is an insult and the latter a self-description, but there is no direct way to confirm this conjecture by looking at the model alone. However, we shall see some confirmatory evidence in the analysis of the participation features. Moreover, they use terms such as cop, possibly linked to the law-and-order views promoted by Trump; and home, perhaps related to a pronounced attention to concepts such as family values, or homeland. On the opposite side (the words least used by Trump supporters in 2012), we note terms vaguely related to civil rights, such as abuse and reporter; and the word palestinian, possibly acknowledging claims of Palestinians. In general, however, the features on this side are also hard to interpret. We shall now see how, by using the more predictive participation-based classifier, we are able to draw a clearer portrait.

Table 4: Logistic regression coefficients for the most important features in the bag-of-words feature set. On the left we have the top 10 features with the largest β coefficient; on the right, the top 10 with the smallest β coefficient.

Trump supporters           Trump non-supporters
Word           β           Word            β
liberal        0.000784    abuse          -0.000399
guy            0.000691    reporter       -0.000345
debate         0.000650    similar        -0.000338
politic        0.000635    contribution   -0.000326
libertarian    0.000604    century        -0.000322
come           0.000604    honor          -0.000321
think          0.000593    palestinian    -0.000318
cop            0.000591    writer         -0.000314
tell           0.000587    context        -0.000313
home           0.000570    voting         -0.000306

Participation features. We have seen that this is the best single feature set in terms of prediction accuracy. Table 5 shows the 30 most important features for each of the two classes. Here, each feature represents participation (writing a comment) in that subreddit in 2012. The model coefficients are larger than for the bag-of-words features.

Table 5: Logistic regression coefficients for the most important features in the participation feature set. On the left we have the top 30 features with the largest β coefficient; on the right, the top 30 with the smallest β coefficient.

Trump supporters                      Trump non-supporters
Subreddit                  β          Subreddit             β
r/Conservative             0.3815     r/raspberry_pi       -0.2847
r/Libertarian              0.3740     r/TrueAtheism        -0.2577
r/conspiracy               0.3733     r/AskCulinary        -0.2355
r/4chan                    0.3341     r/comics             -0.2249
r/circlejerk               0.3107     r/rpg                -0.2186
r/NoFap                    0.2918     r/ireland            -0.2034
r/Entrepreneur             0.2539     r/Fantasy            -0.1983
r/ImGoingToHellForThis     0.2510     r/explainlikeimfive  -0.1944
r/trees                    0.2482     r/environment        -0.1892
r/MensRights               0.2482     r/doctorwho          -0.1878
r/guns                     0.2293     r/polyamory          -0.1806
r/blackops2                0.2110     r/scifi              -0.1777
r/runescape                0.2031     r/books              -0.1772
r/Anarcho_Capitalism       0.1937     r/askscience         -0.1738
r/Catholicism              0.1931     r/london             -0.1691
r/leagueoflegends          0.1920     r/britishproblems    -0.1687
r/nfl                      0.1843     r/Homebrewing        -0.1632
r/starcraft                0.1714     r/programming        -0.1521
r/CCW                      0.1638     r/gadgets            -0.1501
r/breakingbad              0.1631     r/AndroidQuestions   -0.1463
r/investing                0.1624     r/listentothis       -0.1462
r/AdviceAnimals            0.1589     r/hiphopheads        -0.1397
r/DeadBedrooms             0.1577     r/boardgames         -0.1336
r/Firearms                 0.1551     r/asoiaf             -0.1292
r/Advice                   0.1537     r/whatisthisthing    -0.1244
r/seduction                0.1518     r/lgbt               -0.1187
r/Christianity             0.1455     r/cringepics         -0.1175
r/golf                     0.1453     r/ukpolitics         -0.1136
r/mylittlepony             0.1437     r/Python             -0.1089
r/POLITIC                  0.1423     r/baseball           -0.1080
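For the logistic regression models, rankings like those in Tables 4 and 5 can be read directly off the fitted coefficients; a sketch, again assuming a standardized feature DataFrame X and labels y:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# With zero-mean, unit-variance features, coefficient magnitudes are comparable.
clf = LogisticRegression(max_iter=1000).fit(X, y)
coefs = pd.Series(clf.coef_[0], index=X.columns).sort_values(ascending=False)
print(coefs.head(30))   # most Trump-associated features (left column of Table 5)
print(coefs.tail(30))   # most anti-associated features (right column of Table 5)
```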
The most discriminative features are related to political views. Conservative and libertarian groups are the most correlated with Donald Trump support. This finding is consistent with the idea that Trump's coalition is part of the so-called "libertarian authoritarianism", which conflates needs from both ideological camps [4].

We also recognize topics and communities that are known to be associated with Trump support. r/conspiracy is a community devoted to conspiracy theories [15]; e.g., it covered extensively the "pizzagate" hoax about child sex rings operated by Democratic party officials. This observation backs the theory that some fringe groups have merged into the mainstream political discourse [30]. The website 4chan, a "politically incorrect" discussion board, has been linked to the "alt-right movement" in a previous analysis [26]. We find that participation in the r/4chan subreddit in 2012 is the fourth most predictive feature in this set. Other politically incorrect groups are also correlated with Trump support. For example, r/ImGoingToHellForThis is a community devoted to shocking and vitriolic humor.

Some interests and hobbies clearly emerge among the most predictive subreddits for Trump support, while others anti-correlate with it. An interest in firearms is strongly correlated with Trumpism (r/guns, r/Firearms, r/CCW [Concealed Carry Weapons]). The same is true for several video game communities (r/blackops2, r/runescape, r/leagueoflegends, r/starcraft). Instead, other hobbies are anti-correlated, for instance, tabletop games (r/boardgames, r/rpg). Cuisine and do-it-yourself hobbies are among the most important: r/raspberry_pi, r/AskCulinary, and r/Homebrewing are strongly anti-correlated with Trump support. Interest in literature and art is an equally important predictor (r/books, r/comics, r/listentothis, r/Fantasy, r/scifi).

Religion is also central in the separation: among the subreddits correlated with Trump support we find r/Catholicism and r/Christianity; among those anti-correlated, instead, one of the most predictive is r/TrueAtheism. This finding is consistent with the idea that, for many Americans, Trump was "a symbolic defense of the United States' perceived Christian heritage" [35].

Some of the communities correlated with Trump support are related to interests such as entrepreneurship and investing. This could suggest either support from wealthy persons, or from those with a self-made attitude. Status threat (as opposed to economic hardship) has been indicated as a common trait in Trump support [29].

Several subreddits with predominantly male demographics appear among those correlated with Trump support, consistently with previous findings [3]. One of them, r/MensRights, is focused on the defense of male interests against feminism.
From a sexual orientation point of view, we observe a very clear division between Trump-associated subreddits and the anti-correlated ones. The latter group includes gender, sexual, and romantic minorities, such as r/polyamory and r/lgbt. The subreddits most positively correlated with Trump are mostly masculine: for instance, r/seduction, a subreddit that is part of the Pick-Up Artists movement;7 r/NoFap, a group that provides self-help for porn addiction; and the already cited r/MensRights. It is worth noting that also r/DeadBedrooms, which self-describes as "a support group for Redditors who are coping with a relationship that is seriously lacking in sexual intimacy", is among the subreddits most associated with Trump support. Of the remaining subreddits in the group, many are associated with popular culture (on both sides), such as sports and TV shows.

7 https://www.dailydot.com/irl/ken-hoinsky-pua-reddit-seduction-book-the-game

Other subreddits appear to be anti-correlated with Trump support simply because they are typically associated with non-American Reddit users: this is the case for r/ukpolitics, r/london, r/ireland, and r/britishproblems. A curious finding is that one of the best predictors of Trump support is r/trees, a subreddit for cannabis enthusiasts. We suspect a possible confounding factor: for instance, Miech et al. [27] show that, in the United States, daily cannabis usage among 19-to-24-year-olds is three times higher for those who are not attending college (13% vs 4%). This is consistent with the previous finding that Trump attracted more support from this less-educated segment of the population [32].

Social feedback. We now turn our attention to the social feedback features. As mentioned before, since the best model for this feature set is random forest, we employ SHAP [20, 21] to explain the relationships learned by the model. Figure 3 reports the resulting SHAP values.

Figure 3: SHAP values for the 30 most important features in the score feature set. We indicate with pos. the features obtained from positive scores and with neg. those from negative scores. For each feature, red indicates the highest values and blue the lowest. On the right, we have the feature values most associated with Trump support. For instance, the first row indicates that a high negative score in r/politics is indicative of Trump support.

Some of the subreddits for which participation is a strong predictor of Trump support also appear here, although in a different guise: negative scores in r/Conservative, r/trees, and r/conspiracy are correlated with a lack of support for Trump, while negative scores in r/atheism are correlated with Trump support. On the r/nfl subreddit, we observe an anti-correlation between positive scores and Trump support. Since this subreddit also appears among the most important participation features, this result suggests that participating in the subreddit but not being appreciated by the community is a predictor of Trump support.

Some generalist subreddits, such as r/funny, r/pics, or r/AskReddit, also appear. In all these cases, negative scores are associated with Trump support; the same is true for r/politics. Note that participation in those subreddits is not among the most important features. These observations suggest that negative feedback from wide-ranging, mainstream Reddit communities in 2012 is linked to Trump support in 2016. This could be the case also for r/gonewild, a subreddit which self-describes as "a place for open-minded adult Redditors to show off their nude bodies for fun": users who obtain negative feedback in this community are more likely to become Trump supporters four years later.
Direct influence. Our third basic feature set represents direct interactions between a user and another user, where the latter participated in a certain subreddit. These features also account for how many of those interactions were non-conflictual. Table 6 shows the most predictive features. Despite their scarce predictive power when compared to participation, we are still able to use these features to enrich our portrait.

Trump support is predicted by the fraction of positive interactions on politically active subreddits such as r/Republican, r/Libertarian, and r/moderatepolitics, as well as on communities which discuss topics of interest to Trump supporters, such as r/conspiracy and r/Economics. These traits support our previous analysis, and confirm the idea that libertarianism and conservatism are among the roots of Trumpism. However, we also observe that the amount of interactions with r/GaryJohnson, the subreddit of a candidate who ran against Trump in the 2016 elections, is anti-correlated with Trump support. The most powerful feature in this set is the fraction of positive interactions on r/ShitPoliticsSays. This subreddit hosts critiques and mockery of other subreddits, and it exhibits right-wing views.8 Finally, we note that the total number of direct interactions is anti-correlated with Trump support, suggesting that the overall influence of Reddit is adverse to Trump.

8 E.g., it denounces r/Fuckthealtright and r/AgainstHateSubreddits as hostile subreddits.

Table 6: Logistic regression coefficients for the most important features in the interaction feature set. On the left we have the top 10 features with the largest β coefficient; on the right, the top 10 with the smallest β coefficient. We indicate with "r dist." the fraction of interactions on subreddit r, and with "r pos." the fraction of positive interactions over all interactions with subreddit r.

Trump supporters                       Trump non-supporters
Feature                     β          Feature                          β
r/ShitPoliticsSays pos.     0.1513     r/todayilearned dist.           -0.0698
r/Republican pos.           0.0868     r/TrueReddit dist.              -0.0584
r/conspiracy dist.          0.0684     r/Futurology dist.              -0.0548
r/moderatepolitics pos.     0.0637     r/dataisbeautiful dist.         -0.0348
r/Conservative dist.        0.0563     r/GaryJohnson dist.             -0.0214
r/Libertarian dist.         0.0543     r/PoliticalDiscussion dist.     -0.0168
r/Libertarian pos.          0.0457     r/Liberal dist.                 -0.0149
r/conspiracy pos.           0.0416     r/PoliticalDiscussion pos.      -0.0120
r/POLITIC pos.              0.0274     r/worldnews dist.               -0.0116
r/Economics pos.            0.0219     Total number of interactions    -0.0105
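A pandas sketch of how the "r dist." and "r pos." features of Table 6 can be computed; the input schema (one row per direct interaction, with the replying user, the subreddit associated with the other user, and a boolean non-conflict flag) is our own illustration, not the authors' code.

```python
import pandas as pd

def interaction_features(inter: pd.DataFrame) -> pd.DataFrame:
    """inter columns: 'user', 'subreddit', 'non_conflictual' (bool)."""
    total = inter.groupby("user").size()
    counts = inter.groupby(["user", "subreddit"]).size().unstack(fill_value=0)
    dist = counts.div(total, axis=0).add_suffix(" dist.")   # share of interactions
    pos = (inter.groupby(["user", "subreddit"])["non_conflictual"]
                .mean().unstack().add_suffix(" pos."))      # share non-conflictual
    return dist.join(pos).assign(total_interactions=total)
```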
Combined features. Finally, Figure 4 displays the most important features of the combined model that uses participation and scores. The two feature sets are well balanced: both are represented among the most predictive features (14-to-16). This observation strengthens the hypothesis that social feedback and homophily provide different, orthogonal signals in predicting support for Trump.

Figure 4: SHAP values for the 30 most important features in the combined participation + score feature set. We indicate with pos. s. the features obtained from positive scores and with neg. s. those from negative scores; with part. the participation features. For each feature, red indicates the highest values and blue the lowest. On the right, we have the feature values most associated with Trump support.

6 CONCLUSIONS AND FUTURE WORK

We have looked at predictors of becoming a supporter of Donald Trump on Reddit. We used data from 2012 to predict participation in r/The_Donald in 2016, which we use as a proxy for support of Trump. Such a prediction task is challenging, given the four-year time span (a US presidential electoral cycle) between the observed data and the target behavior. Nevertheless, our best performing model achieves an AUC of 0.70 and an F1 measure of 0.36, significantly above the performance of a random baseline.

We explored a diverse set of predictors which represent three sociological hypotheses for the support of Trump: homophily, social feedback, and influence. We operationalized each hypothesis in the context of Reddit by looking at the participation of a user in a community (a subreddit), the appreciation their posts receive in a given community, and their interactions with users of other communities. Compared to baseline interpretable linguistic features, such as the bag-of-words and the sentiment of the posts, the social ones prove more predictive of the target behavior. In particular, features encoding homophily and social feedback (conformity and anti-conformity) have shown to be the best predictors of Trumpism, while social influence has shown limited relevance. In addition, a combination of features for homophily and social feedback (i.e., participation and scores) performs slightly better than the single features, thus showing that the two signals are somewhat complementary.

Finally, we introspect the features of the best performing models to delineate a 'persona' of how a typical Trump supporter in 2016 looked on Reddit in 2012. The typical Trump supporter has conservative and libertarian views, is ill-received by the mainstream political tribe, is religious and in conflict with atheism, and has interests in guns, conspiracies, entrepreneurship, and politically incorrect content. Conversely, the typical Reddit user who does not support Trump is atheist, LGBT-friendly, and has interests in cooking, literature, and technology.

Limitations and future work. The operationalization of the sociological theories we considered in this study inevitably risks introducing distortions. Of the three feature sets, the interaction ones, which encode social influence, are the most brittle because of their natural sparsity. We countered this characteristic by aggregating them per community, but they still proved to be the least predictive ones in our models. This result might be caused by our specific design choices, and more work is needed to quantify the role that social influence plays in changing the political attitudes of people on social media. The score feature set, which encodes social feedback, also presents some challenges, as the score distribution is heavy-tailed. In our work, we used a non-linear classifier (random forest) to tackle this problem, but more sophisticated algorithms might improve results.
More fundamentally, the design of the current study does not allow us to differentiate between different causal interpretations of the social feedback effect. Let us use three variables to represent the behavior of supporters: observed social feedback, observed support for Trump, and latent political attitudes. On the one hand, a causal model could envision the social feedback as a cause of a change in political attitudes, which in turn causes the support for Trump. In this case, the social feedback is a root cause of the support for Trump. For instance, a user might have a negative experience with the mainstream political community, which causes their attitudes to drift towards more extreme positions, which in turn might explain the support for Trump. On the other hand, the latent political attitudes could be a common cause of both the received social feedback, because the attitudes expressed are already misaligned with the community, and the support for Trump. In this second case, the social feedback is an effect of the political attitudes, and the support for Trump depends on it in a non-causal way. For example, a user might have some fringe attitudes which are ill-received in the mainstream political community, and find a natural outlet in Trumpism. A causal investigation of these hypotheses from observational data is an interesting extension of the current work [31]. In this framework, we could formalize confounding factors, understanding for instance whether Trump supporters became more engaged with some political subreddits, or whether they stem from users who were more active on them in the first place. However, our work constitutes a necessary first step before any causal investigation.

Finally, we have described the 'persona' of a Trump supporter by assuming there is only a single one. However, there is evidence that people coming from multiple socio-demographic strata support Trump [23].9 It is thus possible that the persona we describe is an amalgamation of traits coming from different sources. In this case, building multiple personae would create more accurate portraits. It would also help in distinguishing Trump supporters on Reddit from other young U.S. Republicans. This analysis could help understand which issues attracted those who became politicized in this way, thus giving more insights on the roots of Trumpism.

9 https://fivethirtyeight.com/features/the-mythology-of-trumps-working-class-support

REFERENCES
[1] Sara Ahmadian, Sara Azarshahi, and Delroy L. Paulhus. 2017. Explaining Donald Trump via communication style: Grandiosity, informality, and dynamism. Personality and Individual Differences 107 (2017), 49–53.
[2] J. Baumgartner, S. Zannettou, B. Keegan, M. Squire, and J. Blackburn. 2020. The Pushshift Reddit Dataset. arXiv preprint arXiv:2001.08435 (2020).
[3] Mick Brewer. 2019. From the Ground, to the Ballot, to the System: The (Critical) Interpersonal Reproduction of Masculinity within Homosocial Friendships of Male Donald Trump Supporters. Southern Illinois University at Carbondale.
[4] Wendy Brown. 2018. Where the fires are. Soundings 68, 68 (2018), 14–25.
[5] Robert B. Cialdini and Noah J. Goldstein. 2004. Social influence: Compliance and conformity. Annu. Rev. Psychol. 55 (2004), 591–621.
[6] M. de Wit, A. Roman-Alcalá, A. Liebman, and S. Chrisman. 2019. Agrarian origins of authoritarian populism in the United States: What can we learn from 20th-century struggles in California and the Midwest? Journal of Rural Studies (2019).
[7] Daniel DellaPosta, Yongren Shi, and Michael Macy. 2015. Why do liberals drink lattes? Amer. J. Sociology 120, 5 (2015), 1473–1511.
[8] Trevor Diehl, Brian E. Weeks, and Homero Gil de Zuniga. 2016. Political persuasion on social media: Tracing direct and indirect effects of news use and social interaction. New Media & Society 18, 9 (2016), 1875–1895.
[9] M. Fitzduff. 2017. Why irrational politics appeals: understanding the allure of Trump. ABC-CLIO.
[10] Claudia I. Flores-Saviaga, Brian C. Keegan, and Saiph Savage. 2018. Mobilizing the Trump train: Understanding collective action in a political trolling community. In Twelfth International AAAI Conference on Web and Social Media.
[11] Venkata Rama Kiran Garimella and Ingmar Weber. 2014. Co-following on Twitter. In Proceedings of the 25th ACM Conference on Hypertext and Social Media. 249–254.
[12] Ted Grover and Gloria Mark. 2019. Detecting Potential Warning Behaviors of Ideological Radicalization in an Alt-Right Subreddit. In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 13. 193–204.
[13] Benjamin Kane and Jiebo Luo. 2018. Do the Communities We Choose Shape our Political Beliefs? A Study of the Politicization of Topics in Online Social Groups. In 2018 IEEE International Conference on Big Data (Big Data). IEEE, 3665–3671.
[14] David Karpf. 2017. Digital politics after Trump. Annals of the International Communication Association 41, 2 (2017), 198–207.
[15] Colin Klein, Peter Clutton, and Adam G. Dunn. 2019. Pathways to conspiracy: The social and linguistic precursors of involvement in Reddit's conspiracy theory forum. PLoS ONE 14, 11 (2019).
[16] Timothy La Fond and Jennifer Neville. 2010. Randomization tests for distinguishing social influence and homophily effects. In Proceedings of the 19th International Conference on World Wide Web. 601–610.
[17] Mirko Lai, Viviana Patti, Giancarlo Ruffo, and Paolo Rosso. 2018. Stance evolution and Twitter interactions in an Italian political debate. In International Conference on Applications of Natural Language to Information Systems. Springer, 15–27.
[18] Yu-Ru Lin, Drew Margolin, Brian Keegan, and David Lazer. 2013. Voices of victory: A computational focus group framework for tracking opinion shift in real time. In Proceedings of the 22nd International Conference on World Wide Web.
[19] Xiaodan Lou, Alessandro Flammini, and Filippo Menczer. 2019. Information pollution by social bots. arXiv preprint arXiv:1907.06130 (2019).
[20] Scott M. Lundberg, Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan M. Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, and Su-In Lee. 2020. From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence 2, 1 (2020), 56–67.
[21] Scott M. Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 4765–4774.
[22] Walid Magdy, Kareem Darwish, Norah Abokhodair, Afshin Rahimi, and Timothy Baldwin. 2016. #ISISisNotIslam or #DeportAllMuslims? Predicting Unspoken Views. In Proceedings of the 8th ACM Conference on Web Science (WebSci '16). ACM, 95–106. https://doi.org/10.1145/2908131.2908150
[23] Jeff Manza and Ned Crowley. 2017. Working class hero? Interrogating the social bases of the rise of Donald Trump. In The Forum, Vol. 15. De Gruyter, 3–28.
[24] Miller McPherson, Lynn Smith-Lovin, and James M. Cook. 2001. Birds of a feather: Homophily in social networks. Annual Review of Sociology 27, 1 (2001), 415–444.
[25] Alexey N. Medvedev, Renaud Lambiotte, and Jean-Charles Delvenne. 2017. The anatomy of Reddit: An overview of academic research. In Dynamics on and of Complex Networks. Springer, 183–204.
[26] William Merrin. 2019. President Troll: Trump, 4Chan and Memetic Warfare. In Trump's Media War. Springer, 201–226.
[27] Richard Miech, Lloyd Johnston, Patrick O'Malley, Jerald Bachman, John Schulenberg, and Megan Patrick. 2019. Monitoring the Future national survey results on drug use, 1975–2018: Volume I, secondary school students.
[28] Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 Task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). 31–41.
[29] Diana C. Mutz. 2018. Status threat, not economic hardship, explains the 2016 presidential vote. Proceedings of the National Academy of Sciences 115, 19 (2018).
[30] Rishab Nithyanand, Brian Schaffner, and Phillipa Gill. 2017. Online political discourse in the Trump era. arXiv preprint arXiv:1711.05303 (2017).
[31] Judea Pearl. 2009. Causality. Cambridge University Press.
[32] Jonathan T. Rothwell and Pablo Diego-Rosell. 2016. Explaining nationalist political views: The case of Donald Trump. Available at SSRN 2822059 (2016).
[33] Ryne A. Sherman. 2018. Personal values and support for Donald Trump during the 2016 US presidential primary. Personality and Individual Differences 128 (2018).
[34] Mariona Taulé, M. Antonia Martí, Francisco M. Rangel, Paolo Rosso, Cristina Bosco, Viviana Patti, et al. 2017. Overview of the task on stance and gender detection in tweets on Catalan independence at IberEval 2017. In 2nd Workshop on Evaluation of Human Language Technologies for Iberian Languages, IberEval 2017, Vol. 1881. CEUR-WS, 157–177.
[35] Andrew L. Whitehead, Samuel L. Perry, and Joseph O. Baker. 2018. Make America Christian again: Christian nationalism and voting for Donald Trump in the 2016 presidential election. Sociology of Religion 79, 2 (2018), 147–171.
[36] R. Willis. 1963. Two dimensions of conformity-nonconformity. Sociometry (1963).
[37] Savvas Zannettou, Tristan Caulfield, Jeremy Blackburn, Emiliano De Cristofaro, Michael Sirivianos, Gianluca Stringhini, and Guillermo Suarez-Tangil. 2018. On the origins of memes by means of fringe web communities. In Proceedings of the Internet Measurement Conference 2018. 188–202.
ai_researcher
1
Design_Ideation_and_Selection_of_Under-Piston_Door_for_a_Two-stroke_Marine_Engine_Using_Hybrid_TRIZ-biomimetic_and_MCDM_Methods.pdf
Cash or Non-Cash? Unveiling Ideators' Incentive Preferences in Crowdsourcing Contests

Forthcoming in Journal of Management Information Systems, 2024

Christoph Riedl, D'Amore-McKim School of Business, Northeastern University, Boston, MA, USA, [email protected]
Johann Füller, Faculty of Business and Management, University of Innsbruck, Innsbruck, Austria, [email protected]
Katja Hutter, Faculty of Business and Management, University of Innsbruck, Innsbruck, Austria, [email protected]
Gerard J. Tellis, Marshall School of Business, University of Southern California, Los Angeles, CA, [email protected]

Abstract

Even though research has repeatedly shown that non-cash incentives can be effective, cash incentives are the de facto standard in crowdsourcing contests. In this multi-study research, we quantify ideators' preferences for non-cash incentives and investigate how allowing ideators to self-select their preferred incentive—offering ideators a choice between cash and non-cash incentives—affects their creative performance. We further explore whether the market context of the organization hosting the contest—social (non-profit) or monetary (for-profit)—moderates incentive preferences and their effectiveness. We find that individuals exhibit heterogeneous incentive preferences and often prefer non-cash incentives, even in for-profit contexts. Offering ideators a choice of incentives can enhance creative performance. Market context moderates the effect of incentives, such that ideators who receive non-cash incentives in for-profit contexts tend to exert less effort. We show that heterogeneity of ideators' preferences (and the ability to satisfy diverse preferences with suitably diverse incentive options) is a critical boundary condition to realizing benefits from offering ideators a choice of incentives. We provide managers with guidance to design effective incentives by improving incentive-preference fit for ideators.

Keywords: Crowdsourcing; contests; innovation; ideation; incentives; non-monetary rewards; pro-social incentives.

Introduction

Crowdsourcing innovation contests broadcast an open call to the public ("crowd") inviting them to submit ideas and problem solutions. Successful crowdsourcing contests typically draw on a wide pool of ideators, including employees, users, non-users, suppliers, distributors, and professional ideators, tapping into diverse nationalities, backgrounds, and socioeconomic groups [25,69]. The use of crowdsourcing enables both for-profit organizations such as BMW, Danone, Fujitsu, Intel, Procter & Gamble, or Volkswagen, and non-profit organizations such as foundations, governments, federal agencies, or NGOs to access a wide variety of high-quality ideas and solutions [2]. Despite the widespread use of crowdsourcing contests, success depends on participants' effort and creativity in contributing high-quality solutions, with earlier studies exploring factors such as prize structures [46,54,64], the number of contestants [11], or entry barriers [26]. While incentives are a key contest design element in crowdsourcing, as they are theorized to affect ideators' effort and creative performance [65], it is not clear which incentives are most effective. Previous studies show that ideators engage in crowdsourcing for a variety of reasons [1,8,11,23,45], and a mismatch between incentives and participants' motives can backfire and lead to reduced effort [29].
Although we know that ideators are motivated to participate in crowdsourcing contests for a variety of reasons, why do contests run by organizations as diverse as for-profits and non-profits all use cash prizes? Given the diverse motives of the heterogeneous crowd that crowdsourcing contests aspire to attract, offering everyone the same incentive hardly seems effective [29,37]. Could crowdsourcing contest designs be improved by offering ideators a choice of different incentives?

In this paper we introduce incentive choice—which we contrast with a single, fixed incentive that is "assigned" to all ideators—as a new incentive design aspect in crowdsourcing contests. Incentive choice allows ideators to self-select, between cash and non-cash prizes, the incentive they prefer. We theorize that incentive choice can alleviate a central challenge in incentive design and that improving incentive-preference fit increases effort and performance (i.e., idea quality) in crowdsourcing contests [44,56]. We further theorize that market context—the for-profit or non-profit orientation of the contest organizer—may be an important moderator of the effectiveness of incentives. In a non-profit context, participants may be happy to volunteer their time to solve societal problems for mere recognition or praise. In a for-profit context, however, they may expect cash in return for their efforts to help organizations remain financially successful and gain competitive advantage. Market context may thus affect the effectiveness of non-cash incentives. It may also affect the preference for non-cash incentives in the first place. Both of these aspects in turn may inform our understanding of the boundary conditions under which offering a choice of incentives can be effective. We address the following research questions:

1. Do ideators prefer incentives other than cash when given the choice? While it is plausible to assume heterogeneous preferences among crowdsourcing participants, we seek to quantify what the most popular incentives are. (Quantified in Study 1)
2. Does offering a choice of incentives improve quality and effort? (Main effect examined in Study 2)
3. Does market context moderate the effect of incentives on quality and effort? That is, are cash and non-cash incentives equally effective across for-profit and non-profit contexts? (Moderator examined in Study 3)
4. What boundary conditions may constrain the main effect of incentive choice? Specifically, if the mechanism behind the effect of offering a choice is increased preference-incentive fit, the effect may depend on the degree to which ideators actually prefer different incentives. (Boundary condition examined in Study 4)

In order to address our research questions, we offer evidence from four consecutive empirical studies. We start by quantifying incentive preferences in a realistic field experiment with over 1,000 participants (Study 1). Drawing on a broad population, we offer six different incentives in a non-profit context to assess ideators' interest in choosing their own incentive and to measure the popularity of different cash and non-cash prizes. We establish that both cash and non-cash incentives are highly desirable. While cash is the most popular individual choice (28%), the different non-cash options together account for over 49%, and 23% prefer to make no choice at all.
Using an econometric technique to account for self-selection in incentive choice, we establish preliminary evidence for the main effect of incentive choice: offering a choice per se improves idea quality in the observational field experiment. We provide additional evidence for the main effect of offering a choice from a randomized lab experiment (Study 2) using the two most popular incentive choices from the field experiment (cash and a donation). We find that offering a choice improves idea quality (but not effort). Further, we show that ideators who choose the cash incentive produce lower-quality ideas and that ideators who choose the non-cash incentive exert less effort. This study also offers insights into the specific form in which the incentive choice is delivered. It reveals that ideators are not merely indifferent to incentive options and that some may even choose to forego incentives entirely. As theory suggests that the effectiveness of incentive options may depend on the market context—for-profit or non-profit—offering a choice of incentives may not be equally effective in different market contexts. Indeed, we find that market context is an important moderator of incentive effectiveness in another randomized experiment (Study 3). We show that effort is generally lower in for-profit settings, and even lower when they are combined with non-cash prizes. Finally, we test an important boundary condition in our last experiment (Study 4): despite attracting diverse populations in terms of gender, home country, or economic background, some online platforms may, over time, evolve an environment in which individuals have homogeneous incentive preferences. Using a sample of ideators drawn from a population of gig workers narrowly focused on earning income, we find no interest in non-cash incentives and consequently no benefit to offering a choice of incentives. Exploring this boundary condition suggests that incentive-preference fit (and hence improved sorting) is the driving mechanism behind the positive main effect of offering a choice (as opposed to the act of choosing itself). This suggests an important practical implication for contest designers: offering a choice may not be effective in homogeneous ideator pools without diverse preferences.

Our paper makes three contributions to the literature on crowdsourcing design and incentive theory across different market contexts. First, we establish incentive choice as an important contest design element and explain why it works by shedding light on the underlying mechanism. This expands past work on crowdsourcing design, which has focused on contest design aspects like prize structures [46,54,64], the number of contestants [11], and entry barriers [26]. Second, we contribute to a better understanding of incentives in crowdsourcing by showing how for-profit and non-profit market contexts moderate the effect of incentives. Our study reveals how incentives can sometimes backfire when they are misaligned with the market context in which they are used [29,65]. Third, we extend previous research that has identified a range of reasons why individuals participate in crowdsourcing [1,8,11,23,45] by quantifying the considerable degree to which diverse motives result in diverse incentive choices. This guides subsequent theorizing and highlights the advantages of providing alternative incentives in contest design [58]. Our findings have practical implications as they enable managers to design more effective incentives for crowdsourcing contests.
Our work suggests that, in addition to cash incentives, crowdsourcing organizers should offer a choice of non-cash alternatives for participants to voluntarily choose from, with cash as a clearly marked default option to cater to ideators who are indifferent.

Theoretical Background

Incentives, Choice, and Market Context

Incentives refer to rewards that motivate or encourage someone to act. They are external to the individual and embedded in a situation. Research is increasingly clear that rewards play a complementary role to intrinsic motivation and spur desired behaviors, but can also be detrimental to intrinsic motivation and reduce desired outcomes [39]. The type and level of a person's motivation, together with extrinsic incentives, determine a person's likelihood of becoming active, as well as their level of activity in terms of frequency, intensity, persistence, and performance [51,67]. Personal motives determine which incentives are perceived as attractive and how they affect an individual's behavior [52]. These motives range from intrinsic motives (such as curiosity, interest in, and enjoyment of the task), to internalized extrinsic motives (such as skill development, making friends, or supporting others), to purely extrinsic motives associated with the outcome of the engagement (such as monetary rewards) [1,8,11,23,45]. Individuals derive more satisfaction from an activity and show higher levels of engagement the more the activity fulfills their motives [61]. Therefore, it is crucial to offer incentives that align with individuals' motivations.

Incentives are generally thought to have three kinds of effects, which can be captured by a utility function with three components [9]: individuals value extrinsic rewards (the extrinsic motivation component), enjoy doing an activity (the intrinsic motivation component), and care about their image vis-à-vis themselves or others (the reputational motivation component). This model implies that intrinsic, extrinsic, and reputational motivations are not mutually exclusive but jointly predict behavior. However, how strongly each component factors into the utility function is specific to the individual and the context in which these incentives are employed. Individuals have been shown to differ in both their preference for the enjoyment of a task and the image component of their utility [32]. This has resulted in a robust literature on the limits of monetary incentives [28]. There are four reasons why non-cash incentives may be especially attractive compared to cash incentives [36]. First, non-cash incentives avoid the need to justify spending money. Second, they are visible to the social environment and thus may help to build an individual's image and reputation. Third, non-cash incentives may represent an independent earning class; because this earning class is mentally kept separate from other earnings, it may be considered especially rewarding. Fourth, individuals may mentally adjust the value of non-cash incentives depending on their personal perception and emotional reaction.

Choice may allow a better incentive-preference match. As individuals' utility functions are heterogeneous, for some a cash prize may be considered the most rewarding and appropriate incentive, while for others the same prize may be considered inappropriate and detrimental. For example, someone may have been looking for a personal gift but received money instead.
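One way to make this three-component structure concrete is a stylized additive utility; this formalization is our own illustration, not an equation taken from [9]:

```latex
U_i \;=\; \alpha_i\, m \;+\; \beta_i\, v_i(e) \;+\; \gamma_i\, r(e, m)
```

Here m is the extrinsic reward, v_i(e) the intrinsic enjoyment individual i derives from exerting effort e, and r(e, m) the reputational value of being seen exerting that effort for that reward. The individual-specific weights α_i, β_i, and γ_i capture exactly the heterogeneity at issue: the same cash prize raises m for everyone, but for an ideator who weights the reputational component heavily it may also lower r(e, m), which is why one and the same incentive can motivate one person and deter another.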
As organizers of crowdsourcing contests do not know individuals' incentive preferences upfront [48,56], it may be beneficial to allow participants to choose their incentive and thus avoid the risk of an incentive-preference mismatch. Research is scarce on how offering ideators a choice of incentives—as opposed to a single, fixed incentive that is assigned equally to all ideators—influences the effect of the incentive on effort and performance. So far, a few early studies have explored the effect of choice between different pay schemes, such as between fixed and performance-based pay [14,57], and, using observational data, between monetary and symbolic awards [53]. Yet no study has investigated choice between cash and non-cash prizes more broadly, nor has any investigated the underlying mechanism that makes such incentive choices effective.

Market context may further affect the perceived utility of an incentive and thus influence effort and quality [9,67]. For example, if you help a friend fix a computer problem or a stranger jump-start a car, you may expect no monetary compensation, and a warm thank-you may be perceived as an appropriate gesture. However, as an IT administrator or a car mechanic, you may expect monetary compensation for the same activity instead of a mere thank-you. Heyman and Ariely [33] demonstrated that people categorize interactions based on whether they occur in a social or a monetary market context. Depending on this classification, individuals may anticipate different incentives for the same activity. Through a series of laboratory experiments, they found that the direct impact of cash, non-cash rewards, or no incentives depends on whether the context is for-profit or non-profit (referred to as the 'market relationship'). In monetary market contexts, individuals adjust their effort and time according to the compensation offered, whereas they do not necessarily expect monetary incentives for their involvement in a social market context. Another study revealed that cash had a negative effect on the willingness to donate blood, whereas a non-cash incentive, such as a voucher of equal value, did not produce the same negative effect [40]. Thus, while previous research has mainly focused on the direct effects of cash and non-cash incentives on performance, open questions remain about how market context moderates the impact of incentives on innovation performance in crowdsourcing.

Related Work on Incentives and Creative Performance

Incentives represent a prominent research topic in crowdsourcing. This section provides an overview of existing studies that examine how different cash and non-cash incentives affect creative performance. Appendix 1 offers a review of existing empirical studies investigating creative or innovation performance as a dependent variable. While the review highlights the strong interest in exploring incentives and their effect on innovation performance, especially lately in the crowdsourcing context [1,11,35,65], it mainly focuses on the effect of cash rewards. Further, existing research in psychology, education, and organizations examining the effect of extrinsic incentives on creativity remains ambiguous and sometimes shows contradictory results (e.g., [3,17]). While some studies have suggested that rewards undermine intrinsic motivation and thus creative performance (e.g., [4,27]), others show that rewards lead to goal-directed behavior and thus increase creative performance [17].
More recent research has indicated that both intrinsic and extrinsic rewards may boost creative outcomes once properly adjusted to participants and context [31]. Only two studies [12,16] have explored the effects of cash and non-cash incentives on creative and innovation performance: researchers analyzed the impact of various types of rewards, i.e., money and social cause, on contribution behavior in crowdsourcing campaigns [12], and examined intrinsic and extrinsic rewards on creative performance through five different studies [16]. While we know that participants in crowdsourcing contests have diverse incentive preferences, only one study investigates the offering of a choice between monetary and symbolic awards, and it uses observational data [53]. While that study did not find a direct effect of choice on quality, it did reveal a mediation effect of choice through effort: participants in the choice condition allocated more time compared to those in the no-choice condition. This study shows initial evidence that choice matters; however, it does not disentangle the difference between cash and non-cash choice or the distribution of preferences, does not use random assignment, and does not shed light on the underlying mechanism. Giving ideators the choice to select their preferred incentive among various options, rather than assigning a specific incentive, thus remains an unexplored research avenue for maximizing the effectiveness of incentives.1

Our overview further shows that most empirical studies are set either in a for-profit or a non-profit context but do not compare effects across market contexts within the same study, for example by considering market context as a moderator. In summary, although we have knowledge of the direct effects of incentives, limited research exists about how a choice of incentives affects performance in crowdsourcing and how for-profit vs. non-profit market contexts moderate the effect of incentives.

1 Examples of existing studies investigated self-selected goal-reward levels for sales employees [10]; a lab experiment allowing individuals to choose between a fixed and performance-based pay [14]; and several studies that investigated self-selection of pay schemes allowing individuals to select a competitive (tournament) scheme or piece-rates [19,57]. In all cases, the self-selected incentive choice affected the overall expected compensation (mostly depending on the individual's skill), not the type of compensation (i.e., cash vs. non-cash).

Research Framework

This section introduces our research framework. We start from the assumption that individuals have heterogeneous incentive preferences. We derive two hypotheses about how incentive choice and market context shape the effect of incentives on idea quality and effort in crowdsourcing contests (Figure 1).

Figure 1. Research framework. (Schematic: a Choice stage [assignment vs. choice] of an Incentive [cash vs. non-cash] affects Performance [quality/effort] (H1, main effect); the market Context [for-profit vs. non-profit] moderates this effect (H2). Offering a choice and having different incentive options are inherently linked.)

Main Effect of Incentive Choice. Giving ideators a choice of incentives may lead to increased effort and performance. We theorize two different pathways for a main effect of incentive choice. First, when individuals choose their preferred incentive, this may lead to a better incentive-preference fit, which amplifies the effect of the incentive itself, which in turn spurs effort and idea quality. That is, giving ideators a choice can alleviate poor incentive-preference fit.
The mechanism behind incentive-preference fit is similar to mass customization, where customers self-configure products that meet their needs better than standardized products because they provide a better match with their preferences [22]. As the incentive preferences of heterogeneous participants are not known upfront, it may make sense to let them choose their incentive [13,37,47,53]. For this mechanism to be effective, several conditions need to be met; understanding them leads to a better understanding of the boundary conditions under which a main effect of incentive choice may no longer materialize. First, the success of improving incentive-preference fit relies heavily on offering "appealing" choice options, which may not be an easy task. When participating in crowdsourcing contests, individuals must perceive the available options as valuable and in line with their preferences [68]. Second, in order to make choices that enhance incentive-preference fit, ideators must be aware of their own preferences, as benefits may not materialize if ideators are merely indifferent. If incentive preferences are unclear, choice could backfire by causing unnecessary effort and confusion instead of a better incentive-preference fit [50]. Finally, ideators must have heterogeneous incentive preferences. Incentive choice will be unable to generate improved incentive-preference fit if incentive preferences are homogeneous and individuals (largely) make the same choice. Taking these boundary conditions into account, choice of incentives should lead to a better incentive-preference fit and thus increase the effectiveness of incentives. Second, an alternative reason for increased performance given a choice of incentives rests on self-determination. Giving ideators a choice of incentives could strengthen their sense of control over their actions, which in turn can raise intrinsic motivation [15] and thereby increase ideators' effort and idea quality. That is, the act of choosing an incentive may in itself increase motivation by making ideators feel more self-determined and enhancing their sense of autonomy and control. We hypothesize:

H1 (main effect): Incentive choice has a positive effect on a) quality, and b) effort.

Market Context as Moderator. Research shows that the effect of incentives on effort and quality may depend on the market context of the crowdsourcing contest and on whether it is perceived as a monetary or a social market [33]. That is, market context may moderate the effect of different incentives because, depending on the context, individuals may have different incentive expectations for the same activity. Consider getting help with a painting job or preparing tax returns. In such situations, one may ask either a friend or a professional [33]. While a friend may help without expecting a return [7], a professional expects money for their time and effort [20]. Here, the relationship the person seeking help has with the person providing the help moderates the effect of the (non-)cash incentive. It will work well in one context but can backfire in another. According to relational incentive theory [21,29], people categorize the context in which an interaction takes place into one of four relationship categories: communal sharing, authority ranking, equality matching, and market pricing. They adjust their participation behavior and reward expectations according to the classified context.
In communal sharing (CS), individuals consider their engagement as contributing to a common goal and supporting the group to solve a pressing problem without asking for returns because of shared beliefs, solidarity, and altruistic reasons. In authority ranking (AR), individuals accept a hierarchical order. They engage in relationships to learn from superiors, fulfill their duties as good citizens, or conform to the authorities. Equality matching (EM) is characterized by reciprocity, balance, and tit-for-tat. Individuals engage in, e.g., carpooling or dinner party invitations because they trust that others will reciprocate at a later time. Market pricing (MP) refers to situations dominated by cost-benefit calculations, where personal gain determines whether one engages in an activity or not. While the first three relationship patterns (CS, AR, EM) are social in nature, the MP schema is based on economic exchange. Thus, previous research suggests simplifying the model by subsuming the four categories into two: social markets, consisting of the CS, AR, and EM relationships, and monetary markets, referring to MP [33]. The authors investigate the main effect of different market contexts and find that students show higher intentions to help move a sofa, spend more time in an online experiment, and exert higher effort in solving a series of puzzles depending on the cash or non-cash incentives offered [33]. They conclude that while monetary markets are sensitive to compensation, social ones are not. In monetary markets, effort is directly related to the amount of compensation. In social markets, effort is shaped by altruism. Based on this insight, we theorize that the same incentive will be perceived differently depending on the market context and that market context will thus moderate the effect of the incentive [29]. Specifically, we expect the effect of non-cash incentives to weaken in monetary market contexts. While for-profit organizations develop new solutions to stay competitive and generate profits for their shareholders, non-profit organizations innovate to solve pressing problems and create common goods that benefit everyone. We therefore suggest that for-profit contests are perceived as monetary markets, while non-profit contests are perceived as social markets. When the incentive is in line with the market context (such as a cash incentive in a for-profit setting), this should strengthen the effect of the incentive. Conversely, when the incentive is misaligned with the market context (such as a non-cash incentive in a for-profit setting), the effect of the incentive should weaken. Further, as participants adjust their incentive preferences and behaviors according to the market context, the for-profit status of the contest organizer may activate ideators' extrinsic motives and lead to expectations of cash. Alternatively, those engaging in non-profit settings may not expect to get paid but may appreciate a non-cash reward like a gift or praise. Non-cash prizes may in fact constitute a mismatch in a for-profit contest setting, where organizers benefit economically from the crowd's contribution while the ideators receive only a non-cash prize. Ideators may expect fair monetary compensation when organizers generate successful returns [30]. Thus, we theorize:

H2 (moderator): Market context moderates the effect of cash and non-cash incentives on a) quality and b) effort.
Multi-Study Overview

We conducted four studies (see Figure 2) to quantify incentive preferences in a non-profit field study (Study 1), test the main effect of incentive choice (Study 2), investigate whether market context moderates the effect of incentives on quality and effort (Study 3), and finally explore preference heterogeneity as a boundary condition (Study 4). Together, these four studies paint a more complete picture of how incentive choice and market context collectively affect effort and idea quality in crowdsourcing contests.

Figure 2. Summary of sequential study design.
Study 1 (explore and quantify incentive preferences). Main goal: quantify incentive preferences in a realistic crowdsourcing field setting. Treatment: observational study without treatment manipulation; choice between cash and four non-cash options. Context: single context (non-profit). Sample: heterogeneous (broadcasting). Prizes: 1,000€ (1st), 600€ (2nd), 400€ (3rd). Show-up fee: none. Method: field study (n = 1,205).
Study 2 (main effect of choice). Main goal: validate the main effect of choice in a randomized experiment with a control condition. Treatment: randomly assigned choice vs. assigned incentive. Context: single context (non-profit). Sample: heterogeneous (broadcasting). Prizes: $0 (control) or $50. Show-up fee: none. Method: experiment with random assignment (n = 208).
Study 3 (context as moderator). Main goal: test whether market context moderates the effect of incentives. Treatment: 2×2 design, incentive delivery (choice vs. assigned) crossed with context (for-profit/non-profit). Context: randomly assigned for-profit and non-profit. Sample: heterogeneous (broadcasting). Prizes: $500 (1st), $300 (2nd), $200 (3rd). Show-up fee: none. Method: experiment with random assignment (n = 120).
Study 4 (boundary condition). Main goal: establish heterogeneous populations as an important boundary condition. Treatment: six incentive and choice conditions, including opt-out and indifference conditions. Context: single context (non-profit). Sample: homogeneous, Amazon Mechanical Turk (AMT). Prizes: $0 (control) or $50. Show-up fee: $1. Method: experiment with random assignment (n = 160).

Study 1: Quantify Non-Cash Incentive Preferences in a Field Experiment

Design and Empirical Setting

To quantify ideators' incentive preferences and offer preliminary evidence for a main effect of incentive choice, we set up the "Scraplab" crowdsourcing contest. It was hosted on a leading contest platform (www.hyvecrowd.com). The contest dealt with upcycling, and the goal was to create products out of recyclable materials instead of producing waste. Since the contest contributes to the Sustainable Development Goals of the United Nations and aims to create impact rather than increase corporate profits, the context can be classified as non-profit. This topic seemed appropriate as it does not require specialized knowledge, skills, or familiarity with existing brands. Further, because resource shortage affects everyone, anyone can have ideas on how to solve it. The submission of an idea consisted of a visual design in the form of photographs, a textual description, and a list of materials used (see Appendix 2 for sample designs). As is common in crowdsourcing contests, ideators could make multiple idea submissions, create a profile page to share personal information, interact with each other by commenting on design submissions, provide feedback through ratings, and promote designs by sharing them on Facebook.
The contest was designed as a rank-order tournament [43] in which the three highest-rated ideas would receive prizes valued at 1,000€ (1st), 600€ (2nd), and 400€ (3rd). During registration, participants were offered the option to choose among a cash prize and four non-cash prizes of the same value. The non-cash prizes were a donation to a charity of choice (altruistic motives [34]); a short internship with the same remuneration as the cash prize (career-advancement motive [41]); participation in a workshop to improve their own design innovations (need for a solution [34]); or a funded party with friends (hedonic motives [5,9]). As the incentive question was optional, participants who did not select one of the five options were assigned to the default cash incentive.

Recruiting and participants. To reach a large and global audience, we advertised the design contest globally in various design and sustainability communities, including design schools. The competition was open for submissions for ten weeks. During the contest period, the website had 16,686 unique visitors (unique IP addresses). 1,205 participants registered and were exposed to the incentive-choice question during their upfront registration. 924 participants (77%) answered the question and chose their preferred incentive. 281 participants (23%) did not make a choice. 259 participants submitted one or more ideas/designs (587 ideas in total) and thus are labeled as ideators. Among the 259 ideators, 118 answered the optional incentive preference question while 141 did not and were automatically assigned to the default cash option. At the end of the contest, three ideators were awarded prizes for their designs. Ideators from 64 different countries participated in the contest. The highest concentration of ideators was from the United States of America (48%), and most were female (75%).

Data Sources and Measures

We used three main sources of data in our study: (1) registration data on ideators' demographics, occupation, and preferred incentive; (2) log-file data from the online platform running the contest to explore participants' behavior; and (3) data from an independent external consumer panel using experienced workers on Amazon Mechanical Turk to assess the quality of design submissions. Appendix 3 shows descriptive statistics and correlations of our individual-level measures.

Independent variables and controls. The key variable of interest is the ideators' Incentive Preference, which was an optional question on the registration survey. The question was not forced, so ideators could choose whether to select an incentive or not. We focused our analysis on contrasting ideators who chose the cash prize with those who chose any of the non-cash prizes. Data on ideators' gender came from the registration survey and, where necessary, was imputed from the first names and profile pictures of ideators.2 Gross Domestic Product (GDP) data from each ideator's current country of residence was collected using 2011 numbers from the World Bank. For ideators with missing values for country of origin (50 instances), we substituted the GDP sample mean [62].

2 Two researchers independently coded gender; discrepancies were discussed and resolved; 12 out of 125 instances of missing gender information could not be coded and were subsequently excluded from the analysis.

Because the key outcome measure of interest in this study is the quality of ideas, it is critical to control for ideators' ability, which may affect the quality of their designs [42]. Ability is generally unobserved and difficult to capture reliably. As an imperfect measure, we included an indicator of whether an ideator is a professional designer.
Professional designers are expected to have relevant experience in design tasks like that of our contest, having previously engaged in design work full-time for an extended period. Experience is probably the most widely used proxy for expertise [18]. Thus, based on information that ideators provided during the registration procedure, we included an ideator's status as a professional designer as a proxy for expertise (1 = professional designer; 0 = not a professional designer). Data from the online platform includes information on the time an ideator first registered and the number of designs, comments, and ratings they submitted. We included controls for the number of Submitted Ideas, Submitted Comments, and Submitted Ratings, as these provide ideators with the ability to learn from both their own direct experience and from observing the work of others [60]. Lastly, we included a control for the number of ideas that were already submitted to the contest just prior to the signup of an ideator. The number of prior ideas is an easily observable signal to potential contributors of how competitive a contest is. As such, it may affect ideators' choice of incentive.

Dependent variable. This is the quality of each ideator's best idea. We followed standard practice in ideation studies [38,66] and measured the dependent variable, Design Quality, for all designs submitted to the contest. We used an outside panel that followed a relative assessment technique [4]. We recruited an independent jury who were blind to the research propositions. They were experienced workers on Amazon Mechanical Turk. We collected five evaluations for each design, resulting in 2,927 ratings on each of the six assessment items (17,562 ratings in total) from a total of 77 different raters (Appendix 2 provides details on the method, the assessment items, and some design examples). We analyzed how incentive choice affects ideators' probability of becoming active (Appendix 4).

Results

Quantifying incentive preferences. We find that when given the choice, 77% of participants chose an incentive and revealed their preference (Table 1). Almost half (49%, N=591) preferred a non-cash incentive over cash. Only 28% (N=333) actively chose the cash incentive. The most popular non-cash incentive was a donation with 14% (N=171), followed by the internship with 14% (N=170), the workshop incentive with 11% (N=127), and the party incentive with 10% (N=123). Personal characteristics such as gender and economic background are weak proxies for non-cash incentive preferences (Appendix 5). This suggests that while overall non-cash incentives are very popular, there are many different forms such incentives could take.
Table 1. Study 1 – Incentive choices and activity. We report the number of participants who chose each prize (with the percentage among the total of 1,205 in parentheses), the share of women, and the share who submitted at least one design (>0 effort).

              N             % Female   % with >0 effort
No Choice     281 (23%)     56%        50%
Cash          333 (28%)     72%        21%
Donation      171 (14%)     82%        6%
Internship    170 (14%)     80%        18%
Workshop      127 (11%)     97%        5%
Party         123 (10%)     90%        1%
Total         1,205         75%        21%

Main effect of incentive choice on quality. Since Study 1 is an observational study that did not permit the randomized assignment of incentives, the analysis of the main effect of choice is complicated, as it needs to account for self-selection into the incentives. We analyze our data using a Tobit-5 switching regression (also known as the Roy model [59]) to distinguish the cash vs. non-cash incentive effect from the influence of choice (see Appendix 2 for details on the Tobit-5 model); a stylized version of the model structure is sketched below.
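For readers less familiar with this estimator, the following is a minimal sketch of a two-regime switching regression with selection, written in generic LaTeX notation. It is illustrative only; the authors' exact specification and parameterization (including the sign conventions attached to ρ1 and ρ2) may differ.

    s_i^{*} = z_i'\gamma + u_i, \qquad s_i = \mathbf{1}\{s_i^{*} > 0\} \quad \text{(selection: ideator } i \text{ chooses cash)}
    y_i^{C} = x_i'\beta_{C} + \varepsilon_i^{C} \quad \text{(design quality, observed if } s_i = 1\text{)}
    y_i^{NC} = x_i'\beta_{NC} + \varepsilon_i^{NC} \quad \text{(design quality, observed if } s_i = 0\text{)}

with (u_i, \varepsilon_i^{C}, \varepsilon_i^{NC}) jointly normal, and \rho_1, \rho_2 the correlations between the selection error and the regime-specific outcome errors. The counterfactual quantity used below, the quality gain for non-cash choosers relative to being assigned cash, can be written as

    \Delta_{NC} = E[\,y^{NC} \mid s = 0\,] - E[\,y^{C} \mid s = 0\,].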
To establish whether there is a main effect of offering a choice, we need to establish two findings. First, we need to determine whether there is a performance difference among ideators who chose the cash vs. the non-cash incentive. Second, we need to examine whether the error terms of the self-selection component and of the outcome equations are correlated. A positive correlation would signify that ideators who chose the incentive performed better than they would have in a hypothetical scenario in which they were randomly assigned to that incentive. Regarding the first condition, we find systematic performance differences between ideators who chose cash and those who chose a non-cash prize. Specifically, we observe a significantly higher effect of the non-cash incentive on idea quality compared to the cash incentive (direct incentive effects in Table 2; testing for equal coefficients comparing β_noncash = 3.6 to β_cash = 3.0; χ²(12) = 10.29; p < 0.001). Regarding the second condition, we find that the estimated correlation coefficients between the error term of the selection equation and the outcome equations (the ρ1 and ρ2 estimates in Table 2) are both large and significantly different from zero (p < 0.001). Since ρ1 is positive and significantly different from zero, the model suggests that individuals who chose the non-cash prize produced designs of higher quality (than a random individual from the sample). Conversely, since ρ2 is negative and significantly different from zero, the model suggests that ideators who chose a cash prize produced designs of lower quality (than a random individual from the sample would have).

Table 2. Study 1 – Selection and outcome equations comparing the choice of the cash prize over any of the non-cash prizes. Standard errors in parentheses. Sample: 118 ideators who answered the incentive question and submitted at least one design.

Selection equation: Pr(Choose Cash)
  Intercept 0.63 (2.03); log(GDP) 0.07 (0.20); Western 0.62*** (0.20); Female 0.30 (0.20); Designs Prior to Registration 0.00*** (0.00)

Outcome equations: Design Quality           Non-Cash           Cash
  Direct Incentive Effect (Intercept)       3.55*** (0.11)     3.03*** (0.10)
  Professional Experience                   0.30 (0.12)        0.11 (0.10)
  Submitted Designs                         0.10 (0.06)        0.13*** (0.05)
  Submitted Comments                        0.00 (0.02)        0.05** (0.02)
  Submitted Ratings                         0.008 (0.01)       0.005** (0.00)
  Choice Effect (ρ1 / ρ2)                   0.78*** (0.05)     −0.73*** (0.06)
  Num. obs. (outcome eq.)                   49                 69
Num. obs. (selection eq.): 118. AIC: 530.72. Log likelihood: −246.36.
***p < 0.01; **p < 0.05; *p < 0.1

Based on these estimates, we can predict a counterfactual of how ideators who chose the non-cash incentive would have responded had they been assigned the cash incentive (i.e., we compute ŷ_NC − E(ŷ_C | chose non-cash)) and vice versa, to estimate the direct benefit of offering ideators a choice. This suggests an 11% increase in design quality from offering ideators a choice of incentives compared with simply offering cash to everyone. That is, using an econometric approach to account for self-selection, the observational data from Study 1 provide preliminary evidence for a main effect of incentive choice on idea quality, supporting H1a. Since we do not know how much time ideators spent on creating their upcycled designs, we cannot test hypothesis H1b (on effort) in Study 1.

Summary

Participants had diverse incentive preferences and chose both cash and non-cash prizes when given the choice. While cash was the most popular single choice, almost half of ideators chose one of the non-cash options, and about one quarter preferred not to make a choice at all (receiving the default cash option). We also find preliminary evidence that incentive choice had a significant main effect on idea quality: Offering ideators a choice of incentives is an effective strategy for contest organizers to unlock additional value independent of the incentive offered, supporting H1a.

Study 2: Main Effect of Incentive Choice in Online Experiment with Random Assignment

To explore the preliminary evidence of a main effect of incentive choice, Study 2 uses random assignment to give some ideators a choice of incentives while others had no such choice. Further, as the field study left unanswered why some participants did not actively choose a prize, Study 2 also tests participants' indifference between cash and non-cash prizes as well as the decision to opt out of receiving any prize.

Method

Procedure. We implemented an online experiment that mirrored the task of an ideation crowdsourcing contest (Figure 3).
First, subjects completed a pre-treatment survey with demographic questions and a practice task (a standard "unusual uses" creativity task [16]), and answered three items from the Intrinsic Motivation Battery ([51]; Cronbach's α = .9). Second, we performed a treatment manipulation in which subjects were either randomly assigned a prize ("assigned" condition), were asked to choose a prize ("choice" condition), or were not offered a prize ("control" condition). Third, subjects completed the main ideation task. Fourth, we used an external panel to evaluate idea quality.

Figure 3. Study 2 – Study procedure and treatment flow.
Stage 1: Pre-experiment survey (demographics, practice task, intrinsic motivation battery).
Stage 2: Treatment manipulation (incentive is assigned, or the subject is asked to choose the desired prize).
Stage 3: Main task (complete the idea generation task competing for the assigned or chosen prize).
Stage 4: External panel assesses idea quality (Consensual Assessment Technique).

The main ideation task asked subjects to submit ideas to reduce water consumption, which they entered through a free-form text field. Again, as in Study 1, the context can be classified as non-profit, as the ideation task contributes to the Sustainable Development Goals and aims to create impact rather than increase corporate profits. In addition to the idea itself, we also measured effort in terms of time (in seconds) that participants devoted to submitting their idea.3 Individuals could type as many ideas as they wanted (in situations where multiple ideas were submitted, we calculated the overall effort as the total number of seconds spent entering all of the ideas).

3 We also implemented a mechanism to detect whether participants would simply copy and paste text into the text form (none did).

Recruiting. To recruit participants, we advertised the experiment on the Volunteer Science platform, an online laboratory for experiments in social psychology. We performed no specific recruiting to attract participants for our study (see [59] for details and validation of the Volunteer Science platform).

Treatments. All subjects competed either for no prize at all, a $50 cash prize, or a $50 donation to a charity of their choice. Participation was entirely voluntary, and no additional compensation (such as a flat show-up fee) was paid. The treatment manipulation was whether the prize was randomly assigned or whether the subject got to choose what the prize was. We implemented three alternative versions of the choice treatment manipulation to explore the kinds of choice options contest designers may consider:

1. Assigned no prize (control condition)
2. Assigned a $50 cash prize (i.e., no choice offered)
3. Assigned a $50 donation to a charity of choice (i.e., no choice offered)
4. Choice between either a $50 cash prize or a $50 donation
5. Choice between either $50 cash or a $50 donation, measured on a 7-point Likert scale. We treated the central values (3, 4, 5) as indicating indifference between cash and non-cash incentives, while we treated strong (1) and weak (2) preference for cash as cash and strong (7) and weak (6) preference for non-cash as non-cash. We informed ideators that their reward would be determined randomly using the proportions of their choice (i.e., a 50:50 chance between cash and non-cash if they chose the middle point "4" on the Likert scale; a 43:57 chance if they chose "5", and so on; a sketch of this mapping follows the list).
6. Choice between either accepting a $50 cash prize or opting out of prizes entirely.
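To make the probabilistic reward rule in condition 5 concrete, the following Python sketch maps each Likert point to a cash probability. Only the two stated anchor values (point 4 → 50:50 and point 5 → 43:57) come from the study description; the linear 7-percentage-point step across the remaining points is an assumption consistent with them, and the function name is hypothetical.

    def cash_probability(likert_point: int, step: float = 0.07) -> float:
        """Probability that the reward is paid as cash rather than a donation.

        Scale: 1 = strong preference for cash ... 7 = strong preference for
        non-cash. Point 4 yields 0.50 and point 5 yields 0.43, as stated in
        the text; all other points follow the assumed linear step.
        """
        return 0.5 + (4 - likert_point) * step

    for point in range(1, 8):
        print(point, round(cash_probability(point), 2))
    # Prints: 1 0.71, 2 0.64, 3 0.57, 4 0.5, 5 0.43, 6 0.36, 7 0.29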
We performed (streaming) random assignment of participants to treatment conditions, as is common in online experiments where the total number of participants is not known ex ante. We assigned more ideators to the choice conditions to reflect the fact that these conditions have several nested sub-conditions.

Sample and Measures. We stopped the experiment after 40 days, at which point 221 individuals had completed it and 208 ideas had been submitted (e.g., we removed ideators who submitted ideas such as "I don't know"; Appendix 6). Four individuals dropped out after having been assigned to a treatment condition. The dropout was not correlated with treatment condition (χ²(5) = 4.78). Women and men participated in equal proportion (51% female), with equal proportions in each treatment (χ²(5) = 2.06). Participants were young (94% reported an age between 18-24) and mostly from the USA (92%). We measure idea quality using Amabile's Consensual Assessment Technique from an outside panel recruited through Amazon Mechanical Turk, following the same method and procedure as in Study 1. We collected 1,440 quality ratings in total from 93 different raters who performed an average of 15.5 ratings each. Inter-coder reliability is excellent (0.836); Cronbach's alpha is good (0.84).

Results

Quantifying incentive preferences. Again, we find heterogeneous incentive preferences: 56% chose the cash incentive and 44% the donation incentive when given a choice (see Appendix 6). Individual-level characteristics like gender, age, home country, or intrinsic motivation are not significant predictors of incentive preference. The field study did not address whether ideators may simply have been indifferent to our prize options, so we also explored whether ideators would opt out or were indifferent. When given the option to forgo a cash prize, 29% of participants chose to opt out. We find that 37% of individuals who were given a choice between cash and non-cash indicated that they were indifferent between the two (they selected one of the middle points on the Likert scale).

Main effect of incentive choice on quality and effort. We used OLS regression to analyze the main effect of incentive choice on idea quality and a negative binomial regression for the effects on effort (Table 3). This analysis contrasts the two levels of the treatment condition of offering a choice vs. assigning a fixed incentive to ideators; a stylized version of these specifications is sketched below.
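The paper does not state its estimation software; the following Python sketch (assuming statsmodels' formula interface and hypothetical column and file names) illustrates the kind of specifications reported in Table 3.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data layout: one row per ideator with panel-rated idea
    # quality, effort in seconds, the choice indicator, the prize category,
    # and the intrinsic-motivation score.
    df = pd.read_csv("study2.csv")  # hypothetical file name

    # OLS for idea quality (Models 1-2 style: main effects, then interactions).
    quality_main = smf.ols("quality ~ choice + C(prize) + intrinsic", data=df).fit()
    quality_int = smf.ols("quality ~ choice * C(prize) + intrinsic", data=df).fit()

    # Negative binomial for effort in seconds (Models 3-4 style).
    effort_main = smf.negativebinomial(
        "effort_sec ~ choice + C(prize) + intrinsic", data=df).fit()

    print(quality_main.summary())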
First, we find that offering ideators a choice, per se, improved idea quality (Model 1: β = .17; marginally significant at p = 0.076; supporting H1a) but not effort (Model 3: β = .263; n.s.; no support for H1b). This suggests that incentive choice increases idea quality by 6.6%. To better understand the main effect of incentive choice, it is crucial to understand which incentives are most effective when chosen. We explore this through the interaction between the choice treatment (choice vs. assigned) and the incentive. The interaction between choice and cash shows a significant negative effect on idea quality (β = −.46; p < 0.05). We also find a significant interaction between choice and the non-cash incentive on effort (β = .85; p < 0.001). This suggests that the effectiveness of incentives on both idea quality and effort is moderated by the form in which incentives are delivered, such that choice decreases the effect of cash incentives on idea quality and increases the effect of non-cash incentives on effort.

Table 3. Study 2 – Regression analysis. Omitted category: Assigned, no prize. Note that Prize: Indifferent implies Choice: Yes. Standard errors in parentheses.

                                  Idea Quality                     Effort
                                  (1)             (2)              (3)               (4)
Choice: Yes                       0.17* (0.10)    0.52** (0.25)    0.263 (0.212)     0.12 (0.35)
Prize: Cash                       0.27** (0.14)   0.53*** (0.20)   0.607*** (0.194)  0.83*** (0.30)
Prize: Non-Cash                   0.37** (0.15)   0.55*** (0.21)   0.829*** (0.314)  0.32 (0.30)
Prize: Indifferent                0.11 (0.20)     na               0.32 (0.29)       na
Choice: Yes × Prize: Cash                         −0.46* (0.28)                      0.27 (0.43)
Choice: Yes × Prize: Non-Cash                     0.36 (0.31)                        0.85** (0.42)
Choice: Yes × Prize: Indifferent                  0.06 (0.23)                        0.41 (0.11)
Intrinsic Motivation              0.00 (0.05)     0.02 (0.05)      0.07 (0.13)       0.10 (0.11)
Intercept                         2.58*** (0.13)  2.41*** (0.17)   4.33*** (0.20)    4.37*** (0.22)
Controls Included                 Yes             Yes              Yes               Yes
Num. obs.                         208             208              208               208
Adj. R²                           0.02            0.02
AIC                                                                2597.51           2589.35
Log Likelihood                                                     −1286.76          −1280.68
***p < 0.01; **p < 0.05; *p < 0.1

Summary

We find a significant main effect showing that choice improves idea quality (confirming H1a) but not effort (no support for H1b). While the coefficient for the main effect of choice on effort is insignificant, the direction of the effect is positive. We further find that choice significantly reduces the effectiveness of cash incentives on quality and amplifies the positive effect of the non-cash incentive on effort.

Study 3: Moderator of For-Profit and Non-Profit Context

In Study 3, we explore whether market context moderates the effect of incentives on quality and effort. We conducted a crowdsourcing contest of a typical for-profit organization to generate ideas that could be commercialized to ensure future profits. To create a realistic experiment, we replicated the ideation task of the Intel Future Contest, a real-world ideation challenge sponsored by Intel. In that contest, Intel solicited ideas for new products or services around a new technology [55]. This technology aims to allow the building of a new generation of smart devices (e.g., smart wearable technology) and applications. The task provided background information on the new technology adapted from the Intel Future Contest, and we then asked ideators to provide information on product features, benefits, uses, and design proposals. We advertised an open call to a public audience on Craigslist, Reddit, Facebook, and various other technology-related communities and blogs.

Method

Procedure. We implemented an online experiment that randomly assigned ideators, following the same general setup as in Study 2. The design is a 2 (incentive: cash, non-cash) × 2 (choice: choice, assigned) × 2 (context: for-profit, non-profit) between-subjects study; because ideators in the choice conditions select their own incentive, this results in six treatment conditions (Appendix 7), into which participants were streamed on arrival (sketched below).
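A minimal sketch of streaming random assignment over these six cells follows. The cell weights are placeholders (the paper reports over-weighting the choice cells in Study 2 but does not give Study 3 weights), and all names are hypothetical.

    import random

    # Six cells: assigned-cash, assigned-non-cash, and choice, each crossed
    # with a for-profit or non-profit framing of the contest.
    CELLS = [
        ("assigned-cash", "for-profit"), ("assigned-cash", "non-profit"),
        ("assigned-noncash", "for-profit"), ("assigned-noncash", "non-profit"),
        ("choice", "for-profit"), ("choice", "non-profit"),
    ]
    WEIGHTS = [1, 1, 1, 1, 1, 1]  # placeholder weights

    def assign(participant_id: str) -> tuple:
        """Draw a treatment cell the moment a participant arrives, so no
        fixed total sample size is needed up front (streaming assignment)."""
        rng = random.Random(participant_id)  # reproducible per participant
        return rng.choices(CELLS, weights=WEIGHTS, k=1)[0]

    print(assign("participant-0001"))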
We manipulated market context by framing the ideation task as being solicited by either a for-profit or a non-profit organization. The for-profit condition framed the ideation task as "a for-profit multinational corporation wants to develop a for-profit product or service", while the non-profit condition solicited ideas for "a non-profit research organization wants to develop a non-profit product or service." To increase the stakes compared to Study 2, we increased the prize money to $1,000, split as follows: $500 for 1st prize, $300 for 2nd prize, and $200 for 3rd prize. We performed (streaming) random assignment of participants to treatment conditions, as is common in online experiments where the total number of participants is not known ex ante. We stopped the experiment after four weeks. The study was pre-registered before data collection began (https://osf.io/8qw7t/).

Measures. We measured effort as the number of seconds spent on the ideation task. Following the same procedure as in the other studies, we collected data to measure idea quality from an outside panel recruited through Amazon Mechanical Turk. We collected 770 ratings of quality from 46 different raters who performed an average of 17 ratings each, with five ratings per idea. Inter-coder reliability is excellent (0.77); Cronbach's alpha is good (0.80).

Results

A total of 120 ideators completed the experiment and submitted at least one idea (153 ideas were submitted in total). In a post-experiment self-report question, the manipulation of the profit motive proved effective, with 76% of ideators correctly recalling the profit structure of the organization sponsoring the contest (i.e., for-profit vs. non-profit).

Quantifying incentive preferences across market contexts. Despite qualitative differences, we find no statistically significant difference in the preference for cash or non-cash incentives between for-profit and non-profit contexts (Appendix 8, Model 2). In the non-profit context, 31% chose the non-cash incentive compared to 23% in the for-profit context. Within the for-profit context, a simple test for equal proportions indicates that cash is the significantly more popular choice (p = 0.001). There is no significant difference within the non-profit context.

Moderating effect of market context. We find no significant direct effect of market context on quality (Table 4: Model 1; β = −.06; n.s.) but a significant effect on effort (Model 6; β = −.41; p < 0.05). We find no significant moderation effect between market context and incentives for quality (Model 4; β = −.21; n.s.). However, the for-profit context significantly reduces the effect of non-cash incentives on effort (Model 9; β = −.75; p < 0.05). That is, we find evidence that market context moderates the effect of incentives on effort (H2b) but not quality (H2a). A stylized version of the interaction specifications behind Table 4 is sketched below.
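This sketch extends the Study 2 specification with the context dummy and its interactions (same assumptions as before: statsmodels' formula API with hypothetical column and file names).

    import pandas as pd
    import statsmodels.formula.api as smf

    df3 = pd.read_csv("study3.csv")  # hypothetical file name

    # Two-way interaction on effort (Models 7-9 style): does the for-profit
    # context dampen the non-cash incentive effect?
    effort_2way = smf.negativebinomial(
        "effort_sec ~ choice + noncash * forprofit + svo", data=df3).fit()

    # Full three-way interaction on idea quality (Model 5 style).
    quality_3way = smf.ols(
        "quality ~ choice * noncash * forprofit + svo", data=df3).fit()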
Table 4. Study 3 – Regression analysis. Omitted category: Assigned cash, non-profit. (Models 1–5: idea quality; Models 6–10: effort. Predictors: Choice: Yes; Prize: Non-Cash; Context: For-Profit; their two- and three-way interactions; Social Value Orientation; and an intercept. Controls included in all models; N = 120. ***p < 0.01; **p < 0.05; *p < 0.1.)

Further, there is no significant effect of the three-way interaction on either idea quality (Model 5; β = .37; n.s.) or effort (Model 10; β = .89; n.s.). This suggests that the effectiveness of offering a choice (such as the positive interaction effect between choice and non-cash on quality) does not strongly depend on the market context, albeit our statistical power for this analysis is quite low.

Summary

In Study 3 we investigate whether market context moderates the effect of incentives on quality (H2a) and effort (H2b). We find support for this moderation effect for effort but not quality.

Study 4: Boundary Condition of Incentive Choice Effect

Finally, we test an important boundary condition in our last experiment (Study 4). So far, our evidence suggests that the main effect of offering ideators a choice derives from achieving improved fit between ideators' preferred incentive and the incentive they actually receive (as opposed to the direct effect of simply asking them to reveal their preference). That is, actually observing diverse incentive preferences in the population may be crucial for the main effect of incentive choice to unfold.
This is important because, over time, some online platforms may evolve an environment in which individuals have homogeneous incentive preferences despite representing diverse populations in terms of gender, home country, and economic background. In this study, we test this boundary condition by repeating the basic setup from Study 2, using a sample of ideators drawn from a population of gig workers narrowly focused on earning income. We recruited participants from Amazon Mechanical Turk (AMT), an online labor market. Here, we expect participating workers to have a homogeneous preference for cash, as transactions in the online labor market are based on strict market relationships. Prior research has shown that AMT workers are predominantly motivated by financial incentives [49]. For example, they often give themselves daily or weekly quotas of how much money they want to earn working on the AMT platform. If we find that the majority of ideators make the same choice and we find no main effect of incentive choice, this suggests that heterogeneous incentive preferences are an important boundary condition to realize a positive direct effect of incentive choice.

Method

Procedure. We recruited 160 workers from the AMT online labor market. Workers were compensated with a $1 "show-up" fee and randomly assigned to the same ideation task and treatment conditions from Study 2: no prize (N=21), cash (N=20), non-cash (N=29), a choice between cash and non-cash (N=37), and a choice between cash and opt-out (N=46).

Results

Quantifying incentive preferences. As predicted, we find that participants drawn from AMT showed no interest in non-cash prizes whatsoever (Appendix 9). Zero participants chose the non-cash prize in the cash/non-cash condition (out of 29), and only two (out of 46) chose to opt out.

Main effect of incentive choice on quality and effort. We used OLS regression to explore the incentive and choice effects on idea quality and a negative binomial regression for the effects on effort. Notice that we only included the cash, assigned non-cash, and no-prize control groups in our sample, and omitted the "opt out" and "chose non-cash" groups due to the small number of individuals making those choices. We find no significant main effect of offering a choice on either quality (Table 5: Model 1; β = .1; n.s.) or effort (Model 2; β = −.02; n.s.), indicating that the mere choice of the preferred cash option (without interest in other incentive options) offered no benefit and did not lead to increased performance. To directly test whether increased agency and self-determination is a plausible mechanism behind the choice effect, we added the intrinsic motivation battery to the post-experiment survey. We find no difference in intrinsic motivation between individuals who chose cash and those who were assigned cash (t = .69; d.f. = 31.14; p = .50), suggesting that simply asking cash-motivated individuals to reveal their cash motivation does not itself lead to increased motivation.
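The fractional degrees of freedom (31.14) indicate an unequal-variance (Welch) t-test; the following Python sketch reproduces the mechanics on simulated data, since the raw scores are not reported. All numbers below are hypothetical.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    chose_cash = rng.normal(5.0, 1.2, size=37)     # hypothetical scores, chose cash
    assigned_cash = rng.normal(4.9, 1.2, size=20)  # hypothetical scores, assigned cash

    # Welch's t-test (equal_var=False) yields fractional degrees of freedom,
    # matching the d.f. = 31.14 style of result reported in the paper.
    t, p = stats.ttest_ind(chose_cash, assigned_cash, equal_var=False)
    print(round(float(t), 2), round(float(p), 2))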
Table 5. Study 4 – Regression analysis of the AMT sample. Note that we do not estimate coefficients for the small number of ideators who chose non-cash or were indifferent, as those groups are too small for a meaningful analysis. Consequently, Choice: Yes implies Prize: Cash, and there is no separate interaction term that can be estimated. Standard errors in parentheses.

                        Idea Quality       Effort
                        (1)                (2)
Choice: Yes             0.11 (0.11)        −0.01 (0.12)
Prize: Cash             0.07 (0.18)        0.29 (0.19)
Prize: Non-Cash         0.45*** (0.17)     0.32* (0.18)
Intrinsic Motivation    0.01 (0.03)        0.01 (0.03)
Intercept               2.93*** (0.18)     4.56*** (0.22)
Controls Included       Yes                Yes
Num. obs.               175                175
Adj. R²                 0.02
Log Likelihood                             −945.81
***p < 0.01; **p < 0.05; *p < 0.1

Summary

We find that the population of workers on Amazon Mechanical Turk had a homogeneous preference for the cash prize. We find no sign that simply asking ideators to reveal their incentive preference increased their motivation. Together, the findings suggest that the benefit of offering a choice unfolds through a better incentive-preference fit. The increased sense of autonomy from the choice itself does not lead to higher performance.

General Discussion and Theoretical Contribution

The research presented in this paper sets out incentive choice (cash vs. non-cash) to alleviate a fundamental issue in incentive design and enhance the alignment between incentives and preferences to increase effort and idea quality in crowdsourcing contests across market contexts (for-profit vs. non-profit). We present evidence from four consecutive empirical studies (see the result summary in Table 6). Our field study (Study 1) helped us quantify ideators' preferences and suggests they are very diverse, with over 49% choosing one of the non-cash incentives and 28% choosing cash. There may be several explanations for the no-choice effect: 1) participants who did not make a choice were fine with the default cash option and did not want to explicitly reveal their incentive preference; 2) participants may have been indifferent and had no strong preference for any of the offered incentives; 3) they made no choice because none of the offered incentives matched their preferences; 4) they were opposed to rewards and/or choice altogether and were happy to participate without any incentive.

Table 6. Summary Findings.
Study 1 (Field study, n=1,205):
• Quantify incentive preferences: >49% chose one of the non-cash incentives; 28% chose cash; 23% preferred to make no choice at all.
• Preliminary evidence of a main effect of choice on quality.
Study 2 (Online experiment with random assignment, n=208):
• Choice increases quality (main effect) but not effort (supporting H1a).
• Choice reduces the effect of cash incentives on quality and increases the effect of non-cash incentives on effort.
Study 3 (Online experiment with market context as treatment manipulation, n=120):
• Market context moderates the effect of incentives on effort (but not quality; supporting H2b).
• No significant difference in preference for the non-cash prize in the for-profit context; lower effort for the non-cash incentive in the for-profit context.
Study 4 (Online experiment with gig-worker sample, n=160):
• Establishes heterogeneous preferences as an important boundary condition: no interest in non-cash incentives in a pool of gig workers focused on earning income.
• Without sorting, no effect materializes, indicating that improved incentive-preference fit is the driving mechanism.

The field study (Study 1) and the randomized lab experiment (Study 2) both provide evidence for the main effect of offering a choice on idea quality (not effort; both studies were set in a non-profit context). Study 3 finds that market context moderates the effect of incentives on effort (but not idea quality). One possible explanation for the null effect on quality may be that idea quality in creative settings only partially depends on effort, or it may simply be a result of low statistical power (while not statistically significant, the regression coefficient for quality points in the same direction as that for effort). Ideators exerted less effort in for-profit settings than in non-profit settings overall, and even less when the for-profit setting was paired with a non-cash incentive. Finally, we point to an important boundary condition (Study 4): if ideators have uniform incentive preferences for cash, such as gig workers on Amazon Mechanical Turk, offering a choice of incentives has no effect. The lack of a direct effect of choice per se suggests that the benefit arises from the improved matching between incentive preference and the incentive being offered. As a result, we theorize that the gains from offering a choice do not arise from a feeling of agency but instead from improved sorting of preferences to incentives. Without heterogeneous incentive preferences, there is no room for gains from sorting preferences to actual incentives. This suggests that the strength of the effect of offering a choice depends both on the diversity of ideators' incentive preferences and on the diversity of incentives being offered to maximize this sorting effect. Across our four studies, personal characteristics such as gender, economic background, and intrinsic motivation served only as weak proxies for incentive preferences, while social value orientation emerged as a strong predictor in Study 3.

Our findings make three main contributions to theory. First, past work on crowdsourcing design focused on contest design aspects like prize structures [46,54,64], number of contestants [11], and entry barriers [26]. By contrast, we establish that incentive choice is a pivotal aspect of incentive design that is little understood [53], and we are the first to shed light on the underlying mechanism explaining why it may work. Our study explains why offering ideators a choice of incentives per se can improve performance.
We theorize that offering a choice improves creative performance in crowdsourcing contests because it improves incentive-preference fit, which increases effort and performance, rather than because of an increased sense of autonomy and control [15]. This connects with the idea of incentive choice in research on mass customization, where customers self-configure products that match their preferences rather than choose standardized products [22]. Our study is the first to shed light on the mechanism behind the effectiveness of incentive choice and its important boundary conditions. Outside crowdsourcing, notable exceptions [14,57] explored incentive choice in lab experiments that gave participants a choice between fixed and performance-based pay. We contribute to work on crowdsourcing design by explicating an important boundary condition: offering a choice is ineffective when the incentive preferences in the target population are homogeneous. Incentive preferences may be homogeneous despite diverse geographic and economic backgrounds when recruiting ideators from online labor markets such as Amazon Mechanical Turk (see Study 4). Further, the sorting effect seems to strengthen when many attractive incentive options are offered (Study 1; cf. [68]). Additionally, ideators may enjoy various benefits [9] and may hence also be rather indifferent to the available incentive options. Thus, forcing participants to reveal their preferences by choosing their preferred incentive may only increase their burden and not offer additional value [50]. Incentive choice designs may further include an opt-out option (see Study 2), as participants may prefer to forgo any incentive rather than accept an incentive that, in their eyes, does not match their demonstrated performance [21].

Second, our study contributes to a better understanding of incentives in different market contexts. While crowdsourcing contests are used equally in for-profit and non-profit market contexts, existing research has studied the direct effect of cash vs. non-cash incentives [6,24] but has not considered market context as a moderator. Our study fills this gap and extends received knowledge that incentives can sometimes backfire when they are misaligned with the market context in which they are used [29,65]. We show that incentive preference and its effect on quality and effort are influenced not only by individuals' motives and personal characteristics but also by market context. Researchers [34] have applied Fiske's relationship theory to explain the signaling effect of incentives and their influence on effort [63], and empirical studies have referred to different effects of incentives in for-profit and non-profit contexts [12,16]. However, no one has yet considered the classification of context as a monetary market (for-profit) vs. a social market (non-profit) as an important moderator for predicting incentive preference and its effect on quality and effort.

Third, we expand on previous research that has identified a variety of intrinsic and extrinsic reasons why individuals engage in crowdsourcing contests [1,8,11,23,45] by quantifying the extent to which this occurs and demonstrating that individuals not only have diverse preferences but actually choose different incentives when given the choice. While our results are consistent with past research showing that cash is generally the most prevalent single incentive due to its high option value [36], in some settings almost half prefer non-cash incentives.
Our work is one of the first to validate various incentive preferences in a field setting. This emphasizes the importance of heterogeneous incentives not as a niche aspect but as a core driver of motivation to participate in crowdsourcing contests. This insight opens opportunities to improve contest design by considering alternative incentives. We also determine that ideators not only have diverse incentive preferences but sometimes even prefer to opt out of receiving any incentives, and sometimes are indifferent, thus suggesting entirely new forms of incentives to consider. These findings reinforce the existing literature showing that it is not easy to offer an appropriate incentive upfront [48,56] and that providing incentive choice may be useful.

Managerial Implications

Our research has four practical implications for open innovation managers to design more effective incentive regimes. First, offering a choice of incentives may increase the effectiveness of incentives in crowdsourcing contests, especially when incentive preferences are diverse and a set of suitable non-cash incentives is available. Offering a choice can alleviate the concern that managers may not know what the most desirable incentive is and may worry about missing out if cash is not offered. Second, the incentive choice should be implemented as an optional choice with a clearly defined cash default option to cater to ideators who may be indifferent. This choice allows contest organizers to offer unique and unexpected incentives that may be very effective in special contexts (e.g., NASA offering a low-value artifact like a sticker mentioning "flown in space" [9,63]). Third, the accentuation of a non-profit context (social market) matters. Social markets can positively affect participants' level of effort. Thus, crowdsourcing contests in social markets should underline their social character. Fourth, heterogeneous incentive preferences may vanish over time as online platforms specialize and evolve to cater to a more homogeneous user group (e.g., AMT). Crowdsourcing platforms must be aware that if they rely heavily on cash as their standard incentive, their community will come to expect cash. Conversely, offering non-cash incentives and hosting contests for a variety of market contexts can be a means to attract heterogeneous ideators, which may improve idea quality throughout the platform.

Limitations and Future Research

Our findings are not without limitations. Although they were gathered in varied setups and under realistic conditions, further research is required to establish the dimensions along which they generalize. Our strongest findings in favor of non-cash incentives come from the field data in Study 1. This study was set in a non-profit context, and those results may not fully generalize to for-profit settings despite our insights from Studies 2-4. Future research could evaluate whether similar incentive preferences persist in crowdsourcing contests in for-profit contexts. While our results are consistent with regard to the effects of incentives, choice, and context, more research is required to investigate these effects in additional settings and to enhance the overall applicability of our conclusions. In addition, numerous new areas necessitate further investigation, including the design of choice options, the effect of indifference, the offering of incentive bundles, the effect of incentive opt-out, and the conditions that create a social market character.
Further, exploring how incentives over time lead to homogeneous preferences and adjusted behaviors, e.g., those found at AMT, would be illuminating. Additionally, our study employed prize purses that are commonly used in current research. Higher prizes may lead to different outcomes and different incentive choices. In particular, we expect incentives to function differently in crowdsourcing contests compared with grand challenges like NASA's $1M CO2 Conversion Challenge or the $10M Ansari X-Prize for Suborbital Flight. We speculate that ideators would be much less likely to either forgo incentives or choose a non-cash option if the stakes are very high. Consequently, the full range of ideators' sensitivity to prize levels remains an open question. Our analysis also focused on shifts in the mean quality of ideas due to selection and treatment effects. However, shifts in maximum quality are sometimes more important than shifts in mean quality, especially in rank-order tournaments.

Conclusion

In conclusion, this research significantly advances our understanding of incentive design in crowdsourcing contests, highlighting the importance of offering a choice between cash and non-cash incentives to match diverse ideators' preferences. It underscores the role of market context, which moderates the effectiveness of these incentives, and reveals that personal characteristics are less indicative of preference than previously thought. The findings open new pathways for designing more effective crowdsourcing contests, emphasizing the need for flexibility and customization of incentives to cater to diverse participant motives and market contexts.

References

1. Acar, O.A. Harnessing the creative potential of consumers: money, participation, and creativity in idea crowdsourcing. Marketing Letters, 29, 2 (2018), 177–188.
2. Afuah, A.N. and Tucci, C. Reflection on the 2022 AMR Decade Award: Crowdsourcing as a Solution to Distant Search. Academy of Management Review, 48, 4 (2023), 597–610.
3. Amabile, T.M. Social psychology of creativity: A consensual assessment technique. Journal of Personality and Social Psychology, 43, 5 (1982), 997–1013.
4. Amabile, T.M., Goldfarb, P., and Brackfield, S.C. Social influences on creativity: Evaluation, coaction, and surveillance. Creativity Research Journal, 3, 1 (1990), 6–21.
5. Ariely, D., Bracha, A., and Meier, S. Doing good or doing well? Image motivation and monetary incentives in behaving prosocially. American Economic Review, 99, 1 (2009), 544–555.
6. Ashraf, N., Bandiera, O., and Jack, K. No margin, no mission? A field experiment on incentives for public service delivery. Journal of Public Economics, 120, 1 (2014), 1–17.
7. Batson, D.C., Polycarpou, M.P., Harmon-Jones, E., et al. Empathy and attitudes: Can feeling for a member of a stigmatized group improve feelings toward the group? Journal of Personality and Social Psychology, 72, 1 (1997), 105–118.
8. Belenzon, S. and Schankerman, M. Motivation and sorting of human capital in open innovation. Strategic Management Journal, 36, 6 (2015), 795–820.
9. Bénabou, R. and Tirole, J. Incentives and prosocial behavior. American Economic Review, 96, 5 (September 2006), 1652–1678.
10. Bommaraju, R. and Hohenberg, S. Self-Selected Sales Incentives: Evidence of their Effectiveness, Persistence, Durability, and Underlying Mechanisms. Journal of Marketing, 82, 5 (2018), 106–124.
11. Boudreau, K.J., Lacetera, N., and Lakhani, K.R. Incentives and problem uncertainty in innovation contests: An empirical analysis. Management Science, 57, 5 (April 2011), 843–863.
12. Cappa, F., Rosso, F., and Hayes, D. Monetary and Social Rewards for Crowdsourcing. Sustainability, 11, 10 (2019), 2834.
13. De Charms, R. Personal Causation. Academic Press, New York, 1968.
14. Chow, C.W. The Effects of Job Standard Tightness and Compensation Scheme on Performance: An Exploration of Linkages. Accounting Review, 58, 4 (1983), 667–685.
15. Deci, E. and Ryan, R. Intrinsic Motivation and Self-Determination in Human Behavior. Plenum Press, New York, 1985.
16. Eisenberger, R. and Rhoades, L. Incremental Effects of Reward on Creativity. Journal of Personality and Social Psychology, 81, 4 (2001), 728.
17. Eisenberger, R. and Selbst, M. Does Reward Increase or Decrease Creativity? Journal of Personality and Social Psychology, 66, 6 (1994), 1116–1127.
18. Ericsson, K., Krampe, R., and Tesch-Römer, C. The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 3 (1993), 363–406.
19. Eriksson, T., Teyssier, S., and Villeval, M.-C. Self-selection and the efficiency of tournaments. Economic Inquiry, 47, 3 (2009), 530–548.
20. Fehr, E. and Falk, A. Psychological foundations of incentives. European Economic Review, 46, 4–5 (2002), 687–724.
21. Fiske, A.P. The four elementary forms of sociality: Framework for a unified theory of social relations. Psychological Review, 99, 4 (1992), 689–723.
22. Franke, N., Keinz, P., and Steger, C.J. Testing the value of customization: When do customers really prefer products tailored to their preferences? Journal of Marketing, 73, 5 (2009), 103–121.
23. Füller, J. Refining virtual co-creation from a consumer perspective. California Management Review, 52, 2 (2010), 98–122.
24. Füller, J., Hutter, K., and Fries, M. Crowdsourcing for goodness sake: Impact of incentive preference on contribution behavior for social innovation. In K.S. Swan and S. Zou, eds., Advances in International Marketing. Emerald, 2012, pp. 137–159.
25. Füller, J., Hutter, K., Hautz, J., and Matzler, K. User Roles and Contributions in Innovation-Contest Communities. Journal of Management Information Systems, 31, 1 (2014), 273–307.
26. Fullerton, R.L. and McAfee, R.P. Auctioning Entry into Tournaments. Journal of Political Economy, 107, 3 (1999), 573–605.
27. Gagné, M. and Deci, E.L. Self-determination theory and work motivation. Journal of Organizational Behavior, 26, 4 (June 2005), 331–362.
28. Gallus, J. and Frey, B.S. Awards: A strategic management perspective. Strategic Management Journal, 37, 8 (2016), 1699–1714.
29. Gallus, J., Reiff, J., and Fiske, A.P. Relational incentives theory. Psychological Review, 129, 3 (2022), 586–602.
30. Gebauer, J., Füller, J., and Pezzei, R. The dark and the bright side of co-creation: Triggers of member behavior in online innovation communities. Journal of Business Research, 66, 9 (2013), 1516–1552.
31. Gerhart, B. and Fang, M. Pay, Intrinsic Motivation, Extrinsic Motivation, Performance, and Creativity in the Workplace: Revisiting Long-Held Beliefs. Annual Review of Organizational Psychology and Organizational Behavior, 2, 1 (2015), 489–521.
32. Gneezy, U., Meier, S., and Rey-Biel, P. When and Why Incentives (Don't) Work to Modify Behavior. Journal of Economic Perspectives, 25, 4 (2011), 191–209.
33. Heyman, J. and Ariely, D. Effort for payment – A tale of two markets. Psychological Science, 15, 11 (November 2004), 787–793.
34. von Hippel, E. Democratizing Innovation. MIT Press, Cambridge, MA, 2005.
35. Hofstetter, R., Zhang, Z.J., and Herrmann, A.
The Hidden Pitfall of Innovation Prizes. Harvard Business Review, 2017, 1–19. https://hbr.org/2017/11/the-hidden-pitfall-of-innovation-prizes.
36. Jeffrey, S.A. and Shaffer, V. The Motivational Properties of Tangible Incentives. Compensation & Benefits Review, 39, 3 (June 2007), 44–50.
37. Jeppesen, L.B. and Frederiksen, L. Why do users contribute to firm-hosted user communities? The case of computer-controlled music instruments. Organization Science, 17, 1 (2006), 45–63.
38. Kornish, L.J. and Ulrich, K.T. The Importance of the Raw Idea in Innovation: Testing the Sow's Ear Hypothesis. Journal of Marketing Research, 51, 1 (2014), 14–26.
39. Kunz, A.H. and Pfaff, D. Agency theory, performance evaluation, and the hypothetical construct of intrinsic motivation. Accounting, Organizations and Society, 27, 3 (2002), 275–295.
40. Lacetera, N. and Macis, M. Social image concerns and prosocial behavior: Field evidence from a nonlinear incentive scheme. Journal of Economic Behavior & Organization, 76, (2010), 225–237.
41. Lakhani, K.R. and Wolf, R.G. Why hackers do what they do: Understanding motivation and effort in free/open source software projects. In J. Feller, B. Fitzgerald, S.A. Hissam and K.R. Lakhani, eds., Perspectives on Free and Open Source Software. MIT Press, Cambridge, 2005, pp. 3–22.
42. Larkin, J., McDermott, J., Simon, D.P., and Simon, H.A. Expert and novice performance in solving physics problems. Science, 208, 4450 (1980), 1335–1342.
43. Lazear, E.P. and Rosen, S. Rank-order tournaments as optimum labor contracts. Journal of Political Economy, 89, 5 (1981), 841–864.
44. Leimeister, J.M., Huber, M., Bretschneider, U., and Krcmar, H. Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition. Journal of Management Information Systems, 26, 1 (2009), 197–224.
45. Li, D. and Hu, L. Exploring the effects of reward and competition intensity on participation in crowdsourcing contests. Electronic Markets, (2017), 199–210.
46. Liu, J. and Kim, K. Designing contests for data science competitions: Number of stages and prize structures. Production and Operations Management, (2023).
47. Lovas, B. and Ghoshal, S. Strategy as guided evolution. Strategic Management Journal, 21, 9 (2000), 875–896.
48. Majchrzak, A. and Malhotra, A. Towards an information systems perspective and research agenda on crowdsourcing for innovation. The Journal of Strategic Information Systems, 22, 4 (December 2013), 257–268.
49. Mason, W. and Suri, S. Conducting behavioral research on Amazon's Mechanical Turk. Behavior Research Methods, 44, 1 (2012), 1–23.
50. Matzler, K., Stieger, D., and Füller, J. Consumer Confusion in Internet-Based Mass Customization: Testing a Network of Antecedents and Consequences. Journal of Consumer Policy, 34, (2011), 231–247.
51. McClelland, D.C. How motives, skills, and values determine what people do. American Psychologist, 40, 7 (1985), 812–825.
52. McClelland, D.C., Koestner, R., and Weinberger, J. How do self-attributed and implicit motives differ? Psychological Review, 96, 4 (1989), 690–702.
53. Moghaddam, E.N., Aliahmadi, A., Bagherzadeh, M., Markovic, S., Micevski, M., and Saghafi, F. Let me choose what I want: The influence of incentive choice flexibility on the quality of crowdsourcing solutions to innovation problems. Technovation, 120, (February 2023), 102679.
54. Morgan, J. and Wang, R. Tournaments for ideas. California Management Review, 52, 2 (2010), 77–97.
55. Mrass, V., Peters, C., and Leimeister, J.M.
Managing Complex Work Systems Via Crowdworking Platforms: How Intel and Hyve Explore Future Technological Innovations. SSRN Electronic Journal, (January 2018).
56. Nevo, D. and Kotlarsky, J. Crowdsourcing as a strategic IS sourcing phenomenon: Critical review and insights for future research. The Journal of Strategic Information Systems, 29, 4 (December 2020), 101593.
57. Niederle, M. and Vesterlund, L. Do women shy away from competition? Do men compete too much? Quarterly Journal of Economics, 122, 3 (2007), 1067–1101.
58. von Nordenflycht, A. Clean up Your Theory! Invest in Theoretical Clarity and Consistency for Higher-Impact Research. Organization Science, 34, 5 (2023), 1651–1996.
59. Radford, J., Pilny, A., Reichelmann, A., et al. Volunteer Science: An Online Laboratory for Experiments in Social Psychology. Social Psychology Quarterly, 79, 4 (2016), 376–396.
60. Riedl, C. and Seidel, V. Learning from Mixed Signals in Online Innovation Communities. Organization Science, 29, 6 (2018), 1010–1032.
61. Ryan, R. and Deci, E. Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25, 1 (2000), 54–67.
62. Shadish, W.R., Cook, T.D., and Campbell, D.T. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton Mifflin, Boston, MA, USA, 2002.
63. Sittenthaler, H.M. and Mohnen, A. Cash, non-cash, or mix? Gender matters! The impact of monetary, non-monetary, and mixed incentives on performance. Journal of Business Economics, 90, (2020), 1253–1284.
64. Terwiesch, C. and Xu, Y. Innovation contests, open innovation, and multiagent problem solving. Management Science, 54, 9 (2008), 1529–1543.
65. Toubia, O. Idea Generation, Creativity, and Incentives. Marketing Science, 25, 5 (2006), 411–425.
66. Toubia, O. and Netzer, O. Idea Generation, Creativity, and Prototypicality. Marketing Science, 36, 1 (2017), 1–20.
67. Vallerand, R.J., Fortier, M.S., and Guay, F. Self-determination and persistence in a real-life setting: Toward a motivational model of high school dropout. Journal of Personality and Social Psychology, 72, 5 (1997), 1161–1176.
68. Williams, S. An organizational model of choice: A theoretical analysis differentiating choice, personal control, and self-determination. Genetic, Social, and General Psychology Monographs, 124, 4 (1998), 465–491.
69. Yan, J.K., Leidner, D.E., and Benbya, H. Differential Innovativeness Outcomes of User and Employee Participation in an Online User Innovation Community. Journal of Management Information Systems, 35, 3 (July 2018), 900–933.

Online Appendices

Appendix 1. Positioning of Current Paper in Literature on Choice of Incentives

The review table in this appendix takes creative and innovation performance as its dependent variable and classifies each study along six dimensions: whether a choice of incentives is offered, whether intrinsic (non-cash) incentives are applied, whether extrinsic (cash) incentives are applied, whether the setting is creativity in crowdsourcing, whether the study design uses observational and experimental data, and whether both for-profit and non-profit market contexts are covered. [The full Yes/No matrix could not be reliably recovered from the extracted layout; the reviewed studies and their summaries are listed below in the table's row order.] Among the reviewed studies, only the current article and [26] offer ideators a choice of incentives.

- Current article: The current study tests the effectiveness of cash vs. non-cash incentives – assigned to or self-selected by ideators – in crowdsourcing contests for for-profit vs. non-profit organizers.
- [26]: The authors test in a single experiment with two rounds how providing participants with an incentive choice (monetary vs. symbolic) impacts solution quality.
- [6]: Cappa et al. empirically test whether two different types of rewards – monetary and social rewards – increase the number of contributions in crowdsourcing.
- [12]: Eisenberger and Rhoades examined unrewarded and rewarded creativity training and its impact on creative task performance.
- [20]: Heyman and Ariely test the relationship between forms of compensation (cash vs. token), the levels of payment (no, low, and medium), and the resulting effort expended in monetary and social markets.
- [31]: Sittenthaler and Mohnen employ an experiment to test the impact of monetary, non-monetary, and a combination of monetary and non-monetary incentives on performance.
- [5]: The authors test incentives and their impact on contest performance in high- and low-uncertainty problems with greater and lower rivalry.
- [21]: The authors empirically test whether incentive and parallel-path effects – adding numbers of competitors – are of comparable magnitude and should thus be explicitly considered together when designing crowdsourcing contests.
- [33]: Toubia examines if tailored ideation incentives improve creative output.
- [1]: Acar investigates whether the use of monetary rewards is effective in stimulating creativity and, if so, how large those rewards should be.
- [25]: The authors explore the effects of different incentives on crowdsourcing participation and contribution quality in randomized field experiments.
- [28]: The authors empirically test the effects of extrinsic financial rewards on intrinsic motivation.
- [8]: Deci tests in two laboratory experiments and one field experiment the effects of external rewards on intrinsic motivation to perform an activity.
- [15]: The authors investigate synergistic extrinsic motivators to foster creativity and innovation of intrinsically motivated knowledge workers.
- [14]: Erat and Gneezy empirically test whether piece-rate and competitive incentives affect creativity and, if so, how the incentive effect depends on different types – and not merely the presence – of extrinsic incentives.
- [13]: Eisenberger and Selbst investigate why behaviorist and cognitively oriented investigators reach opposite conclusions about reward's effects on creativity.
- [3]: The authors examine the effect of reward on children's and adults' creativity.
- [2]: Amabile tests the creativity motivation hypothesis by investigating the effects of a common extrinsic constraint – competing for prizes – on children's artistic creativity in a field setting.
- [27]: Pinder tests additivity versus non-additivity of intrinsic and extrinsic incentives on work motivation, performance, and attitudes.
- [9]: Deci empirically investigates what happens to a person's intrinsic motivation for an activity when he is rewarded extrinsically for performing the activity.
- [19]: Hamner and Foster test whether contingent monetary rewards actually reduced intrinsic task motivation in both a boring and a nonboring task setting.
- [11]: Ederer and Manso test in a controlled experimental setting the effects of different incentive schemes (e.g., fixed wage, pay for performance, exploration) on innovation and performance.
- [16]: The authors evaluate in a series of laboratory experiments fixed-prize mechanisms as a means to obtain a given quality of research at as low a cost as possible under various market conditions.
- [10]: Deci and Cascio test changes in intrinsic motivation as a function of negative feedback and threats.

Our review categorizes the studies along incentives and choice. In addition, we classify the study settings (creativity in crowdsourcing vs. other settings), how the data were gathered (observational vs. experimental design), and the context setting (for-profit vs. non-profit). While our review does not claim to be exhaustive, it covers the most relevant and recent studies.

Appendix 2. Study 1 – Scraplab

Estimation Procedure

Model. We estimate the following system of three simultaneous latent equations:

y*_S,i = β′_S x_S,i + ε_S,i,   (1)
y*_O1,i = β′_O1 x_O1,i + ε_O1,i,   (2)
y*_O2,i = β′_O2 x_O2,i + ε_O2,i.   (3)

y_S,i = 0 if y*_S,i < 0, and y_S,i = 1 otherwise.   (4)

y_O,i = y*_O1,i if y_S,i = 0, and y_O,i = y*_O2,i otherwise.   (5)

Equation (1) is the selection rule, where individual i's choice y_S,i is the choice of the cash incentive or the non-cash incentive. y*_O1,i and y*_O2,i are the latent outcomes, only one of which is observable, depending on the sign of y*_S,i (that is: we observe either the quality of designs produced under the cash treatment or the quality produced under the non-cash treatment, but not both); hence, we observe y_O,i as defined in Eq. (5). Covariates x_S,i are fixed ideator characteristics (home-country GDP, gender, and western background) and a measure of competition at the time of registration (number of designs that had already been submitted). The vector β′_S contains the estimated regression coefficients, and ε_S,i is the error term for the selection equation. Equations (2) and (3) are the outcome equations and model design quality conditional on the covariates x_O1,i and x_O2,i (professional designers, time since registration, and counts of ideators' submitted designs, comments, and ratings). The outcome equations are estimated separately for each treatment: Eq.
(2) is estimated for ideators who choose a non-cash prize, with the observed dependent variable y_O1,i, and Eq. (3) is estimated for ideators who choose the cash prize, with the observed dependent variable y_O2,i (note, however, that we use the same covariates in both; that is, x_O1,i = x_O2,i). The two vectors β′_O1 and β′_O2 are separate sets of estimated regression coefficients for the covariates x_O1,i and x_O2,i, respectively. The error terms ε_S, ε_O1, and ε_O2 are trivariate normally distributed with zero mean and covariance [32] given by

(ε_S, ε_O1, ε_O2)′ ∼ N(0, Σ),   (6)

with

Σ = [ 1  ρ1  ρ2 ;  ρ1  1  ρ23 ;  ρ2  ρ23  1 ].   (7)

Results

Variable                 Mean    SD      Min    Max
Design Quality           3.33    0.66    1.00   4.78
Designs Submitted        2.26    2.65    1.00   24.00
Incentive: No Answer     0.55    0.50    0.00   1.00
Incentive: Cash          0.27    0.44    0.00   1.00
Comments Written         4.46    10.32   0.00   115.00
Ratings Submitted        7.11    18.82   0.00   165.00
Tenure                   31.14   22.06   0.14   70.44
log(GDP)                 10.28   0.61    8.44   11.25
Western                  0.47    0.50    0.00   1.00
Female                   0.61    0.49    0.00   1.00
Competition              290.89  171.55  0.00   577.00
Professional             0.36    0.48    0.00   1.00

Table 1: Descriptive statistics and correlations of ideators who made at least one design submission (N=260). [The pairwise correlation columns could not be reliably recovered from the extracted layout and are omitted.]

In the model, the presence of selection can be quantified by the statistical and substantive significance of the correlation coefficient ρ1 between the errors of the selection equation (Eq. 1) and the outcome equation for ideators who choose a non-cash prize (Eq. 2), and of the correlation coefficient ρ2 between the errors of the selection equation (Eq. 1) and the outcome equation for
If 𝜌$/& is zero, then the unmeasured factors which -0.07 0.00 0.50 -0.01 0.00 0.49 -0.21 0.00 171.55 influence whether an ideator chooses an incentive are independent of the unmeasured factors -0.01 0.00 0.48 Design Quality (1) Designs Submitted (2) Incentive: No Answer (3) Incentive: Cash (4) Design Quality (1) Comments Written (5) Designs Submitted (2) Ratings Submitted (6) Incentive: No Answer (3) Tenure (7) Incentive: Cash (4) log(GDP) (8) Comments Written (5) Western (9) Ratings Submitted (6) Female (10) Competition (11) Tenure (7) Professional (12) log(GDP) (8) Western (9) Female (10) Competition (11) Professional (12) 5 Max (1) 4.78 24.00 1.00 Max (1) 1.00 4.78 115.00 24.00 165.00 1.00 70.44 1.00 11.25 115.00 1.00 165.00 1.00 577.00 70.44 1.00 11.25 1.00 1.00 577.00 1.00 Mean 3.33 2.26 0.55 Mean 0.27 3.33 4.46 2.26 7.11 0.55 31.14 0.27 10.28 4.46 0.47 7.11 0.61 290.89 31.14 0.36 10.28 0.47 0.61 290.89 0.36 -0.16 0.57 -0.08 0.27 -0.07 0.03 -0.09 -0.04 0.02 0.12 0.03 -0.28 0.07 -0.01 0.04 0.29 0.08 -0.06 0.14 0.00 0.00 0.02 -0.07 -0.08 -0.02 -0.04 0.29 0.08 -0.06 0.14 0.00 0.00 0.02 -0.07 -0.08 -0.02 -0.04 0.22 -0.14 0.30 0.10 0.21 -0.02 -0.06 0.09 -0.22 -0.03 0.22 -0.14 0.30 0.10 0.21 -0.02 -0.06 0.09 -0.22 -0.03 -0.66 0.21 0.11 0.11 -0.07 0.04 -0.04 -0.11 -0.06 -0.66 0.21 0.11 0.11 -0.07 0.04 -0.04 -0.11 -0.06 -0.16 -0.08 -0.07 0.03 -0.04 0.03 0.07 0.04 (4) (2) (5) (3) (7) -0.06 0.01 0.04 -0.99 0.07 -0.06 0.01 0.04 -0.99 0.07 Table 1: Descriptive statistics and correlations of ideators who made at least one design submission (N=260) (8) (9) (10) (11) 0.40 0.02 0.08 -0.12 -0.03 -0.01 0.10 -0.04 -0.10 -0.08 0.40 0.02 0.08 -0.12 -0.03 -0.01 0.10 (7) (8) (9) (10) (11) -0.04 -0.10 -0.08 which determine the quality of the design produced by that ideator. If 𝜌$/& is positive then the Table 1: Descriptive statistics and correlations of ideators who made at least one design submission (N=260) unmeasured factors that lead an ideator to choose an incentive are positively correlated with the unmeasured factors that lead them to produce designs of higher quality. If, on the other hand, 1 𝜌$/& is negative then the unmeasured factors that lead an ideator to choose an incentive are 1 negatively correlated with the unmeasured factors that lead them produce designs of higher quality. We estimate the equations simultaneously using maximum-likelihood in R [29] using the SampleSelection package [32]. 4 Construction of Dependent Variable: In this section we provide additional details how we constructed the baseline rating of design quality. We measure the dependent variable, Design Quality, for all designs submitted to the contest using the Consensual Assessment Technique (CAT; Amabile, 1982). We recruited an independent jury that was blind to the research hypotheses from reliable and experienced workers on Amazon Mechanical Turk. 1 This panel evaluated each product based on the following six dimensions: (1) creativity, (2) novel use of materials (e.g., materials are used in a unique way), (3) novel association (e.g., unique or unusual association with existing products or objects), (4) variation of materials used (e.g., different materials, number of colors, originality), (5) level of detail and complexity (e.g., of the design or decoration), and (6) appearance (i.e., how good it would look in a home or office). Workers were instructed to make relative assessments based on their own definition of creativity [2]. 
Construction of Dependent Variable: In this section we provide additional details on how we constructed the baseline rating of design quality. We measure the dependent variable, Design Quality, for all designs submitted to the contest using the Consensual Assessment Technique (CAT; Amabile, 1982). We recruited an independent jury that was blind to the research hypotheses from reliable and experienced workers on Amazon Mechanical Turk. [Footnote 1: AMT offers a mechanism to restrict the pool of eligible workers using various qualifications. We restricted our task to workers with the following qualifications: approval rate for all prior tasks greater than or equal to 99%, and number of tasks approved greater than or equal to 10,000. That is, selecting experienced workers is nothing we did specifically, but is a feature directly available on AMT.] This panel evaluated each product based on the following six dimensions: (1) creativity, (2) novel use of materials (e.g., materials are used in a unique way), (3) novel association (e.g., unique or unusual association with existing products or objects), (4) variation of materials used (e.g., different materials, number of colors, originality), (5) level of detail and complexity (e.g., of the design or decoration), and (6) appearance (i.e., how good it would look in a home or office). Workers were instructed to make relative assessments based on their own definition of creativity [2]. As part of the instructions, workers were shown a grid of nine randomly selected designs to facilitate this relative assessment. Workers evaluated designs in random order, and for each design, the assessment items were arranged in random order. We collected five evaluations for each design, resulting in 2,927 ratings of each of the six assessment items (17,562 ratings in total) from a total of 77 different raters. We apply the technique suggested by Ipeirotis, Provost, and Wang (2010) to identify and then remove unreliable workers. The technique identified 17 low-quality raters who submitted ratings with extremely low information content (e.g., rating completely at random or submitting identical ratings for all five items). These raters collectively submitted 432 ratings (15%). After cleaning the ratings from low-quality raters, 2,495 ratings remain (that is, 14,970 ratings in total), with an average of 4.3 ratings per design and 42 ratings per rater. [Footnote 2: Robustness tests including all ratings and not dropping low-quality raters do not substantively change our conclusions, but explained variance (R2) is lower, supporting the notion that low-quality raters added only noise.] We paid workers on average $4.20 for their effort.

The key premise of a crowdsourcing contest is to attract submissions from a diverse pool of participants [23]. Hence, crowdsourcing contests are most effective when they are geared toward reaching out to outsiders to solicit creative ideas. Research has now shown that 1) online panels such as those from Amazon Mechanical Turk are appropriate [34], 2) expertise does not significantly affect the quality of assessments [30], 3) high correlation exists between the assessments from experts and laypeople across different evaluation methods [4], and 4) assessments from panels of consumers can be even better than expert panels [24]. Overall, we conceptualize the performance of ideators in crowdsourcing idea contests as the quality of the best idea that an individual submits, rather than effort (e.g., measured in time spent on the task or the length of the submitted idea). Prior research on ideation has often defined performance as the average quality of ideas or the number of ideas generated by an individual, ignoring that most organizations seek a few great ideas [17]. Consequently, in case an ideator made multiple design submissions, we use the quality of the ideator's best submission. That is, at the individual level, the performance of an ideator is measured as the quality of the best idea that the individual submitted. A focus on ideators' best idea rather than average idea quality is also more consistent with the nature of a rank-order tournament in which prizes are only awarded to the contest winners. [Footnote 3: We do perform robustness tests using an ideator's average quality instead and find substantively similar results. See the section on robustness tests for more details.] Cronbach's alpha of 0.9 indicates high internal consistency of the six assessment items. Intercoder reliability ICC(2,k) for the aggregated scale (all six items) is 0.70, indicating good inter-rater agreement [7]. Individual-item ICC(2,k) values range from 0.59 to 0.66.
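For readers who want to see how these reliability statistics are computed, here is a hedged Python sketch of Cronbach's alpha and the Shrout-Fleiss ICC(2,k) (two-way random, average measures) on simulated ratings; the data and noise levels are illustrative, not the study's.

```python
import numpy as np

rng = np.random.default_rng(3)

def cronbach_alpha(items):
    """items: (n_designs, n_items) matrix of item scores."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def icc_2k(Y):
    """Y: (n_targets, k_raters) fully crossed rating matrix."""
    n, k = Y.shape
    mr, mc, m = Y.mean(axis=1), Y.mean(axis=0), Y.mean()
    ms_r = k * ((mr - m) ** 2).sum() / (n - 1)            # between targets
    ms_c = n * ((mc - m) ** 2).sum() / (k - 1)            # between raters
    ms_e = ((Y - mr[:, None] - mc[None, :] + m) ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (ms_c - ms_e) / n)     # ICC(2,k)

# Simulated example: 200 designs, 5 raters, 6 assessment items.
true_q = rng.normal(size=(200, 1))
ratings = true_q + 0.6 * rng.normal(size=(200, 5))
items = true_q + 0.4 * rng.normal(size=(200, 6))
print(round(cronbach_alpha(items), 2), round(icc_2k(ratings), 2))
```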
The quality of ideas, i.e., their creativity, is critically important for an innovation's success and ultimately its market success [24], and is thus most important from a managerial perspective.

Sample Designs – Example designs submitted to the consumer innovation contest. [Images omitted.]

Robustness Tests Using Alternative Measures of Quality: Since the goal in rank-order contests is to win the contest, we focused the analyses presented in the main paper on the quality of the best idea submitted by an ideator. We show correlation coefficients of three different quality measures in the table below and find substantively similar results for any of the three quality measures. Not surprisingly, the maximum and mean quality are highly correlated (ρ = 0.88; p < .001). We find substantively similar results using average quality as the dependent variable.

Measure                                            Mean   SD     Min    Max    (1)       (2)
(1) Design Quality Best Idea (AMT; main measure)   3.33   0.66   1.00   4.78   –         –
(2) Average Design Quality (AMT)                   3.14   0.61   1.00   4.50   0.88***   –
(3) Average Community Rating Best Idea             3.56   0.73   1.00   4.90   0.37***   0.28***

Table 5: Correlation of quality measures (N=259).

Appendix 3. Study 1 – Descriptive statistics and correlations of main study variables of ideators who made at least one design submission (N = 259).

Study 1 – Scraplab Field Experiment

Variable                        Mean    SD      Min    Max
Design Quality                  3.33    0.66    1.00   4.78
Designs Submitted               2.26    2.65    1.00   24.00
Incentive: No Answer            0.55    0.50    0.00   1.00
Incentive: Cash                 0.27    0.44    0.00   1.00
Comments Written                4.46    10.32   0.00   115.00
Ratings Submitted               7.11    18.82   0.00   165.00
Tenure                          31.14   22.06   0.14   70.44
log(GDP)                        10.28   0.61    8.44   11.25
Western                         0.47    0.50    0.00   1.00
Female                          0.61    0.49    0.00   1.00
Designs Prior to Registration   290.89  171.55  0.00   577.00
Professional                    0.36    0.48    0.00   1.00

Table 1: Study 1 - Descriptive statistics and correlations of main study variables of ideators who made at least one design submission (N = 259). [Pairwise correlations omitted; not recoverable from the extracted layout.]

Incentive     N      (%)    Percent Female   > 0 Effort Percent
No Choice     281    (23%)  56%              50%
Cash          333    (28%)  72%              21%
Donation      171    (14%)  82%              6%
Internship    170    (14%)  80%              18%
Workshop      127    (11%)  97%              5%
Party         123    (10%)  90%              1%
Total         1,205         75%              21%

Table 2: Study 1 - Incentive choices and activity. We provide the number of participants who chose each of the prizes. We give the percentage of ideators (among the total of 1,205) who chose a given prize in parentheses.

Appendix 2. Study 1 - Bivariate Probit of joint decision to answer the incentive question and participate (make at least one design submission).
Sample: All registered ideators.

                                 Choose Incentive (1)   > 0 Effort (2)
Intercept                        3.208*** (0.834)       4.968*** (0.918)
log(GDP)                         0.312*** (0.081)       −0.539*** (0.086)
Western                          0.606*** (0.093)       −0.281** (0.127)
Female                           0.514*** (0.092)       −0.428*** (0.094)
Designs Prior to Registration    0.049 (0.038)          0.038 (0.036)
Choose Incentive: Yes            –                      0.621** (0.261)
ρ (p-value, χ² test of ρ = 0)    −0.863 (0.002)***
Total edf: 12; Num. obs.: 1,205; Pseudo-R²: 0.088
***p < 0.01, **p < 0.05, *p < 0.1

Table 4: Study 1 - Bivariate Probit of joint decision to answer the incentive question and participate (make at least one design submission). Sample: All registered ideators. [Signs in column (2) are restored from the accompanying text; elsewhere, minus signs may have been lost in extraction.]

Choice and becoming active. Interestingly, the 77% (N=924) of participants who actively chose an incentive show a lower probability of submitting an idea (12.8%, N=118) than the 23% (N=281) of participants who made no choice. No-choice participants show a much higher probability of submitting an idea (50%, N=140; Table 1 in the main text). To further explore the effect of incentive choice on becoming active, we conducted a bivariate probit model [18]. This model allowed us to further explore the effect of the sequential choice of an incentive followed by the choice to exert effort and make a submission. The analysis shows that choice has a positive effect on becoming active when controlling for participants' individual characteristics. Personal characteristics – high-income country (β = −.54; p < 0.01), western background (β = −.28; p < 0.05), and female (β = −.43; p < 0.01) – are strongly negatively correlated with submitting a design. Controlling for these characteristics reveals a positive effect of choosing an incentive (β = .62; p < 0.05) on becoming active. Contrary to our descriptive observation, we find that choice significantly increases the likelihood of becoming active by 38.7%.
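The following is a minimal, hedged Python sketch of a bivariate probit likelihood in the spirit of Greene [18], fit on simulated data; the covariate, coefficient signs, and optimizer settings are illustrative assumptions, not the study's specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Simulate two correlated binary decisions ("choose an incentive" and
# "> 0 effort") driven by one illustrative covariate.
n = 300
x = np.column_stack([np.ones(n), rng.normal(size=n)])
b1_true, b2_true, rho_true = np.array([0.3, -0.5]), np.array([-0.2, -0.4]), -0.6
L = np.linalg.cholesky(np.array([[1, rho_true], [rho_true, 1]]))
e = rng.normal(size=(n, 2)) @ L.T
choose = (x @ b1_true + e[:, 0] > 0).astype(int)
effort = (x @ b2_true + e[:, 1] > 0).astype(int)

def negloglik(theta):
    b1, b2, rho = theta[:2], theta[2:4], np.tanh(theta[4])
    q1, q2 = 2 * choose - 1, 2 * effort - 1       # map {0,1} to {-1,+1}
    a, b = q1 * (x @ b1), q2 * (x @ b2)
    # Bivariate probit likelihood: Phi2(q1*x'b1, q2*x'b2, q1*q2*rho).
    ll = [np.log(multivariate_normal.cdf([ai, bi], mean=[0, 0],
                                         cov=[[1, ri], [ri, 1]]))
          for ai, bi, ri in zip(a, b, q1 * q2 * rho)]
    return -np.sum(ll)

# Coarse derivative-free fit; slow but adequate for a small illustration.
res = minimize(negloglik, np.zeros(5), method="Nelder-Mead")
print(res.x[:4], np.tanh(res.x[4]))   # coefficient and rho estimates
```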
Appendix 3. Study 1 - Logistic regression of non-cash incentive preference. Sample: All registered ideators who expressed incentive preference.

Dependent Variable: Non-Cash Preference   (1)
Intercept                                 0.91 (1.67)
log(GDP)                                  0.05 (0.16)
Western                                   0.97*** (0.18)
Female                                    0.38* (0.19)
AIC 1152.77; Log Likelihood −572.39; Deviance 1144.77; Num. obs. 924
***p < 0.01; **p < 0.05; *p < 0.1

Table 3: Study 1 - Logistic regression of non-cash incentive preference. Sample: All registered ideators who expressed incentive preference. (Coefficient signs as extracted.)

Appendix 4. Study 2 (Volunteer Science Sample): Study design and description of participant choices.

Treatment Condition                    Total   Incentive Received: No Prize | Cash | Non-Cash | Indifferent   Comment
Assigned No Prize                      13      13 (100%) | – | – | –
Assigned Cash                          23      – | 23 (100%) | – | –
Assigned Non-Cash                      26      – | – | 26 (100%) | –
Choice 1 (cash/non-cash)               41      – | 23 (56%) | 18 (44%) | –          no sig. difference
Choice 2 (cash/indifferent/non-cash)   60      – | 22 (37%) | 16 (27%) | 22 (37%)   no sig. difference
Choice 3 (cash/opt-out)                45      13 (29%) | 32 (71%) | – | –          cash sig. more popular (p < 0.007)
Total                                  208     26 | 100 | 60 | 22

Table 6: Study 2 (Volunteer Science Sample): Study design and description of participant choices.

Treatment Condition                    Total   Incentive Received: No Prize | Cash | Non-Cash | Indifferent   Comment
Assigned No Prize                      21      21 (100%) | – | – | –
Assigned Cash                          20      – | 20 (100%) | – | –
Assigned Non-Cash                      29      – | – | 29 (100%) | –
Choice 1 (cash/non-cash)               22      – | 22 (100%) | 0 (0%) | –           cash sig. more popular (p < 0.001)
Choice 2 (cash/indifferent/non-cash)   46      – | 37 (80%) | 4 (9%) | 5 (10%)      cash sig. more popular (p < 0.001)
Choice 3 (cash/opt-out)                48      2 (4%) | 46 (96%) | – | –            cash sig. more popular (p < 0.001)
Total                                  186     23 | 125 | 33 | 5

Table 7: Study 3 (Online Labor Market Sample): Study design and description of participant choices.

Treatment Condition                    Total   Incentive Received: Cash | Non-Cash   Comment
For-Profit: Assigned Cash              9       9 (100%) | –
For-Profit: Assigned Non-Cash          13      – | 13 (100%)
For-Profit: Choice (cash/non-cash)     39      30 (77%) | 9 (23%)                   cash sig. more popular than non-cash (p = 0.001)
Non-Profit: Assigned Cash              9       9 (100%) | –
Non-Profit: Assigned Non-Cash          11      – | 11 (100%)
Non-Profit: Choice (cash/non-cash)     39      27 (69%) | 12 (31%)                  cash sig. more popular than non-cash (p = 0.02)
Total                                  120     75 | 45                              cash equally popular in for-profit vs. non-profit (p = 0.79)

Table 8: Study 4 (Framing Study): Study design and description of participant choices. No significant difference between choosing cash in for-profit vs. non-profit framing (30 out of 39 vs. 27 out of 39; p = 0.6).

Appendix 5. Study 3 (Framing Study): Study design and description of participant choices (see Table 8). No significant difference between choosing cash in for-profit vs. non-profit framing (30 out of 39 vs. 27 out of 39; p = 0.6).
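As a hedged sanity check on the table comments above, exact binomial tests against an even split appear to reproduce the reported p-values in Tables 6 and 8 (the paper does not state which exact tests it used, so the test choice here is our assumption):

```python
from scipy.stats import binomtest, fisher_exact

# Two-sided exact binomial tests of the choice shares against 50/50.
print(binomtest(32, 45).pvalue)   # Table 6, Choice 3: ~0.007
print(binomtest(30, 39).pvalue)   # Table 8, for-profit choice: ~0.001
print(binomtest(27, 39).pvalue)   # Table 8, non-profit choice: ~0.02

# For-profit vs. non-profit cash shares (30/39 vs. 27/39): Fisher's exact
# test on the 2x2 table gives a clearly non-significant difference, in line
# with the caption's p = 0.6.
print(fisher_exact([[30, 9], [27, 12]])[1])
```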
Appendix 6. Study 3: Social value orientation is a strong predictor of preference for non-cash incentives across both for-profit and non-profit contexts.

Dependent Variable: Non-Cash Preference   (1)              (2)              (3)
Intercept                                 3.68*** (1.09)   3.78*** (1.10)   4.19** (1.78)
SVO                                       4.20*** (1.56)   4.13*** (1.55)   4.74* (2.62)
Context: Non-Profit                       –                0.29 (0.55)      0.95 (2.24)
SVO × Context: Non-Profit                 –                –                0.99 (3.25)
AIC                                       84.55            86.28            88.18
Log Likelihood                            −40.28           −40.14           −40.09
Deviance                                  80.55            80.28            80.18
Num. obs.                                 78               78               78
***p < 0.01; **p < 0.05; *p < 0.1

Table 10: Study 3: Social value orientation is a strong predictor of preference for non-cash incentives across both for- and non-profit contexts. (Coefficient signs as extracted.)
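A hedged Python sketch of the Table 10 specification follows; the study's data are not available, so the data here are simulated and the coefficients, variable names, and sample construction are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated stand-in for the N = 78 sample: SVO score, a non-profit context
# dummy, and a binary non-cash preference generated from an assumed logit.
df = pd.DataFrame({
    "svo": rng.normal(size=78),
    "nonprofit": rng.integers(0, 2, size=78),
})
logit_p = -0.5 + 1.2 * df["svo"]                  # illustrative coefficients
df["noncash"] = (rng.random(78) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Models (1)-(3): SVO only, SVO plus context, and the interaction model.
for formula in ("noncash ~ svo",
                "noncash ~ svo + nonprofit",
                "noncash ~ svo * nonprofit"):
    fit = smf.logit(formula, data=df).fit(disp=False)
    print(formula, fit.params.round(2).to_dict(), "AIC:", round(fit.aic, 1))
```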
Appendix 7. Study 4 (Online Labor Market Sample): Study design and description of participant choices (see Table 7).

REFERENCES - Appendix

1. Acar, O.A. Harnessing the creative potential of consumers: money, participation, and creativity in idea crowdsourcing. Marketing Letters, 29, 2 (2018), 177–188.
2. Amabile, T.M. Social psychology of creativity: A consensual assessment technique. Journal of Personality and Social Psychology, 43, 5 (1982), 997–1013.
3. Amabile, T.M., Hennessey, B.A., and Grossman, B.S. Social influences on creativity: The effects of contracted-for reward. Journal of Personality and Social Psychology, 50, 1 (1986), 14.
4. Blohm, I., Riedl, C., Füller, J., and Leimeister, J.M. Rate or Trade? Identifying Winning Ideas in Open Idea Sourcing. Information Systems Research, 27, 1 (2016), 27–48.
5. Boudreau, K.J., Lacetera, N., and Lakhani, K.R. Incentives and Problem Uncertainty in Innovation Contests: An Empirical Analysis. Management Science, 57, 5 (2011), 843–863.
6. Cappa, F., Rosso, F., and Hayes, D. Monetary and Social Rewards for Crowdsourcing. Sustainability, 11, 10 (2019), 2834.
7. Cicchetti, D.V. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6, 4 (1994), 284–290.
8. Deci, E.L. Effects of externally mediated rewards on intrinsic motivation. Journal of Personality and Social Psychology, 18, 1 (1971), 105.
9. Deci, E.L. Notes on the theory and metatheory of intrinsic motivation. Organizational Behavior and Human Performance, 15, 1 (1976), 130–145.
10. Deci, E.L. and Cascio, W.F. Changes in intrinsic motivation as a function of negative feedback and threats. (1972).
11. Ederer, F. and Manso, G. Is Pay For Performance Detrimental to Innovation? Management Science, 59, 7 (2013), 1496–1513.
12. Eisenberger, R. and Rhoades, L. Incremental Effects of Reward on Creativity. Journal of Personality and Social Psychology, 81, 4 (2001), 728.
13. Eisenberger, R. and Selbst, M. Does Reward Increase or Decrease Creativity? Journal of Personality and Social Psychology, 66, 6 (1994), 1116–1127.
14. Erat, S. and Gneezy, U. Incentives for Creativity. Experimental Economics, 19, 2 (2016), 269–280.
15. Fischer, C., Malycha, C.P., and Schafmann, E. The influence of intrinsic motivation and synergistic extrinsic motivators on creativity and innovation. Frontiers in Psychology, 10, (2019), 137.
16. Fullerton, R.L., Linster, B.G., McKee, M., and Slate, S. An experimental investigation of research tournaments. Economic Inquiry, 37, 4 (1999), 624–636.
17. Girotra, K., Terwiesch, C., and Ulrich, K.T. Idea generation and the quality of the best idea. Management Science, 56, 4 (2010), 591–605.
18. Greene, W.H. Econometric Analysis. Prentice Hall, Boston, MA, 2011.
19. Hamner, W.C. and Foster, L.W. Are intrinsic and extrinsic rewards additive: A test of Deci's cognitive evaluation theory of task motivation. Organizational Behavior and Human Performance, 14, 3 (1975), 398–415.
20. Heyman, J. and Ariely, D. Effort for payment – A tale of two markets. Psychological Science, 15, 11 (November 2004), 787–793.
21. Hofstetter, R., Zhang, Z.J., and Herrmann, A. The Hidden Pitfall of Innovation Prizes. (2017).
22. Ipeirotis, P.G., Provost, F., and Wang, J. Quality management on Amazon Mechanical Turk. Proceedings of the ACM SIGKDD Workshop on Human Computation – HCOMP '10, (2010).
23. Jeppesen, L.B. and Lakhani, K.R. Marginality and problem solving effectiveness in broadcast search. Organization Science, 21, 5 (2010), 1016–1033.
24. Kornish, L.J. and Ulrich, K.T. The Importance of the Raw Idea in Innovation: Testing the Sow's Ear Hypothesis. Journal of Marketing Research, 51, 1 (2014), 14–26.
25. Liu, T.X., Yang, J., Adamic, L.A., and Chen, Y. Crowdsourcing with all-pay auctions: A field experiment on Taskcn. Management Science, 60, 8 (2014).
26. Moghaddam, E.N., Aliahmadi, A., Bagherzadeh, M., Markovic, S., Micevski, M., and Saghafi, F. Let me choose what I want: The influence of incentive choice flexibility on the quality of crowdsourcing solutions to innovation problems. Technovation, 120, (February 2023), 102679.
27. Pinder, C.C. Additivity versus nonadditivity of intrinsic and extrinsic incentives: Implications for work motivation, performance, and attitudes. Journal of Applied Psychology, 61, 6 (1976), 693.
28. Pritchard, R.D., Campbell, K.M., and Campbell, D.J. Effects of extrinsic financial rewards on intrinsic motivation. Journal of Applied Psychology, 62, 1 (1977), 9.
29. R Core Team. R: A Language and Environment for Statistical Computing. 2015.
30. Riedl, C., Blohm, I., Leimeister, J.M., and Krcmar, H. The Effect of Rating Scales on Decision Quality and User Attitudes in Online Innovation Communities. International Journal of Electronic Commerce, 17, 3 (2012), 7–36.
31. Sittenthaler, H.M. and Mohnen, A. Cash, non-cash, or mix? Gender matters! The impact of monetary, non-monetary, and mixed incentives on performance. Journal of Business Economics, 90, (2020), 1253–1284.
32. Toomet, O. and Henningsen, A. Sample Selection Models in R: Package sampleSelection. Journal of Statistical Software, 27, 7 (2008).
33. Toubia, O. Idea generation, creativity, and incentives. Marketing Science, 25, 5 (2006), 411–425.
34. Toubia, O. and Netzer, O. Idea Generation, Creativity, and Prototypicality. Marketing Science, 36, 1 (2017), 1–20.
Preprint

LEARNING MAMBA AS A CONTINUAL LEARNER

Chongyang Zhao, Dong Gong∗
University of New South Wales (UNSW Sydney)
{chongyang.zhao,dong.gong}@unsw.edu.au
(∗ D. Gong is the corresponding author.)

ABSTRACT

Continual learning (CL) aims to efficiently learn and accumulate knowledge from a data stream with different distributions. By formulating CL as a sequence prediction task, meta-continual learning (MCL) enables meta-learning an efficient continual learner based on recent advanced sequence models, e.g., Transformers. Although attention-free models (e.g., Linear Transformers) can ideally match CL's essential objective and efficiency requirements, they usually do not perform well in MCL. Considering that the attention-free Mamba achieves excellent performances matching Transformers' on general sequence modeling tasks, in this paper, we aim to answer a question – Can attention-free Mamba perform well on MCL? By formulating Mamba with a selective state space model (SSM) for MCL tasks, we propose to meta-learn Mamba as a continual learner, referred to as MambaCL. By incorporating a selectivity regularization, we can effectively train MambaCL. Through comprehensive experiments across various CL tasks, we also explore how Mamba and other models perform in different MCL scenarios. Our experiments and analyses highlight the promising performance and generalization capabilities of Mamba in MCL.

1 INTRODUCTION

Continual learning (CL) aims to efficiently learn and accumulate knowledge in a non-stationary data stream (De Lange et al., 2021; Wang et al., 2024) containing different tasks. Given a sequence of data D_T = ((x1, y1), ..., (xt, yt), ..., (xT, yT)) with a series of paired observations xi (e.g., images) and targets yi (e.g., class labels) from different tasks, CL is usually formulated to learn one model Pϕt(y|x) parameterized by ϕt that can perform prediction for any tasks corresponding to the seen data Dt. For example, in class incremental learning (CIL) (Rebuffi et al., 2017; Zhou et al., 2023), a widely studied CL scenario, D_T consists of data with incrementally added classes, and Pϕt(y|x) is trained to recognize all previously seen classes. To ensure computational and memory efficiency, CL methods are explored for learning from data streams while minimizing the storage of historical data or limiting running memory growth, such as restricting the increase rate to be constant or sub-linear (De Lange et al., 2021; Ostapenko et al., 2021). The main challenge in CL is to preserve performance on previously seen tasks while continually updating the model parameters ϕt (De Lange et al., 2021; Wang et al., 2024).

CL methods continually train/update the model Pϕt(y|x) from the seen sequence Dt at an arbitrary step t and perform predictions on any observation x^test (following the seen data distribution) for the corresponding y^test. From this perspective, the whole learning and inference process in CL can be seen as a sequence prediction (SP) problem, i.e., predicting y^test of a query x^test conditioning on the seen data sequence and the testing input, i.e., (D^train_t, x^test) ≡ (x^train_1, y^train_1, ..., x^train_t, y^train_t, x^test) (Lee et al., 2023; Bornschein et al., 2024). In conventional CL, the model parameter ϕt is trained to maintain the states on the sequence, i.e., knowledge in the historical data, in a way of ϕ_{t+1} = optim-step(ϕt, xt, yt).
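To make the conventional-CL notation concrete, here is a minimal, hedged PyTorch sketch of the per-sample update ϕ_{t+1} = optim-step(ϕ_t, x_t, y_t); the classifier, dimensions, and stream are illustrative placeholders, not components of any cited method.

```python
import torch
import torch.nn.functional as F

# Conventional online CL: one optimizer step per streamed sample, so the
# model parameters phi_t play the role of the "state" over the sequence.
model = torch.nn.Linear(784, 100)            # hypothetical classifier
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def optim_step(x_t, y_t):
    """Single continual-learning update on one (x_t, y_t) pair."""
    opt.zero_grad()
    loss = F.cross_entropy(model(x_t), y_t)
    loss.backward()
    opt.step()                               # phi_t -> phi_{t+1}

# Each sample is seen exactly once (online CL); tasks arrive sequentially.
stream = [(torch.randn(1, 784), torch.randint(100, (1,))) for _ in range(5)]
for x_t, y_t in stream:
    optim_step(x_t, y_t)
```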
This connection between sequence prediction and the CL training process motivates us to investigate meta-learning a continual learner as a sequence prediction model, for computation-and-data-efficient CL. Through the meta-continual learning (MCL) framework (Lee et al., 2023; Son et al., 2023), a continual learner fθ() parameterized by θ is trained via sequence prediction on multiple CL episodes. A meta-learned fθ() can take a given sequence (Dt, xt+1) as input and predict the label yt+1 = fθ((Dt, xt+1)), which is equivalent to a predictive model conditioning on the seen data stream Pθ(y|x, Dt). The data stream can also be seen as a context of the tasks for performing prediction for a new query.
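The following hedged sketch illustrates this MCL objective: a sequence model consumes the interleaved stream (x1, y1, ..., xt, yt, xt+1) and is meta-trained across episodes to predict the next label. SeqLearner and its GRU core are illustrative stand-ins for any sequence model (Transformer, Linear Transformer, Mamba), not the paper's architecture.

```python
import torch

class SeqLearner(torch.nn.Module):
    """Toy sequence model f_theta over interleaved (x, y) token streams."""
    def __init__(self, dim=64, n_cls=10):
        super().__init__()
        self.embed_x = torch.nn.Linear(32, dim)
        self.embed_y = torch.nn.Embedding(n_cls, dim)
        self.core = torch.nn.GRU(dim, dim, batch_first=True)  # placeholder core
        self.head = torch.nn.Linear(dim, n_cls)

    def forward(self, xs, ys):
        # Interleave (x_1, y_1, ..., x_T, y_T); predict y_t from the prefix
        # ending at x_t, so a label token never leaks into its own prediction.
        ex, ey = self.embed_x(xs), self.embed_y(ys)
        tokens = torch.stack([ex, ey], dim=2).flatten(1, 2)    # (B, 2T, dim)
        h, _ = self.core(tokens)
        return self.head(h[:, 0::2])                           # states at x-tokens

f_theta = SeqLearner()
opt = torch.optim.Adam(f_theta.parameters(), lr=1e-3)
for _ in range(3):                       # outer loop over CL episodes
    xs = torch.randn(1, 20, 32)          # one episode's stream of inputs
    ys = torch.randint(10, (1, 20))      # episode-local symbolic labels
    logits = f_theta(xs, ys)
    loss = torch.nn.functional.cross_entropy(logits.flatten(0, 1), ys.flatten())
    opt.zero_grad(); loss.backward(); opt.step()
```

Because labels are episode-local symbolic indicators, the meta-learned model must learn an input-to-label binding procedure from context rather than fixed class semantics.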
Given its exceptional performance as an attention-free model with a constant hidden state size, which ideally aligns with the requirements of MCL, rather than relying on Transformers (Lee et al., 2023), we pose a concrete question: Can the attention-free model Mamba perform well in MCL? In this paper, we investigate this question by formulating the selective SSM and Mamba to handle MCL, referred to as MambaCL. We iden- tify that it is not trivial to train the sequence prediction models, including Mamba, for MCL, due to difficulty in convergence. To address the issue, we introduce a selectivity regularizer relying on the connection across SSM/Mamba and Linear Transformers and Transformers, which guides the behaviour of the generated time-variant parameters of the selective SSM during training. Relying on the specifically designed regularization and customized designs, we achieve an effective Mam- baCL model for MCL. Beyond the scope of the existing work (Lee et al., 2023) focusing on basic MCL formulation and setting, we expand the formulation and studies to more realistic scenarios and try to answer – how can different models (including Transformers and Mamba) perform in differ- ent MCL tasks. Our experiments and analyses show that Mamba can perform well on most of the MCL scenarios. Mamba performs significantly better than other attention-free methods, e.g., Lin- ear Transformers; Mamba can match or outperform the performances of Transformers with fewer parameters and computations. Specifically, on some challenging with more global structures across the sequences (e.g., fine-grained data) and many challenging scenarios (e.g., domain shifts and long sequences), Mamba can perform more reliably and effectively than Transformers, demonstrating better generalization and robustness. Additionally, we analyzed the influence of the model design and conducted preliminary studies to explore the potential of model variants of Mamba, e.g., Mamba mixture-of-experts (MoE), in MCL. 2 RELATED WORK Continual learning focuses on mitigating catastrophic forgetting, a significant challenge in model training across sequential tasks (De Lange et al., 2021; Wang et al., 2024). The predominant ap- proaches to continual learning are categorized into three main types: replay-based, regularization- based, and architecture-based methods. Replay-based methods, such as maintaining a memory 2 Preprint buffer for old task data, effectively prevent forgetting but are constrained by buffer size and potential privacy issues (Rebuffi et al., 2017; Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019; Buzzega et al., 2020). Alternatively, generative models can approximate previous data distributions to pro- duce pseudo-samples (Shin et al., 2017; Rostami et al., 2019; Riemer et al., 2019). Regularization- based strategies (Kirkpatrick et al., 2017; Zenke et al., 2017; Nguyen et al., 2017; Li & Hoiem, 2017; Aljundi et al., 2018; Zhang et al., 2020) mitigate forgetting by penalizing changes to critical param- eters of previous tasks and employing knowledge distillation to retain earlier knowledge. Lastly, architecture-based methods (Yoon et al., 2017; Serra et al., 2018; Li et al., 2019; Yan et al., 2021; Ye & Bors, 2023) allocate specific subsets of parameters to individual tasks, utilizing techniques like task masking or dynamic architecture adjustment to minimize task interference. Meta-learning is a learning paradigm where models improve their ability to adapt to new tasks by leveraging limited data and prior experience. 
The bi-level optimization framework of meta-learning is inherently suited for continual learning, as it focuses on balancing the fit for current tasks while maintaining generalization across all previously encountered tasks (Riemer et al., 2018; Beaulieu et al., 2020; Gupta et al., 2020; Wu et al., 2024). Meta-continual learning (MCL) deviates from traditional continual learning settings by incorporating multiple continual learning episodes, structured into meta-training and meta-testing sets (Son et al., 2023). Lee et al. (2023) conceptualize MCL as a sequence modeling problem, aligning the continual learning objectives with the autoregressive models typical in language modeling. OML (Javed & White, 2019) employs a dual-architecture approach, updating a prediction network while keeping the encoder static during training, then optimizing both components in meta-testing for stability. MetaICL (Min et al., 2022) introduces a meta-training framework for natural language in-context learning. MetaICL shares a common mathematical formulation with MCL, while the underlying functions to be fitted are distinct. Compared to text sequences, the problems we address are inherently more complex, requiring the learning of more intricate functions and making the learning process more challenging.

Transformer architecture is esteemed for its superior sequence modeling capabilities, largely attributed to its attention mechanism (Vaswani et al., 2017). Decoder-only models like GPT (Brown et al., 2020) and Llama (Touvron et al., 2023), which process inputs causally, have significantly propelled the success of modern deep learning. Although Transformers employing softmax-based attention benefit from efficient parallel training, they encounter challenges due to their quadratic computational complexity relative to sequence length. This has prompted a shift towards more RNN-like models capable of linear-time sequence modeling. As a viable alternative, linear attention substitutes the traditional exponential similarity function with a simple dot product across transformed key/query vectors, gaining traction through recent advancements (Katharopoulos et al., 2020; Choromanski et al., 2020; Tay et al., 2020).

State Space Models (SSMs), inspired by traditional state-space models (Kalman, 1960), have recently emerged as a promising architecture for sequence modeling (Gu et al., 2021a;b). Mamba incorporates time-varying parameters into the SSM framework through a selective architecture and enhances training and inference efficiency with a hardware-aware algorithm (Gu & Dao, 2023; Dao & Gu, 2024). It is widely applied in fields such as computer vision and natural language processing (Zhu et al., 2024; Zhang et al., 2024; Han et al., 2024; Lieber et al., 2024).

3 PROBLEM FORMULATION AND METHODOLOGY

In Continual Learning (CL), a non-stationary data stream D^{train}_T = ((x_1, y_1), ..., (x_t, y_t), ..., (x_T, y_T)) is given as training data, where x_t ∈ X_t and y_t ∈ Y_t. A predictive model g_{ϕ_t}(·): X → Y is trained on the stream (at step t) as P_{ϕ_t}(y | x) for a potential testing set D^{test} = {(x_n, y_n)}^{N}_{n=1}, where x_n ∈ X and y_n ∈ Y, with the same distribution as the training set. A conventional continual learner is manually crafted for continually updating/optimizing the model parameter ϕ_t. The data stream D^{train}_T usually consists of data from different tasks or distributions and is usually piecewise stationary within the interval of a task. In a general online CL setting, each sample point can only be seen once; if the samples belonging to one task can be held and accessed as a batch, it is offline CL. We mainly consider the online CL setting, as sketched below.
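As a minimal illustration of this setup (our sketch; make_online_stream and the learner update are hypothetical names, not from the paper), an online CL stream can be built by concatenating per-task samples, giving the piecewise-stationary, single-pass stream described above:

import random

# D_T = ((x_1, y_1), ..., (x_T, y_T)): samples arrive one at a time, and the
# stream is piecewise stationary -- consecutive samples come from one task.
def make_online_stream(tasks, shots_per_task):
    """tasks: list of lists of (x, y) pairs, one list per task/distribution."""
    stream = []
    for task_data in tasks:                 # tasks/distributions arrive sequentially
        stream += random.sample(task_data, shots_per_task)
    return stream                           # ((x_1, y_1), ..., (x_T, y_T))

# Online CL: each sample is seen exactly once by the learner, e.g.,
# for x_t, y_t in make_online_stream(tasks, shots_per_task=5):
#     model.update(x_t, y_t)               # hypothetical per-step learner update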
For achieving efficient CL, Meta-Continual Learning (MCL) (Lee et al., 2023; Son et al., 2023) is formulated to meta-learn a parameterized continual learner that can efficiently learn/update a predictive model (in X → Y) from samples in a data stream (in X × Y).

Figure 1: The overall framework of our proposed methods. We meta-train a Mamba Learner f_θ(·) to perform meta-continual learning (MCL) by processing an online data stream containing paired (x, y) examples. Meta-learning of this continual learner is conducted across multiple CL episodes. The model produces predictions by relying on the retained hidden state. Here, we demonstrate how the Mamba learner recurrently processes input data at steps 0, 2, and t − 1, respectively.

Considering that the model (continually) learned from a sequence D^{train}_t is deployed for prediction given a testing sample x^{test}, i.e., P_θ(y^{test} | x^{test}, D^{train}_t), MCL is equivalent to learning a functional model over the predictive model functions. MCL can thus be treated as the task of learning a sequence prediction model f_θ(·): (X × Y) × X → Y parameterized by θ. f_θ(·) can continually take streaming data as input and make predictions for any testing sample in a D^{test}_t conditioning on D^{train}_t via ŷ^{test} = f_θ(x^{test}, D^{train}_t). The learner updates internal hidden states to reflect the continually taken data samples, corresponding to a CL process. Given multiple episodes with (D^{train}, D^{test}), the parameter θ of the learner can be learned in the meta-learning/updating process by optimizing the performance on all D^{test}. Note that the targets y in different episodes are independent; they are only symbolic indicators without general semantic meaning across episodes. In this work, we focus on MCL based on a parameterized sequence prediction model for general purposes, despite the existence of other types of meta-learning schemes (Finn et al., 2017; Javed & White, 2019).

3.1 PRELIMINARIES: TRANSFORMERS, LINEAR TRANSFORMERS, AND SSMS

Transformers produce next-token predictions in a sequence relying on a self-attention mechanism (Vaswani et al., 2017). Given a sequence of N vectors in M dimensions denoted as Z ∈ R^{N×M}, the vanilla self-attention is formulated with a softmax attention method:

Q = ZW_Q, \quad K = ZW_K, \quad V = ZW_V, \qquad u_t = \sum_{j=1}^{N} \frac{\exp\big(Q_t K_j^\top / \sqrt{d}\big)}{\sum_{j'=1}^{N} \exp\big(Q_t K_{j'}^\top / \sqrt{d}\big)} V_j,    (1)

where W_Q ∈ R^{M×C}, W_K ∈ R^{M×C}, and W_V ∈ R^{M×C} are the projection weight matrices, u_t ∈ R^C denotes the output embedding, and C is the hidden dimension. Q_t, K_j, and V_j denote the indexed (row) vectors in the corresponding matrices. The notation fonts are slightly abused to be consistent with the literature. Each input token generates a key-value pair, leading to a linearly increasing key-value cache size. Softmax attention measures the similarities between the query-key pairs, leading to O(N²) complexity.
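To make the growing-state behavior concrete, the following minimal NumPy sketch (ours, not from the paper; all names are illustrative) runs the causal form of Eq. (1) as a recurrence over an explicit key-value cache:

import numpy as np

def causal_softmax_attention_step(q_t, k_t, v_t, kv_cache):
    """One autoregressive step of Eq. (1): attend over all cached keys/values."""
    kv_cache.append((k_t, v_t))              # the cache grows by one pair per step
    K = np.stack([k for k, _ in kv_cache])   # (t, d): all keys K_j with j <= t
    V = np.stack([v for _, v in kv_cache])   # (t, C): all values V_j with j <= t
    d = q_t.shape[-1]
    scores = K @ q_t / np.sqrt(d)            # Q_t K_j^T / sqrt(d) for j <= t
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                       # softmax over the preceding tokens
    return attn @ V                          # u_t: attention-weighted sum of values

The cache stores one (K_t, V_t) pair for every seen token, so the "hidden state" of this learner is the entire cache: memory grows as O(t) per step and O(N²) compute over a length-N sequence, which is exactly the linearly growing state discussed above.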
Linear Transformer (Katharopoulos et al., 2020) reduces this complexity by relying on a linear attention method. By applying a feature representation function ϕ(·) corresponding to a kernel to Q and K, linear attention replaces the softmax attention with a linear operation:

u_t = \sum_{j=1}^{N} \frac{Q_t K_j^\top}{\sum_{j'=1}^{N} Q_t K_{j'}^\top} V_j = \frac{Q_t \big(\sum_{j=1}^{N} K_j^\top V_j\big)}{Q_t \big(\sum_{j=1}^{N} K_j^\top\big)},    (2)

where Q = ϕ(ZW_Q), K = ϕ(ZW_K), V = ZW_V, and ϕ(·) is set as ϕ(x) = elu(x) + 1 in (Katharopoulos et al., 2020). Performer employs ϕ(x) = exp(xW_p − ∥x∥²/2), with W_p comprising orthogonal random vectors (Choromanski et al., 2020). By rearranging (QK^⊤)V as Q(K^⊤V) according to the associative property, the computational complexity is reduced to O(N).

In practice, the attention operations in Eq. (1) and (2) can be implemented in autoregressive models, where the calculation of u_t can only see the preceding tokens with j ≤ t. Specifically, with causal masking, the linear attention can be rewritten as:

u_t = \frac{Q_t \big(\sum_{j=1}^{t} K_j^\top V_j\big)}{Q_t \big(\sum_{j=1}^{t} K_j^\top\big)} = \frac{Q_t S_t}{Q_t G_t}, \qquad S_t = S_{t-1} + K_t^\top V_t, \quad G_t = G_{t-1} + K_t^\top,    (3)

where S_t = \sum_{j=1}^{t} K_j^\top V_j and G_t = \sum_{j=1}^{t} K_j^\top. This enables recurrent computation of causal linear attention by cumulatively updating S_t and G_t, which serve as internal hidden states. In the autoregressive process with causal masking, the softmax attention operation in Eq. (1) in a Transformer can be seen as a recurrent process based on an accumulated set of key-value pairs {(K_j, V_j)}_{j=1}^{t} as a hidden state (Katharopoulos et al., 2020).

Structured state space sequence models (SSM or S4) (Gu & Dao, 2023; Gu et al., 2021a; Dao & Gu, 2024) are sequence models describing a system that maps an input z_t ∈ R to an output u_t ∈ R through a hidden state h_t ∈ R^{C×1} in a discrete sequence, applied with neural networks. Specifically, SSMs can be formulated with parameters A ∈ R^{C×C}, B ∈ R^{C×1}, C ∈ R^{1×C}, and D ∈ R, as

h_t = A h_{t-1} + B z_t, \qquad u_t = C h_t + D z_t.    (4)

We directly formulate a discrete SSM in Eq. (4), where A and B are transformed from a continuous version A′ and B′ relying on a timescale parameter ∆ ∈ R, via A = exp(∆A′) and B = A^{-1}(A − I) · ∆B′. A and B perform the selection or gating in hidden state updating.

3.2 SSM AND MAMBA FOR META-CONTINUAL LEARNING

Selective SSM & Mamba in MCL. The dynamics of the basic SSM or S4 are time-invariant, restricting the model's ability to handle complex sequences. Mamba (Gu & Dao, 2023) incorporates a selective SSM into the model by generating input-dependent SSM parameters to reflect the input/step-sensitive selection process. The selective SSM can be written as:

h_t = A_t h_{t-1} + B_t z_t, \qquad u_t = C_t h_t + D z_t,    (5)

where A_t, B_t, and C_t are produced in Mamba relying on the input token at step t. Different from Transformers maintaining key-value pairs for all input tokens (leading to a linearly increasing state size), Mamba compresses the context information into a fixed/constant-size hidden state, matching the efficiency requirements and original objective of CL. In our MCL tasks and other practical scenarios, we need Mamba to handle an input sequence Z ∈ R^{N×M} with each token a vector z_t ∈ R^M. Mamba applies the selective SSM to each dimension/channel independently:

H_t = [A_{t,i} h_{t-1,i} + B_{t,i} z_{t,i}]_{i=1}^{M}, \qquad u_t = C_t H_t + D ⊙ z_t,    (6)

where H_t ∈ R^{C×M} is a concatenation of the hidden states corresponding to all M dimensions of the input embedding, C_t ∈ R^{1×C}, D ∈ R^{1×M}, and u_t ∈ R^{1×M}.

Figure 2: Illustration of the designs of the Mamba block.
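For contrast with the growing key-value cache, both the causal linear attention of Eq. (3) and the selective SSM step of Eq. (5)–(6) can be sketched as constant-size-state recurrences. The NumPy sketch below is ours and is deliberately simplified (A_t is treated as a per-channel diagonal and shapes are illustrative); it is not the official Mamba implementation:

import numpy as np

def linear_attention_step(q_t, k_t, v_t, S, G):
    """Causal linear attention, Eq. (3): fixed-size state (S, G)."""
    S = S + np.outer(k_t, v_t)           # S_t = S_{t-1} + K_t^T V_t, shape (d, C)
    G = G + k_t                          # G_t = G_{t-1} + K_t^T, shape (d,)
    u_t = (q_t @ S) / (q_t @ G)          # u_t = Q_t S_t / (Q_t G_t); denominator
    return u_t, S, G                     # assumed nonzero in this sketch

def selective_ssm_step(z_t, H, A_t, B_t, C_t, D):
    """Selective SSM, Eq. (5)-(6), with input-dependent (A_t, B_t, C_t).
    H has fixed shape (C, M) regardless of sequence length; A_t and B_t are
    (C, M) arrays (diagonal-A simplification), z_t is (M,), C_t is (C,)."""
    H = A_t * H + B_t * z_t[None, :]     # H_t[:, i] = A_{t,i} h_{t-1,i} + B_{t,i} z_{t,i}
    u_t = C_t @ H + D * z_t              # u_t = C_t H_t + D ⊙ z_t
    return u_t, H

In both cases the state never grows with the number of seen tokens, which is the property that aligns these recurrences with the objective of CL.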
As shown in Fig. 2, the Mamba block used in our work applies a 1-D convolution on the input tokens and then projects the representations to obtain the input-dependent SSM parameters (Dao & Gu, 2024). Multiple Mamba blocks are stacked homogeneously. Relying on the selective mechanism (Gu & Dao, 2023), Mamba's ability to handle complex MCL tasks can be stronger than that of other attention-free models and competitive with or better than that of Transformers with a key-value cache.

3.2.1 MCL WITH MAMBA AS A CONTINUAL LEARNER

We train a Mamba model f_θ(·) to perform CL by processing an online data stream containing paired (x, y) examples; the model can produce predictions relying on the retained hidden state for all the seen tasks. Meta-learning of such a continual learner is conducted on multiple CL episodes. Each CL episode contains a training data stream D^{train} and a testing set D^{test} from the same task distribution, denoted as P(X, Y) with (D^{train}, D^{test}) ∼ P(X, Y). For example, in ICL, all classes used for testing should have been seen among the preceding classes in the data stream. The objective of MambaCL is to meta-learn the parameters of the Mamba model, i.e., θ, to perform the prediction ŷ^{test} = f_θ((D^{train}, x^{test})) for any (x^{test}_i, y^{test}_i) ∈ D^{test}. The CL task can be treated as a next-token prediction problem in a sequence: (x^{train}_1, y^{train}_1, ..., x^{train}_T, y^{train}_T, x^{test}_k) → y^{test}_k. The meta-learning of a Mamba continual learner can be performed by optimizing the sequence prediction task on a series of sampled CL episodes:

\min_\theta \; \mathbb{E}_{(D^{train}, D^{test}) \sim P(X, Y)} \sum_{(x^{test}, y^{test}) \in D^{test}} \ell(f_\theta((D^{train}, x^{test})), y^{test}),    (7)

where ℓ(·, ·) denotes a proper loss function for different tasks, e.g., classification or regression. On the data stream, the meta-learned Mamba f_θ(·) recognizes the association relationship between x and y through the sequence, and then recurrently updates the hidden state H_t, which can be used for prediction, as shown in Fig. 1. This efficient online CL process selects and compresses the knowledge in the data stream in a time-variant and content-aware selective manner. To further validate the extension ability of Mamba in MCL, we also explore the potential of incorporating a mixture-of-experts (MoE) architecture into the Mamba model (Fedus et al., 2022; Pioro et al., 2024) for learning and mixing multiple learners.

Target token embeddings. The value of the target y is essentially a symbol with a consistent indication meaning for x within each episode, which does not carry any global meaning across episodes. The model is thus trained to handle arbitrary CL episodes with the ability to generalize to different domains. Instead of pre-defining a small and fixed feasible set of candidate targets, e.g., classes, and a restricted prediction head, we conduct token embeddings for targets based on a universal and large vocabulary (Lee et al., 2023), inspired by the tokenization in LMs (Sennrich, 2015; Devlin et al., 2019). For each episode, a subset of unique codes is randomly picked from the vocabulary to indicate the different classes; in inference, the sequence model produces the probability of the next step for all possible tokens in the vocabulary. Instead of conducting experiments of meta-training and meta-testing with the same number of classes (Lee et al., 2023), we also conduct generalization analyses.
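Concretely, the episode-to-sequence construction and the meta-objective of Eq. (7) can be sketched as follows (a hypothetical PyTorch outline of ours; episode_to_sequence, embed_x, embed_y, and f_theta are illustrative placeholders, and y targets are assumed to be per-episode random vocabulary codes stored as scalar LongTensors):

import torch
import torch.nn.functional as F

def episode_to_sequence(train_pairs, x_test, embed_x, embed_y):
    """Interleave (x_1, y_1, ..., x_T, y_T, x_test) into one token sequence."""
    tokens = []
    for x, y in train_pairs:
        tokens += [embed_x(x), embed_y(y)]   # two tokens per training sample
    tokens.append(embed_x(x_test))           # the query token at the end
    return torch.stack(tokens)               # shape (2T + 1, M)

def meta_loss(f_theta, episodes, embed_x, embed_y):
    """Outer-loop objective of Eq. (7): average query loss over CL episodes."""
    losses = []
    for train_pairs, test_pairs in episodes:
        for x_test, y_test in test_pairs:
            seq = episode_to_sequence(train_pairs, x_test, embed_x, embed_y)
            logits = f_theta(seq)[-1]        # next-token logits at the query position
            losses.append(F.cross_entropy(logits.unsqueeze(0), y_test.view(1)))
    return torch.stack(losses).mean()

Minimizing this meta-loss over many sampled episodes is the outer loop; the inner "learning" is the recurrent hidden-state update performed while f_theta consumes the stream.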
3.2.2 REGULARIZING SELECTIVITY OF MAMBA FOR META-TRAINING

It is non-trivial to meta-learn the continual learner for associating the input and target by seeing a data stream, for both Transformers and attention-free models. The meta-training can be slow to converge or may fail to find an optimal solution. We thus provide additional guidance in meta-training by enhancing the association between the query tokens (i.e., testing inputs) and the correlated preceding tokens. During training, for an input x (corresponding to a pair (x, y)) in the stream at step 2t + 1, after 2t tokens of t samples, its association relationship with the preceding tokens can be represented as p_{2t+1} = [1_{y_{2t+1}}(y_1), 1_{y_{2t+1}}(y_1), ..., 1_{y_{2t+1}}(y_t), 1_{y_{2t+1}}(y_t)] with p ∈ {0, 1}^{2t}, where 1_y(y′) is an indicator function with 1_y(y′) = 1 if y = y′ and 1_y(y′) = 0 if y ≠ y′; each of the t samples contributes two tokens (its x token and its y token), so each indicator appears twice. We hope the meta-learned learner can also identify and use this pattern in CL (i.e., meta-testing).

Transformers maintain the key-value pairs of all samples as the state. For prediction at a step, attention is applied to all the stored keys through a query, retrieving the learned information. As shown in Eq. (1), for the token at step 2t + 1, the attention weights/patterns with respect to the previous-step tokens can be denoted as q^{Trans}_{2t+1} = [Q_{2t+1} K_j^\top]_{j=1}^{2t} ∈ R^{2t}. Note that we omit the normalization terms in the attention weights to simplify the presentation. The meta-learning guidance can be applied by encouraging similarity between q^{Trans}_{2t+1} and p_{2t+1}.

Mamba and other attention-free methods (e.g., Linear Transformer) compress knowledge into a hidden state at each step, as shown in Eq. (3) and (5). Specifically, Mamba applies an input-dependent selection and gating at each step. Although there are no explicit attention weights produced in Mamba, we formulate the regularization for the selectivity of Mamba by bridging the selective SSM (in Eq. (5) and (6)) with linear attention (in Eq. (3)) and softmax attention (in Eq. (1)). As shown in Eq. (3), the Linear Transformer updates the state S (and the normalization term G) using kernel-based K and V, and performs prediction based on Q. Considering that the K, V, and Q in linear attention share the same meaning as in softmax attention, we can still obtain q^{LNTrans}_{2t+1} = [Q_{2t+1} K_j^\top]_{j=1}^{2t} by storing the K_j of intermediate tokens only during training for regularization. By examining the duality relationship between the SSM in Eq. (5) and the formulation of the Linear Transformer in Eq. (3) (Dao & Gu, 2024), we can identify the connections between the selective parameters, i.e., C_t and B_t, in the SSM and the query-key embeddings, i.e., Q_t and K_t, in linear attention. Relying on linear attention as the bridge, we obtain the associative indicators of Mamba as q^{Mamba}_{2t+1} = [C_{2t+1} B_j^\top]_{j=1}^{2t}.

To regularize the models' attention or selection behavior in meta-training, for a query sample (x, y) in a sequence, we apply a selectivity regularization:

\ell_{slct}((x, y)) = \mathrm{KL}\big(p_{\mathrm{idx}((x,y))}, \; q^{*}_{\mathrm{idx}((x,y))}\big),    (8)

where idx(·) indicates the step of the token x, ∗ indicates an arbitrary model, and the KL divergence is used to minimize the difference between the model's association pattern and the ground truth. Note that this regularization and the maintained intermediate components are not necessary in inference. We apply this regularization to MambaCL and other sequence prediction models (weighted by a scalar λ) together with the MCL objective in Eq. (7), which improves the meta-training stability and convergence for all models.
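A sketch of this regularizer (our illustrative PyTorch rendition of Eq. (8); normalizing the binary pattern p into a distribution before the KL term is one possible design choice, and the function names are ours):

import torch
import torch.nn.functional as F

def selectivity_loss(assoc_scores, y_query, y_stream):
    """Selectivity regularizer of Eq. (8), as a sketch.
    assoc_scores: the model's association scores over the 2t preceding tokens
      (Q_{2t+1} K_j^T for Transformers, C_{2t+1} B_j^T for Mamba), shape (2t,).
    y_stream: labels y_1..y_t of the preceding samples; each sample contributes
      two tokens (x_j then y_j), hence the repeat_interleave(2)."""
    p = (y_stream == y_query).float().repeat_interleave(2)   # ground-truth pattern
    p = p / p.sum().clamp(min=1e-8)                          # normalize to a distribution
    log_q = F.log_softmax(assoc_scores, dim=-1)              # model's pattern
    return (p * ((p + 1e-8).log() - log_q)).sum()            # KL(p || q)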
4 EXPERIMENTS AND ANALYSES

Experimental setup. To evaluate the performance of various architectures across multiple types of tasks, we conducted a series of experiments. First, we divided one dataset into multiple tasks, typically with each task representing a distinct class within the dataset. We distributed these tasks into two non-overlapping sets, i.e., meta-training and meta-testing. The construction of CL episodes for both meta-groups follows the same procedure: for each CL episode, we randomly select K distinct tasks. K is set to 20 by default. We also investigated scenarios with different values of K. By default, each task in both the training and testing sequences includes five samples (5-shot). Settings with fewer and more shots were also explored to further assess adaptability and learning efficiency.

Datasets. We conduct experiments across various datasets: general image classification tasks include Cifar-100 (Krizhevsky & Hinton, 2009), ImageNet-1K (Russakovsky et al., 2015), ImageNet-R (Russakovsky et al., 2015), MS-Celeb-1M (Celeb) (Guo et al., 2016), CASIA Chinese handwriting (Casia) (Liu et al., 2011), and Omniglot (Lake et al., 2015); fine-grained recognition tasks involve CUB-200 (Wah et al., 2011), Stanford Dogs (Khosla et al., 2011), Stanford Cars (Krause et al., 2013), and FGVC-Aircraft (Aircraft) (Maji et al., 2013); the large domain shift tasks feature DomainNet (Peng et al., 2019); and regression tasks consist of sine wave reconstruction (sine), image rotation prediction (rotation), and image completion (completion).

Implementation details. We conduct our main experiments on a single NVIDIA A100 GPU. We repeated each experiment five times and report the mean and standard deviation of these runs. Results are reported upon convergence on the meta-training set. The batch size is set to 16, and the Adam optimizer is applied. We set the initial learning rate to 1 × 10−4, with decays of 0.5 every 10,000 steps. For all models, we ensure a consistent setup to enable fair comparisons and make sure all models achieve satisfactory results, with additional details provided in Sec. B. Specifically, for experiments involving training from scratch, we adopt the settings from (Lee et al., 2023) to maintain fairness. For the networks built on pre-trained models, we use OpenAI/CLIP-ViT-B16 (Radford et al., 2021; Ilharco et al., 2021) as our image encoder, with its parameters frozen during training and an additional trainable linear projector.

4.1 EXPERIMENTAL RESULTS AND ANALYSES

In our experiments, we assess several models, including OML (Javed & White, 2019), Vanilla Transformer (Vaswani et al., 2017), Linear Transformer (Katharopoulos et al., 2020), Performer (Choromanski et al., 2020), and our MambaCL. OML serves as a conventional SGD-based meta-continual learning baseline, featuring a two-layer MLP prediction network on top of a meta-learned encoder. Transformers exhibit advanced sequence modeling capabilities, but they may not be optimal for CL due to computational inefficiencies and their misalignment with the objectives of CL.
To enhance efficiency, Linear Transformer and Performer utilize kernel methods and linear operations to approximate softmax attention, which maintains a constant hidden state size and exhibits linear computational complexity. All transformer models share a similar structure, each with 4 layers and 512 hidden dimensions. Mamba is an attention-free model optimized for efficiently processing long sequences with near-linear computational demands. Our Mamba Learner also utilizes 4 layers and 512 hidden dimensions, facilitating comparison with the transformer models, yet it features significantly fewer parameters.

General image classification tasks. Tables 1 and 2 present comparative performance analyses of different architectures on several general image classification tasks, initiating training from scratch and extracting image representations based on a pre-trained model, respectively.

Table 1: Classification accuracy (%) across 20-task 5-shot MCL, training from scratch on general image classification tasks. The best and second best performances are indicated in red and blue, respectively.

             Cifar-100               Omniglot                Casia                   Celeb
Method       Meta-Train  Meta-Test   Meta-Train  Meta-Test   Meta-Train  Meta-Test   Meta-Train  Meta-Test
OML          99.4±0.1    10.1±0.4    99.9±0.0    75.2±2.2    97.2±0.1    96.8±0.1    58.2±0.3    57.5±0.2
Transformer  100.0±0.0   17.2±0.8    100.0±0.0   86.3±0.6    99.7±0.0    99.6±0.0    70.9±0.2    70.0±0.2
Linear TF    99.9±0.1    16.6±0.5    100.0±0.0   64.0±1.4    99.6±0.0    99.3±0.0    68.9±0.3    67.6±0.3
Performer    100.0±0.0   17.1±0.3    99.9±0.1    62.9±4.6    99.5±0.0    99.3±0.0    67.5±0.5    66.3±0.2
Mamba        99.9±0.1    18.3±0.4    100.0±0.0   87.7±0.5    99.8±0.1    99.5±0.1    69.4±0.2    68.1±0.1

Table 2: Classification accuracy (%) across 20-task 5-shot MCL, training from pre-trained models on general image classification tasks.

Method       Cifar-100  ImageNet-1K  ImageNet-R  Celeb     Casia     Omniglot
OML          64.4±0.4   90.5±0.3     67.5±0.3    72.8±0.1  81.5±0.5  90.4±0.2
Transformer  62.7±0.7   93.5±0.1     63.6±0.2    78.4±0.1  93.8±0.2  94.4±0.2
Linear TF    54.3±0.7   89.1±0.2     55.7±0.3    76.5±0.2  90.9±0.4  86.5±0.5
Performer    53.4±0.3   90.8±0.5     52.8±0.9    76.8±0.1  93.0±0.3  89.3±0.3
Mamba        67.1±0.4   93.6±0.2     69.7±0.4    77.0±0.1  93.1±0.2  95.9±0.2

In Table 1, within the CIFAR-100 dataset, all methods suffer from substantial meta-overfitting, as evidenced by the large gap between meta-training and meta-testing scores. This may be attributed to the lower task (class) diversity. In Table 2, the results for continual learners built on pre-trained models exhibit similar trends on CIFAR-100 and ImageNet-R. Furthermore, Mamba demonstrates superior performance compared to other methods in these scenarios, underscoring its robustness against overfitting. On larger datasets such as ImageNet-1K, Casia, and Celeb, Mamba performs on par with or surpasses transformers. Without losing generality, we use the pre-trained image representations for our experiments by default.

Table 3: Classification accuracy (%) across 20-task 5-shot MCL on fine-grained recognition tasks.

Method       CUB-200   Dogs      Cars      Aircraft
OML          78.7±0.6  72.4±0.5  83.6±0.7  49.5±0.2
Transformer  81.4±0.4  77.5±0.6  87.0±0.3  53.9±0.7
Linear TF    69.7±0.7  69.7±0.7  76.6±0.8  49.0±0.7
Performer    69.2±0.8  69.4±0.4  73.9±0.8  48.6±0.6
Mamba        83.0±0.4  79.2±0.5  88.3±0.4  55.3±0.6

Fine-grained recognition tasks. Table 3 presents a performance comparison of different architectures on fine-grained recognition datasets.
On fine-grained datasets, where only subtle differences exist between classes (e.g., the CUB-200 dataset, which contains 200 bird subcategories), models need to capture global information across the entire training episode to distinguish these fine-grained differences. Mamba outperforms other models across these datasets, potentially due to its ability to capture subtle inter-class distinctions.

4.2 GENERALIZATION ANALYSES

We expect a meta-learned learner to generalize to unseen scenarios. We conduct generalization analyses for Transformer models and Mamba in scenarios involving generalization to longer untrained sequence lengths, larger domain shifts, and sensitivity to noisy inputs during meta-testing. Additionally, to analyze the behaviors of these models, we visualize the attention weights of Transformers and the associative weights of Mamba to demonstrate their attention and selectivity patterns in Sec. D.

Generalization to different stream lengths. To effectively address episodes of continual learning of indefinite length, the learning algorithm should demonstrate the capability to generalize beyond the sequence lengths observed during meta-training. We conducted length generalization experiments on ImageNet-1K, training vanilla Transformers, linear Transformers, and Mamba on 20-task 5-shot MCL, each with a vocabulary of 200 tokens. The length of a continual learning episode is calculated as 2 × tasks × shots + 1.

Figure 3: Generalization analysis on ImageNet-1K, meta-trained on 20-task 5-shot MCL: (a) meta-testing on varying numbers of tasks (5-shot); (b) meta-testing on varying numbers of shots (20-task); (c) meta-testing on varying input noise intensity levels.

Meta-testing on different numbers of tasks. Fig. 3a shows the performance of the three models meta-trained on a 20-task, 5-shot setup, evaluated during meta-testing across varying numbers of tasks while keeping a constant shot number of 5. Both the Transformer and Linear Transformer suffer significant performance degradation when meta-testing at untrained episode lengths, even for simpler tasks such as the 10-task, 5-shot configuration. Mamba's meta-testing performance on the 10-task setup is better relative to the meta-trained 20-task setup, and its performance degradation is relatively mild compared to transformers as the number of tasks gradually increases.

Meta-testing on different numbers of shots. In Fig. 3b, we evaluate the performance of the three models meta-trained on a 20-task, 5-shot setup across varying numbers of shots during meta-testing while maintaining a constant task count of 20. Both the vanilla Transformer and linear Transformer exhibit significant performance degradation, likely due to overfitting the 20-task, 5-shot pattern. However, Mamba experiences only about a 10% performance degradation when the meta-testing shot number reaches 50, which is ten times the meta-training episode length. Figs. 3a and 3b demonstrate Mamba's robustness in length generalization.

Results and analyses on larger domain shift. We explore a larger domain shift scenario using the DomainNet dataset (containing 6 different domains) to further evaluate model generalization to unseen input distributions, with one domain reserved for meta-testing and the remaining domains for meta-training, which represents a more realistic setting. The experimental results are presented in Table 4.
Overall, these models demonstrate the capability to handle large domain shift scenarios. Mamba performs on par with or surpasses Transformer models across various target domains, benefiting from the potentially better generalization ability of a smaller model with less possibility of overfitting. Vanilla Transformers perform well when the targets are real images or paintings. Mamba excels particularly in the Quickdraw domain, which exhibits larger differences compared to the other domains. This performance may be attributed to Mamba's robustness in processing inputs with larger deviations from the training distribution.

Table 4: Classification accuracy (%) across 20-task 5-shot MCL on the DomainNet dataset. (inf,pnt,qdr,rel,skt→clp denotes meta-testing on the Clipart domain, with the remaining domains used for meta-training. clp: clipart, inf: infograph, pnt: painting, qdr: quickdraw, rel: real, skt: sketch.)

Method       →clp      →inf      →pnt      →qdr      →rel      →skt      Avg
Transformer  91.8±0.1  69.4±0.1  82.6±0.2  50.2±0.6  93.8±0.1  85.9±0.3  79.0±0.3
Linear TF    91.0±0.0  66.2±0.8  80.6±0.8  30.7±1.4  92.9±0.1  85.5±0.1  74.5±0.5
Performer    91.3±0.2  66.4±0.6  81.3±0.2  39.4±1.7  92.8±0.1  84.8±0.5  76.0±0.6
Mamba        91.7±0.2  70.2±0.2  81.8±0.2  55.6±0.8  93.0±0.1  87.2±0.3  79.9±0.3

Sensitivity to noisy inputs. To evaluate the sensitivity of different models to noisy inputs, we conduct experiments on meta-trained 20-task, 5-shot MCL models using the ImageNet-1K dataset. Within each meta-testing episode, we apply noise to the input embeddings x_i of five randomly selected samples. We add noise following a Gaussian distribution with a mean (µ) of 0 and a standard deviation (σ) ranging from 0 to 10. As depicted in Fig. 3c, the vanilla transformer and linear transformer suffer significant performance degradation. In contrast, Mamba demonstrates robust performance when processing inputs with high levels of noise.
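The noise-contamination protocol can be sketched as follows (an illustrative PyTorch snippet under our assumptions about tensor layout; contaminate_episode is our name, not the exact experimental code):

import torch

def contaminate_episode(x_embeds, num_noisy=5, sigma=1.0, generator=None):
    """Add Gaussian noise (mean 0, std sigma in [0, 10]) to the embeddings of
    a few randomly chosen shots of a meta-testing episode.
    x_embeds: (num_shots, embed_dim) input embeddings of one episode."""
    x_noisy = x_embeds.clone()
    idx = torch.randperm(len(x_embeds), generator=generator)[:num_noisy]
    noise_shape = (num_noisy,) + tuple(x_embeds.shape[1:])
    x_noisy[idx] += sigma * torch.randn(noise_shape, generator=generator)
    return x_noisy, idx                     # idx records the contaminated shots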
Table 5: Classification accuracy (%) and regression errors across 100-task 5-shot MCL.

Method       Casia     Celeb     Sine           Rotation     Completion
OML          93.2±0.9  45.5±0.2  0.0498±0.0004  0.524±0.087  0.1087±0.0001
Transformer  99.0±0.0  60.5±0.1  0.0031±0.0002  0.031±0.001  0.0989±0.0001
Linear TF    97.7±0.1  54.7±0.1  0.0139±0.0003  0.047±0.002  0.1084±0.0001
Mamba        99.1±0.1  59.9±0.1  0.0054±0.0001  0.025±0.001  0.0895±0.0001

4.3 TRAINING ON LONGER EPISODES

We conducted experiments to meta-train the models on longer episodes across both classification and regression tasks. Table 5 demonstrates that Mamba continues to perform comparably to Transformer, and significantly outperforms the SGD-based approach (OML).

4.4 ABLATION STUDIES

Hyper-parameter of the selectivity regularization loss. We conducted an ablation study to assess the influence of the training loss hyper-parameter on our Mamba model's efficacy. Specifically, this study involved adjusting the λ values within our selectivity regularization loss, experimenting with hyper-parameters set at {0.1, 0.2, 0.5, 1.0, 2.0}, as depicted in Fig. 4. The results indicate that these variations have a minor impact on our Mamba model's performance. Consequently, we selected a λ value of 0.5 for our experiments.

Figure 4: Ablation of varying λ in the training loss.

SSM state size. In Fig. 5, we evaluate the impact of varying the SSM state size on the performance of our methods. We conducted experiments on ImageNet-1K and Cifar-100, training MambaCL with state sizes of 16, 32, 64, 128, and 256. The results show consistent performance improvement as the state size increases. To balance performance and computational cost, we set the state size to 128.

Figure 5: Ablation of varying the SSM state size.

Table 6: Different Mamba architectures on 20-task 5-shot MCL.

Method       Cifar-100  ImageNet-1K
Transformer  62.7±0.7   93.5±0.1
Linear TF    54.3±0.7   89.1±0.2
Mamba-1      59.7±0.5   90.1±0.3
MambaFormer  62.4±0.6   92.7±0.1
Mamba-2      67.1±0.4   93.6±0.2
Mamba+MoE    68.9±0.2   94.0±0.2

Different architectures. In Table 6, we present an ablation study comparing different Mamba architectures in our MambaCL, including Mamba-1, MambaFormer (Park et al., 2024), and Mamba-2. MambaFormer is a hybrid model that interleaves the vanilla attention mechanism with Mamba-1 blocks and replaces the transformer's positional encoding with a Mamba block. The results in Table 6 demonstrate that MambaFormer achieved performance comparable to that of the transformer. However, Mamba-2 performed better on Cifar-100 than the other variants.

Mamba+MoE. In Table 6, we also present experiments where Mamba was enhanced with a Mixture of Experts (MoE), incorporating twelve 2-layer MLP expert networks with a dense-MoE router following each Mamba block, resulting in improved performance. Additionally, we include the performance of the vanilla and linear transformers for reference.

Table 7: Computational cost on 20-task 5-shot MCL.

Method  Params.↓  Inf. Speed↑
TF      9.2M      325 ep/s
Mamba   5.4M      858 ep/s

Computational cost. In Table 7, we detail various aspects of computational cost using our implementation in PyTorch, executed on an NVIDIA 4090 GPU and an Intel i9-14900K CPU. We specifically report the costs associated with meta-testing at a batch size of 1. Notably, Mamba, characterized by fewer parameters and increased processing speed, achieves performance that either matches or surpasses that of the vanilla transformer.

5 CONCLUSION

In this paper, we tried to answer a question: Can attention-free Mamba perform well in MCL? We formulate the SSM and Mamba as a sequence-prediction-based continual learner and meta-learn it on CL episodes. A selectivity regularization is introduced for meta-learning the models. Comprehensive experiments show that Mamba performs well across diverse MCL scenarios, significantly outperforming other attention-free methods and matching or exceeding Transformers' performance with fewer parameters and computations. In challenging scenarios with global structures, domain shifts, and long sequences, Mamba demonstrates notable reliability, generalization, and robustness.

Limitations and future work. This study can be extended to larger-scale datasets and offline CL settings. Beyond the current MCL framework, we aim to explore the online meta-continual learning paradigm to broaden the applicability of our approach to a wider range of scenarios.

REFERENCES

Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In ECCV, 2018.
Shawn Beaulieu, Lapo Frati, Thomas Miconi, Joel Lehman, Kenneth O Stanley, Jeff Clune, and Nick Cheney. Learning to continually learn. In ECAI, 2020.

Jorg Bornschein, Yazhe Li, and Amal Rannen-Triki. Transformers for supervised online continual learning. arXiv preprint arXiv:2403.01554, 2024.

Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NeurIPS, 2020.

Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and Simone Calderara. Dark experience for general continual learning: a strong, simple baseline. In NeurIPS, 2020.

Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc'Aurelio Ranzato. On tiny episodic memories in continual learning. arXiv preprint arXiv:1902.10486, 2019.

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020.

Tri Dao and Albert Gu. Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality. In ICML, 2024.

Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Aleš Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. TPAMI, 2021.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2019.

William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. JMLR, 2022.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.

Shivam Garg, Dimitris Tsipras, Percy S Liang, and Gregory Valiant. What can transformers learn in-context? A case study of simple function classes. In NeurIPS, 2022.

Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.

Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces. arXiv preprint arXiv:2111.00396, 2021a.

Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. Combining recurrent, convolutional, and continuous-time models with linear state space layers. In NeurIPS, 2021b.

Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, and Jianfeng Gao. MS-Celeb-1M: A dataset and benchmark for large-scale face recognition. In ECCV, 2016.

Gunshi Gupta, Karmesh Yadav, and Liam Paull. Look-ahead meta learning for continual learning. In NeurIPS, 2020.

Dongchen Han, Ziyi Wang, Zhuofan Xia, Yizeng Han, Yifan Pu, Chunjiang Ge, Jun Song, Shiji Song, Bo Zheng, and Gao Huang. Demystify mamba in vision: A linear attention perspective. In NeurIPS, 2024.
Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. OpenCLIP, July 2021. URL https://doi.org/10.5281/zenodo.5143773.

Khurram Javed and Martha White. Meta-learning representations for continual learning. In NeurIPS, 2019.

Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. J. Basic Eng., 1960.

Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are RNNs: Fast autoregressive transformers with linear attention. In ICML, 2020.

Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Fei-Fei Li. Novel datasets for fine-grained image categorization. In CVPRW, 2011.

James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proc. National Academy of Sciences, 2017.

Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3D object representations for fine-grained categorization. In ICCVW, 2013.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.

Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015.

Soochan Lee, Jaehyeon Son, and Gunhee Kim. Recasting continual learning as sequence modeling. In NeurIPS, 2023.

Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, and Caiming Xiong. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. In ICML, 2019.

Zhizhong Li and Derek Hoiem. Learning without forgetting. TPAMI, 2017.

Opher Lieber, Barak Lenz, Hofit Bata, Gal Cohen, Jhonathan Osin, Itay Dalmedigos, Erez Safahi, Shaked Meirom, Yonatan Belinkov, Shai Shalev-Shwartz, et al. Jamba: A hybrid transformer-mamba language model. arXiv preprint arXiv:2403.19887, 2024.

Cheng-Lin Liu, Fei Yin, Da-Han Wang, and Qiu-Feng Wang. CASIA online and offline Chinese handwriting databases. In ICDAR, 2011.

David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. In NeurIPS, 2017.

Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013.

Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. MetaICL: Learning to learn in context. In NAACL, 2022.

Cuong V Nguyen, Yingzhen Li, Thang D Bui, and Richard E Turner. Variational continual learning. arXiv preprint arXiv:1710.10628, 2017.

Oleksiy Ostapenko, Pau Rodriguez, Massimo Caccia, and Laurent Charlin. Continual learning via local module composition. In NeurIPS, 2021.

Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, and Dimitris Papailiopoulos. Can mamba learn how to learn? A comparative study on in-context learning tasks. In ICML, 2024.

Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In ICCV, 2019.

Maciej Pióro, Kamil Ciebiera, Krystian Król, Jan Ludziejewski, and Sebastian Jaszczur. MoE-Mamba: Efficient selective state space models with mixture of experts. arXiv preprint arXiv:2401.04081, 2024.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021.

Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. iCaRL: Incremental classifier and representation learning. In CVPR, 2017.

Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv preprint arXiv:1810.11910, 2018.

Matthew Riemer, Tim Klinger, Djallel Bouneffouf, and Michele Franceschini. Scalable recollections for continual lifelong learning. In AAAI, 2019.

Mohammad Rostami, Soheil Kolouri, and Praveen K Pilly. Complementary learning for overcoming catastrophic forgetting using experience replay. arXiv preprint arXiv:1903.04566, 2019.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander Berg, and Fei-Fei Li. ImageNet large scale visual recognition challenge. IJCV, 2015.

Rico Sennrich. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.

Joan Serra, Didac Suris, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. In ICML, 2018.

Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In NeurIPS, 2017.

Jaehyeon Son, Soochan Lee, and Gunhee Kim. When meta-learning meets online and continual learning: A survey. arXiv preprint arXiv:2311.05241, 2023.

Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. Efficient transformers: A survey. arXiv preprint arXiv:2009.06732, 2020.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, and Aidan N Gomez. Attention is all you need. In NeurIPS, 2017.

Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset. Technical report, California Institute of Technology, 2011.

Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: Theory, method and application. TPAMI, 2024.

Yichen Wu, Long-Kai Huang, Renzhen Wang, Deyu Meng, and Ying Wei. Meta continual learning revisited: Implicitly enhancing online hessian approximation via variance reduction. In ICLR, 2024.

Shipeng Yan, Jiangwei Xie, and Xuming He. DER: Dynamically expandable representation for class incremental learning. In CVPR, 2021.

Fei Ye and Adrian G Bors. Self-evolved dynamic expansion model for task-free continual learning. In ICCV, 2023.

Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. arXiv preprint arXiv:1708.01547, 2017.

Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In ICML, 2017.

Junting Zhang, Jie Zhang, Shalini Ghosh, Dawei Li, Serafettin Tasci, Larry Heck, Heming Zhang, and C-C Jay Kuo. Class-incremental learning via deep model consolidation. In WACV, 2020.
Zeyu Zhang, Akide Liu, Ian Reid, Richard Hartley, Bohan Zhuang, and Hao Tang. Motion Mamba: Efficient and long sequence motion generation. In ECCV, 2024.

Da-Wei Zhou, Qi-Wei Wang, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan, and Ziwei Liu. Deep class-incremental learning: A survey. arXiv preprint arXiv:2302.03648, 2023.

Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. Vision Mamba: Efficient visual representation learning with bidirectional state space model. In ICML, 2024.

A DATASETS

A.1 GENERAL IMAGE CLASSIFICATION TASKS

Cifar-100 (Krizhevsky & Hinton, 2009) consists of 60,000 images across 100 classes, each with 600 images. We select 60 classes at random for meta-training and use the remaining 40 for meta-testing.

ImageNet-1K (Russakovsky et al., 2015) comprises over one million labeled images distributed across 1,000 categories. We select 600 classes at random for meta-training and use the remaining 400 for meta-testing.

ImageNet-R(endition) (Russakovsky et al., 2015) extends 200 ImageNet classes with a compilation of 30,000 images tailored for robustness research.

Celeb (Guo et al., 2016) is a large-scale facial image collection featuring approximately 10 million images of 100,000 celebrities. We randomly allocated 1,000 classes for meta-testing and assigned the remaining classes to meta-training.

Casia Chinese handwriting (Liu et al., 2011) encompasses a total of 7,356 character classes with 3.9 million images. We randomly selected 1,000 classes for meta-testing and allocated the remaining classes for meta-training.

Omniglot (Lake et al., 2015) is a collection of 1,623 handwritten characters from 50 different alphabets. The meta-training set comprises 963 classes, while the meta-testing set includes 660 classes, with each class containing 20 images.

A.2 FINE-GRAINED RECOGNITION TASKS

CUB-200-2011 (Wah et al., 2011) is a widely used fine-grained visual categorization dataset comprising 11,788 images across 200 bird subcategories. We randomly selected 80 classes for meta-testing and allocated the remaining classes for meta-training.

Stanford Dogs (Khosla et al., 2011) comprises 20,580 images spanning 120 global dog breeds, divided into 12,000 training images and 8,580 testing images. We select 48 classes at random for meta-testing and use the remaining 72 for meta-training.

Stanford Cars (Krause et al., 2013) comprises 16,185 images across 196 car classes, primarily captured from the rear perspective. We select 80 classes at random for meta-testing and use the remaining 116 for meta-training.

FGVC-Aircraft (Maji et al., 2013) comprises 10,200 images across 102 aircraft model variants, each represented by 100 images, primarily consisting of airplanes. We randomly selected 40 classes for meta-testing and allocated the remaining classes for meta-training.

A.3 LARGE DOMAIN SHIFT TASKS

DomainNet (Peng et al., 2019) is a benchmark for domain adaptation, encompassing common objects organized into 345 classes across six domains: clipart, real, sketch, infograph, painting, and quickdraw. We evaluate model adaptability to out-of-domain data by using one domain for meta-testing and the remaining domains for meta-training.

A.4 REGRESSION TASKS

Sine Wave Reconstruction (Sine). The sine wave ω(τ) = A sin(2πντ + ψ) is defined by its amplitude A, frequency ν, and phase ψ. We denote the target values y as evaluations of the sine wave at 50 predefined points: y = [ω(τ_1), ..., ω(τ_50)]. In each task, the frequency and phase remain constant, but the amplitude is allowed to vary. To corrupt y into x, we introduce a phase shift and Gaussian noise, where the phase shift is randomly selected for each task. The mean squared error between y and the model's prediction ŷ is reported as the evaluation criterion.
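A sketch of generating one such sine-reconstruction task (illustrative NumPy code of ours; the amplitude/frequency ranges and the noise scale are assumptions, as the paper does not specify them):

import numpy as np

def sine_episode(num_samples, rng=None):
    """One sine-reconstruction task: fixed frequency/phase per task, varying
    amplitude; inputs are phase-shifted, noise-corrupted copies of the target."""
    rng = rng or np.random.default_rng()
    tau = np.linspace(0.0, 1.0, 50)                          # 50 predefined points
    nu, psi = rng.uniform(0.5, 2.0), rng.uniform(0.0, 2 * np.pi)   # fixed per task
    shift = rng.uniform(0.0, 2 * np.pi)                      # fixed per task
    xs, ys = [], []
    for _ in range(num_samples):
        A = rng.uniform(0.1, 5.0)                            # amplitude varies
        y = A * np.sin(2 * np.pi * nu * tau + psi)           # target ω(τ)
        x = A * np.sin(2 * np.pi * nu * tau + psi + shift)   # phase-shifted copy
        xs.append(x + rng.normal(0.0, 0.1, size=tau.shape))  # + Gaussian noise
        ys.append(y)
    return np.stack(xs), np.stack(ys)                        # scored by MSE(ŷ, y)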
Image Rotation Prediction (Rotation). The model is provided with an image rotated by an angle ψ ∈ [0, 2π), and its task is to predict the rotation angle ψ̂. We use 1 − cos(ψ̂ − ψ) as the evaluation metric, where a perfect prediction results in a score of 0, while random guessing yields an average score of 1.0. The Casia dataset is employed, with each class treated as an individual task, maintaining the same meta-split configuration.

Image Completion (Completion). In this task, the model fills in the missing part of an image given the visible section. Using the Casia dataset, we modify the input x to consist of the top half of the image, while the target y is the bottom half. We report the mean squared error between y and the model's prediction ŷ as the evaluation criterion.

B ADDITIONAL EXPERIMENTAL DETAILS

Table 8 presents the configurations of the models employed in our experiments.

Table 8: Model configurations.

                          Mamba    Transformer  Linear TF  Performer
Batch size                16       16           16         16
Max train step            50000    50000        50000      50000
Optimizer                 Adam     Adam         Adam       Adam
Learning rate             1 × 10−4 1 × 10−4     1 × 10−4   1 × 10−4
Learning rate decay       Step     Step         Step       Step
Learning rate decay step  10000    10000        10000      10000
Learning rate decay rate  0.5      0.5          0.5        0.5
Regularization λ          0.5      0.5          0.5        0.5
Hidden dimension          512      512          512        512
Layers                    4        4            4          4
State size                128      -            -          -
Delta                     ✓        -            -          -
Convolution               4        -            -          -
Attention                 -        Softmax      Elu        Favor

C ADDITIONAL EXPERIMENTS

C.1 EFFECTS OF SELECTIVITY REGULARIZATION AND META-TRAINING LOSS CURVES

Due to the complexity of the MCL task, the regularization technique plays a crucial role in stabilizing and improving the training process. Fig. 6 shows the initial training phase (2,500 steps) for different models with and without selectivity regularization. Without regularization, the losses are 3–5 times higher than those of the models with regularization applied, which converge successfully; beyond 2,500 steps, the losses oscillate and no longer decrease. The results indicate that models without our regularization struggle to converge and exhibit significant oscillations during training, highlighting the effectiveness of the regularization.

C.2 MORE ABLATION STUDIES ON REGULARIZATION STRENGTH

In Fig. 4, we conducted an ablation study to assess the influence of regularization strengths on our Mamba's efficacy. Fig. 7 illustrates further ablation studies assessing the impact of the regularization strength λ, setting it as {0.1, 0.2, 0.5, 1.0, 2.0}, across multiple models on both the ImageNet-1K and Cifar-100 datasets. The results demonstrate that all models exhibit stability within a wide and appropriate range of λ, providing evidence of consistent patterns. In our experiments, without losing generality, all models employed a regularization strength of 0.5 by default.

C.3 MORE ABLATION STUDIES ON LEARNING RATES

Fig. 8 illustrates ablation studies assessing the impact of varying initial learning rates {5 × 10−5, 1 × 10−4, 2 × 10−4, 5 × 10−4} across multiple models on both the ImageNet-1K and Cifar-100 datasets.
Figure 6: Training loss curves for (a, e) Mamba, (b, f) Transformer, (c, g) Linear Transformer, and (d, h) Performer, under the same type of representation and experimental settings, with ((a)–(d)) and without ((e)–(h)) selectivity regularization (ℓslct) during meta-training on 20-task, 5-shot MCL on Cifar-100.

Figure 7: Ablation studies on the regularization strength λ (0.1, 0.2, 0.5, 1.0, 2.0) during meta-testing of 20-task, 5-shot models (meta-trained on 20-task, 5-shot) for (a) Mamba, (b) Transformer, (c) Linear Transformer, and (d) Performer.

Figure 8: Ablation studies on learning rates ({5 × 10−5, 1 × 10−4, 2 × 10−4, 5 × 10−4}) during meta-testing of 20-task, 5-shot models (meta-trained on 20-task, 5-shot) for (a) Mamba, (b) Transformer, (c) Linear Transformer, and (d) Performer.

The results indicate that within a reasonable range, the learning rate does not significantly affect model performance. In our experiments, without losing generality, we set the initial learning rate to 1 × 10−4, with decays of 0.5 every 10,000 steps.

C.4 ADDITIONAL GENERALIZATION ANALYSES

Without the regularization, models struggle to converge and exhibit significant oscillations during training, as shown in Fig. 6. In Sec. 4.2 and Fig. 3, we conducted generalization analyses of various models by performing meta-testing on episodes different from the meta-training settings. Specifically, we apply the models meta-trained with 20-task 5-shot episodes to meta-testing episodes with varying numbers of tasks or shots, or to episodes contaminated by noise. The results show that Mamba exhibits better generalization to unseen scenarios, while the Transformer shows more meta-overfitting issues. To validate that these results are not an artifact of the regularization, we evaluated various models with a small regularization strength (λ = 0.1) to assess the impact of regularization on this generalization experiment and the meta-overfitting issue. The results indicate that regularization strengths of 0.1 (Fig. 9) and 0.5 (Fig. 3) lead to similar phenomena across different models.
Figure 9: Generalization analysis on ImageNet-1K with regularization strength λ = 0.1, for models meta-trained on 20-task 5-shot MCL: (a) meta-testing on varying numbers of tasks (5-shot); (b) meta-testing on varying numbers of shots (20-task); (c) meta-testing on 20-task 5-shot with varying input noise intensity levels.

D VISUALIZATION OF ATTENTION AND SELECTIVITY PATTERN

Given meta-learned sequence models as the continual learner, the models process the samples in sequence in the meta-test CL process. To analyze the behaviors of these models, we visualize the attention weights of Transformers and the associative weights of Mamba (as discussed in Sec. 3.2.2) to demonstrate their attention and selectivity patterns, respectively. In a meta-testing episode, given a trained model and a sequence of samples, the prediction for a given x^test is produced based on the attention over, or implicit association with, the seen samples in the sequence. Visualizing the attention and selectivity patterns can thus empirically show how the models make predictions. For the standard benchmarking case, Fig. 10 shows that both Transformer and Mamba can effectively associate seen samples with query inputs, leading to the results shown in Table 2. We further use this visualization to analyze how different models perform in the generalization studies (discussed in Sec. 4.2), i.e., when generalizing to meta-testing cases that differ from the meta-training cases.

D.1 VISUALIZATION ANALYSES FOR GENERALIZATION TO DIFFERENT STREAM LENGTH

The experiments shown in Fig. 3a and Fig. 3b validate the generalization ability of models by meta-testing on CL episodes/sequences that differ from those seen during meta-training. Specifically, the models are meta-trained on 20-task, 5-shot MCL episodes and meta-tested on episodes with task and shot numbers exceeding those in meta-training. Transformers generally converge more easily during meta-training compared to Mamba, due to their strong fitting ability. However, this advantage may also lead to meta-overfitting.

(a) Mamba (b) Transformer
Figure 10: 20-task 5-shot in meta-testing: visualization of the final layer associations between various test shots (queries) and a single MCL train episode (prompt) of both (a) Mamba and (b) Transformer during meta-testing on a 20-task 5-shot MCL episode (meta-trained on 20-task 5-shot). In meta-testing, the four visualizations share a single MCL training episode (prompt) spanning the 0th−99th shots, while the test shots (queries at the 100th shot) correspond to the 0th, 1st, 9th, and 18th tasks (0th−4th, 5th−9th, 45th−49th, and 90th−94th train shots), respectively.
For models meta-trained on the 20-task, 5-shot setting, we meta-tested them and visualized their weights on 20-task, 5-shot episodes (Fig. 10), 20-task, 10-shot episodes (Fig. 11), and 40-task, 5-shot episodes (Fig. 12). Specifically, we observed that Transformers tend to either average attention or consistently focus on specific token positions in episodes that deviate from the training length. In contrast, Mamba effectively associates with the relevant shots. This suggests that Transformers may learn pattern biases in the sequences (e.g., positional biases unrelated to content), leading to meta-overfitting during these generalization tests.

(a) Mamba (b) Transformer

Figure 11: More shots in meta-testing: visualization of the final layer associations between various test shots (queries) and a single MCL train episode (prompt) of both (a) Mamba and (b) Transformer during meta-testing on a 20-task 10-shot MCL episode (meta-trained on 20-task 5-shot). In meta-testing, the four visualizations share a single MCL training episode (prompt) spanning the 0th−199th shots, while the test shots (queries at the 200th shot) correspond to the 0th, 1st, 9th, and 18th tasks, respectively.

(a) Mamba (b) Transformer

Figure 12: More tasks in meta-testing: visualization of the final layer associations between various test shots (queries) and a single MCL train episode (prompt) of both (a) Mamba and (b) Transformer during meta-testing on a 40-task 5-shot MCL episode (meta-trained on 20-task 5-shot). In meta-testing, the seven visualizations share a single MCL training episode (prompt) spanning the 0th−199th shots, while the test shots (queries at the 200th shot) correspond to the 0th, 1st, 9th, 18th, 28th, 38th, and 39th tasks, respectively.

D.2 VISUALIZATION ANALYSIS OF GENERALIZATION TO NOISE-CONTAMINATED EPISODES

In these experiments, the models are meta-trained on noise-free episodes, and noise is added to randomly selected samples/shots in the meta-testing episodes. The task can also be seen as validating the ability to ignore irrelevant samples or contaminated outlier samples in the sequences.
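A minimal sketch of corrupting a fixed set of shots is given below; the additive-Gaussian form and the tensor layout are assumptions, since the noise model is not spelled out in this appendix.

```python
import torch

def add_noise_to_shots(episode: torch.Tensor,
                       noisy_idx=(8, 18, 39, 61, 75),
                       strength: float = 1.0) -> torch.Tensor:
    """Corrupt selected shots of a meta-testing episode with Gaussian noise.

    episode: (L, d) input shots (e.g., L = 100 for a 20-task 5-shot episode).
    noisy_idx and strength mirror the settings used in Figs. 13-15.
    """
    noisy = episode.clone()
    idx = torch.tensor(noisy_idx)
    noisy[idx] += strength * torch.randn_like(noisy[idx])
    return noisy
```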
To show directly how the models behave in this scenario, we visualized the final layer attention weights for test shots against training shots for both Mamba and Transformer, each meta-trained in a 20-task, 5-shot setting. During meta-testing, these models processed a 20-task, 5-shot episode with five noisy input shots (shot indices 8, 18, 39, 61, and 75) at noise strengths of 1 (Fig. 13), 2 (Fig. 14), and 6 (Fig. 15).

(a) Mamba (b) Transformer

Figure 13: Noise inputs in meta-testing: visualization of the final layer associations between various test shots (queries) and a single MCL train episode (prompt) of both (a) Mamba and (b) Transformer (meta-trained on 20-task 5-shot without noise inputs), during meta-testing on a 20-task 5-shot MCL episode with noise inputs (noise strength=1, noise on the 8th, 18th, 39th, 61st, and 75th training shots). In meta-testing, the four visualizations share a single MCL training episode (prompt) spanning the 0th−99th shots, while the test shots (queries at the 100th shot) correspond to the 0th, 1st, 9th, and 18th tasks (0th−4th, 5th−9th, 45th−49th, and 90th−94th train shots), respectively.

(a) Mamba (b) Transformer

Figure 14: Noise inputs in meta-testing: visualization of the final layer associations between various test shots (queries) and a single MCL train episode (prompt) of both (a) Mamba and (b) Transformer (meta-trained on 20-task 5-shot without noise inputs), during meta-testing on a 20-task 5-shot MCL episode with noise inputs (noise strength=2, noise on the 8th, 18th, 39th, 61st, and 75th training shots). In meta-testing, the four visualizations share a single MCL training episode (prompt) spanning the 0th−99th shots, while the test shots (queries at the 100th shot) correspond to the 0th, 1st, 9th, and 18th tasks (0th−4th, 5th−9th, 45th−49th, and 90th−94th train shots), respectively.

(a) Mamba (b) Transformer

Figure 15: Noise inputs in meta-testing: visualization of the final layer associations between various test shots (queries) and a single MCL train episode (prompt) of both (a) Mamba and (b) Transformer (meta-trained on 20-task 5-shot without noise inputs), during meta-testing on a 20-task 5-shot MCL episode with noise inputs (noise strength=6, noise on the 8th, 18th, 39th, 61st, and 75th training shots). In meta-testing, the four visualizations share a single MCL training episode (prompt) spanning the 0th−99th shots, while the test shots (queries at the 100th shot) correspond to the 0th, 1st, 9th, and 18th tasks (0th−4th, 5th−9th, 45th−49th, and 90th−94th train shots), respectively.

The results indicate that the Transformer meta-trained on clean episodes tends to produce extreme attention weights (either very high or very low) on noisy or outlier shots, whereas Mamba is less affected. This observation suggests that the Transformer's learned attention mechanism tends to associate samples based on local and independent representations. In contrast, Mamba performs more effectively by selectively associating relevant information and leveraging its recurrently updated latent state, which accumulates global sequence information.
KaLM: Knowledge-aligned Autoregressive Language Modeling via Dual-view Knowledge Graph Contrastive Learning

Peng Yu1, Cheng Deng1, Beiya Dai1, Xinbing Wang1, Ying Wen1*
1Shanghai Jiao Tong University
{pursuit_yp, davendw, beiya_dai, xwang8, ying.wen}@sjtu.edu.cn
* Ying Wen is the corresponding author.

Abstract

Autoregressive large language models (LLMs) pre-trained by next token prediction are inherently proficient in generative tasks. However, their performance on knowledge-driven tasks such as factual knowledge querying remains unsatisfactory. Knowledge graphs (KGs), as high-quality structured knowledge bases, can provide reliable knowledge for LLMs, potentially compensating for their knowledge deficiencies. Aligning LLMs with explicit, structured knowledge from KGs has been a challenge; previous attempts either failed to effectively align knowledge representations or compromised the generative capabilities of LLMs, leading to less-than-optimal outcomes. This paper proposes KaLM, a Knowledge-aligned Language Modeling approach, which fine-tunes autoregressive LLMs to align with KG knowledge via the joint objective of explicit knowledge alignment and implicit knowledge alignment. The explicit knowledge alignment objective aims to directly optimize the knowledge representation of LLMs through dual-view knowledge graph contrastive learning. The implicit knowledge alignment objective focuses on incorporating textual patterns of knowledge into LLMs through triple completion language modeling. Notably, our method achieves a significant performance boost in evaluations of knowledge-driven tasks, specifically embedding-based knowledge graph completion and generation-based knowledge graph question answering.

1 Introduction

Large language models (LLMs) like PaLM 2 (Anil et al., 2023) and GPT-4 (Achiam et al., 2023) have recently made remarkable advancements in a wide range of natural language processing tasks (Li et al., 2022; Su et al., 2019). However, LLMs still face challenges in tasks requiring factual or domain-specific knowledge, resulting in unsatisfactory performance in knowledge-driven tasks. From the perspective of knowledge representation, LLMs serve as parametric knowledge bases, providing implicit, non-deterministic knowledge, while knowledge graphs (KGs) function as structured knowledge bases, offering explicit, deterministic knowledge. KGs, commonly organized as factual knowledge triples describing relations between entities, can serve as a reliable knowledge source for LLMs. Aligning LLMs with KG knowledge can enhance the knowledge reasoning capabilities of LLMs and improve their performance on knowledge-driven tasks, such as knowledge graph completion (KGC) and knowledge graph question answering (KGQA).

Autoregressive LLMs pre-trained through next token prediction tasks often exhibit limitations in knowledge representation, leading to embeddings that lack diversity and specificity. This limitation becomes evident in tasks that demand distinctive sentence embeddings, such as dense retrieval and semantic search (Muennighoff, 2022; Ma et al., 2023). As demonstrated in Figure 1(a), the representations generated by LLMs tend to be overly homogeneous across different pieces of knowledge, undermining their effectiveness in applications requiring fine-grained semantic distinctions.
The concept of explicit knowledge alignment is introduced to directly optimize the knowledge representation within language models by devising direct knowledge training objectives. This strategy emerges in response to the observed degradation in knowledge representation within autoencoder-based pre-trained language models (PLMs), a phenomenon termed representation anisotropy (Ethayarajh, 2019). This issue is characterized by the clustering of learned token and sentence embeddings within a constrained area of the representation space, leading to a lack of distributional uniformity (Li et al., 2020). While previous efforts to address representation anisotropy have largely concentrated on promoting uniformity among token representations, they often overlook the critical alignment of similar sentence representations (Su et al., 2021; Li et al., 2020; Su et al., 2022). More recent works advocate for integrating KG triples and using knowledge graph embedding losses to fine-tune PLMs, aiming to bolster their knowledge representation abilities (Shen et al., 2022; Wang et al., 2022b). Nonetheless, such approaches may limit themselves to optimizing at the token level or reduce the model to a mere text encoder, thereby diminishing its inherent generative capabilities.

(a) LLaMA (b) KaLM

Figure 1: Similarity matrix of knowledge representations of (a) Llama-2-7B (Touvron et al., 2023) and (b) KaLM. The values denote the cosine similarity between the head-relation and tail embeddings. The diagonal elements represent positive <head-relation, tail> pairs from the same KG triple, which should maintain high similarity (darker color); off-diagonal elements represent negative <head-relation, tail> pairs from different KG triples, which should have lower similarity (lighter color). In an ideal setting, knowledge representations should be able to distinguish between different triples, while maintaining alignment and uniformity of the representation, as shown in Figure 1(b).

Conversely, implicit knowledge alignment leverages the pre-training or fine-tuning of language models with external knowledge sources, employing the vanilla language modeling objective or its variations. This approach predominantly preserves the next token prediction framework, essentially retaining the native text generation prowess of LLMs. In the realm of implicit knowledge alignment, the prevalent practice involves the fine-tuning of LLMs with KG triples and their textual descriptions, as opposed to directly altering the hidden knowledge representations (Chen et al., 2022; Yao et al., 2023). Nevertheless, the efficacy of these methods on knowledge graph completion tasks remains substantially inferior when compared to strategies that directly fine-tune knowledge representations (Wang et al., 2022b,a). Intriguing findings from Fu et al. (2023) reveal that fine-tuning PLMs with randomly unaligned KG triples can achieve performance on par with that obtained through fine-tuning with aligned triples in various tasks, including named entity recognition and relation classification. Their findings suggest that the hidden states of entities, whether infused with aligned or random knowledge, exhibit remarkable similarity. Consequently, existing implicit alignment methods fail to effectively utilize the injected knowledge or accurately discern the connection between newly introduced knowledge and the model's inherent knowledge, culminating in suboptimal performance.
In this paper, we propose KaLM, a Knowledge-aligned Language Modeling approach for aligning LLMs with KG knowledge. Specifically, we use KG triples and their textual descriptions to fine-tune LLMs via the joint objective of explicit knowledge alignment and implicit knowledge alignment. The explicit knowledge alignment objective aims to directly optimize the hidden representations of knowledge in LLMs through dual-view knowledge graph contrastive learning. We theoretically prove and empirically show that this objective can facilitate knowledge representation alignment and alleviate representation anisotropy. For KG triples, we consider the tail entity description and the concatenation of the head entity description and relation description as two distinct views of the same knowledge. The key insight is that: (1) representations of two different views of the same knowledge (i.e., from the same triple) should be pulled together, while (2) representations of different knowledge (i.e., from different triples) should be pushed apart. The first term encourages semantically similar knowledge to remain close in the representation space, promoting knowledge representation alignment. The second term forces dissimilar knowledge to be as far apart as possible in the vector space, improving knowledge representation uniformity and mitigating representation anisotropy. As shown in Figure 1(b), our method can obtain ideal knowledge representations that are both aligned and uniform.

The implicit knowledge alignment objective focuses on incorporating textual patterns of knowledge into LLMs through triple completion language modeling, which can maintain the generative capability of LLMs and boost performance on knowledge inference tasks. We constructed a triple completion dataset based on the KG triples to fine-tune LLMs, improving their instruction-following ability and facilitating implicit knowledge alignment. We also show that the implicit knowledge alignment objective can further boost knowledge representation performance. This confirms that both explicit alignment and implicit alignment are crucial for knowledge alignment, as they both essentially require a deep understanding of knowledge.

Our contributions are summarized as follows:

• We introduce KaLM, a knowledge-aligned language modeling approach that aligns autoregressive LLMs with KG knowledge via the joint objective of explicit knowledge alignment and implicit knowledge alignment.

• We theoretically prove and empirically demonstrate that the explicit knowledge alignment objective achieved through dual-view knowledge graph contrastive learning can facilitate knowledge representation alignment and alleviate the issue of representation anisotropy.

• The experimental results on knowledge-driven tasks demonstrate the effectiveness of KaLM. In the embedding-based KGC task, KaLM significantly improves Mean Rank and Hit@10 metrics compared to previous state-of-the-art methods. In the generation-based KGQA task, KaLM achieves a notable improvement in answering accuracy compared to the base LLM.

2 Related Work

Our work is closely related to Knowledge Enhancement for LLMs and Representation Anisotropy of Language Models. A more detailed review of related work can be found in Appendix A.
Knowledge Enhancement for LLMs Knowledge enhancement aims to incorporate factual and domain-specific knowledge into LLMs to address their knowledge deficiencies. This can be divided into retrieval-based augmentation and training-based integration. Retrieval-based knowledge augmentation methods leverage external retrieval modules to provide additional knowledge, aiming to improve the knowledge reasoning capability of LLMs (Sun et al., 2023; Jiang et al., 2023). However, this approach may lead to knowledge conflicts (Feng et al., 2023), where knowledge in LLMs and knowledge in the retrieved documents are inconsistent or the retrieved multiple documents are contradictory. Training-based knowledge integration methods involve using KG triple descriptions to pre-train or fine-tune LLMs, aiming to achieve knowledge alignment. These methods can be divided into explicit alignment (Wang et al., 2021b; Yasunaga et al., 2022) and implicit alignment (Yao et al., 2023; Zhang et al., 2023) based on whether they directly optimize the knowledge representation. Nevertheless, prior methods have either sacrificed the generative capability or lacked effective representation alignment. Our approach enhances the knowledge of LLMs via a unique joint objective of explicit alignment and implicit alignment, improving the quality of knowledge representations and generative knowledge reasoning capabilities.

Representation Anisotropy of Language Models PLMs have long been plagued by representation anisotropy (Ethayarajh, 2019), where the learned token and sentence embeddings are confined to a narrow cone within the entire representation space. The issue of representation anisotropy not only results in model degradation (Su et al., 2022) but also leads to poor performance on discriminative tasks. Previous work on alleviating representation anisotropy has mainly focused on post-processing techniques such as normalizing flows (Li et al., 2020) or whitening operations (Su et al., 2021). Su et al. (2022) propose a contrastive training objective to encourage learning isotropic token representations. However, these methods mainly improve the isotropy of token representations without enhancing the discriminability of sentence representations. Our method improves the token-level and sentence-level representation anisotropy of LLMs through dual-view knowledge graph contrastive learning, and it has rigorous theoretical guarantees.

3 Knowledge-aligned Autoregressive Language Modeling

In this section, we introduce KaLM, a Knowledge-aligned Language Modeling approach for aligning LLMs with KG knowledge via the joint objective of explicit knowledge alignment and implicit knowledge alignment. The overview is shown in Figure 2.

3.1 Notations and Preliminaries

A KG G stores factual knowledge, denoted as G = (E, R, T, D). E and R are the sets of entities and relations, respectively. D is the description set of all entities and relations. De and Dr are the textual descriptions of entity e and relation r, respectively. T = {(h, r, t) | h, t ∈ E, r ∈ R} is the triple set. A triple (h, r, t) depicts the fact that there is a relation r between the head entity h and the tail entity t.

3.2 Explicit Knowledge Alignment

For KG triples, the textual description of the tail entity and the concatenation of the textual descriptions of the head entity and relation can be seen as two distinct views of the same knowledge; a minimal sketch of assembling such a pair of views is given below.
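The function and example descriptions below are illustrative (ours, not from the paper's released code); they only show how the two views of one triple can be formed from its textual descriptions.

```python
def build_dual_views(head_desc: str, rel_desc: str, tail_desc: str):
    """Form the two views of one KG triple used as a positive pair.

    View 1 is the concatenated head-entity and relation descriptions (Dhr);
    view 2 is the tail-entity description (Dt).
    """
    d_hr = head_desc + " " + rel_desc  # Dhr = Dh ⊕ Dr
    d_t = tail_desc                    # Dt
    return d_hr, d_t

# Hypothetical descriptions for the triple (salviniaceae, member meronym, salvinia):
d_hr, d_t = build_dual_views("salviniaceae: a family of water ferns",
                             "member meronym",
                             "salvinia: an aquatic fern genus")
```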
This inspires KaLM to align representations of two distinct views of the same knowledge (i.e., from the same triple), while separating representations of different knowledge (i.e., from different triples).

The LLM, denoted as ELLM, is fine-tuned with the dual-view knowledge graph contrastive learning loss. The training corpus contains paired textual descriptions {(Dhr, Dt)}_{i=1}^N, where Dt is the tail entity description, and Dhr is the concatenation of the head entity description and relation description. Given a training pair (Dhr, Dt), the same ELLM is used to compute the embeddings of Dhr and Dt independently. Moreover, we prepend the [bos] token to the beginning and append the [eos] token to the end of the textual description. The augmented input is fed into ELLM, and the hidden representation corresponding to the [eos] token from the last layer is used as the final embedding of the input:

ehr = ELLM([bos]hr ⊕ Dhr ⊕ [eos]hr),
et = ELLM([bos]t ⊕ Dt ⊕ [eos]t),

where ⊕ is the operation to concatenate two strings and Dhr = Dh ⊕ Dr. For stable training, we adopt “[” as [bos]hr and “]” as [eos]hr, while using “{” as [bos]t and “}” as [eos]t.

We utilize the knowledge graph contrastive learning loss to directly optimize the knowledge representation of the LLM by encouraging semantically similar knowledge to stay close in the representation space and pushing dissimilar knowledge to be far apart in the representation space. More specifically, we apply the InfoNCE loss with an additive margin over the in-batch negatives to fine-tune the model. The row-direction loss ℓr is as follows for a given positive pair, and the column-direction loss ℓc is defined similarly (see Appendix C.2):

ℓr = −log [ exp((ϕ(ehr, et) − γ)/τ) / ( exp((ϕ(ehr, et) − γ)/τ) + Σ_{i=1}^N exp(ϕ(ehr, e′t,i)/τ) ) ],  (1)

where N is the negative batch size, τ is the trainable temperature that controls the strength of penalties on hard negative samples, ϕ is the cosine similarity function that measures the plausibility of a triple, and γ is the additive margin that encourages increasing the similarity score of positive pairs.

The training objective for explicit knowledge alignment is the average of the ℓr and ℓc losses:

Lexp = (1/N) Σ_{(Dhr, Dt)} (ℓr + ℓc)/2.  (2)
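A minimal PyTorch sketch of Equations 1–2 with in-batch negatives is given below; the temperature and margin values are placeholders (the paper treats τ as trainable), and the function name is ours.

```python
import torch
import torch.nn.functional as F

def dual_view_contrastive_loss(e_hr: torch.Tensor, e_t: torch.Tensor,
                               tau: float = 0.05, gamma: float = 0.02) -> torch.Tensor:
    """In-batch InfoNCE with additive margin, averaged over both directions.

    e_hr: (N, d) head-relation embeddings; e_t: (N, d) tail embeddings,
    where (e_hr[i], e_t[i]) are the positive pairs of the batch.
    """
    e_hr = F.normalize(e_hr, dim=-1)
    e_t = F.normalize(e_t, dim=-1)
    sim = e_hr @ e_t.T                                              # (N, N) cosine similarities
    sim = sim - gamma * torch.eye(sim.size(0), device=sim.device)   # margin on positives only
    sim = sim / tau
    labels = torch.arange(sim.size(0), device=sim.device)
    loss_r = F.cross_entropy(sim, labels)    # row direction: (h, r) -> t
    loss_c = F.cross_entropy(sim.T, labels)  # column direction: t -> (h, r)
    return (loss_r + loss_c) / 2
```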
3.3 Implicit Knowledge Alignment

The implicit knowledge alignment objective focuses on incorporating textual patterns of knowledge into the LLM to prevent catastrophic forgetting of previous knowledge and maintain its generative capability. We constructed an instruction-tuning dataset based on the KG triple descriptions to fine-tune the model through triple completion language modeling. We also show that the implicit knowledge alignment objective can bring performance boosts on knowledge representation evaluations. This indicates that explicit alignment and implicit alignment are both imperative for effective knowledge alignment, as they both essentially necessitate a profound understanding of knowledge.

We follow the recipe of Stanford Alpaca (Taori et al., 2023) and use the provided template to construct the instruction-tuning dataset. The instruction passed to the template, abbreviated as inst, is: “Given the head entity and relation, write a tail entity that completes the triple”. The input and output are Dhr and Dt, respectively. The training objective for implicit knowledge alignment is:

Limp = (1/M) Σ_{(Dhr, Dt)} −log P(Dt | inst, Dhr),  (3)

where M is the instruction-tuning batch size.
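The sketch below assembles one such training example; the surrounding template wording follows the publicly released Stanford Alpaca prompt, which we assume here rather than quote from the paper.

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

INST = "Given the head entity and relation, write a tail entity that completes the triple"

def build_triple_completion_example(d_hr: str, d_t: str) -> str:
    """One training sequence for Eq. 3; the loss is taken over the Dt continuation."""
    return ALPACA_TEMPLATE.format(instruction=INST, input=d_hr) + d_t
```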
Figure 2: The overall framework of KaLM. Up: the explicit knowledge alignment objective (Lexp) aims to directly optimize the knowledge representation of LLMs via dual-view knowledge graph contrastive learning. Down: the implicit knowledge alignment objective (Limp) focuses on incorporating textual patterns of knowledge into LLMs via triple completion language modeling. The final training objective is the weighted average of Lexp and Limp.

3.4 Knowledge-aligned Language Modeling

The ultimate training objective of our proposed KaLM is the weighted average of Lexp and Limp:

LKaLM = Lexp + λ · Limp,  (4)

where λ is a hyperparameter that adjusts the relative weight between them. Notably, this formulation allows us to use different batch sizes for explicit knowledge alignment (N) and implicit knowledge alignment (M). Previous work has shown that a sufficiently large batch size is key to the success of contrastive representation learning (Chen et al., 2020). With Equation 4, we can significantly increase the explicit knowledge alignment batch size while keeping the implicit knowledge alignment batch size fixed to save computational resources.

4 Theoretical Analysis

We theoretically prove that the explicit knowledge alignment objective implemented through dual-view knowledge graph contrastive learning can facilitate knowledge representation alignment and alleviate the issue of representation anisotropy.

4.1 Dual-view Contrastive Learning for Knowledge Representation Alignment

The outstanding performance of contrastive representation learning has attracted researchers to analyze its underlying reasons for success from a theoretical perspective. Wang and Isola (2020) identify alignment and uniformity as two key properties of contrastive learning and propose two quantifiable metrics to measure the quality of representations. We concentrate on understanding the dual-view knowledge graph contrastive learning loss from the knowledge alignment and uniformity perspective. To simplify the notation, we use f to denote ELLM.

Alignment computes the expected distance between positive pairs and encourages the learned representations for positive pairs to be similar. Uniformity evaluates the even distribution of representations and encourages the separation of features from randomly selected negative samples:

ℓalign(f; α) ≜ E_{(Dhr,Dt)∼ppos} [ ‖f(Dhr) − f(Dt)‖₂^α ],

ℓuniform(f; t) ≜ log E_{Di,Dj ∼iid pdata} [ exp(−t‖f(Di) − f(Dj)‖₂²) ],

where ppos denotes the distribution of positive pairs {(Dhr, Dt)}_{i=1}^N and pdata represents the data distribution of textual descriptions {Di}_{i=1}^N.

Since the learned knowledge representations are L2-normalized, we have ϕ(ehr, et) = f(Dhr)⊤f(Dt). The additive margin γ encourages the model to learn more robust features without affecting the asymptotic analysis, thus we ignore it. For ease of analysis, we reformulate the contrastive learning objective of Equations 1 and 2 as follows:

Lexp(f; τ, N) ≜ E_{(Dhr,Dt)∼ppos, {D′t,i}_{i=1}^N ∼iid pdata} [ −log ( exp(f(Dhr)⊤f(Dt)/τ) / ( exp(f(Dhr)⊤f(Dt)/τ) + Σ_{i=1}^N exp(f(Dhr)⊤f(D′t,i)/τ) ) ) ].  (5)

Following Wang and Isola (2020), we analyze the asymptotics of the objective in Equation 5.

Theorem 1 (Asymptotics of Lexp). For temperature τ > 0, as the number of negative samples N → ∞, the normalized dual-view knowledge graph contrastive loss in Equation 5 converges to

lim_{N→∞} Lexp(f; τ, N) − log N = −(1/τ) E_{(Dhr,Dt)∼ppos} [ f(Dhr)⊤f(Dt) ] + E_{Di∼pdata} [ log E_{D−i∼pdata} [ exp(f(D−i)⊤f(Di)/τ) ] ].  (6)

We have the following conclusions:

1. By pulling together the representations of two different views of the same knowledge, the first term of Equation 6 is minimized, and the encoder ELLM is perfectly knowledge-aligned.

2. Assuming the perfectly uniform knowledge encoder ELLM exists, it precisely minimizes the second term of Equation 6 by pushing away the representations of different knowledge.

Proof. See Appendix B.1.

4.2 Alleviation of Representation Anisotropy

We then prove that the dual-view knowledge graph contrastive learning objective can directly alleviate representation anisotropy and improve the discriminability of knowledge representations.

Let E be the sentence embedding matrix of {Di}_{i=1}^N, where the i-th row of E is ei. Following Ethayarajh (2019), the sentence-level representation anisotropy value of {Di}_{i=1}^N is defined as:

anisotropy{D} = (1/(N(N−1))) Σ_{i=1}^N Σ_{j=1, j≠i}^N ei⊤ej.  (7)

We can further derive the following theorem.

Theorem 2 (Alleviation of Anisotropy). When pdata is uniform over finite samples {Di}_{i=1}^N, the second term of Equation 6 is an upper bound of the sentence-level anisotropy of {Di}_{i=1}^N, i.e.,

E_{Di∼pdata} [ log E_{D−i∼pdata} [ exp(f(D−i)⊤f(Di)/τ) ] ] ≥ ((N−1)/(τN)) · anisotropy{D} + 1/(τN).  (8)

We have the following result: by optimizing the second term of Equation 6, we essentially minimize the upper bound of the sentence-level anisotropy of the corpus {Di}_{i=1}^N, thereby directly alleviating the representation anisotropy problem.

Proof. See Appendix B.2.
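For reference, the two quantities from Wang and Isola (2020) used above can be computed as follows; this is a standard sketch over L2-normalized embeddings, not code from the paper.

```python
import torch

def l_align(x: torch.Tensor, y: torch.Tensor, alpha: float = 2.0) -> torch.Tensor:
    """Alignment: expected distance between positive pairs (x[i], y[i])."""
    return (x - y).norm(dim=1).pow(alpha).mean()

def l_uniform(x: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Uniformity: log of the mean Gaussian potential over all distinct pairs."""
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```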
For knowl- edge representation assessment, we evaluate the embedding-based KGC task and illustrate the alle- viation of representation anisotropy. We report five automated metrics: Mean Rank (MR), Mean Re- ciprocal Rank (MRR), and Hit@k (k ∈ {1, 3, 10}). Table 1: Embedding-based KGC results on WN18RR and FB15k-237. Baseline results are from their papers, with “-” indicating a missing result. The best and second-best results are marked by bold and underline, respectively. Method WN18RR FB15k-237 MR MRR H@1 H@3 H@10 MR MRR H@1 H@3 H@10 0.043 0.412 0.428 0.243 0.444 0.476 structure-based methods 2300 TransE 7000 DistMult 3340 RotatE description-based methods (autoencoder PLMs) 51 StAR C-LMKE 72 - SimKGC description-based methods (autoregressive LLMs) 15969 Llama-2-7B 19 Llama2-7BKaLM Llama3-8BKaLM 23 Mistral-7BKaLM 20 0.004 0.409 0.446 0.484 0.010 0.556 0.588 0.612 0.401 0.598 0.671 0.243 0.480 0.587 0.441 0.470 0.492 0.491 0.675 0.731 0.010 0.656 0.676 0.702 0.532 0.504 0.571 0.709 0.806 0.817 0.020 0.851 0.860 0.869 323 512 177 117 183 - 5359 114 121 116 0.279 0.281 0.338 0.296 0.404 0.333 0.006 0.299 0.308 0.317 0.198 0.199 0.241 0.205 0.324 0.246 0.002 0.204 0.212 0.225 0.376 0.301 0.375 0.322 0.439 0.362 0.004 0.325 0.337 0.351 0.441 0.446 0.533 0.482 0.556 0.510 0.012 0.502 0.509 0.518 Figure 3: Comparison of generative knowledge infer- ence performance between Llama-2-7B and KaLM. ↑ means higher is better and ↓ means lower is better. We compare KaLM with structure- and description- based methods. Structured-based methods include TransE (Bordes et al., 2013), DistMult (Yang et al., 2015), and RotatE (Sun et al., 2018). Description- based methods include StAR (Wang et al., 2021a), C-LMKE (Wang et al., 2022b), and SimKGC (Wang et al., 2022a). For knowledge inference eval- uation, we evaluate the generation-based KGQA task and analyze the PPL metric and MMLU score (Hendrycks et al., 2020). We report the prediction accuracy over entities, relations, and triples. We also provide case studies of KGQA generations. Additional experimental results and detailed ab- lation studies can be found in Appendix D and E. 5.2 Knowledge Representation Assessment The embedding-based KGC results are shown in Ta- ble 1. The base LLM failed to finish this task, with all metrics lagging far behind. On the WN18RR dataset, our method surpasses prior methods by a substantial margin in terms of MR and Hit@10. (a) LLaMA (b) KaLM Figure 4: Similarity matrix on the Wikitext-103 test set. From top-left to bottom-right, element (i, j) denotes the cosine similarity between the i-th and the j-th sentence. Other metrics fall slightly short of state-of-the-art methods, yet remain competitive. The performance of KaLM on FB15k-237 is slightly inferior, but it still achieves the best MR. Previous description- based methods generally perform poorly on FB15k- 237, possibly due to the absence of effective textual descriptions. An example relation description from FB15k-237 is “/music/artist/origin”, which is quite vague and abstract. SimKGC uses a large batch size through intricate negative sampling methods and in- corporates neighbor description augmentation and neighbor-based re-ranking techniques. C-LMKE uses self-adversarial negative sampling and utilizes extra entity degree information. These tricks enable SimKGC and C-LMKE to achieve higher perfor- mance. Using a larger batch size and more tech- niques can further improve other metrics of KaLM. 
Figure 5: Case studies of Llama-2-7B and KaLM on KGQA tasks. Note that the head entity, relation, and tail entity are denoted by different colors. The check mark (✓) indicates a correct answer, while the cross mark (✗) signifies an incorrect answer. [The figure lists, for each task, the prompt template with instruction and input fields and the generations of LLaMA and KaLM for the triples <salviniaceae, member meronym, salvinia> and <refrigerator, hypernym, white goods>. Recoverable templates — head entity prediction: “Given the head entity and relation, write a tail entity that completes the triple: [tail entity], [inverse relation]”; tail entity prediction: “Given the head entity and relation, write a tail entity that completes the triple: [head entity], [relation]”; relation prediction: “What is the relation between [head entity] and [tail entity]? Please choose your answer from: [relation list].”; triple classification: “Is this true: [head] [relation] [tail]? Please choose your answer from: ‘Yes, this is true’ or ‘No, this is not true’.”]

5.3 Knowledge Inference Evaluation

The generation-based KGQA results are depicted in Figure 3. Llama-2-7B performs poorly in entity prediction and relation prediction. Our method demonstrates a significant performance boost in all generation-based KGQA tasks, including head/tail entity prediction, relation prediction, and triple classification. Furthermore, despite a slight increase in perplexity (PPL) scores on the Wikitext-103 (Merity et al., 2016) test set, our method still shows competitive performance in the MMLU test. The results demonstrate that KaLM achieves effective knowledge alignment, bringing significantly improved KGQA performance while preserving the original generative and knowledge inference capabilities.

5.4 Visualization of Knowledge Representation and Case Studies

We provide visualization results to illustrate knowledge representation improvements. Figure 4 shows the sentence similarity matrix of Llama-2-7B and KaLM on Wikitext-103. The diagonal elements denote the similarity of the same sentence, so the values are always 1. From the color intensity, it is evident that KaLM learns more discriminative sentence representations, while Llama-2-7B assigns high similarity to arbitrary sentences. The sentences are organized by celebrities and their careers, so there should also be high similarity between adjacent sentences. This phenomenon is reflected in the similarity matrix of KaLM in Figure 4(b), manifested in the smaller matrices with darker colors along the diagonal. More concretely, numerical analysis shows that after training with our method, the sentence-level anisotropy value significantly decreased from 0.83 to 0.21.
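The anisotropy value reported here can be computed directly from Equation 7; below is a small sketch over sentence embeddings (function name ours).

```python
import torch
import torch.nn.functional as F

def sentence_anisotropy(embeddings: torch.Tensor) -> torch.Tensor:
    """Average pairwise cosine similarity over distinct sentences (Eq. 7).

    embeddings: (N, d) sentence embeddings; returns a scalar in [-1, 1].
    """
    e = F.normalize(embeddings, dim=-1)
    sim = e @ e.T                      # (N, N) cosine similarity matrix
    n = sim.size(0)
    return (sim.sum() - sim.diagonal().sum()) / (n * (n - 1))
```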
Specifically, we identify two imperative objectives to achieve knowledge align- ment: explicit knowledge alignment and implicit knowledge alignment. We conducted comprehen- sive experiments and analyses on embedding-based KGC and generation-based KGQA. Experimental results demonstrate that our method achieves ef- fective knowledge alignment and consistently im- proves performance on knowledge-driven tasks. 8 Given the head entity and relation, write a tail entity that completes the triple: [tail entity], [inverse relation]head entitypredictionsalviniasalviniaceaewhite goodsrefrigeratorGiven the head entity and relation, write a tail entity that completes the triple: [head entity], [relation]tail entitypredictionsalviniasalviniarefrigeratorwhite goodsIs this true: [head] [relatin] [tail]? Please choose your answer from: ''Yes, this is true'' or ''No, this is not true''.tripleclassificationNo, this is not true.Yes, this is true.Yes, this is true.Yes, this is true.What is the relation between [head entity] and [tail entity]? Please choose your answer from: [relation list].relationpredictionsynset dom-ain topic ofmember meronyminstance hypernymsynset dom-ain topic ofPrompts with Instruciton and Input Fields Task NameLLaMAKaLMLLaMAKaLMGenerations for Triple 1: <salviniaceae, member meronym, salvinia>Generations for Triple 2: <refrigerator, hypernym, white goods> Limitations There are several future directions to improve this work. Firstly, due to the limitation of computational resources, we used the limited-scale LLMs to train and evaluate our method. Evaluations on larger- scale LLMs, such as the 13B and 70B models, can further validate the effectiveness of our approach. Secondly, we use a simple linear combination of ex- plicit alignment loss and implicit alignment loss as the final training objective for KaLM. Further inves- tigations into various forms of loss combinations remain to be explored to maximize the utility of knowledge-aligned language modeling. Finally, we can delve into the performance of the knowledge representations obtained from knowledge-aligned language modeling in cross-domain applications such as retrieval-augmented generation, to gain broader insights into the generalization capabilities of the proposed approach. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin John- son, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403. Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. Advances in neural information pro- cessing systems, 26. Chen Chen, Yufei Wang, Bing Li, and Kwok-Yan Lam. 2022. Knowledge is flat: A seq2seq generative frame- work for various knowledge graph completion. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4005–4017. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In In- ternational conference on machine learning, pages 1597–1607. PMLR. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. 
Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65.

Zhangyin Feng, Weitao Ma, Weijiang Yu, Lei Huang, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. 2023. Trends in integration of knowledge and large language models: A survey and taxonomy of methods, benchmarks, and applications. arXiv preprint arXiv:2311.05876.

Peng Fu, Yiming Zhang, Haobo Wang, Weikang Qiu, and Junbo Zhao. 2023. Revisiting the knowledge injection frameworks. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10983–10997.

Beliz Gunel, Jingfei Du, Alexis Conneau, and Ves Stoyanov. 2020. Supervised contrastive learning for pre-trained language model fine-tuning. arXiv preprint arXiv:2011.01403.

Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2021. Towards a unified view of parameter-efficient transfer learning. arXiv preprint arXiv:2110.04366.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.

Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, and Ji-Rong Wen. 2023. StructGPT: A general framework for large language model to reason over structured data. arXiv preprint arXiv:2305.09645.

Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119–9130.

Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2022. Pretrained language models for text generation: A survey. arXiv preprint arXiv:2201.05273.

Song Liu, Haoqi Fan, Shengsheng Qian, Yiru Chen, Wenkui Ding, and Zhongyuan Wang. 2021. HiT: Hierarchical transformer with momentum contrast for video-text retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11915–11925.

Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, and Jimmy Lin. 2023. Fine-tuning LLaMA for multi-stage text retrieval. arXiv preprint arXiv:2310.08319.
Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929–9939. PMLR. Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2021b. Kepler: A unified model for knowledge embedding and pre-trained language representation. Transac- tions of the Association for Computational Linguis- tics, 9:176–194. Xintao Wang, Qianyu He, Jiaqing Liang, and Yanghua Xiao. 2022b. Language models as knowledge em- beddings. arXiv preprint arXiv:2206.12617. Bishan Yang, Scott Wen-tau Yih, Xiaodong He, Jian- feng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the International Confer- ence on Learning Representations (ICLR) 2015. Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Kg- bert: Bert for knowledge graph completion. arXiv preprint arXiv:1909.03193. Liang Yao, Jiazhen Peng, Chengsheng Mao, and Yuan Luo. 2023. Exploring large language mod- els for knowledge graph completion. arXiv preprint arXiv:2308.13916. Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D Manning, Percy S Liang, and Jure Leskovec. 2022. Deep bidirectional language-knowledge graph pretraining. Advances in Neural Information Processing Systems, 35:37309– 37323. Yichi Zhang, Zhuo Chen, Wen Zhang, and Huajun Chen. 2023. Making large language models perform bet- ter in knowledge graph completion. arXiv preprint arXiv:2310.06671. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture mod- els. In International Conference on Learning Repre- sentations. Niklas Muennighoff. 2022. Sgpt: Gpt sentence embeddings for semantic search. arXiv preprint arXiv:2202.08904. Jianhao Shen, Chenguang Wang, Linyuan Gong, and Dawn Song. 2022. Joint language semantic and struc- ture embedding for knowledge graph completion. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1965–1978. Dan Su, Yan Xu, Genta Indra Winata, Peng Xu, Hyeondey Kim, Zihan Liu, and Pascale Fung. 2019. Generalizing question answering system with pre- trained language model fine-tuning. In Proceedings of the 2nd Workshop on Machine Reading for Ques- tion Answering, pages 203–211. Jianlin Su, Jiarun Cao, Weijie Liu, and Yangyiwen Ou. 2021. Whitening sentence representations for bet- ter semantics and faster retrieval. arXiv preprint arXiv:2103.15316. Yixuan Su, Tian Lan, Yan Wang, Dani Yogatama, Ling- peng Kong, and Nigel Collier. 2022. A contrastive framework for neural text generation. Advances in Neural Information Processing Systems, 35:21548– 21561. Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Heung-Yeung Shum, and Jian Guo. 2023. Think-on-graph: Deep and responsible reasoning of large language model with knowledge graph. arXiv preprint arXiv:2307.07697. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2018. Rotate: Knowledge graph embedding by relational rotation in complex space. In International Conference on Learning Representations. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. 
A More Detailed Review of Related Work

This work focuses on fine-tuning autoregressive LLMs to align with KG knowledge. Our work intersects with the following research areas: Knowledge Enhancement for LLMs, Knowledge Graph Completion, Contrastive Representation Learning, and Representation Anisotropy of Language Models.

A.1 Knowledge Enhancement for LLMs

Knowledge enhancement aims to incorporate factual and domain-specific knowledge into LLMs to address their knowledge deficiencies. This can be divided into retrieval-based knowledge augmentation and training-based knowledge integration. Retrieval-based knowledge augmentation methods leverage external retrieval modules to provide additional knowledge, aiming to improve the knowledge reasoning capability of LLMs (Sun et al., 2023; Jiang et al., 2023). However, this approach may lead to knowledge conflicts (Feng et al., 2023), where the knowledge in LLMs and the knowledge in the retrieved documents are inconsistent or the retrieved multiple documents are contradictory. Training-based knowledge integration methods involve using the textual descriptions of KG triples to pre-train or fine-tune LLMs, aiming to achieve knowledge alignment. These methods can be categorized into explicit alignment (Wang et al., 2021b; Yasunaga et al., 2022) and implicit alignment (Yao et al., 2023; Zhang et al., 2023) based on whether they directly optimize the knowledge representation. Nevertheless, these methods have either sacrificed the generative capability or lacked effective representation alignment. Our approach enhances the knowledge of LLMs via a unique joint objective of explicit alignment and implicit alignment, improving the quality of knowledge representations and generative knowledge reasoning capabilities.

A.2 Knowledge Graph Completion

Knowledge graph completion (KGC) refers to inferring missing triples from an incomplete KG, which can be used to evaluate the knowledge reasoning ability and knowledge representation quality of LLMs. Existing KGC methods can be categorized into structure-based and description-based. Structure-based methods represent entities and relations as fixed-dimensional vector embeddings and use scoring functions to assess the plausibility of triples (Bordes et al., 2013; Sun et al., 2019). Description-based methods further incorporate the textual descriptions of KG triples and leverage pre-trained language models to learn knowledge representations of entities and relations (Yao et al., 2019; Shen et al., 2022; Wang et al., 2022b). However, structure-based methods fail to generalize to unseen entities and relations, while description-based methods lack interpretability and exhibit lower efficiency when dealing with extremely large KGs.
The goal is to learn representations that capture shared information between positive pairs while remaining invariant to perturbing noise. The commonly used contrastive learning objectives share a standardized design involving a softmax function over cosine similarity of paired features, with a temperature parameter to control the penalty strength on hard negative samples. Wang and Isola (2020) propose understanding contrastive learning through the lens of alignment and uniformity on the hypersphere. Wang and Liu (2021) show that tem- perature in the contrastive loss controls the strength of penalties over negative samples. A.4 Representation Anisotropy of Language Models PLMs have long been plagued by representation anisotropy (Ethayarajh, 2019), where the learned token and sentence representations are confined to a narrow cone within the entire representation space. The issue of representation anisotropy not only re- sults in model degradation (Su et al., 2022) but also leads to poor performance on discriminative tasks (Muennighoff, 2022). Previous work on alleviat- ing representation anisotropy has mainly focused on post-processing techniques such as normalizing flows (Li et al., 2020) or whitening operations (Su et al., 2021) to obtain isotropic representations. Su et al. (2022) propose a contrastive training objective to encourage learning isotropic token representa- tions. However, these methods mainly improve the isotropy of token representations without enhanc- ing the discriminability of sentence representations. Our method improves the token-level and sentence- level representation anisotropy of LLMs through dual-view knowledge graph contrastive learning, and it has rigorous theoretical guarantees. 11 B Proofs for Theoretical Analysis In this section, we present proofs for theorems in Sections 4.1 and 4.2 of the main paper. B.1 Proof of Theorem 1 in Section 4.1 Recall the reformulated dual-view knowledge graph contrastive learning objective (Equation 5): Lexp(f ; τ, N ) ≜ E (Dhr,Dt)∼ppos {Dt i}N ′ i=1 i.i.d. ∼ pdata  − log     ef (Dhr)⊤f (Dt)/τ + ef (Dhr)⊤f (Dt ef (Dhr)⊤f (Dt)/τ N (cid:80) i=1      . ′ i)/τ From the symmetry of p, we can derive: Lexp(f ; τ, N ) = (cid:104) E (Dhr,Dt)∼ppos −f (Dhr)⊤f (Dt)/τ (cid:105) + E (Dhr,Dt)∼ppos {Dt i}N ′ i=1 i.i.d. ∼ pdata (cid:34) log (cid:32) ef (Dhr)⊤f (Dt)/τ + ef (Dt i)⊤f (Dt)/τ ′ (cid:33)(cid:35) . N (cid:88) i=1 Note that we can have the following limits almost surely by the strong law of large numbers (SLLN):      lim N →∞ log ef (Dhr)⊤f (Dt)/τ N + N (cid:80) i=1 ef (Dt i)⊤f (Dt)/τ ′ N      = log E i ∼pdata D− f (D− i )⊤f (Di)/τ. Then we can derive the following limits:  + E     lim N →∞ log      ef (Dhr)⊤f (Dt)/τ N + N (cid:80) i=1 ef (Dt i)⊤f (Dt)/τ ′ N           = − 1 τ E (Dhr,Dt)∼ppos (cid:105) (cid:104) f (Dhr)⊤f (Dt) (cid:34) + E Di∼pdata log E i ∼pdata D− (cid:104) ef (D− i )⊤f (Di)/τ (cid:105) (cid:35) . We now finish the proof of Theorem 1. Lexp(f ; τ, N ) − log N = lim N →∞ 1 τ − E (Dhr,Dt)∼ppos (cid:34) (cid:104) f (Dhr)⊤f (Dt) (cid:105) + E Di∼pdata log E i ∼pdata D− (cid:104) ef (D− i )⊤f (Di)/τ (cid:105) (cid:35) . B.2 Proof of Theorem 2 in Section 4.2 Recall the asymptotics of the explicit knowledge alignment objective when the number of negative samples approaches infinity (Equation 6): lim N →∞ Lexp(f ; τ, N ) − log N = − 1 τ E (Dhr,Dt)∼ppos (cid:104) f (Dhr)⊤f (Dt) (cid:105) (cid:34) + E Di∼pdata log E i ∼pdata D− (cid:104) ef (D− i )⊤f (Di)/τ (cid:105) (cid:35) . 
B.2 Proof of Theorem 2 in Section 4.2

Recall the asymptotics of the explicit knowledge alignment objective when the number of negative samples approaches infinity (Equation 6):

lim_{N→∞} Lexp(f; τ, N) − log N = −(1/τ) E_{(Dhr,Dt)∼ppos} [ f(Dhr)⊤f(Dt) ] + E_{Di∼pdata} [ log E_{D−i∼pdata} [ exp(f(D−i)⊤f(Di)/τ) ] ].

Recall the definition of the sentence-level anisotropy value of the corpus {Di}_{i=1}^N (Equation 7):

anisotropy{D} = (1/(N(N−1))) Σ_{i=1}^N Σ_{j=1, j≠i}^N ei⊤ej.

When pdata is uniform over the finite samples {Di}_{i=1}^N, we can derive the inequality below from the second term of Equation 6 with Jensen's inequality (applied to the convex exponential, so that (1/N) Σ_j exp(xj) ≥ exp((1/N) Σ_j xj)):

E_{Di∼pdata} [ log E_{D−i∼pdata} [ exp(f(D−i)⊤f(Di)/τ) ] ]
= (1/N) Σ_{i=1}^N log ( (1/N) Σ_{j=1}^N exp(ei⊤ej/τ) )
≥ (1/(τN²)) Σ_{i=1}^N Σ_{j=1}^N ei⊤ej
= (1/(τN²)) ( Σ_{i=1}^N Σ_{j=1, j≠i}^N ei⊤ej + N )
= ((N−1)/(τN)) · anisotropy{D} + 1/(τN),

where the last equality uses ei⊤ei = 1 for L2-normalized embeddings. We now finish the proof of Theorem 2:

E_{Di∼pdata} [ log E_{D−i∼pdata} [ exp(f(D−i)⊤f(Di)/τ) ] ] ≥ ((N−1)/(τN)) · anisotropy{D} + 1/(τN).

C Further Details about Implementation and Experimental Setup

C.1 Dataset Details

WN18RR and FB15k-237 are commonly used KGs derived from WordNet and Freebase, respectively (Bordes et al., 2013). They have been carefully constructed to prevent test set leakage by removing inverse relations. We use these datasets for training and evaluation. The statistics are shown in Table 2.

Table 2: Statistics of the datasets.

Dataset   | #Entity | #Relation | #Train  | #Valid | #Test
WN18RR    | 40,943  | 11        | 86,835  | 3,034  | 3,134
FB15k-237 | 14,541  | 237       | 272,115 | 17,535 | 20,466

C.2 KaLM Implementation Details

We initially choose Llama-2-7B as the base LLM and fine-tune it through the training objective in Equation 4. We use varying batch sizes for explicit knowledge alignment and implicit knowledge alignment. For WN18RR, we use a batch size of 24 for explicit alignment and 4 for implicit alignment. For FB15k-237, the batch sizes are 40 for explicit alignment and 6 for implicit alignment. To save computing resources for parameter-efficient fine-tuning, we use the LoRA (Hu et al., 2021) method to fine-tune the [“gate_proj”, “up_proj”, “down_proj”] modules in the feed-forward network of the Llama-2-7B model. We conducted all training on an NVIDIA 4090×8 GPU server. The hyperparameters utilized for training KaLM (based on Llama-2-7B) are enumerated in Table 3.

Table 3: Hyper-parameters for training KaLM.

Hyper-parameters              | WN18RR       | FB15k-237
epochs                        | 20           | 15
max-description-length        | 50           | 50
max-language-modeling-length  | 256          | 256
explicit-alignment-batch-size | 24           | 40
implicit-alignment-batch-size | 4            | 6
lora-module                   | ffn          | ffn
lora-alpha                    | 16.0         | 16.0
lora-dropout                  | 0.05         | 0.05
lora-rank                     | 8            | 8
bnb-config                    | load-in-8bit | load-in-8bit
learning-rate                 | 1e-4         | 1e-4
LR-scheduler-type             | cosine       | cosine
weight-decay                  | 0.001        | 0.001
gradient-checkpointing        | True         | True
optimizer                     | AdamW        | AdamW
AdamW-beta1                   | 0.9          | 0.9
AdamW-beta2                   | 0.999        | 0.999
bf16                          | True         | True

We also implemented KaLM based on other LLMs to demonstrate the generalizability of our approach, including Llama-3-8B, Mistral-7B-v0.1, OPT-6.7B, Pythia-6.9B, and Pythia-2.8B. It is important to note that the feed-forward network layers in the Pythia models are named [“dense_h_to_4h”, “dense_4h_to_h”], while in the OPT model they are named [“fc1”, “fc2”]. This differs from the feed-forward network layers in the Llama and Mistral model series.
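A sketch of the corresponding parameter-efficient setup with the Hugging Face peft library is shown below; the call site and quantization handling are assumptions, and only the LoRA hyperparameters and target modules are taken from Table 3.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", load_in_8bit=True)  # bnb-config: load-in-8bit

lora_config = LoraConfig(
    r=8,                # lora-rank
    lora_alpha=16,      # lora-alpha
    lora_dropout=0.05,  # lora-dropout
    # Adapt only the feed-forward network; for OPT use ["fc1", "fc2"] and for
    # Pythia use ["dense_h_to_4h", "dense_4h_to_h"] instead.
    target_modules=["gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```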
The parameters used in these experiments are shown in Table 4 (only the differing parameters are listed; the unlisted parameters remain consistent with Table 3).

Table 4: Additional hyper-parameters for training KaLM with different LLMs.

Model                 epochs   explicit-batch-size   implicit-batch-size   bnb-config
Llama-3-8B-WN         20       18                    3                     load-in-8bit
Llama-3-8B-FB         15       36                    5                     load-in-8bit
Mistral-7B-v0.1-WN    20       40                    5                     load-in-4bit
Mistral-7B-v0.1-FB    15       72                    8                     load-in-4bit
OPT-6.7B-WN           20       24                    3                     load-in-8bit
OPT-6.7B-FB           15       40                    6                     load-in-8bit
Pythia-6.9B-WN        20       24                    4                     load-in-8bit
Pythia-6.9B-FB        15       42                    6                     load-in-8bit
Pythia-2.8B-WN        20       48                    8                     load-in-8bit
Pythia-2.8B-FB        15       96                    10                    load-in-8bit

For the cosine similarity matrix composed of head entity-relation embeddings (row direction) and tail entity embeddings (column direction), we calculate the cross-entropy loss in the row direction (i.e., a head entity-relation embedding matching different tail entity embeddings) and the column direction (i.e., a tail entity embedding matching different head entity-relation embeddings) separately. We then take the average of the two losses to obtain the final InfoNCE loss (a short illustrative sketch of this bidirectional loss appears just after Appendix D.1 below). Similar to Equation 1, the column-direction loss is defined as follows:

$$\ell_{c}=-\log\frac{e^{(\phi(e_t,\,e_{hr})-\gamma)/\tau}}{e^{(\phi(e_t,\,e_{hr})-\gamma)/\tau}+\sum_{j=1}^{N}e^{\phi(e_t,\,e'_{hr_j})/\tau}}.$$

C.3 More Details about Evaluations

For the embedding-based KGC task, we report five automated metrics: Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hit@k (k ∈ {1, 3, 10}). MR is the mean rank of all test triples, and MRR denotes the average reciprocal rank of all test triples. Hit@k measures the proportion of entities correctly ranked in the top k. Following previous work, our method is evaluated under the filtering setting (Bordes et al., 2013), where the scores of all true triples in the training, validation, and testing sets are ignored. All results are averaged over the tail direction (a <head entity-relation> embedding matching different tail entity embeddings, i.e., tail entity prediction) and the head direction (a <tail entity-inverse relation> embedding matching different head entity embeddings, i.e., head entity prediction).

For the generation-based KGQA task, we report the prediction accuracy over head entities, tail entities, relations, and relation classifications. To better prompt LLMs for the knowledge graph question-answering task, we selected several triples from the validation set and constructed few-shot examples using the corresponding templates from Table 5.

D Additional Experimental Results

D.1 More Experiments on Knowledge Representation Assessment

In Table 5, we present additional knowledge representation results (the embedding-based KGC task) to demonstrate the effectiveness of KaLM in knowledge alignment. The best and second-best experimental results are indicated by bold and underlined text, respectively. Overall, the proposed method achieved excellent performance on the embedding-based KGC task, delivering impressive results on the MR and Hit@10 metrics while remaining highly competitive on the other metrics.

The experimental results based on LLMs of different sources and scales demonstrate the effectiveness and generalizability of our proposed method. Under similar experimental settings, more powerful LLMs (such as Llama-3-8B and Mistral-7B) achieved better metrics after being fine-tuned with KaLM, which also demonstrates the scalability of our method.
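As flagged in Appendix C.2 above, here is a minimal PyTorch sketch of the bidirectional (row- and column-direction) InfoNCE loss with in-batch negatives. It is an illustration consistent with the description and with the column-direction equation, not the authors' code; the additive margin gamma is applied only to the positive logits on the diagonal.

```python
import torch
import torch.nn.functional as F

def bidirectional_info_nce(hr, t, tau=0.05, gamma=0.0):
    """hr: (B, d) head-relation embeddings; t: (B, d) tail embeddings.
    Both are assumed L2-normalized, so hr @ t.T is a cosine-similarity matrix
    whose diagonal holds the positive pairs."""
    sim = hr @ t.T
    sim = sim - gamma * torch.eye(sim.size(0), device=sim.device)  # margin on positives
    logits = sim / tau
    labels = torch.arange(sim.size(0), device=sim.device)
    loss_row = F.cross_entropy(logits, labels)       # each hr matches which tail?
    loss_col = F.cross_entropy(logits.t(), labels)   # each tail matches which hr?
    return 0.5 * (loss_row + loss_col)
```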
It is worth noting that for LLMs of the same origin but different scales (Pythia-6.9B and Pythia-2.8B), the smaller-scale Pythia-2.8B benefited from a larger training batch size during fine-tuning. As a result, its final experimental metrics matched or even surpassed those of the more powerful Pythia-6.9B model. This also highlights the importance of large batch sizes for the embedding-based KGC task, suggesting that using more powerful computing resources and larger GPU memory could further enhance the effectiveness of the proposed KaLM method.

Table 5: More embedding-based KGC results with various LLMs on WN18RR and FB15k-237.

                               WN18RR                                FB15k-237
Method                 MR     MRR    H@1    H@3    H@10      MR     MRR    H@1    H@3    H@10
structure-based methods
TransE               2300   0.243  0.043  0.441  0.532      323   0.279  0.198  0.376  0.441
DistMult             7000   0.444  0.412  0.470  0.504      512   0.281  0.199  0.301  0.446
RotatE               3340   0.476  0.428  0.492  0.571      177   0.338  0.241  0.375  0.533
description-based methods (autoencoder PLMs)
KG-BERT                97   0.216  0.041  0.302  0.524      153       -      -      -  0.420
StAR                   51   0.401  0.243  0.491  0.709      117   0.296  0.205  0.322  0.482
C-LMKE                 72   0.598  0.480  0.675  0.806      183   0.404  0.324  0.439  0.556
SimKGC                  -   0.671  0.587  0.731  0.817        -   0.333  0.246  0.362  0.510
description-based methods (autoregressive LLMs)
Llama-2-7B          15969   0.010  0.004  0.010  0.020     5359   0.006  0.002  0.004  0.012
Llama2-7B-KaLM         19   0.556  0.409  0.656  0.851      114   0.299  0.204  0.325  0.502
Llama3-8B-KaLM         23   0.588  0.446  0.676  0.860      121   0.308  0.212  0.337  0.509
Mistral-7B-KaLM        20   0.612  0.484  0.702  0.869      116   0.317  0.225  0.351  0.518
OPT-6.7B-KaLM          24   0.514  0.397  0.603  0.822      126   0.288  0.199  0.312  0.486
Pythia-6.9B-KaLM       28   0.508  0.394  0.598  0.818      130   0.289  0.199  0.310  0.484
Pythia-2.8B-KaLM       30   0.539  0.398  0.644  0.829      133   0.292  0.205  0.318  0.489

D.2 More Experiments on Knowledge Inference Evaluation

In this section, we provide more experimental results to show the effectiveness of our method. In Figure 6, we present additional knowledge inference results (generation-based KGQA) to demonstrate the effectiveness of KaLM in knowledge alignment. This section demonstrates the performance of various powerful LLMs (including Llama-2-7B, Llama-3-8B, and Mistral-7B) before and after fine-tuning with KaLM, across various knowledge graph question-answering tasks (including head entity prediction, tail entity prediction, relation prediction, and triple classification).

The experimental results can be divided into three groups by color: the green, blue, and red series correspond to the KGQA results of Llama-2-7B, Llama-3-8B, and Mistral-7B before and after training, respectively. It can be observed that after fine-tuning with KaLM, all three LLMs achieved consistent improvements in prediction accuracy on the question-answering tasks.

At the KGQA task level, the most significant overall improvements were observed in tail entity prediction (an average increase of 14.1%) and triple classification (an average increase of 12.7%), followed by relation prediction (an average increase of 8.6%) and head entity prediction (an average increase of 6.9%). At the LLM level, the most exciting improvements were seen in Llama-3-8B (an average increase of 11.1%) and Mistral-7B (an average increase of 10.8%), while Llama-2-7B showed relatively smaller gains (an average increase of 9.6%). This suggests that our method demonstrates better scalability with more powerful LLMs.
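As a companion to Table 5, the filtered ranking protocol of Appendix C.3 that underlies the MR/MRR/Hit@k numbers can be sketched as follows. This is an illustrative implementation, not the paper's evaluation script.

```python
import numpy as np

def filtered_rank_metrics(scores, gold, known_true, ks=(1, 3, 10)):
    """scores: (num_entities,) model scores for one query (h, r, ?);
    gold: index of the gold entity;
    known_true: indices of *other* entities forming true triples with (h, r),
    which the filtered setting excludes from the candidate list."""
    s = scores.astype(float).copy()
    s[list(known_true)] = -np.inf          # filter out the other true triples
    rank = 1 + int(np.sum(s > s[gold]))    # competition rank of the gold entity
    out = {"MR": rank, "MRR": 1.0 / rank}
    for k in ks:
        out[f"Hit@{k}"] = float(rank <= k)
    return out

# Per-query metric dicts are then averaged over the tail and head directions.
```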
D.3 More Visualizations on Knowledge Representation Matrix

From this section onward, unless stated otherwise, KaLM refers to the model checkpoint trained on Llama-2-7B using our method. We present more knowledge representation results to demonstrate the effectiveness of KaLM in knowledge alignment. Figure 7 displays the sentence similarity matrix of several similar entity descriptions from the WN18RR dataset. Detailed information about entity names and descriptions can be found in Figure 8. It is evident that KaLM obtains more distinguishable knowledge representations, where the similarity between related entities (diagonal elements) is high, while the similarity between unrelated entities (off-diagonal elements) is low.

D.4 Detailed analysis of Representation Anisotropy

We further analyze the sentence-level representation anisotropy on the Wikitext-103 test set using model checkpoints trained on the WN18RR dataset. The sentence-level anisotropy value for a given corpus $\{D_i\}_{i=1}^{N}$ is defined in Equation 7, where a lower anisotropy value indicates better discriminative characteristics of sentence representations.

Figure 9 plots the anisotropy value over different layers for LLaMA and KaLM. We can observe that the anisotropy value of LLaMA consistently remains at a relatively high level, suggesting that the base LLM suffers from severe representation anisotropy issues. In contrast, our proposed KaLM notably mitigates this issue: the anisotropy values decrease gradually as the depth of the model increases, dropping significantly from 0.5 to 0.2 at the output layer. The anisotropy values of the last layer for LLaMA and KaLM show that, after training with our method, the sentence-level anisotropy value decreased significantly from 0.83 to 0.21. The results indicate that our method can effectively reduce the anisotropy of representations across layers in LLMs, resulting in a significant improvement in knowledge representation.

Figure 10 analyzes the changes in anisotropy values during the model training process. The results show that the anisotropy values decrease rapidly after a few epochs of training and eventually stabilize at a low level. We assume that the initial epochs of training complete the preliminary alignment of knowledge representations, while the subsequent training epochs mainly focus on integrating explicit and implicit representations.

E Ablation Studies

In this section, we present concrete ablation studies to analyze the effectiveness of each component of our approach. We ablate the settings that led to the final design, including training objectives, fine-tuning modules, and training epochs. It is important to note that the results of the ablation experiments in this section were obtained from earlier runs on a server with 4 NVIDIA 3090 GPUs, which may lead to slight differences compared with the full KGC results presented in the main text.

E.1 The necessity of the implicit knowledge alignment objective (Equation 3)

In Table 6, we train the model using different loss weights (i.e., the λ parameter in Equation 4) and analyze its performance on the KGC task. Note that this experiment is conducted solely for ablation analysis, so only 10 training epochs are used.
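To make the ablated quantity concrete, a minimal sketch of the λ-weighted joint objective of Equation 4 is shown below. The helper explicit_contrastive_loss stands in for the dual-view contrastive loss (for instance the bidirectional InfoNCE sketched earlier) and is an assumed name, not the paper's API.

```python
def kalm_training_loss(model, kg_batch, text_batch, lam=0.1):
    """Joint objective of Equation 4: explicit KG alignment + lam * implicit LM loss.
    lam = 0.1 is the best-performing weight in Table 6; lam = 0 disables the
    implicit objective and, per the ablation, degrades generation quality."""
    loss_exp = explicit_contrastive_loss(model, kg_batch)   # assumed helper
    # Implicit alignment is plain causal language modeling on knowledge text.
    out = model(input_ids=text_batch, labels=text_batch)
    return loss_exp + lam * out.loss
```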
Experimental results reveal that incorporating the implicit knowledge alignment objective (i.e., λ > 0) generally leads to better performance in KGC, indicating further improvement in knowledge representation. The best performance in KGC is achieved when λ = 0.1. The results confirm that both explicit alignment and implicit alignment are crucial for knowledge alignment, as they both essentially require a deep understanding of knowledge.

The implicit knowledge alignment objective focuses on incorporating textual patterns of knowledge into the LLM to prevent catastrophic forgetting of previous knowledge and maintain its generative capability. We also conducted additional perplexity (PPL) evaluation experiments to illustrate the impact of the implicit knowledge alignment loss. The additional results show that for the corresponding λ = 0, 0.01, 0.1, 1.0 in Table 6, the model's PPL values are 6.42, 4.96, 4.97, and 4.98, respectively. Therefore, we can conclude that incorporating the implicit alignment loss maintains the model's language modeling capability, whereas not using the implicit alignment loss significantly impairs the model's generative ability.

Table 6: KGC results with different λ in Equation 4 (WN18RR).

Method             MR     MRR    H@1    H@3    H@10   PPL
KaLM (λ = 0)       21.2   0.512  0.355  0.611  0.815  6.42
KaLM (λ = 0.01)    19.8   0.510  0.352  0.604  0.818  4.96
KaLM (λ = 0.1)     20.1   0.517  0.359  0.615  0.825  4.98
KaLM (λ = 1.0)     21.6   0.500  0.336  0.596  0.806  4.98

Figure 6: Comparison of generative knowledge inference performance between Base LLMs and their fine-tuned KaLM versions, best viewed in three color groups. The symbol ↑ means higher is better and ↓ means lower is better. The bar values recovered from the chart (prediction accuracy, %) are:

Model           head pred   tail pred   relation pred   triple cls
Llama-2-7B           7.8        11.6             3.7         55.9
Llama-2-KaLM        16.2        28.5            12.1         61.6
Llama-3-8B          11.9        14.5             3.1         53.6
Llama-3-KaLM        17.2        28.1            12.8         69.4
Mistral-7B          11.6        17.9            29.0         49.3
Mistral-KaLM        18.6        29.8            36.7         65.8
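The PPL column reported in Tables 6-8 can be reproduced with a routine along the following lines. This is a hypothetical sketch of standard perplexity evaluation, not the paper's script; the batching, maximum length, and corpus handling are simplifying assumptions.

```python
import math
import torch

@torch.no_grad()
def perplexity(model, tokenizer, texts, max_length=256, device="cuda"):
    """Token-weighted perplexity of a causal LM over a list of raw texts."""
    model.eval()
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        ids = tokenizer(text, return_tensors="pt", truncation=True,
                        max_length=max_length).input_ids.to(device)
        if ids.size(1) < 2:
            continue
        loss = model(ids, labels=ids).loss    # mean NLL over the shifted tokens
        n = ids.size(1) - 1                   # number of predicted tokens
        total_nll += loss.item() * n
        total_tokens += n
    return math.exp(total_nll / total_tokens)
```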
E.2 The effects of fine-tuning different LLM modules using LoRA

In Table 7, we fine-tune different modules of the model using the LoRA (Hu et al., 2021) method and analyze their performance on KGC tasks and PPL evaluations. Note that this experiment is conducted solely for ablation analysis, hence only 10 epochs of training were performed. "att" indicates fine-tuning only the attention module, "ffn" indicates fine-tuning only the feed-forward network, and "att-ffn" indicates fine-tuning both the attention module and the feed-forward network simultaneously.

The results show that fine-tuning with the "att-ffn" approach achieves the best KGC performance, but it also leads to higher PPL values, suggesting that the model's generation capability may be significantly compromised. Therefore, as a compromise, we choose the "ffn" fine-tuning approach, maintaining moderate knowledge representation performance while preserving the original generation capability.

These experimental results are consistent with the conclusions of He et al. (2021): the FFN learns local features and patterns within the input sequence, allowing it to directly capture task-specific text patterns, while attention provides the model with the ability to capture complex contextual relationships, which is key to LLMs' understanding and generation of natural language. Under the knowledge-aligned language modeling objective, we aim to align the internal knowledge representations of LLMs while preserving their inherent natural language generation capabilities. Therefore, directly fine-tuning the FFN layers can reduce resource consumption and maximize the effectiveness of KaLM fine-tuning.

Table 7: KGC results and PPL evaluation results when fine-tuning different network modules with LoRA (WN18RR).

Method             MR     MRR    H@1    H@3    H@10   PPL
KaLM (att)         21.9   0.475  0.331  0.580  0.784  5.03
KaLM (ffn)         20.1   0.517  0.359  0.615  0.825  4.96
KaLM (att-ffn)     19.5   0.525  0.371  0.619  0.831  5.07

E.3 The sustained gains and potential impacts of training for more epochs

In Table 8, we fine-tune the model using different numbers of training epochs and analyze the performance on KGC tasks. This experiment is mainly conducted to investigate whether additional training epochs can lead to further improvement in knowledge representations. The experimental results show that using more training epochs can continuously improve the performance of KaLM on the KGC task, resulting in higher MRR and Hit@k metrics. The model trained with our method consistently maintains an acceptable PPL value due to the implicit knowledge alignment objective. However, this also comes with more computational resource consumption and training time. As a result, we selected a moderate number of training epochs.

Table 8: KGC results with different training epochs (WN18RR).

Method             MR     MRR    H@1    H@3    H@10   PPL
KaLM (epoch=10)    20.1   0.517  0.359  0.615  0.825  4.96
KaLM (epoch=20)    19.6   0.554  0.402  0.650  0.848  4.98
KaLM (epoch=30)    21.9   0.576  0.427  0.673  0.854  5.00
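The anisotropy curves in Figures 9 and 10 (shown below) are computed from Equation 7. A minimal sketch of that computation over a batch of sentence embeddings is:

```python
import torch
import torch.nn.functional as F

def sentence_anisotropy(embeddings: torch.Tensor) -> torch.Tensor:
    """Equation 7: mean pairwise cosine similarity over all i != j.
    embeddings: (N, d) sentence representations, e.g. one batch of
    Wikitext-103 sentences encoded at a given layer."""
    e = F.normalize(embeddings, dim=-1)
    sim = e @ e.T
    n = e.size(0)
    return (sim.sum() - sim.diagonal().sum()) / (n * (n - 1))
```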
(a) LLaMA    (b) KaLM
Figure 7: Similarity matrix of selected similar entity descriptions from the WN18RR dataset. [Two heatmap panels; both axes list the ten entities unseeable, sound, same, untrusty, maintain, unperceivable, healthy, equal, unfaithful, sustain, with a color scale from -0.2 to 1.0.]

Figure 8: Selected entities and their corresponding textual descriptions.

Entity Name     Entity Description
unseeable       unseeable, impossible or nearly impossible to see; imperceptible by the eye; "the invisible man"; "invisible rays"; "an invisible hinge"; "invisible mending"
unperceivable   unperceivable, impossible or difficult to perceive by the mind or senses; "an imperceptible drop in temperature"; "an imperceptible nod"; "color is unperceivable to the touch"
sound           sound, financially secure and safe; "sound investments"; "a sound economy"
healthy         healthy, having or indicating good health in body or mind; free from infirmity or disease; "a rosy healthy baby"; "staying fit and healthy"
same            same, closely similar or comparable in kind or quality or quantity or degree; "curtains the same color as the walls"; "mother and son have the same blue eyes"
equal           equal, having the same quantity, value, or measure as another; "on equal terms"; "all men are equal before the law"
untrusty        untrusty, not worthy of trust or belief; "an untrustworthy person"
unfaithful      unfaithful, not true to duty or obligation or promises; "an unfaithful lover"
maintain        maintain, keep in a certain state, position, or activity; e.g., "keep clean"; "hold in place"; "She always held herself as a lady"; "The students keep me on my toes"
sustain         sustain, lengthen or extend in duration or space; "We sustained the diplomatic negotiations as long as possible"; "prolong the treatment of the patient"; "keep up the good work"

Figure 9: layer-wise analysis of anisotropy. The vertical axis represents the sentence-level representation anisotropy value on the Wikitext-103 test set, while the horizontal axis denotes the number of model layers. [Line plot; layers 0-32, anisotropy 0.2-1.0, curves for LLaMA and KaLM.]

Figure 10: epoch-wise analysis of anisotropy. The vertical axis represents the sentence-level representation anisotropy value on the Wikitext-103 test set, while the horizontal axis denotes the number of training epochs. [Line plot; epochs 0-20, anisotropy 0.2-0.8, curve for KaLM.]
ai_researcher
1
The_Role_of_Posttranslational_Modification_and_Mitochondrial_Quality_Control_in_Cardiovascular_Diseases.pdf
SCIENCE CHINA Life Sciences

SPECIAL TOPIC: Calcium signaling • REVIEW •

doi: 10.1007/s11427-016-5089-3

Mitochondrial Ca2+ uptake in skeletal muscle health and disease

Jingsong Zhou*, Kamal Dhakal & Jianxun Yi

Kansas City University of Medicine and Bioscience, Dybedal Research Center, Kansas City MO 64106, USA
*Corresponding author (email: [email protected])

Received May 16, 2015; accepted June 7, 2016

Muscle uses Ca2+ as a messenger to control contraction and relies on ATP to maintain the intracellular Ca2+ homeostasis. Mitochondria are the major sub-cellular organelle of ATP production. With a negative inner membrane potential, mitochondria take up Ca2+ from their surroundings, a process called mitochondrial Ca2+ uptake. Under physiological conditions, Ca2+ uptake into mitochondria promotes ATP production. Excessive uptake causes mitochondrial Ca2+ overload, which activates downstream adverse responses leading to cell dysfunction. Moreover, mitochondrial Ca2+ uptake could shape spatio-temporal patterns of intracellular Ca2+ signaling. Malfunction of mitochondrial Ca2+ uptake is implicated in muscle degeneration. Unlike non-excitable cells, mitochondria in muscle cells experience dramatic changes of intracellular Ca2+ levels. Besides the sudden elevation of Ca2+ level induced by action potentials, Ca2+ transients in muscle cells can be as short as a few milliseconds during a single twitch or as long as minutes during tetanic contraction, which raises the question whether mitochondrial Ca2+ uptake is fast and big enough to shape intracellular Ca2+ signaling during excitation-contraction coupling, and creates technical challenges for quantification of the dynamic changes of Ca2+ inside mitochondria. This review focuses on characterization of mitochondrial Ca2+ uptake in skeletal muscle and its role in muscle physiology and diseases.

skeletal muscle, mitochondria, Ca2+

Citation: Zhou, J., Dhakal, K., and Yi, J. (2016). Mitochondrial Ca2+ uptake in skeletal muscle health and disease. Sci China Life Sci. doi: 10.1007/s11427-016-5089-3

INTRODUCTION

ATP is the major currency of energy for sustaining life and is mostly produced in mitochondria. At the expense of other nutrient substrates and oxygen, mitochondria produce ATP that can be exchanged instantly whenever intracellular energy is required (Knowles, 1980). As described in the historical review by O'Rourke (O'Rourke, 2010), mitochondria, when initially discovered by Richard Altmann in 1890, were called "bioblasts", meaning "life germs". The word "mitochondria" was given by Carl Benda in 1898. For decades mitochondria were studied as the powerhouse of the cell, and soon it was realized that Ca2+ entry into mitochondria is required to stimulate the Krebs cycle and electron transport chain activity, resulting in enhanced ATP synthesis inside mitochondria (Balaban, 2002; Carafoli, 2014; Denton et al., 1980; Drago et al., 2011).

Ca2+ is fundamental to normal cellular function. Cells possess specialized mechanisms to ensure a tightly controlled intracellular Ca2+ level. These mechanisms involve complex interplay between intracellular Ca2+ storage, buffering, and Ca2+ influx and efflux through the plasma membrane. The mitochondrial matrix has the ability to sequester Ca2+ when free cytosolic Ca2+ rises above a set point (Nicholls, 2005). Thus, mitochondria are recognized as one of the sub-cellular organelles participating in regulation of the intracellular Ca2+ homeostasis.
Mitochondria are dynamic organelles that interact with the plasma membrane and the endoplasmic reticulum (ER) (Boncompagni et al., 2009; Eisner et al., 2013), and contribute to the recycling of Ca2+ back to the vicinal ER (Arnaudeau et al., 2001; Frieden et al., 2005). While intracellular Ca2+ signaling controls mitochondrial motility, distribution and function (Yi et al., 2004), reciprocally, mitochondria also modulate spatial and temporal intracellular Ca2+ levels.

Skeletal muscle contraction needs both Ca2+ and ATP. Thus, muscle physiology largely depends on two intracellular organelles: the sarcoplasmic reticulum (SR) for Ca2+ storage and release (Franzini-Armstrong and Jorgensen, 1994), and mitochondria for ATP synthesis (Russell et al., 2014). In non-muscle cells, the functional and physical coupling between the ER and mitochondria is attributed to inter-organelle tether proteins called mitofusins at the juxtaposition between the ER and mitochondria (de Brito and Scorrano, 2008). This type of structure was also found in skeletal muscle cells, in which a tether-like protein connects the SR and mitochondria (Boncompagni et al., 2009; Pietrangelo et al., 2015). These pivotal findings have heightened the role of mitochondria as a key player in the dynamic regulation of physiological Ca2+ signaling in skeletal muscle. Although mitochondrial structure and function are believed to be similar among all cell types, the way mitochondrial Ca2+ uptake regulates intracellular Ca2+ signaling has specific features in skeletal muscle. Mitochondria in muscle cells face rapid changes of intracellular Ca2+ levels during contraction. Whether mitochondrial Ca2+ uptake modifies Ca2+ signaling during excitation-contraction coupling has been a fundamental question in muscle physiology (O'Rourke and Blatter, 2009; Rossi et al., 2009). In order to answer this fundamental question, effort has been made to evaluate mitochondrial Ca2+ uptake in skeletal muscle under various physiological conditions. Characterization of mitochondrial Ca2+ uptake is a key step to understanding the role of mitochondria in muscle physiology and diseases. This review focuses on characterization of mitochondrial Ca2+ uptake in skeletal muscle and its significance in skeletal muscle physiology and diseases.

MITOCHONDRIAL Ca2+ UPTAKE REGULATES ENERGY PRODUCTION IN SKELETAL MUSCLE

Ca2+ is a critical messenger not only for muscle contraction, but also for promoting mitochondrial ATP production. In mammalian cells, Ca2+ is a key regulator of ATP production (Griffiths and Rutter, 2009). Four important mitochondrial dehydrogenases involved in the direct supply of NADH (reduced nicotinamide adenine dinucleotide) and FADH (reduced flavin adenine dinucleotide) for ATP production were found to be regulated by Ca2+ inside mitochondria (Denton, 2009). A transient increase of free Ca2+ concentration is required to stimulate the electron transport chain (ETC) of mitochondria in cardiac cells (Gueguen et al., 2005; Territo et al., 2000). The role of mitochondrial Ca2+ uptake in cardiac muscle energy metabolism has been widely studied (Balaban, 2002; Brookes et al., 2004).

In skeletal muscle, ATP demand increases ~100 times during rapid muscle contraction. Such high demand of ATP cannot be fulfilled by the finite amount of ATP normally stored inside the skeletal muscle. Muscle contraction requires fast and sustained ATP production, which is fulfilled primarily by mitochondria (Porter and Wall, 2012). As such, skeletal muscle is known to be a tissue of high energy demand, with mitochondria occupying 10%-15% of the fiber volume and densely packed within muscle cells (Eisenberg, 1983). In skeletal muscle, mitochondria are located largely within the I-bands, surrounding the SR network (Eisenberg, 1983). Importantly, mitochondria are found to be linked to the SR in skeletal muscle by developmentally regulated tethering structures (Boncompagni et al., 2009; Pietrangelo et al., 2015). This intimate juxtaposition of the SR and mitochondria, together with the ability of mitochondria to take up Ca2+ from their surroundings, allows the movement of Ca2+ between these organellar systems (Bianchi et al., 2004; Csordas and Hajnoczky, 2009; Rizzuto and Pozzan, 2006; Santo-Domingo and Demaurex, 2010). These movements are believed to help tailor mitochondrial metabolism and ATP synthesis to the demand of muscle contraction. Early studies of intact skeletal muscle observed an increase in NADH/NAD+ during the transition from resting to working status, suggesting that an enhanced intracellular Ca2+ level promotes mitochondrial metabolism in skeletal muscle (Duboc et al., 1988; Kunz, 2001; Sahlin, 1985). Later, using isolated mitochondria derived from skeletal muscle, Kavanagh et al. confirmed that an elevation in mitochondrial Ca2+ was able to stimulate oxidative phosphorylation (Kavanagh et al., 2000). As discussed in the review article by Rossi et al., mitochondrial Ca2+ uptake should assist with stimulation of aerobic ATP production in order to balance the increased ATP consumption associated with cross-bridge cycling and SERCA-mediated Ca2+ sequestration during muscle contraction (Rossi et al., 2009).

EVALUATION OF MITOCHONDRIAL Ca2+ UPTAKE IN SKELETAL MUSCLE

In order to understand the role of mitochondrial Ca2+ uptake in skeletal muscle physiology, it is vital to evaluate the amount and the kinetics of mitochondrial Ca2+ uptake in skeletal muscle cells under physiological conditions. The early studies on mitochondrial Ca2+ uptake were performed on isolated mitochondria (Deluca and Engstrom, 1961; Mraz, 1962). These studies showed that isolated mitochondria from rat kidney were able to take up 60% of Ca2+ from the surrounding medium (Deluca and Engstrom, 1961). The kinetics of mitochondrial Ca2+ uptake was well documented in isolated mitochondria from the liver and heart (Carafoli and Crompton, 1978; McMillin-Wood et al., 1980). Sembrowich et al. were the first to explore the Ca2+ uptake by mitochondria derived from different types of skeletal muscle, both from rats and rabbits (Sembrowich et al., 1985). Using direct patch-clamp recording on the inner mitochondrial membrane, Fieni et al. recorded the mitochondrial Ca2+ uptake activity in mitoplasts isolated from mitochondria of different types of tissue including skeletal muscle (Fieni et al., 2012). These in vitro studies also suggested a potential influence of mitochondrial Ca2+ uptake on cytosolic Ca2+ signaling during muscle contraction. However, such a conclusion needs validation from in vivo studies. Specifically, it requires characterization of mitochondrial Ca2+ uptake in intact muscle cells under physiological conditions.

There are a few probes available to monitor Ca2+ fluxes into and out of mitochondria in live cells. The commercially available fluorescent dye rhod-2 has been widely used in investigating mitochondrial Ca2+ handling in cultured cells, because the acetoxymethyl (AM) ester of rhod-2 (Rhod-2-AM) preferentially targets mitochondria (see review (Pozzan and Rudolf, 2009)). Rhod-2 has been used to measure mitochondrial Ca2+ uptake in cultured skeletal muscle myotubes under electric stimulation (Eisner et al., 2010). The shortcoming is that Rhod-2 is not a ratiometric dye (Fonteriz et al., 2010). The uneven distribution of the dye among individual mitochondria can also cause problems for quantification of mitochondrial Ca2+ concentration changes based on fluorescence intensity (Lakin-Thomas and Brand, 1987). Rhod-2 has also been used to monitor mitochondrial Ca2+ uptake in intact skeletal muscle fibers following repeated tetanic stimulation (Ainbinder et al., 2015; Bruton et al., 2003). However, the specific targeting of Rhod-2-AM to mitochondria in intact muscle fibers was challenging. To avoid Rhod-2 signals from outside mitochondria, Shkryl and Shirokova recorded mitochondrial Ca2+ uptake during caffeine-induced Ca2+ release in permeabilized rat skeletal muscle fibers (Shkryl and Shirokova, 2006). In this case, cell membrane permeabilization of the muscle fibers allowed the non-targeted Rhod-2 dye to leak out of the cytosol. However, since muscle fibers with permeabilized membranes no longer respond to physiological stimulations (i.e., membrane depolarization), the condition employed in such a study is not suitable for quantitative and specific evaluation of mitochondrial Ca2+ uptake in intact skeletal muscle cells under physiological conditions. Due to these various limitations, quantitative measurement of mitochondrial Ca2+ uptake in skeletal muscle remains challenging.

GFP and other functionally similar fluorescent proteins have modernized research in cell biology (Tsien, 1998). Owing to mutations and variations in gene sequences, genetically encoded fluorescent proteins have been developed as Ca2+ biosensors with varying properties, including differences in fluorescence spectra, Ca2+ binding affinities and kinetics, as well as those that change spectral properties upon binding to calcium (Palmer et al., 2006). The rapid growth of molecular biology techniques also allows genetically encoded Ca2+ biosensors to be targeted to specific sub-cellular organelles such as mitochondria (Pozzan and Rudolf, 2009). Thus, organelle-targeted ratiometric Ca2+ biosensors have become a better choice for characterization of mitochondrial Ca2+ uptake in skeletal muscle under physiological conditions. Using a mitochondria-targeted biosensor (2mtYC2), Rudolf et al. demonstrated that a single twitch could cause measurable dynamic changes in mitochondrial Ca2+ levels in live skeletal muscle fibers. However, they also noted some limitations of 2mtYC2 for mitochondrial Ca2+ measurement in muscle cells; for instance, YC2 had a small dynamic range, with an increase of the emission ratio <26% in the cytosol and <14% in mitochondria during muscle contraction (Rudolf et al., 2004).
Subsequently, Palmer et al. developed a new version of mitochondria-targeted Ca2+ biosensor, 4mtD3cpv, which has a dynamic ratio range of 5.1 (Palmer et al., 2006). Upon testing 4mtD3cpv on live skeletal muscle fibers under voltage-clamp conditions, Zhou et al. found that while 4mtD3cpv showed a significant improvement in monitoring mitochondrial Ca2+ levels in live muscle fibers, with an increased dynamic ratio range, the kinetics of the detected signal set some limitations for quantitatively calculating the changes of the mitochondrial Ca2+ level (Zhou et al., 2008). As an alternative, YC3.6, another Ca2+ biosensor constructed by Nagai and colleagues (Nagai et al., 2004), with a dynamic ratio range of 5.6 and an apparent Kd of 0.25 μmol L−1, was later tested by Yi et al. in live skeletal muscle fibers (Yi et al., 2011). By introducing a mitochondrial targeting sequence (Wang et al., 2008) at the 5′-end of the YC3.6 cDNA, they developed a mitochondria-targeted Ca2+ biosensor, mt11-YC3.6. The highly specific mitochondrial expression of mt11-YC3.6 and the simple kinetics of the recorded YC3.6 ratio signal allowed quantitative evaluation of the dynamic changes of free Ca2+ levels inside the mitochondrial matrix of skeletal muscle fibers in response to a Ca2+ release transient induced by cell membrane depolarization under whole-cell voltage-clamp conditions. This study shows that at the peak of the voltage-induced Ca2+ release, mitochondrial Ca2+ uptake contributes around 10%-18% of the total Ca2+ removal, and the average mitochondrial Ca2+ influx is around 4.1±1.0 μmol L−1 ms−1 (Yi et al., 2011). This study represents the first quantitative characterization of mitochondrial Ca2+ uptake and its role in shaping the cytosolic Ca2+ signaling in skeletal muscle during excitation-contraction coupling.
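For ratiometric biosensors such as YC3.6, the recorded ratio signal R is conventionally converted to a free Ca2+ concentration with the standard ratiometric calibration. The sketch below is illustrative only: the Kd of 0.25 μmol L−1 comes from the text above, but Rmin, Rmax, and the Hill coefficient are placeholder values that must be calibrated for each preparation; this is not calibration code from any of the cited studies.

```python
def ratio_to_ca(R, Rmin=1.0, Rmax=5.6, Kd=0.25, n_hill=1.7):
    """Convert a cameleon ratio signal R to free [Ca2+] (umol/L) using
        [Ca2+] = Kd * ((R - Rmin) / (Rmax - R)) ** (1 / n_hill).
    Kd = 0.25 umol/L is the apparent Kd of YC3.6 quoted in the text;
    Rmin, Rmax, and n_hill here are illustrative placeholders."""
    if not (Rmin < R < Rmax):
        raise ValueError("R must lie strictly between Rmin and Rmax")
    return Kd * ((R - Rmin) / (Rmax - R)) ** (1.0 / n_hill)
```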
IMPAIRED SKELETAL MUSCLE MITOCHONDRIAL Ca2+ SIGNALING IN MUSCLE DISEASES

Mitochondrial Ca2+ uptake plays vital roles in the life and death of the cell. Impaired mitochondrial Ca2+ uptake is observed in various skeletal muscle myopathies and neuromuscular diseases. Defective intracellular Ca2+ signaling is associated with degeneration of skeletal muscle cells in aging (Delbono, 2002; Weisleder et al., 2006) and muscular dystrophy (mdx) (De Backer et al., 2002; DiFranco et al., 2008; Han et al., 2006; Hopf et al., 1996; Mallouk et al., 2000; Vandebrouck et al., 2002; Wang et al., 2005). Since these defects usually entail increases in the SR Ca2+ release activity and elevated myoplasmic Ca2+ levels, they likely affect mitochondrial Ca2+ uptake. An early study by Robert et al. directly tested this hypothesis by recording mitochondrial Ca2+ uptake in myotubes derived from a Duchenne Muscular Dystrophy mdx mouse model. Using the mitochondria-targeted Ca2+-sensitive photoprotein aequorin, they reported that a larger caffeine-induced Ca2+ release from the SR led to an augmented mitochondrial Ca2+ uptake in the myotubes derived from the mdx mice (Robert et al., 2001). A later study by Shkryl et al. confirmed that the excessive myoplasmic Ca2+ was taken up by mitochondria in adult skeletal muscle fibers derived from the mdx mouse model during osmotically induced Ca2+ release (Shkryl et al., 2009). Moreover, genetic mutations that affect mitochondrial function are often associated with skeletal muscle dysfunction. The mitochondrial myopathy mouse model with disruption of the gene for mitochondrial transcription factor A (Tfam) shows remarkably altered mitochondrial morphology in skeletal muscle and reduced muscle force (Wredenberg et al., 2002). A later study on the skeletal muscle of this mouse model showed that mitochondria accumulated an excessive amount of Ca2+ following repetitive contraction (Aydin et al., 2009). Furthermore, mutations in the RyR1 gene encoding the skeletal muscle isoform of the ryanodine receptor (RyR1) cause malignant hyperthermia (MH) and central core disease (CCD). The MH and CCD mutations lead to altered Ca2+ release from the SR. By overexpressing the MH and CCD RyR1 mutant proteins in HEK-293 cells, Brini et al. reported a correlation between the level of cytosolic Ca2+ transients and the amount of mitochondrial Ca2+ uptake, demonstrating that the MH mutation with enhanced cytosolic Ca2+ transients simultaneously leads to enhanced mitochondrial Ca2+ uptake (Brini et al., 2005). In addition, knock-in mice harboring the Y522S RyR1 MH mutation showed defective mitochondrial morphology in skeletal muscle (Durham et al., 2008), indicating that uncontrolled Ca2+ release due to the mutation in RyR1 leads to mitochondrial damage. Finally, a study on skeletal muscle fibers derived from aged mice also showed that the increased Ca2+ leakage from the SR led to Ca2+ accumulation in mitochondria (Andersson et al., 2011). Altogether, the studies listed above support the concept that an enhanced SR Ca2+ release or an elevated myoplasmic Ca2+ level promotes mitochondrial Ca2+ uptake in various muscle diseases. The enhanced mitochondrial Ca2+ uptake could lead to Ca2+ overload inside the mitochondrial matrix and initiate downstream responses leading to muscle cell degeneration, such as the excessive mitochondrial ROS production that disrupts the cellular redox state observed in various types of muscle diseases (Durham et al., 2008; Wang et al., 2005; Weisleder et al., 2006).

In skeletal muscle, the intracellular release and uptake of Ca2+ are mainly controlled by the SR, which forms a network that is intimately associated with mitochondria. This close spatial proximity between the SR and mitochondria, together with the ability of mitochondria to take up Ca2+, suggests that mitochondria could play an important role in shaping intracellular Ca2+ signaling in muscle cells. However, whether mitochondrial Ca2+ uptake is large and rapid enough to modulate physiological Ca2+ transients in skeletal muscle, and whether alterations in mitochondrial Ca2+-buffering capacity contribute to muscle dysfunction under pathophysiological conditions, are fundamental questions for understanding muscle degeneration in various diseases. Direct evidence of mitochondrial regulation of the SR Ca2+ release activity in live skeletal muscle cells was obtained from a study on an amyotrophic lateral sclerosis (ALS) mouse model (G93A) with transgenic overexpression of the human ALS-associated SOD1G93A mutant (Zhou et al., 2010). The G93A muscle fibers display localized depolarization of the mitochondrial inner membrane potential in the fiber segment near the neuromuscular junction. The depolarized mitochondria lose the driving force for Ca2+ uptake, which impairs the mitochondrial Ca2+ buffering capacity. The fiber segments with depolarized mitochondria show greater osmotic stress-induced Ca2+ release activity, which can include propagating Ca2+ waves.
Those Ca2+ waves are confined to regions of depolarized mitochondria and stop propagating shortly upon entering regions of normal, polarized mitochondria. Uncoupling of the mitochondrial membrane potential with FCCP or inhibition of mitochondrial Ca2+ uptake by Ru360 also led to cell-wide propagation of such Ca2+ release events. These data reveal that mitochondrial Ca2+ uptake is large and rapid enough to shape cytosolic Ca2+ signaling in skeletal muscle under physiological conditions.

The ALS muscle fibers provide a unique opportunity to characterize mitochondrial Ca2+ uptake under physiological conditions. The localized mitochondrial defect in the ALS muscle fibers allows for examination of the mitochondrial contribution to Ca2+ removal during excitation-contraction coupling by comparing Ca2+ transients in regions with normal and depolarized mitochondria in the same muscle fiber. Using the whole-cell voltage-clamp technique, Yi et al. showed that Ca2+ transients elicited by membrane depolarization in the fiber segment with depolarized mitochondria displayed an increased amplitude of ~10%. Using the mitochondria-targeted Ca2+ biosensor (mt11-YC3.6) expressed in ALS muscle fibers, these authors recorded the dynamic change of mitochondrial free Ca2+ levels during voltage-induced SR Ca2+ release and detected a reduced Ca2+ uptake by mitochondria in the fiber segment with depolarized mitochondria, which mirrored the elevated Ca2+ transients in the cytosol in the same region (Yi et al., 2011). This study provides a direct demonstration of the importance of mitochondrial Ca2+ uptake in shaping cytosolic Ca2+ signaling in skeletal muscle during excitation-contraction coupling and suggests that the reduced Ca2+ buffering capacity of mitochondria likely contributes to muscle degeneration in ALS.

Although it was well known that mitochondria from all cell types were able to take up Ca2+, and that the channel or transporter responsible for mitochondrial Ca2+ uptake was defined as the mitochondrial Ca2+ uniporter (MCU), the molecular identity of the putative MCU had remained mysterious for decades (Carafoli, 2014; Drago et al., 2011; Starkov, 2010). It was not until 2011 that two research groups independently identified the gene that encodes MCU, a transmembrane protein located in the inner mitochondrial membrane (Baughman et al., 2011; De Stefani et al., 2011). This new progress has further advanced the investigation of the role of mitochondrial Ca2+ uptake in skeletal muscle health and diseases. Pan et al. generated a global knockout mouse model (MCU−/−). The MCU−/− mice survived well with a smaller body size, but showed impaired skeletal muscle performance along with an absence of mitochondrial Ca2+ uptake in isolated skeletal muscle mitochondria, indicating that mitochondrial Ca2+ uptake plays an important role in skeletal muscle development and performance (Pan et al., 2013). Recently, direct evidence of MCU-dependent mitochondrial Ca2+ uptake in protecting against denervation-induced skeletal muscle atrophy was provided by Mammucari et al. and Chemello et al., in which the authors showed that virus-mediated overexpression or silencing of MCU had a significant impact on skeletal muscle atrophy through regulating the expression of genes involved in hypertrophic pathways in skeletal muscle (Chemello et al., 2015; Mammucari et al., 2015).
Although the identified pore-forming molecule of MCU is a highly selective Ca2+ channel, other auxiliary subunits participate in forming the mitochondrial Ca2+ uniporter complex (De Stefani et al., 2016; Jhun et al., 2016; Kamer and Mootha, 2015). The identification of loss-of-function mutations in MICU1, a regulator of MCU (Csordas et al., 2013; Perocchi et al., 2010), in patients with proximal muscle myopathy (Logan et al., 2014) indicates the complexity of MCU in skeletal muscle and its role in normal muscle function. However, the precise physiological role and the molecular structure of the mitochondrial Ca2+ uniporter complex in skeletal muscle remain to be determined.

SUMMARY

Mitochondrial Ca2+ uptake is a double-edged sword for muscle function. While Ca2+ influx into mitochondria is required for promoting ATP synthesis, excessive Ca2+ accumulation in mitochondria initiates a series of molecular malfunctions leading to mitochondrial damage and cell death. Under diseased conditions, such as muscular dystrophy, gene-mutation-related myopathies and aging, enhanced SR Ca2+ release activity overloads mitochondria with Ca2+, leading to mitochondrial dysfunction and muscle cell degeneration. In those cases, mitochondrial damage seems to be a consequence of extensive elevation of cytosolic Ca2+ levels. In ALS G93A skeletal muscle, the mitochondrial membrane potential is depolarized, which leads to a reduced Ca2+ buffering capacity of mitochondria. This reduced mitochondrial Ca2+ uptake further overloads those polarized mitochondria with Ca2+ and causes further mitochondrial damage in the same cell. In this case, the compromised mitochondrial Ca2+ uptake is a leading cause of the disrupted intracellular Ca2+ signaling that initiates muscle cell degeneration. In summary, any dysregulation in the amount and kinetics of mitochondrial Ca2+ uptake will cause mitochondrial dysfunction and abnormal intracellular Ca2+ signaling that lead to muscle cell degeneration. It is predicted that identification of the molecular basis associated with mitochondrial Ca2+ uptake will further advance the understanding of the role of mitochondrial Ca2+ uptake in skeletal muscle health and diseases.

Compliance and ethics  The author(s) declare that they have no conflict of interest.

Acknowledgements  This work was supported by a National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS)/National Institutes of Health (NIH) Grant (R01 AR057404) to Jingsong Zhou. The funder played no role in this study, in design, data collection, data analysis and interpretation, or manuscript writing.

Ainbinder, A., Boncompagni, S., Protasi, F., and Dirksen, R.T. (2015). Role of Mitofusin-2 in mitochondrial localization and calcium uptake in skeletal muscle. Cell Calcium 57, 14–24.
Andersson, D.C., Betzenhauser, M.J., Reiken, S., Meli, A.C., Umanskaya, A., Xie, W., Shiomi, T., Zalk, R., Lacampagne, A., and Marks, A.R. (2011). Ryanodine receptor oxidation causes intracellular calcium leak and muscle weakness in aging. Cell Metab 14, 196–207.
Arnaudeau, S., Kelley, W.L., Walsh, J.V. Jr., and Demaurex, N. (2001). Mitochondria recycle Ca2+ to the endoplasmic reticulum and prevent the depletion of neighboring endoplasmic reticulum regions. J Biol Chem 276, 29430–29439.
Aydin, J., Andersson, D.C., Hanninen, S.L., Wredenberg, A., Tavi, P., Park, C.B., Larsson, N.G., Bruton, J.D., and Westerblad, H. (2009). Increased mitochondrial Ca2+ and decreased sarcoplasmic reticulum Ca2+ in mitochondrial myopathy. Hum Mol Genet 18, 278–288.
Balaban, R.S. (2002). Cardiac energy metabolism homeostasis: role of cytosolic calcium. J Mol Cell Cardiol 34, 1259–1271.
Baughman, J.M., Perocchi, F., Girgis, H.S., Plovanich, M., Belcher-Timme, C.A., Sancak, Y., Bao, X.R., Strittmatter, L., Goldberger, O., Bogorad, R.L., Koteliansky, V., and Mootha, V.K. (2011). Integrative genomics identifies MCU as an essential component of the mitochondrial calcium uniporter. Nature 476, 341–345.
Bianchi, K., Rimessi, A., Prandini, A., Szabadkai, G., and Rizzuto, R. (2004). Calcium and mitochondria: mechanisms and functions of a troubled relationship. Biochim Biophys Acta 1742, 119–131.
Boncompagni, S., Rossi, A.E., Micaroni, M., Beznoussenko, G.V., Polishchuk, R.S., Dirksen, R.T., and Protasi, F. (2009). Mitochondria are linked to calcium stores in striated muscle by developmentally regulated tethering structures. Mol Biol Cell 20, 1058–1067.
Brini, M., Manni, S., Pierobon, N., Du, G.G., Sharma, P., MacLennan, D.H., and Carafoli, E. (2005). Ca2+ signaling in HEK-293 and skeletal muscle cells expressing recombinant ryanodine receptors harboring malignant hyperthermia and central core disease mutations. J Biol Chem 280, 15380–15389.
Brookes, P.S., Yoon, Y., Robotham, J.L., Anders, M.W., and Sheu, S.S. (2004). Calcium, ATP, and ROS: a mitochondrial love-hate triangle. Am J Physiol Cell Physiol 287, C817–C833.
Bruton, J., Tavi, P., Aydin, J., Westerblad, H., and Lannergren, J. (2003). Mitochondrial and myoplasmic [Ca2+] in single fibres from mouse limb muscles during repeated tetanic contractions. J Physiol 551, 179–190.
Carafoli, E. (2014). Discussion forum on mitochondrial calcium. Historical introduction. Biochem Biophys Res Commun 449, 365–366.
Carafoli, E., and Crompton, M. (1978). The regulation of intracellular calcium by mitochondria. Ann N Y Acad Sci 307, 269–284.
Chemello, F., Mammucari, C., Gherardi, G., Rizzuto, R., Lanfranchi, G., and Cagnin, S. (2015). Gene expression changes of single skeletal muscle fibers in response to modulation of the mitochondrial calcium uniporter (MCU). Genom Data 5, 64–67.
Csordás, G., Golenár, T., Seifert, E.L., Kamer, K.J., Sancak, Y., Perocchi, F., Moffat, C., Weaver, D., de la Fuente Perez, S., Bogorad, R., Koteliansky, V., Adijanto, J., Mootha, V.K., and Hajnóczky, G. (2013). MICU1 controls both the threshold and cooperative activation of the mitochondrial Ca2+ uniporter. Cell Metab 17, 976–987.
Csordas, G., and Hajnoczky, G. (2009). SR/ER-mitochondrial local communication: calcium and ROS. Biochim Biophys Acta 1787, 1352–1362.
De Backer, F., Vandebrouck, C., Gailly, P., and Gillis, J.M. (2002). Long-term study of Ca2+ homeostasis and of survival in collagenase-isolated muscle fibres from normal and mdx mice. J Physiol 542, 855–865.
de Brito, O.M., and Scorrano, L. (2008). Mitofusin 2 tethers endoplasmic reticulum to mitochondria. Nature 456, 605–610.
De Stefani, D., Raffaello, A., Teardo, E., Szabo, I., and Rizzuto, R. (2011). A forty-kilodalton protein of the inner membrane is the mitochondrial calcium uniporter. Nature 476, 336–340.
De Stefani, D., Rizzuto, R., and Pozzan, T. (2016). Enjoy the trip: calcium in mitochondria back and forth. Annu Rev Biochem 85, 161–192.
Delbono, O. (2002). Molecular mechanisms and therapeutics of the deficit in specific force in ageing skeletal muscle. Biogerontology 3, 265–270.
Deluca, H.F., and Engstrom, G.W. (1961). Calcium uptake by rat kidney mitochondria. Proc Natl Acad Sci USA 47, 1744–1750.
Denton, R.M. (2009). Regulation of mitochondrial dehydrogenases by calcium ions. Biochim Biophys Acta 1787, 1309–1316.
Denton, R.M., McCormack, J.G., and Edgell, N.J. (1980). Role of calcium ions in the regulation of intramitochondrial metabolism. Effects of Na+, Mg2+ and ruthenium red on the Ca2+-stimulated oxidation of oxoglutarate and on pyruvate dehydrogenase activity in intact rat heart mitochondria. Biochem J 190, 107–117.
DiFranco, M., Woods, C.E., Capote, J., and Vergara, J.L. (2008). Dystrophic skeletal muscle fibers display alterations at the level of calcium microdomains. Proc Natl Acad Sci USA 105, 14698–14703.
Drago, I., Pizzo, P., and Pozzan, T. (2011). After half a century mitochondrial calcium in- and efflux machineries reveal themselves. EMBO J 30, 4119–4125.
Duboc, D., Muffat-Joly, M., Renault, G., Degeorges, M., Toussaint, M., and Pocidalo, J.J. (1988). In situ NADH laser fluorimetry of rat fast- and slow-twitch muscles during tetanus. J Appl Physiol 64, 2692–2695.
Durham, W.J., Aracena-Parks, P., Long, C., Rossi, A.E., Goonasekera, S.A., Boncompagni, S., Galvan, D.L., Gilman, C.P., Baker, M.R., Shirokova, N., Protasi, F., Dirksen, R., and Hamilton, S.L. (2008). RyR1 S-nitrosylation underlies environmental heat stroke and sudden death in Y522S RyR1 knockin mice. Cell 133, 53–65.
Eisenberg, B.R. (1983). Quantitative ultrastructure of mammalian skeletal muscle. In Handbook of Physiology, Skeletal Muscle (Bethesda: American Physiological Society).
Eisner, V., Csordas, G., and Hajnoczky, G. (2013). Interactions between sarco-endoplasmic reticulum and mitochondria in cardiac and skeletal muscle: pivotal roles in Ca2+ and reactive oxygen species signaling. J Cell Sci 126, 2965–2978.
Eisner, V., Parra, V., Lavandero, S., Hidalgo, C., and Jaimovich, E. (2010). Mitochondria fine-tune the slow Ca2+ transients induced by electrical stimulation of skeletal myotubes. Cell Calcium 48, 358–370.
Fieni, F., Lee, S.B., Jan, Y.N., and Kirichok, Y. (2012). Activity of the mitochondrial calcium uniporter varies greatly between tissues. Nat Commun 3, 1317.
Fonteriz, R.I., de la Fuente, S., Moreno, A., Lobaton, C.D., Montero, M., and Alvarez, J. (2010). Monitoring mitochondrial [Ca2+] dynamics with rhod-2, ratiometric pericam and aequorin. Cell Calcium 48, 61–69.
Franzini-Armstrong, C., and Jorgensen, A.O. (1994). Structure and development of E-C coupling units in skeletal muscle. Annu Rev Physiol 56, 509–534.
Frieden, M., Arnaudeau, S., Castelbou, C., and Demaurex, N. (2005). Subplasmalemmal mitochondria modulate the activity of plasma membrane Ca2+-ATPases. J Biol Chem 280, 43198–43208.
Griffiths, E.J., and Rutter, G.A. (2009). Mitochondrial calcium as a key regulator of mitochondrial ATP production in mammalian cells. Biochim Biophys Acta 1787, 1324–1333.
Gueguen, N., Lefaucheur, L., Ecolan, P., Fillaut, M., and Herpin, P. (2005). Ca2+-activated myosin-ATPases, creatine and adenylate kinases regulate mitochondrial function according to myofibre type in rabbit. J Physiol 564, 723–735.
Han, R., Grounds, M.D., and Bakker, A.J. (2006). Measurement of sub-membrane [Ca2+] in adult myofibers and cytosolic [Ca2+] in myotubes from normal and mdx mice using the Ca2+ indicator FFP-18. Cell Calcium 40, 299–307.
Hopf, F.W., Turner, P.R., Denetclaw, W.F. Jr., Reddy, P., and Steinhardt, R.A. (1996). A critical evaluation of resting intracellular free calcium regulation in dystrophic mdx muscle. Am J Physiol 271, C1325–C1339.
Jhun, B.S., Mishra, J., Monaco, S., Fu, D., Jiang, W., Sheu, S.S., and O-Uchi, J. (2016). The mitochondrial Ca2+ uniporter: regulation by auxiliary subunits and signal transduction pathways. Am J Physiol Cell Physiol, ajpcell.00319.2015.
Kamer, K.J., and Mootha, V.K. (2015). The molecular era of the mitochondrial calcium uniporter. Nat Rev Mol Cell Biol 16, 545–553.
Kavanagh, N.I., Ainscow, E.K., and Brand, M.D. (2000). Calcium regulation of oxidative phosphorylation in rat skeletal muscle mitochondria. Biochim Biophys Acta 1457, 57–70.
Knowles, J.R. (1980). Enzyme-catalyzed phosphoryl transfer reactions. Annu Rev Biochem 49, 877–919.
Kunz, W.S. (2001). Control of oxidative phosphorylation in skeletal muscle. Biochim Biophys Acta 1504, 12–19.
Lakin-Thomas, P.L., and Brand, M.D. (1987). Mitogenic stimulation transiently increases the exchangeable mitochondrial calcium pool in rat thymocytes. Biochem J 246, 173–177.
Logan, C.V., Szabadkai, G., Sharpe, J.A., Parry, D.A., Torelli, S., Childs, A.M., Kriek, M., Phadke, R., Johnson, C.A., Roberts, N.Y., Bonthron, D.T., Pysden, K.A., Whyte, T., Munteanu, I., Foley, A.R., Wheway, G., Szymanska, K., Natarajan, S., Abdelhamed, Z.A., Morgan, J.E., Roper, H., Santen, G.W., Niks, E.H., van der Pol, W.L., Lindhout, D., Raffaello, A., De Stefani, D., den Dunnen, J.T., Sun, Y., Ginjaar, I., Sewry, C.A., Hurles, M., Rizzuto, R., UK10K Consortium, Duchen, M.R., Muntoni, F., and Sheridan, E. (2014). Loss-of-function mutations in MICU1 cause a brain and muscle disorder linked to primary alterations in mitochondrial calcium signaling. Nat Genet 46, 188–193.
Mallouk, N., Jacquemond, V., and Allard, B. (2000). Elevated subsarcolemmal Ca2+ in mdx mouse skeletal muscle fibers detected with Ca2+-activated K+ channels. Proc Natl Acad Sci USA 97, 4950–4955.
Mammucari, C., Gherardi, G., Zamparo, I., Raffaello, A., Boncompagni, S., Chemello, F., Cagnin, S., Braga, A., Zanin, S., Pallafacchina, G., Zentilin, L., Sandri, M., De Stefani, D., Protasi, F., Lanfranchi, G., and Rizzuto, R. (2015). The mitochondrial calcium uniporter controls skeletal muscle trophism in vivo. Cell Rep 10, 1269–1279.
McMillin-Wood, J., Wolkowicz, P.E., Chu, A., Tate, C.A., Goldstein, M.A., and Entman, M.L. (1980). Calcium uptake by two preparations of mitochondria from heart. Biochim Biophys Acta 591, 251–265.
Mraz, F.R. (1962). Calcium and strontium uptake by rat liver and kidney mitochondria. Proc Soc Exp Biol Med 111, 429–431.
Nagai, T., Yamada, S., Tominaga, T., Ichikawa, M., and Miyawaki, A. (2004). Expanded dynamic range of fluorescent indicators for Ca2+ by circularly permuted yellow fluorescent proteins. Proc Natl Acad Sci USA 101, 10554–10559.
Nicholls, D.G. (2005). Mitochondria and calcium signaling. Cell Calcium 38, 311–317.
O'Rourke, B. (2010). From bioblasts to mitochondria: ever expanding roles of mitochondria in cell physiology. Front Physiol 1, 7.
O'Rourke, B., and Blatter, L.A. (2009). Mitochondrial Ca2+ uptake: tortoise or hare? J Mol Cell Cardiol 46, 767–774.
Palmer, A.E., Giacomello, M., Kortemme, T., Hires, S.A., Lev-Ram, V., Baker, D., and Tsien, R.Y. (2006). Ca2+ indicators based on computationally redesigned calmodulin-peptide pairs. Chem Biol 13, 521–530.
Pan, X., Liu, J., Nguyen, T., Liu, C., Sun, J., Teng, Y., Fergusson, M.M., Rovira, I.I., Allen, M., Springer, D.A., Aponte, A.M., Gucek, M., Balaban, R.S., Murphy, E., and Finkel, T. (2013). The physiological role of mitochondrial calcium revealed by mice lacking the mitochondrial calcium uniporter. Nat Cell Biol 15, 1464–1472.
Perocchi, F., Gohil, V.M., Girgis, H.S., Bao, X.R., McCombs, J.E., Palmer, A.E., and Mootha, V.K. (2010). MICU1 encodes a mitochondrial EF hand protein required for Ca2+ uptake. Nature 467, 291–296.
Pietrangelo, L., D'Incecco, A., Ainbinder, A., Michelucci, A., Kern, H., Dirksen, R.T., Boncompagni, S., and Protasi, F. (2015). Age-dependent uncoupling of mitochondria from Ca2+ release units in skeletal muscle. Oncotarget 6, 35358–35371.
Porter, C., and Wall, B.T. (2012). Skeletal muscle mitochondrial function: is it quality or quantity that makes the difference in insulin resistance? J Physiol 590, 5935–5936.
Pozzan, T., and Rudolf, R. (2009). Measurements of mitochondrial calcium in vivo. Biochim Biophys Acta 1787, 1317–1323.
Rizzuto, R., and Pozzan, T. (2006). Microdomains of intracellular Ca2+: molecular determinants and functional consequences. Physiol Rev 86, 369–408.
Robert, V., Massimino, M.L., Tosello, V., Marsault, R., Cantini, M., Sorrentino, V., and Pozzan, T. (2001). Alteration in calcium handling at the subcellular level in mdx myotubes. J Biol Chem 276, 4647–4651.
Rossi, A.E., Boncompagni, S., and Dirksen, R.T. (2009). Sarcoplasmic reticulum-mitochondrial symbiosis: bidirectional signaling in skeletal muscle. Exerc Sport Sci Rev 37, 29–35.
Rudolf, R., Mongillo, M., Magalhaes, P.J., and Pozzan, T. (2004). In vivo monitoring of Ca2+ uptake into mitochondria of mouse skeletal muscle during contraction. J Cell Biol 166, 527–536.
Russell, A.P., Foletta, V.C., Snow, R.J., and Wadley, G.D. (2014). Skeletal muscle mitochondria: a major player in exercise, health and disease. Biochim Biophys Acta 1840, 1276–1284.
Sahlin, K. (1985). NADH in human skeletal muscle during short-term intense exercise. Pflugers Arch 403, 193–196.
Santo-Domingo, J., and Demaurex, N. (2010). Calcium uptake mechanisms of mitochondria. Biochim Biophys Acta 1797, 907–912.
Sembrowich, W.L., Quintinskie, J.J., and Li, G. (1985). Calcium uptake in mitochondria from different skeletal muscle types. J Appl Physiol 59, 137–141.
Shkryl, V.M., Martins, A.S., Ullrich, N.D., Nowycky, M.C., Niggli, E., and Shirokova, N. (2009). Reciprocal amplification of ROS and Ca2+ signals in stressed mdx dystrophic skeletal muscle fibers. Pflugers Arch 458, 915–928.
Shkryl, V.M., and Shirokova, N. (2006). Transfer and tunneling of Ca2+ from sarcoplasmic reticulum to mitochondria in skeletal muscle. J Biol Chem 281, 1547–1554.
Starkov, A.A. (2010). The molecular identity of the mitochondrial Ca2+ sequestration system. FEBS J 277, 3652–3663.
Territo, P.R., Mootha, V.K., French, S.A., and Balaban, R.S. (2000). Ca2+ activation of heart mitochondrial oxidative phosphorylation: role of the F0/F1-ATPase. Am J Physiol Cell Physiol 278, C423–C435.
Tsien, R.Y. (1998). The green fluorescent protein. Annu Rev Biochem 67, 509–544.
Vandebrouck, C., Martin, D., Colson-Van Schoor, M., Debaix, H., and Gailly, P. (2002). Involvement of TRPC in the abnormal calcium influx observed in dystrophic (mdx) mouse skeletal muscle fibers. J Cell Biol 158, 1089–1096.
Wang, W., Fang, H., Groom, L., Cheng, A., Zhang, W., Liu, J., Wang, X., Li, K., Han, P., Zheng, M., Yin, J., Wang, W., Mattson, M.P., Kao, J.P., Lakatta, E.G., Sheu, S.S., Ouyang, K., Chen, J., Dirksen, R.T., and Cheng, H. (2008). Superoxide flashes in single mitochondria. Cell 134, 279–290.
Wang, X., Weisleder, N., Collet, C., Zhou, J., Chu, Y., Hirata, Y., Zhao, X., Pan, Z., Brotto, M., Cheng, H., and Ma, J. (2005). Uncontrolled calcium sparks act as a dystrophic signal for mammalian skeletal muscle. Nat Cell Biol 7, 525–530.
Weisleder, N., Brotto, M., Komazaki, S., Pan, Z., Zhao, X., Nosek, T., Parness, J., Takeshima, H., and Ma, J. (2006). Muscle aging is associated with compromised Ca2+ spark signaling and segregated intracellular Ca2+ release. J Cell Biol 174, 639–645.
Wredenberg, A., Wibom, R., Wilhelmsson, H., Graff, C., Wiener, H.H., Burden, S.J., Oldfors, A., Westerblad, H., and Larsson, N.G. (2002). Increased mitochondrial mass in mitochondrial myopathy mice. Proc Natl Acad Sci USA 99, 15066–15071.
Yi, J., Ma, C., Li, Y., Weisleder, N., Rios, E., Ma, J., and Zhou, J. (2011). Mitochondrial calcium uptake regulates rapid calcium transients in skeletal muscle during excitation-contraction (E-C) coupling. J Biol Chem 286, 32436–32443.
Yi, M., Weaver, D., and Hajnoczky, G. (2004). Control of mitochondrial motility and distribution by the calcium signal: a homeostatic circuit. J Cell Biol 167, 661–672.
Zhou, J., Yi, J., Fu, R., Liu, E., Siddique, T., Rios, E., and Deng, H.X. (2010). Hyperactive intracellular calcium signaling associated with localized mitochondrial defects in skeletal muscle of an animal model of amyotrophic lateral sclerosis. J Biol Chem 285, 705–712.
Zhou, J., Yi, J., Royer, L., Pouvreau, S., and Ríos, E. (2008). Distribution, responses during Ca2+ transients and calibration of a mitochondria-targeted cameleon biosensor expressed in muscle of live mice. Biophys J 94, 253a.

Open Access  This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
ai_researcher
1
Thermodynamic_Assessment_of_the_Conversion_of_a_Typical_CCGT_Power_Plant_to_a_Fully_E-fuel_Fired_Unit.pdf
Economic Dispatch of an Integrated Microgrid Based on the Dynamic Process of CCGT Plant
Zhiyi Lin, Chunyue Song*, Jun Zhao, Chao Yang, Huan Yin
College of Control Science and Engineering, Zhejiang University, Hangzhou, 310027
E-mail: [email protected]

Abstract: Intra-day economic dispatch of an integrated microgrid is a fundamental requirement for integrating distributed generators. The dynamic energy flows in cogeneration units present challenges to the energy management of the microgrid. In this paper, a novel approximate dynamic programming (ADP) approach based on value function approximation is proposed to solve this problem; it is distinguished by its treatment of the dynamic process constraints of the combined-cycle gas turbine (CCGT) plant. First, we mathematically formulate the multi-time-period decision problem as a finite-horizon Markov decision process. To deal with the thermodynamic process, an augmented state vector of the CCGT is introduced. Second, the proposed VFA-ADP algorithm is employed to derive near-optimal real-time operation strategies. In addition, to guarantee the monotonicity of the piecewise linear function, we apply the SPAR algorithm in the update process. To validate the effectiveness of the proposed method, we conduct experiments with comparisons to traditional optimization methods. The results indicate that our proposed ADP method achieves better performance on the economic dispatch of the microgrid.

Key Words: Microgrid, Dynamic Process, Combined-Cycle Gas Turbine, Approximate Dynamic Programming

1 INTRODUCTION

In the past few decades, the increasing consumption of fossil energy has led to public interest and technical developments in utilizing various distributed energy sources. Distributed generations (DGs) with flexible operation modes have been proposed as a general strategy for improving energy efficiency, lowering curtailment, peak shaving, and shifting.

DGs with high penetration contribute significantly to power variations; however, they also make it harder to operate the power grid and maintain its stability. To solve this problem, microgrids (MGs) have been viewed as an applicable solution to integrate various DGs, energy storage devices and loads, which are connected to the power grid as a whole controllable unit. In the energy domain, the economic dispatch (ED) of MGs is critical and has received extensive attention from both research and industry.

In order to operate the MG safely and economically, several related studies have been conducted over the last decade. In [1], a dynamic programming-based algorithm was derived to solve the unit commitment problem in the MG, including photovoltaic-based generators to reduce the economic cost. In [2] and [3], day-ahead optimization for gas and power systems was studied, which also considered the partial differential constraints in natural gas transmission.

This work was supported partially by the National Key Research and Development Program of China (No.2017YFA0700300), partially by the Key Research and Development Program of Guangdong (No.2020B0101050001), and partially by the Key Research and Development Program of Zhejiang Province (No.2021C01151).

Though the aforementioned studies contribute considerably to MG optimization problems, their system models did not consider the dynamic energy flow constraints of the cogeneration units.
Neglecting the dynamic process may yield falsely optimal solutions, since the cogeneration unit is constrained by physical dynamic transitions. Therefore, we consider a more precise model of the MG to be critical for obtaining feasible solutions. In this paper, we construct a practical MG system that includes a combined-cycle gas turbine (CCGT) plant. Specifically, the CCGT plant is a typical cogeneration unit with a pronounced dynamic process, which makes it well suited for validating the effectiveness of the proposed method under dynamic energy flow constraints.

In this paper, we adopt intra-day optimization strategies and solve the multi-time-period decision problem via dynamic programming (DP). However, classical DP usually suffers from the "three curses of dimensionality" [5] when handling high-dimensional state and action spaces. In this context, approximate dynamic programming (ADP) is a promising real-time optimization method [4], which makes a trade-off between solvability and optimality of solutions. In the ADP framework, the large-scale optimization problem is viewed as a Markov decision process (MDP), which is divided into small sub-problems and solved sequentially. The ADP algorithm has been demonstrated to be valid in resource allocation problems [5] and energy storage management [7], but these works focused on power systems. Therefore, how to apply ADP to operating an MG with a CCGT plant remains an open problem.

To deal with this issue, this paper proposes a novel ADP algorithm for the economic dispatch of an integrated heat and power microgrid. Specifically, an autoregressive moving average (ARMA) multi-parameter identification model of the CCGT thermodynamic process is incorporated in the MG system model. We also design an ADP approach based on value function approximation (VFA), in which a post-decision state is employed to achieve a near-optimal solution that minimizes the total operational cost over one day. Overall, compared to the existing works, the main contributions of this paper are as follows:

• A finite-horizon MDP formulation is developed, which incorporates CCGT thermodynamic constraints. To keep the system Markovian, an augmented state vector of the CCGT is introduced so that the principle of optimality holds.
• A VFA-based ADP is proposed that achieves near-optimal solutions to the MDP model. The proposed method solves Bellman's equation forward iteratively.
• Numerical experiments on the proposed ADP method are conducted with comparisons to the traditional myopic policy and an MPC policy, validating the effectiveness of the proposed ADP method.

The rest of this paper is organized as follows. Section 2 presents the MG system and the formulated MDP model in detail. Section 3 introduces the ADP solution and the VFA design. The experimental settings and comparisons are presented in Section 4. In Section 5, we draw conclusions and outline promising directions for future work.

2 MODEL OF ECONOMIC DISPATCH FOR MICROGRID

This paper considers an integrated heat and power MG system, as shown in Figure 1, which consists of several dispatchable DGs: a CCGT plant, a gas boiler (GB), a heat pump (HP), a fuel cell (FC) and a storage device. Wind turbines (WT) are renewable, non-dispatchable sources that are also included in this MG system. We assume that the MG runs in grid-connected mode and can trade with the upper-level grid according to the real-time electricity price.
The heat and electricity demands on the user side are aggregated in two load nodes respectively, which facilitates information collection and comprehensive dispatch of the MG. Considering the intra-day operation of the integrated MG, we specify a finite time horizon $T$, indexed by $\{\Delta t, 2\Delta t, \ldots, T\}$, where the time interval for each time step is $\Delta t$; we set $\Delta t = 15\,\mathrm{min}$ and $T = 96\Delta t = 24\,\mathrm{h}$ in a day.

Figure 1: The schematic diagram of MG system.

2.1 The Markov Decision Process Formulation

In the MG system, the real-time economic dispatch problem is a typical multi-time-period optimization problem, which can be decomposed into multiple sequential sub-problems and solved iteratively in the MDP framework. The basic elements of the MDP are defined and introduced in this subsection.

The state variables form the minimally dimensioned description of the MG that is necessary to compute the decision function and the evolution of the system. The state vector $S_t$ at time step $t$ is defined in Equations (1)-(3), including both the power system state variables $S^E_t$ and the heat system state variables $S^H_t$:

$S_t = \{S^E_t, S^H_t\}$ (1)

$S^E_t = \{P^{FC}_{t-\Delta t}, P^{CCGT}_t, SOC_t, P^{WT,a}_t, D^E_t, p_t\}$ (2)

$S^H_t = \{Q^{GB}_{t-\Delta t}, Q^{HP}_{t-\Delta t}, \bar{Q}^{CCGT}_t, D^Q_t\}$ (3)

where at time $t$, $D^E_t$ and $D^Q_t$ represent the electricity and heat demand of the system, $P^{CCGT}_t$ represents the active power output of the CCGT, $SOC_t$ represents the state of charge of the power storage device, $P^{WT,a}_t$ represents the available wind power, $p_t$ represents the real-time market electricity price, and $\bar{Q}^{CCGT}_t$ represents the augmented states of the CCGT thermal output $Q^{CCGT}_t$ (see Subsection 2.2). Some past system decisions are also included in $S_t$, e.g., $P^{FC}_{t-\Delta t}$, $Q^{GB}_{t-\Delta t}$ and $Q^{HP}_{t-\Delta t}$, since operational ramping constraints are considered in this paper. In this context, the feasible power output of the units at $t$ is constrained by the previous states.

The decision variables at time $t$ include: the active power output of all dispatchable power generations, for example $P^{FC}_t$, $Q^{GB}_t$ and $Q^{HP}_t$; the natural gas input flow of the CCGT $g^{CCGT}_t$; the charge and discharge states $u^c_t, u^d_t$ and powers $P^c_t, P^d_t$; the active power of the MG exchanged with the upper-level grid $P^{Grid}_t$; and the curtailed wind power and loads $P^{wcur}_t$, $P^{cur}_t$, $Q^{cur}_t$. This paper only focuses on the active power balance of the electricity, since the MG system runs in grid-connected mode, which ensures the stability of node voltages and phase angles. Hence, the decision vector $x_t$ is given by Equation (4):

$x_t = \{P^{FC}_t, Q^{GB}_t, Q^{HP}_t, g^{CCGT}_t, u^c_t, P^c_t, u^d_t, P^d_t, P^{Grid}_t, P^{wcur}_t, P^{cur}_t, Q^{cur}_t\}$ (4)

The exogenous information represents the stochastic factors in the system [4]. In this paper, the exogenous information vector $W_t$ collects the day-ahead forecast errors of wind power generation $\hat{P}^{WT}_t$, real-time electricity price $\hat{p}_t$ and demands $\hat{D}^E_t$, $\hat{D}^Q_t$. $W_t$ is given by Equation (5):

$W_t = \{\hat{P}^{WT}_t, \hat{D}^E_t, \hat{p}_t, \hat{D}^Q_t\}$ (5)

In the time sequence, the exogenous information $W_t$ arrives after the previous time step $t-\Delta t$ and before the current decision making at time $t$. Therefore, the decision process evolves as in Equation (6):

$MG_t = \{S_0, x_0, W_{\Delta t}, \ldots, S_{t-\Delta t}, x_{t-\Delta t}, W_t, S_t\}$ (6)

According to $S_t$, $x_t$ and $W_{t+\Delta t}$, the state transition function $S^M(S_t, x_t, W_{t+\Delta t})$ is determined by the following equations:

$S^E_{t+\Delta t}(1) = P^{FC}_t$ (7)

$S^E_{t+\Delta t}(2) = a_0 + b_0 \cdot g^{CCGT}_t$ (8)

$S^E_{t+\Delta t}(3) = S^E_t(3) + (P^c_t \eta^c - P^d_t / \eta^d) \cdot \Delta t$ (9)

$S^E_{t+\Delta t}(k) = P^F_{t+\Delta t}(k-3) + W_{t+\Delta t}(k-3), \quad k \in \{4, 5, 6\}$ (10)

$S^H_{t+\Delta t}(1) = Q^{GB}_t, \quad S^H_{t+\Delta t}(2) = Q^{HP}_t$ (11)

$S^H_{t+\Delta t}(3) = A \cdot \bar{Q}^{CCGT}_t + B \cdot g^{CCGT}_t$ (12)

$S^H_{t+\Delta t}(4) = P^F_{t+\Delta t}(4) + W_{t+\Delta t}(4)$ (13)

where $P^F_{t+\Delta t} = \{P^{WT,F}_{t+\Delta t}, D^{E,F}_{t+\Delta t}, p^F_{t+\Delta t}, D^{Q,F}_{t+\Delta t}\}$ represents the day-ahead forecast of the exogenous information. Most units are modeled by their popular energy hub model [5], while the state transition function of the CCGT is reformulated from the identified ARMA model (16), which describes the dynamic process of the CCGT. $\bar{Q}^{CCGT}_t$ represents the augmented states of the CCGT, and $A$, $B$ represent the corresponding coefficient matrices for $\bar{Q}^{CCGT}_t$, respectively.
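To make the transition concrete, the following minimal Python sketch implements Equations (7)-(10) for the electrical state. The coefficients a0 and b0 of Equation (8) are illustrative placeholders (the paper does not report their identified values); the efficiencies and step length follow the experimental settings of Section 4.

import numpy as np

# a0, b0 map gas flow to CCGT electric power as in Eq. (8); their identified
# values are not reported in the paper, so these are placeholders.
a0, b0 = 0.0, 0.45
eta_c = eta_d = 0.9          # storage efficiencies (Section 4)
dt = 0.25                    # Delta t = 15 min, expressed in hours

def electric_transition(s_e, x, w_next, f_next):
    """Eqs. (7)-(10): s_e = [P_FC, P_CCGT, SOC, P_WT_avail, D_E, price];
    f_next / w_next hold the day-ahead forecasts and forecast errors for
    (wind, electric demand, price)."""
    s_next = np.empty(6)
    s_next[0] = x["P_FC"]                                            # Eq. (7)
    s_next[1] = a0 + b0 * x["g_CCGT"]                                # Eq. (8)
    s_next[2] = s_e[2] + (x["P_c"] * eta_c - x["P_d"] / eta_d) * dt  # Eq. (9)
    s_next[3:6] = np.asarray(f_next) + np.asarray(w_next)            # Eq. (10)
    return s_next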
The objective function $V^*_t(\cdot)$ is defined to minimize the total operation cost of the MG over the finite horizon $T$. In time period $t$, the operation cost (14) is denoted by $C_t(\cdot)$, including the fuel and operation cost $C^f_t(\cdot)$, the cost of trading with the grid $C^{tr}_t(\cdot)$, and penalties on curtailment $C^{cur}_t(\cdot)$. Following Bellman's optimality principle, we define the optimal value function based on the state vector $S_t$, decision vector $x_t$ and exogenous information vector $W_t$ as follows:

$C_t(S_t, x_t) = C^f_t(S_t, x_t) + C^{tr}_t(S_t, x_t) + C^{cur}_t(S_t, x_t)$ (14)

$V^*_t = \min_{x_t \in X_t} \mathbb{E}\left\{\sum_{t=\Delta t}^{T} C_t(S_t, x_t)\right\} = \min_{x_t \in X_t} \left(C_t(S_t, x_t) + \mathbb{E}[V_{t+\Delta t}(S_{t+\Delta t}) \mid S_t, x_t]\right)$ (15)

where $X_t$ is the set of feasible decisions, and $\mathbb{E}(\cdot)$ is the conditional expectation.

2.2 Dynamic Process of CCGT

The CCGT plant in the microgrid consists of the gas turbine, heat recovery system, steam turbine and corresponding controllers, etc. The thermal power response of the CCGT is markedly slower than the electric power response, due to the complex transient flows in the system. The consideration of this dynamic process makes our optimization more reliable and distinguishes it from existing energy dispatch strategies for MGs. Related work [11] proposed an ARMA identification model accounting for the different response times of the CCGT plant. This paper transforms the ARMA model into high-order difference constraints, as shown in Equation (16), which are then integrated into the energy dispatch optimization model:

$Q^{CCGT}(k) = \sum_{m=1}^{4} a_m Q^{CCGT}(k-m) + b_m g^{CCGT}(k-m-3)$ (16)

where $a_m$, $b_m$ are parameters estimated by means of system identification techniques, $Q^{CCGT}(i)$ represents the heat output of the CCGT at sampling point $i$, and $g^{CCGT}(j)$ represents the natural gas flow input of the CCGT at sampling point $j$. The sampling interval is 50 s; thus there are 18 sampling points per time period $t$.

To make the decision process Markovian and thereby satisfy the prerequisite for applying DP, we reformulate the state variables in Equation (3) by adopting the augmented states $\bar{Q}^{CCGT}(k)$ as follows [6]:

$\bar{Q}^{CCGT}_t(k) = [x_1(k)\ x_2(k)\ \cdots\ x_6(k)\ x_7(k)]^T$ (17)

$\bar{Q}^{CCGT}_t(k+1) = \begin{bmatrix} \mathbf{0} & I \\ 0 & A_1 \end{bmatrix} \cdot \bar{Q}^{CCGT}_t(k) + [\mathbf{0}^T\ 1]^T \cdot g^{CCGT}_t(k)$ (18)

$Q^{CCGT}_t(k) = [b_4\ b_3\ b_2\ b_1\ 0\ 0\ 0]\ \bar{Q}^{CCGT}_t(k)$ (19)

where $k = 1, 2, \cdots, 18$ in each time period $t$, $I$ is the $6 \times 6$ identity matrix, $\mathbf{0} = [0\ 0\ 0\ 0\ 0\ 0]^T$, and $A_1 = [0\ 0\ 0\ a_4\ a_3\ a_2\ a_1]$, respectively.
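As an illustration, the high-order difference constraint of Equation (16) can also be simulated directly. Below is a minimal sketch using the coefficients identified for the CCGT (reported in Table 3 of Section 4), assuming zero initial conditions before the first sample.

import numpy as np

# ARMA coefficients identified for the CCGT (Table 3, Section 4).
a = [1.6301, -0.6292, -0.3266, 0.2570]   # a_1..a_4
b = [0.2087, 0.06311, 0.3656, 0.4031]    # b_1..b_4

def simulate_ccgt_heat(g):
    """Simulate Eq. (16) on the 50 s grid (18 samples per 15 min period):
    Q(k) = sum_m a_m Q(k-m) + b_m g(k-m-3), with zero initial conditions."""
    K = len(g)
    q = np.zeros(K)
    for k in range(4, K):
        q[k] = sum(a[m] * q[k - 1 - m] for m in range(4))
        q[k] += sum(b[m] * g[k - 4 - m] for m in range(4) if k - 4 - m >= 0)
    return q

# Step response to a constant gas flow over ten 15-min periods:
q = simulate_ccgt_heat(np.ones(180))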
2.3 Constraints

In addition to the above thermodynamic constraints of the CCGT, the objective function is subject to the following constraints:

$P^{FC}_t + P^{CCGT}_t + (P^{WT,a}_t - P^{wcur}_t) - P^{HP}_t + P^{Grid}_t + (P^d_t \cdot u^d_t - P^c_t \cdot u^c_t) + P^{cur}_t = D^E_t$ (20)

$Q^{GB}_t + Q^{CCGT}_t + Q^{HP}_t + Q^{cur}_t = D^Q_t$ (21)

$\underline{P}^i_t \le P^i_t \le \overline{P}^i_t, \quad i \in \{FC, CCGT, Grid\}$ (22)

$\underline{Q}^j_t \le Q^j_t \le \overline{Q}^j_t, \quad j \in \{GB, HP, CCGT\}$ (23)

$R^{i,down}_t \cdot \Delta t \le P^i_t - P^i_{t-\Delta t} \le R^{i,up}_t \cdot \Delta t$ (24)

$R^{j,down}_t \cdot \Delta t \le Q^j_t - Q^j_{t-\Delta t} \le R^{j,up}_t \cdot \Delta t$ (25)

$u^c_t \cdot \underline{P}^c_t \le P^c_t \le u^c_t \cdot \overline{P}^c_t$ (26)

$u^d_t \cdot \underline{P}^d_t \le P^d_t \le u^d_t \cdot \overline{P}^d_t$ (27)

$u^c_t + u^d_t \le 1, \quad u^c_t, u^d_t \in \{0, 1\}$ (28)

$\underline{SOC} \le SOC_t \le \overline{SOC}$ (29)

$0 \le P^{wcur}_t \le P^{WT,a}_t$ (30)

$0 \le P^{cur}_t \le D^E_t, \quad 0 \le Q^{cur}_t \le D^Q_t$ (31)

where Equations (20) and (21) are the power and heat balance constraints of the MG, respectively. The power generated by the dispatchable DGs and traded with the grid is limited by lower and upper bounds $\underline{P}^i_t$, $\overline{P}^i_t$, $\underline{Q}^j_t$, $\overline{Q}^j_t$, as indicated by Equations (22)-(23). Note that if $P^{Grid}_t$ is positive, the MG purchases electricity from the grid; otherwise, the MG sells surplus energy to the grid. The ramping rates of the DGs are limited by Equations (24)-(25). The constraints of the energy storage device are shown in Equations (26)-(29), where $u^c_t$ and $u^d_t$ are integer variables. The curtailment constraints for renewable power and demands are shown in Equations (30)-(31), respectively. All the aforementioned constraints, Equations (17)-(31), should be satisfied for all $t \in T$.

3 APPROXIMATE DYNAMIC PROGRAMMING SOLUTION

In the MDP framework, the typical multi-time-period decision problem can be solved recursively by dynamic programming. However, DP solves Bellman's equation backward through time and explores every possible state at every time period, which suffers from the curses of dimensionality for large state and action spaces. To solve this problem, an improved alternative, ADP, is developed in this section. According to [8], ADP based on value function approximation (VFA) can be applied to obtain a near-optimal policy. By approximating the value function around the post-decision state variables $S^x_t$, the expectation in Equation (15) is removed and Bellman's equation is reformulated as a deterministic minimization problem:

$V_t(S_t) = \min_{x_t \in X_t} \left(C_t(S_t, x_t) + \bar{V}^x_t(S^x_t)\right)$ (32)

where $S^x_t$ is the state after the decision $x_t$ has been made but before the new exogenous information $W_{t+\Delta t}$ has arrived, and $\bar{V}^x_t(S^x_t)$ is the VFA around $S^x_t$. Based on Equation (32), ADP solves the MDP problem forward at each time step. It is worth noting that the computation of the exact value function $V^x_t(S^x_t) = \mathbb{E}[V_{t+\Delta t}(S_{t+\Delta t}) \mid S_t, x_t]$ is time-consuming and intractable. Therefore, a proper approximation $\bar{V}^x_t(S^x_t)$ of the optimal value function $V^x_t(S^x_t)$ is needed to guarantee a near-optimal policy $x^*_t$ given the current state information of the system. With this analysis, this paper proposes a piecewise-linear-function-based ADP, which learns the slopes of the optimal value function at the heat output state $Q^{CCGT,x}_t$.
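A minimal sketch of the resulting one-step lookahead is given below, anticipating the piecewise linear form of the VFA introduced in Section 3.1. The cost and post_heat callables and the slope values are placeholders for the problem-specific machinery; the heat bounds 15-50 MW follow Table 2 in Section 4.

import numpy as np

def make_pwl_vfa(slopes, q_min, q_max):
    """Convex piecewise linear VFA: V(q) = sum_a d_a * r_a, where q - q_min
    is split into len(slopes) equal segments r_a (cf. Eqs. (33)-(35))."""
    n = len(slopes)
    seg = (q_max - q_min) / n
    def vfa(q):
        r = np.clip(q - q_min - seg * np.arange(n), 0.0, seg)
        return float(np.dot(slopes, r))
    return vfa

# CCGT heat bounds from Table 2; the (monotone) slopes are placeholders.
vfa = make_pwl_vfa(np.linspace(-5.0, 5.0, 20), q_min=15.0, q_max=50.0)

def solve_stage(state, actions, cost, post_heat):
    """Greedy one-step lookahead of Eq. (32) over a pre-filtered set of
    feasible actions; `cost` evaluates C_t and `post_heat` maps a decision
    to the post-decision heat output Q_t^{CCGT,x}."""
    vals = [cost(state, x) + vfa(post_heat(state, x)) for x in actions]
    return actions[int(np.argmin(vals))]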
3.1 Piecewise Linear Function Approximation

The approximate value function quantifies the long-term influence of the current decision $x_t$. In this paper, a convex piecewise linear function (PLF) is used to estimate the value of the heat output of the CCGT, following [9], as presented in Equation (33):

$\bar{V}^x_t(S^x_t) = \bar{V}^x_t(Q^{CCGT,x}_t) = \sum_{a=1}^{N_t} d_{t,a} r_{t,a}, \quad a \in \{1, \cdots, N_t\}$ (33)

where the slopes $d_{t,a}$ should be monotonically increasing, i.e., $d_{t,a} \le d_{t,a+1}$. Maintaining convexity keeps the optimization problems linear programs, which helps handle the high-dimensional state space and accelerates convergence. Figure 2 illustrates the exact optimal value function and the constructed approximation.

Figure 2: Optimal and approximate value function

In time period $t$, the post-decision state $Q^{CCGT,x}_t$ equals the heat output of the CCGT at the last sampling point of this period, which can be calculated from the augmented state Equations (17)-(19). The post-decision state $Q^{CCGT,x}_t$ is then divided into $N_t$ equal segments, thus:

$Q^{CCGT,x}_t = Q^{CCGT}_t(18)$ (34)

$0 \le r_{t,a} \le (\overline{Q}^{CCGT}_t - \underline{Q}^{CCGT}_t)/N_t$ (35)

Substituting (33) into the approximated Bellman's equation (32), the near-optimal solution at time $t$ can be obtained by solving a deterministic optimization problem:

$x^*_t = \arg\min_{x_t \in X_t,\, r_{t,a} \in R_t} \left(C_t(S_t, x_t) + \sum_{a=1}^{N_t} d_{t,a} r_{t,a}\right)$ (36)

where $R_t$ is defined by Equations (34) and (35). Note that each time period is approximated by an independent PLF.

3.2 The Updating Process of PLF-ADP

In order to make the decisions as close to optimal as possible, the slopes of each segment for each time period should be updated iteratively until convergence. In this paper, we introduce the superscript $n$ to denote the value of a variable in the $n$-th iteration. Therefore, $\bar{V}^{x,n-1}_t$ represents the approximate value function obtained in the $(n-1)$-th iteration, which is utilized to make decisions in the $n$-th iteration:

$V_t(S^n_t) = \min_{x_t \in X_t} \left(C_t(S^n_t, x^n_t) + \bar{V}^{x,n-1}_t(S^{x,n}_t)\right) = \min_{x_t \in X_t} \left(C_t(S^n_t, x^n_t) + \sum_{a=1}^{N_t} d^{n-1}_{t,a} r^n_{t,a}\right)$ (37)

To update the slopes of each segment, a sample observation of the marginal value $\hat{d}^n_{t-\Delta t,a}(Q^{CCGT,x,n}_{t-\Delta t})$ is needed, as indicated by Equation (38):

$\hat{d}^n_{t-\Delta t,a}(Q^{CCGT,x,n}_{t-\Delta t}) = V^*_t(Q^{CCGT,n}_t) - V^*_t(Q^{CCGT,n}_t - \rho)$ (38)

Then the slopes of $\bar{V}^{x,n}_{t-\Delta t}(Q^{CCGT,x,n}_{t-\Delta t})$ are updated as follows:

$d^n_{t-\Delta t,a}(Q^{CCGT,x,n}_{t-\Delta t}) = \alpha_{n-1}\, \hat{d}^n_{t-\Delta t,a}(Q^{CCGT,x,n}_{t-\Delta t}) + (1-\alpha_{n-1})\, d^{n-1}_{t-\Delta t,a}(Q^{CCGT,x,n}_{t-\Delta t})$ (39)

where $\alpha_{n-1}$ is the stepsize that weights the new information against the existing knowledge about the state value. There are several methods for choosing the stepsizes, such as deterministic and stochastic stepsize rules. In this work, a generalized harmonic stepsize rule is adopted to improve the rate of convergence. Note that Equation (39) only updates the slope of the $a$-th segment of $\bar{V}^{x,n}_{t-\Delta t}(Q^{CCGT,x,n}_{t-\Delta t})$. In addition, we apply the SPAR algorithm from [4] to ensure the slopes remain monotonically increasing after the update.
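The update step can be sketched as follows. The harmonic stepsize constant is illustrative, and the monotone projection is implemented here with a generic pool-adjacent-violators routine, which plays the role that SPAR plays in the paper (the exact SPAR projection differs in its details).

import numpy as np

def project_monotone(d):
    """Pool-adjacent-violators: project a slope vector onto non-decreasing
    sequences, restoring the convexity required of the PLF."""
    vals, counts = [], []
    for v in d:
        vals.append(float(v)); counts.append(1)
        while len(vals) > 1 and vals[-1] < vals[-2]:
            tot = vals[-1] * counts[-1] + vals[-2] * counts[-2]
            counts[-2] += counts[-1]
            vals[-2] = tot / counts[-2]
            vals.pop(); counts.pop()
    return np.array([v for v, c in zip(vals, counts) for _ in range(c)])

def update_slope(slopes, a, d_hat, n, step_const=10.0):
    """Eq. (39) with a generalised harmonic stepsize alpha_{n-1}
    (= step_const / (step_const + n - 1)), followed by the monotone
    projection."""
    alpha = step_const / (step_const + n - 1)
    d = slopes.astype(float)
    d[a] = (1.0 - alpha) * d[a] + alpha * d_hat
    return project_monotone(d)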
Figure 3: The Prediction of WT and Demands

4 Experiments

In this section, the significance of considering the thermodynamic process of the CCGT and the performance of the proposed PLF-ADP algorithm are validated by numerical experiments on an integrated heat and power microgrid system, as shown in Figure 1. The MDP model and associated constraints of the microgrid are given in Section 2. The parameters of the microgrid are partially shown in Tables 1-3, where CC denotes the cost coefficients of the DGs in the optimization problem.

Table 1: Parameters of Generators
Unit   Pmin (MW)   Pmax (MW)   Ramp Rate (MW/h)   CC ($/MWh)
FC     0.8         7           7                  65
CCGT   6           43          38                 92
SOC    -3          3           -                  -
WT     0           3.6         -                  -
Grid   -6          6           6                  pt

Table 2: Parameters of the Heat Generators
Unit   Qmin (MW)   Qmax (MW)   Ramp Rate (MW/min)   CC ($/MWh)
GB     1           15          3                    300
HP     0           5           5                    -
CCGT   15          50          0.5                  -

Table 3: Parameters of CCGT
a1 = 1.6301   a2 = -0.6292   a3 = -0.3266   a4 = 0.2570
b1 = 0.2087   b2 = 0.06311   b3 = 0.3656    b4 = 0.4031

The initial energy stored in the device is set to 7.5 MW, and the capacity limits and cycle efficiencies of the storage are $\underline{SOC} = 1.5$ MW, $\overline{SOC} = 15$ MW, $\eta^c_t = \eta^d_t = 0.9$. The penalties on the curtailments $P^{wcur}_t$, $P^{cur}_t$ and $Q^{cur}_t$ are set to 200 $/MWh, 150 $/MWh and 350 $/MWh, respectively. The day-ahead predicted power demand, heat demand and wind power are shown in Figure 3. The wind power data in this paper come from a real-world WT system in Turkey. The day-ahead prediction for the market electricity price is tiered pricing. All the experiments are conducted with Python on an Intel Core i5 2.80 GHz Windows-based PC with 8 GB RAM.

Firstly, the classical mixed integer linear programming (MILP) algorithm is implemented to obtain the operation strategy for the MG with the thermodynamic process of the CCGT. The $\bar{Q}^{CCGT}_t$ augmented-state model is used to constrain the feasible region of the input $g^{CCGT}_t$. The dynamic operation curve of the heat output $Q^{CCGT}(k)$ is thus recorded every 50 seconds, i.e., there are 1728 sample points in total over 24 h. The experimental results are shown in Figures 4-5.

Figure 4: Day-ahead power dispatch based on MILP
Figure 5: The Heat Output Curves of CCGT
Figure 6: Intra-day power dispatch based on ADP
Figure 7: Heat output of CCGT based on ADP

It is evident that the fluctuating electricity and heat demands are mainly met by the CCGT and the grid, while the potential thermal-electric coupling makes the auxiliary units necessary to satisfy Equations (20)-(21). After midnight (0:00-6:00), the market electricity price is quite low and the MG accordingly tends to increase power purchases from the grid. Meanwhile, the energy storage device discharges to reduce the cost. In time periods 40-60 (10:00-15:00), the demands gradually rise and the market price is relatively high, so the MG begins to sell power to the grid as much as possible while still meeting the demands. Simultaneously, the power storage device is charged in advance to meet the load demand and reduce load curtailment during the load peak hours (time periods 73-80). The gas boiler only generates heat in time periods 46-52 and 82-86, during which the heat output of the CCGT almost reaches its upper limit. Figure 5 shows the heat output curves of the CCGT based on the energy hub model [10] and the identification model, respectively; the former model only considers the stable transition state of the CCGT. Figure 5 also shows that the two curves are not exactly the same: the curve with the dynamic process is smoother and more realistic for the CCGT, since the stable curve may conflict with ramping constraints when the demands fluctuate quickly, thus demonstrating the significance of considering the thermodynamic process of the CCGT.
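For reference, the cost terms of Equation (14) under the settings stated above can be written down directly. This is only a sketch: the fuel cost coefficients come from Tables 1-2, the penalties and the grid sign convention from the text, while the exact functional form used in the solver is not spelled out in the paper.

DT = 0.25                                              # h per period
PEN = {"wind": 200.0, "load": 150.0, "heat": 350.0}    # $/MWh, as stated above
CC = {"FC": 65.0, "CCGT": 92.0, "GB": 300.0}           # cost coefficients, Tables 1-2

def period_cost(p_out, p_grid, price, p_wcur, p_cur, q_cur):
    """C_t of Eq. (14): fuel/operation + trading + curtailment penalties.
    p_out maps unit name -> output (MW); positive p_grid means buying."""
    c_fuel = sum(CC[u] * p for u, p in p_out.items())
    c_trade = price * p_grid
    c_cur = PEN["wind"] * p_wcur + PEN["load"] * p_cur + PEN["heat"] * q_cur
    return (c_fuel + c_trade + c_cur) * DT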
Secondly, the performance of the proposed PLF-ADP algorithm is presented in Figures 6-8. The generation output of the CCGT and the power exchange between the MG and the grid are shown in Figure 6 and Figure 7. Notably, the MG sells electricity to the grid as early as time period 24. The convergence process of the ADP is depicted in Figure 8, where the ADP converges in less than 40 iterations.

Figure 8: The convergence curve of ADP

To demonstrate the effectiveness, the myopic policy and model predictive control (MPC) are used as competitive comparisons. The experimental results show that the solution from ADP performs better than the myopic policy and the MPC algorithm, achieving a 5% cost reduction, though the computation time is longer due to the iteration process. In summary, based on augmented-state value approximation, the proposed PLF-ADP algorithm is effective for the economic dispatch of the integrated microgrid.

5 CONCLUSIONS

In this paper, we propose a novel ADP algorithm based on a Markov decision process for the economic dispatch problem of a microgrid that contains both heat and power distributed generators. Specifically, we integrate the CCGT thermodynamic process into the approximate dynamic programming via augmented states. In the experimental section, we validate the effectiveness of the proposed algorithm with comparisons to conventional optimization strategies. Some limitations remain in our work: building on the present results, we intend to improve both performance and efficiency in future work and to address the uncertainty arising in real applications, thus making the proposed ADP more feasible and broadly applicable for automatic economic dispatch.

REFERENCES

[1] H. Kanchev, F. Colas, V. Lazarow, and B. Francois, Emission reduction and economical optimization of an urban microgrid operation including dispatched PV-based active generators, IEEE Trans. Sustain. Energy, Vol.5, No.4, 1397-1405, 2014.
[2] A. Zlotnik, L. Roald, S. Backhaus, M. Chertkov, G. Andersson, Coordinated scheduling for inter-dependent electric power and natural gas infrastructures, IEEE Trans. on Power Systems, Vol.32, No.1, 600-610, 2017.
[3] J. Fang, Q. Zeng, X. Ai, Z. Chen, J. Wen, Dynamic optimal energy flow in the integrated natural gas and electrical power systems, IEEE Trans. on Sustainable Energy, Vol.9, No.1, 188-198, 2017.
[4] W. B. Powell, Approximate Dynamic Programming: Solving the Curses of Dimensionality, John Wiley & Sons, Chap.3-13, 2007.
[5] H. Shuai, J. Fang, X. Ai, Y. Tang, J. Wen, H. He, Stochastic optimization of economic dispatch for microgrid based on approximate dynamic programming, IEEE Trans. on Smart Grid, Vol.10, No.3, 2440-2452, 2018.
[6] L. Zadeh, C. Desoer, Linear System Theory: The State Space Approach, Courier Dover Publications, Chap.4, 2008.
[7] H. Shuai, J. Fang, X. Ai, Y. Tang, J. Wen, H. He, Optimal real-time operation strategy for microgrid: An ADP-based stochastic nonlinear optimization approach, IEEE Trans. on Sustainable Energy, Vol.10, No.2, 931-942, 2018.
[8] D. F. Salas, W. B. Powell, Benchmarking a scalable approximate dynamic programming algorithm for stochastic control of grid-level energy storage, INFORMS Journal on Computing, Vol.30, No.1, 106-123, 2018.
[9] J. Nascimento, W. B. Powell, An optimal approximate dynamic programming algorithm for concave, scalar storage problems with vector-valued controls, IEEE Trans. on Automatic Control, Vol.58, No.12, 2995-3010, 2013.
[10] S. Bahrami, A. Sheikhi, From demand response in smart grid toward integrated demand response in smart energy hub, IEEE Trans. on Smart Grid, Vol.7, No.2, 650-658, 2015.
[11] C. Yang, Multi-time scale identification for multi-energy system, submitted.
ai_researcher
2
Six_challenges_for_fully_autonomous_scientific_discovery.pdf
Automated Scientific Discovery: From Equation Discovery to Autonomous Discovery Systems

Stefan Kramer1, Mattia Cerrato1, Sašo Džeroski2, Ross D. King3,4
1Johannes Gutenberg University, Mainz, Germany, {kramerst, cerrato}@uni-mainz.de
2Jožef Stefan Institute, Ljubljana, Slovenia, [email protected]
3Chalmers University, Gothenburg, Sweden, [email protected]
4University of Cambridge, Cambridge, UK

Abstract

The paper surveys automated scientific discovery, from equation discovery and symbolic regression to autonomous discovery systems and agents. It discusses the individual approaches from a "big picture" perspective and in context, and also covers open issues and recent topics such as the various roles of deep neural networks in this area, aiding in the discovery of human-interpretable knowledge. Further, we present closed-loop scientific discovery systems, starting with the pioneering work on the Adam system up to current efforts in fields from material science to astronomy. Finally, we elaborate on autonomy from a machine learning perspective, but also in analogy to the autonomy levels in autonomous driving. The maximal level, level five, is defined to require no human intervention at all in the production of scientific knowledge. Achieving this is one step towards solving the Nobel Turing Grand Challenge to develop AI Scientists: AI systems capable of making Nobel-quality scientific discoveries highly autonomously at a level comparable, and possibly superior, to the best human scientists by 2050.

1 Introduction and Scope

The automated discovery of scientific knowledge has always been on the agenda of artificial intelligence research, and prominently so since the end of the 1970s [Langley, 1977; Langley et al., 1987]. Scientific knowledge takes many forms: in many cases, the scientific process begins with collecting and classifying objects, and creating taxonomies of classes of objects. The more a scientific discipline advances, the more it tends to strive to describe the phenomena quantitatively, for better explanation and prediction. By far the most commonly used representation for describing systems of interest is in the form of mathematical equations, in particular differential equations. Thus, the automated discovery of equations from data has been established as a family of methods within and partly outside artificial intelligence: it runs under the heading of equation discovery [Langley, 1977; Džeroski & Todorovski, 1993] as well as symbolic regression [Koza, 1994].

The goal in many application domains of equation discovery and symbolic regression is to learn a human-understandable model of the system dynamics in the form of (mostly ordinary) differential equations.1 The underlying data are most frequently temporal. One important aspect of scientific discovery is that the resulting models need to be interpretable in principle.2 If a model cannot be communicated to a community of researchers, it hardly qualifies as scientific, as communication is an indispensable part of the scientific endeavor. Thus, the goal is not optimization (e.g., of properties in material science or drug development), but to develop understanding.

An important part of the literature on automated scientific discovery [Langley et al., 1987; Li et al., 2021] discusses the topic from a cognitive science point of view (what are or could be the reasoning processes leading to certain discoveries?) and thus also as a historical reconstruction of the processes.

1 In some application domains, e.g., in parts of systems biology, the great number of entities resp. quantities (and thus also constants to be fitted) prevents modeling by equations. Here, one has to resort to formalisms like Boolean networks [Pušnik et al., 2022].
2 Thus, we are aiming to discover the most prevalent form of knowledge in the natural and life sciences, systems of equations, and are, for the exposition of this paper, not interested in classification or regression models, models for structured prediction, or generative models applied in these fields.
This is relevant because today's AIs for scientific discovery also have to start from the same principles to enable discoveries in completely new application domains. While this can be viewed on the symbolic level only, many of today's approaches also consider the subsymbolic level to aid the process: neural networks of various sorts can play a vital role in guiding the search, providing valuable information to the discovery agent, or turning low-level sensory information into high-level information that can be used for symbolic reasoning.

Finally, the question of the autonomy of discovery agents arises. While early systems assumed that a table of input data is given by a human user, approaches with more autonomy on the side of the discovery agent are becoming more common. The approach became prominent with the development of the first robot scientist worldwide, Adam [King et al., 2009], which automated cycles of hypothesis generation and testing in the field of functional genomics. Meanwhile, the third generation of robot scientists is being developed. The degree of autonomy of a discovery agent may range from completely passive, i.e., supervised learning, via active learning [Cohn et al., 1996] to reinforcement learning [Sutton & Barto, 2018].

The remainder of the paper is organized as follows: In Section 2, we review equation discovery and symbolic regression from the beginnings to the current state of the art, with a list of open problems. In Section 3, we discuss the representations used in current scientific discovery and, in particular, how neural networks can be employed to aid the discovery process. The topic of Section 4 is closed-loop scientific discovery, with recent progress in the field. Section 5 discusses different levels of autonomy with an outlook on possible future directions, before we conclude in Section 6.

Fig. 1: (a) BACON [Langley, 1977; Langley et al., 1987] (b) Example of a context-free grammar guiding the search for equations in the Lagramge system [Todorovski & Džeroski, 1997] (c) A probabilistic context-free grammar as used in ProGED [Brence et al., 2021] (d) Symbolic regression [Schmidt & Lipson, 2009]

2 From BACON to Modern Equation Discovery and Symbolic Regression

The first system for the discovery of equations from data was BACON by Pat Langley [Langley, 1977]. The first version of BACON was developed into a series of successor systems, BACON.2 to BACON.5, with increasingly complex functionality [Langley et al., 1987]. The basic philosophy behind the book by Langley et al. was that scientific discovery, even in its most intricate forms, is essentially problem solving. This even applies to the search for new problems, new representations, and new measurement devices. In the case of the BACON systems, the idea was applied to the discovery of equations.

BACON.1 to BACON.5 were implemented on the basis of PRISM, a system for the representation and inference of production rules. The BACON systems relied on observing the correlation of pairs of variables while everything else is held constant (ceteris paribus). This is a strong assumption, as in many cases it will not be possible to control all other variables in an experiment. Also, interestingly, BACON has a flavor of active learning, since users are requested to record data if they are not available yet. One interesting feature of BACON is the construction of new terms, e.g., ratios or products of existing terms, by production rules. In this way, it takes advantage of the structure of the search space, which is rarely attempted in current systems. Noise handling is achieved by a tolerance parameter, which establishes that the value of a variable (constructed or initially given) is constant even though it varies within a certain range. BACON.2 to BACON.5 included advanced features for symmetries, common divisors, and conservation laws, amongst others. Fig. 1(a) shows the derivation of Kepler's third law, D³/P² = k, by a sequence of newly constructed terms, until a more or less constant value is found.
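The BACON strategy of constructing products and ratios until a (near-)constant term is found can be illustrated in a few lines of Python. The planetary data and tolerance below are illustrative, and the brute-force integer-power search is a simplification of BACON's production rules:

import itertools
import numpy as np

# Illustrative planetary data: D = mean distance from the Sun (AU),
# P = orbital period (years).
D = np.array([0.387, 0.723, 1.0, 1.524, 5.203])
P = np.array([0.241, 0.615, 1.0, 1.881, 11.862])

def is_constant(v, tol=0.05):
    """BACON-style noise tolerance: v counts as constant if its relative
    spread stays within tol."""
    return (v.max() - v.min()) / abs(v.mean()) < tol

# Enumerate small integer powers, mimicking BACON's construction of
# products and ratios of observed terms.
for i, j in itertools.product(range(-3, 4), repeat=2):
    if i <= 0 and j <= 0:
        continue                  # skip the trivial all-nonpositive cases
    term = D ** i * P ** j
    if is_constant(term):
        print(f"D^{i} * P^{j} ~= {term.mean():.3f}")
# prints the pair (3, -2) (and its reciprocal): Kepler's third law D^3/P^2 = k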
The next generation of equation discovery systems was not restricted to keeping all but a pair of variables fixed, but was able to handle observational data. In addition, it was able to learn models of dynamical systems in the form of ordinary differential equations (ODEs). Lagrange [Džeroski & Todorovski, 1993] computes all derivatives up to a predefined order, then generates products of up to a maximum number of variables, before applying simple linear regression to generate a candidate equation. More recently, this approach has been taken up in the SINDy system [Brunton et al., 2016], which applies sparse (instead of simple) linear regression. The successor of Lagrange, named Lagramge [Todorovski & Džeroski, 1997], was a milestone in equation discovery, as it introduced the use of domain knowledge in addition to data: it was the first system to use a context-free grammar (CFG) to guide the search for systems of equations. Grammars are a way for domain experts to use prior knowledge and let it guide the search for equations. In this way, Lagramge was able to solve problems that its predecessor Lagrange could not, for instance, the problem of two poles on a cart. An example CFG for Lagramge is shown in Figure 1(b). Lagramge GSAT [Ganzert et al., 2010] improves Lagramge through a bundle of modifications: first, it uses a search procedure similar to GSAT (random-restart hill climbing) to randomize the search. Further, it employs a one-step look-ahead and a momentum term to make the search less erratic. Washio & Motoda [1997] further improved these methods by also taking into account units and scale types as constraints. Dimensional units are also considered in ProGED [Brence et al., 2021; 2023], which is based on the idea of using probabilistic CFGs to represent the search space and sample from it. An example is given in Fig. 1(c), where both the rules and the probabilities associated with the rules (p and q) are shown. These probabilities can be fixed, but can also be learned from corpora of equations [Chaushevska et al., 2022]. Sampling candidate equations from probabilistic CFGs enables easy parallelization: batches of sampled equations can be distributed to nodes and evaluated in an embarrassingly parallel way.
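To illustrate the idea of sampling candidate equations from a probabilistic CFG, here is a minimal sketch; the grammar and rule probabilities are illustrative and are not those of Fig. 1(c) or of ProGED itself:

import random

# A small probabilistic CFG over arithmetic expressions; probabilities
# are illustrative placeholders.
PCFG = {
    "E": [(["E", "+", "F"], 0.4), (["F"], 0.6)],
    "F": [(["F", "*", "T"], 0.3), (["T"], 0.7)],
    "T": [(["x"], 0.5), (["c"], 0.3), (["(", "E", ")"], 0.2)],
}

def sample(symbol="E", max_depth=12, depth=0):
    """Sample one expression string from the PCFG (depth-capped so that
    recursion terminates)."""
    if symbol not in PCFG:
        return symbol                       # terminal symbol
    rules, probs = zip(*PCFG[symbol])
    if depth >= max_depth:                  # force the shortest rule
        rules, probs = (min(rules, key=len),), (1.0,)
    rule = random.choices(rules, weights=probs)[0]
    return "".join(sample(s, max_depth, depth + 1) for s in rule)

# A batch of candidates to fit and score in an embarrassingly parallel way:
batch = [sample() for _ in range(5)]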
Symbolic regression, a development parallel to equation discovery, was originally based on genetic programming (GP): the term was introduced by Koza [1994]. Typical symbolic regression systems work on an operator tree or DAG representation of equations (see Fig. 1(d)). These trees are modified by a set of possible operations, such as crossover between subtrees of two trees (equations), mutations, substitutions of variables by constants, or, vice versa, substitutions of constants by variables. Schmidt & Lipson [2009] used symbolic regression to discover natural laws from measured data. Symbolic regression approaches were used early on to discover ODEs [Petrovski & Džeroski, 1994] and used ideas from grammar-based genetic programming to consider domain-specific knowledge, paving the way for systems that use both data and domain knowledge in equation discovery, such as Lagramge, Lagramge2.0 [Todorovski & Džeroski, 2006], IPM [Bridewell et al., 2008] and ProbMoT [Čerepnalkoski et al., 2012]. The last three use a process-based formalism to represent models and domain knowledge.

The Bayesian machine scientist [Guimerà et al., 2020] establishes the plausibility of models using explicit approximations to the exact marginal posterior over models, and establishes its prior expectations about models by learning from a large empirical set of mathematical expressions. The space of equations is explored via Markov Chain Monte Carlo (MCMC), with specific moves for sampling mathematical expressions.

Deep Symbolic Regression (DSR) [Petersen et al., 2021] addresses the difficulty that GP approaches have with finding solutions for larger problems. It employs a recurrent neural network to build an equation tree step by step. As the objective function (of fitting a low-error equation) is not differentiable, a reinforcement learning approach is proposed. More specifically, DSR employs a risk-seeking policy gradient, which maximizes the best-case performance, not the average-case performance.

Feynman 2.0 [Udrescu et al., 2020] is a recent symbolic regression approach that aims to improve on its predecessor (i) by structuring the search space, building equations in meaningful increments, and (ii) by making it more noise-tolerant. The first goal is achieved by graph modularity, i.e., constructing the equations from so-called graph modules. It should be noted that, in doing so, it is one of the few approaches that takes advantage of the structure of the search space (instead of brute-force search, sampling, or "blind" randomized traversal). The second goal is achieved by employing an MDL-inspired evaluation function, called MEDL in Feynman 2.0, instead of the RMSE. Using MEDL, effective pruning can be implemented, because the complexity of an equation can be balanced against its error.
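A toy version of the GP-based symbolic regression described at the start of this section, with mutation only (crossover omitted for brevity) and without division to avoid singularities, might look as follows:

import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_tree(depth=3):
    """Grow a random expression tree over {x, constants, +, -, *}."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else round(random.uniform(-2, 2), 2)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree, p=0.2):
    """With probability p, replace a (sub)tree by a freshly grown one."""
    if random.random() < p:
        return random_tree(2)
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    return (op, mutate(left, p), mutate(right, p))

def generation(pop, xs, ys, keep=10):
    """One (mu + lambda)-style step: keep the lowest-error trees, then
    refill the population with mutated copies of the survivors."""
    def mse(t):
        return sum((evaluate(t, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    survivors = sorted(pop, key=mse)[:keep]
    return survivors + [mutate(random.choice(survivors)) for _ in range(keep)]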
In many cases, equation discovery and symbolic regression are able to discover relatively complex equations of low error (or zero error, if they rediscover known equations) after extensive, randomized search through the space of equations. These astounding results raise the question why it is possible to find "the needle in the haystack" after hour-long search with many irrelevant intermediate states. Researchers frequently appeal to the "unreasonable effectiveness of mathematics", which "we neither understand nor deserve" [Wigner, 2012]. One partial, rational explanation may be that the language of mathematics was partly developed for the description of natural phenomena. However, this is clearly not a given: there may well be natural phenomena whose explanatory models are too complex to be found, or simple fitting models with no clear relation to current theory.

In equation discovery and symbolic regression, a few open problems can be identified:
 It remains hard to exploit structure in the space of equations to guide the search to promising parts of the search space. Opportunities for pruning would also be helpful.
 At least in the case of differential equations, fitting the model is the most expensive part. Ways of stopping the fitting process if it turns out to be unpromising would save a lot of computation time.
 Equations are "brittle": properties of differential equations can change dramatically with only small syntactic modifications. Minor changes can lead to no solutions, one solution, or many solutions.
 Overfitting avoidance and regularization: the syntactic complexity of an equation does not necessarily correspond to the complexity of the function in the feature space. Meaningful ways to approximate or bound complexity would be helpful.
 Relating discovered equations to existing theory, or making the equations consistent with it, remains a big challenge. Quite related, it is not clear whether or how an "understanding of the physical meaning" of variables can be achieved.

3 Representation and Tricks

The standard representation of data for scientific discovery is tabular data (see, e.g., the tables in the book by Langley et al. [1987] and Figure 1(a)). However, recent years have seen a surge of papers that use neural networks as an intermediate representation to aid in the discovery of models.

One notable example is the work of Miles Cranmer and Shirley Ho [Cranmer et al., 2020], who proposed Graph Neural Networks (GNNs) as an intermediate representation. GNNs were used to learn about the interaction of objects, in terms of, for example, the forces that they exert on each other. Classical examples include n-body problems or, more specifically, orbital mechanics, the motion of planets and other large objects in our solar system. The nodes in the graph represent the objects, which are annotated by feature vectors representing the properties of the objects. The edges in the graph represent the interactions of the objects and are annotated by their properties, which partially depend on the properties of the objects. As an example, one may consider the masses of planets as properties of the nodes, and the distance and gravity between the objects as properties of the edges. When learning GNNs, typically, so-called node models φv are updated depending on the edge models φe of neighboring edges and, alternatingly, the edge models φe are updated based on the node models φv of the nodes the edges connect. Update steps are frequently framed as message passing, and pooling functions aggregate the input from the multiple edges connected to one node. GNNs can usually be trained end-to-end, but are not guaranteed to converge.

Fig 2: Workflow of [Cranmer et al., 2020]: GNNs as an intermediate representation to support or enable the learning process
In the application domain given as an example, orbital mechanics, the input to the system consists of the (x, y, z) coordinates of the Sun, all planets, and all moons with a mass above 10^18 kg. Data from 1980 to 2013 were used with time intervals of 30 minutes each, with the first 30 years for training and the last 3 years for validation.

Garcon et al. [Garcon et al., 2022] proposed a method to predict known physical parameters and discover new ones from oscillating time series. The method is trained on a large set of synthetic time series. The latent parameters used to generate the monochromatic sine waves are the carrier frequency Fc and phase φ (which is mapped, for technical reasons, to two separate parameters, sin(φ) and cos(φ)), in addition to the coherence time τ. The AM and FM sine waves are generated by adding a modulation function to the carrier. The modulation function's latent parameters are the modulation frequency Fm and amplitude Im. Noise is linearly added to the pure signals by sampling from a Gaussian distribution. AM/FM signals with minimum Im reduce to decaying monochromatic sine waves, and reach 100% modulation with maximum Im. These latent parameter ranges are wide enough that they would encompass many foreseeable real-world signals. Figure 3 shows the neural network architecture that is used to predict the latent parameters, with an autoencoder-type subnetwork to support the prediction. The method can be used to discover new parameters (not just predict known ones) and to reconstruct the equations producing the input time series.

Fig 3: Neural network architecture of the model that extracts known and unknown physical parameters from oscillating time series [Garcon et al., 2022]

The situation is clearly more complex when the observations are given as videos instead of tabular data. Chen et al. [Chen et al., 2022] presented a solution based on what they call neural state variables. Neural state variables are essentially latent variables. The current state-of-the-art approach to computing latent variables would be to define an autoencoder with a bottleneck layer of the right dimension. The dimension should be large enough to still allow faithful reconstruction by the decoder, but small enough for the latent variables to be non-redundant. The goal of the proposed method is to have the number of dimensions (i.e., the number of neural state variables) as close as possible to the degrees of freedom of the observations in the videos. In technical terms, the number of dimensions should be close to the so-called intrinsic dimension ID, which is the minimum number of independent variables needed to fully describe the state of a dynamical system. Various methods from manifold learning, for instance the one by Levina & Bickel [2004], are known to compute an estimate of the intrinsic dimension efficiently. It would be tempting to calculate the intrinsic dimension by one of these methods for the videos and then use it as the bottleneck size of an autoencoder to come up with the latent variables. However, in practice, information already becomes blurry at bottleneck sizes much larger than the ID. Therefore, Chen et al. take a two-step approach and define two autoencoders, one regular and one mapping the latent variables of the first to a further ID latent variables. These are the neural state variables that can be used for downstream analysis. The approach has not yet been made explainable for scientific discovery.

Generally speaking, neural networks are used in this domain for
 making the data sparse in the sense of removing small to negligible interactions [Cranmer et al., 2020],
 a change of representation (e.g., from coordinates to distances depending on some variables [Cranmer et al., 2020]),
 data augmentation (to sample arbitrarily large data from the neural network and also smoothen the data in that way [Cranmer et al., 2020; Li et al., 2021]),
 the prediction of important parameters to be used in equations directly [Garcon et al., 2022], and
 extracting latent variables from low-level input representations (e.g., neural state variables from videos [Chen et al., 2022]).
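As an aside to the intrinsic dimension estimation mentioned above, the Levina-Bickel estimator is simple to implement. Here is a minimal NumPy sketch (brute-force neighbor search, so only suitable for small data sets):

import numpy as np

def intrinsic_dimension_mle(X, k=10):
    """Levina-Bickel (2004) maximum-likelihood estimate of the intrinsic
    dimension from k-nearest-neighbour distances."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    dists = np.sqrt(np.sort(d2, axis=1))[:, 1:k + 1]   # drop self-distance
    # m_k(x) = [ (1/(k-1)) * sum_{j<k} log(T_k / T_j) ]^{-1}
    logs = np.log(dists[:, -1:] / dists[:, :-1])
    m = (k - 1) / logs.sum(axis=1)
    return float(m.mean())

# Points on a 2-D plane embedded in 10-D give an estimate close to 2:
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
print(intrinsic_dimension_mle(X))   # ~2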
The latent parameters used to generate the monochromatic sine waves are the carrier frequency, Fc and phase φ (which is mapped for technical reasons to two separate parameters, sin(φ) and cos(φ)), in addition to the co- herence time τ. The AM and FM sine waves are generated by adding a modulation function to the carrier. The modulation function's latent parameters are the modulation frequency Fm and amplitude Im. Noise is linearly added to the pure signals by sampling the Gaussian distribution. AM/FM-signals with minimum Im reduce to decaying monochromatic sine waves and reach 100% modulation with maximum Im. These latent parameter ranges are wide enough such that they would en- compass many foreseeable real-world signals. Figure 3 shows the neural network architecture that is used to predict the latent parameters, with an autoencoder-type subnetwork to support the prediction. The method can be used to discover ● Formulating a scientific question. That is, describing in formal or informal terms the phenomenon of interest for which our current understanding is unsatisfactory or incom- plete; ● Formulating an hypothesis, possibly based on previous work which is deemed relevant or related; ● Design experiment(s) which would be able to support, or disprove, the hypothesis at hand; ● Performing the experiment, which may require specialized tools or the development of new ones; ● Analyzing the experimental result, which may lead to re- vising the hypothesis or the experimental design if the analy- sis reveals some flaw; ● Communication of the results. Discipline Name Link Country Drug Eve https://www.chalmers.se/en/de- Sweden Discovery part- ments/bio/news/Pages/Chalmers- Robot-Scientist-ready-for-drug- discovery.aspx Drug Recursion https://www.recursion.com/ US Discovery Drug Lilly Life https://investor.lilly.com/news- US Discovery Sciences releases/news-release-details/eli- Studio lab lilly-and-company-collaboration- strateos-inc-launch-remote Chemistry roboRXN at https://research.ibm.com/sci- Switzerland IBM ence/ibm-roborxn Chemistry AI-Chemist https://academic.oup.com/nsr/ar- China ticle/9/10/nwac190/6694008 Materials/ Argonne https://www.anl.gov/autono- US Biology Autono- mous Discovery mous-discovery/developing-a- selfdriving-laboratory-prototype Materials Robot https://www.liverpool.ac.uk/le- UK Chemist verhulme-research-cen- tre/news/articles/feature-will-we- ever-have-a-robo-chemist/ Materials Acceleration https://acceleration.utoronto.ca/ Canada Consortium KIWI-lab https://kiwi-biolab.de Germany Biopro- cessing Astronomy Automated https://www.ucolick.org/pub- US lic/telescopes/apf.html Planet Finder (APF) Table 1: Ten prominent examples of efforts for closed-loop scientific discovery from seven different countries and across six different dis- ciplines automatically originate hypotheses to explain observations (abduction/induction), devise experiments to test these hy- potheses (deduction), physically run the experiments using laboratory robotics, interpret the results to change the proba- bility of hypotheses, and then repeat the cycle. [King et al., 2004; King et al., 2009; Coutant et al., 2019; Williams et al., 2015]. As the experiments are conceived and executed auto- matically by computer, it is possible to completely capture and digitally curate all aspects of the scientific process, mak- ing science more reproducible [King et al., 2009]. 
While techniques such as BACON or symbolic regression (Section 2) are able to aid scientists in analyzing experimental results, they are not straightforwardly employable for automating the other steps of the scientific process. In machine learning terms, one limitation of these techniques is that they rely on the availability of training data provided by human scientists. Nonetheless, techniques such as active learning [Cohn et al., 1996] and reinforcement learning [Sutton & Barto, 2018] may be employed to design algorithmic scientists (often called robot scientists), which are able to perform experiments in the lab, autonomously extract data, and formulate their own theories.

Developing autonomous systems for scientific discovery is enormously challenging on multiple dimensions. Hypothesis formation needs to be supported by a variety of AI and ML methods, from knowledge representation to active learning and reinforcement learning. The creation of a whole new theory, with theoretical terms and new measurement devices, is at least one level of complexity harder and has not been addressed yet at all. In the space of hypothesis formation, the first contribution describing a largely autonomous system which discovered new knowledge is due to Ross D. King and his group [King et al., 2009], who developed the Adam robot scientist (see Figure 4(b)). Adam identified 6 genes encoding orphan enzymes in yeast (Saccharomyces cerevisiae), i.e., enzymes which catalyze reactions occurring in yeast for which the encoding genes were not known at the time. The system was provided with a freezer, liquid handlers, plate readers, robot arms and further actuators, enabling yeast cultivation experiments lasting as long as 5 days. Yeast growth was measured via optical sensors. On the software side, Adam was provided with an extensive Prolog knowledge base describing known facts about yeast metabolism. Hypotheses were formed by abduction, enabled by a combination of bioinformatic software and databases, after which an experiment planning module was responsible for selecting metabolites to be inserted into the yeast's growth medium.

Another successful example of laboratory automation is Eve. Originally developed for high-throughput drug screening [Sparkes et al., 2010], the system was then instrumental in discovering that several existing drugs could be repurposed to prevent tropical diseases [Williams et al., 2015]. Most prominently, it found that an anti-cancer compound (TNP-470) could be employed against the parasite Plasmodium vivax, which is the most frequent cause of recurring malaria. The system is able to hypothesize and test quantitative structure-activity relationships (QSARs) via a combination of active learning and Gaussian process regression (GPR). GPR is employed to learn a QSAR f mapping the characteristics of compounds to a response variable indicating the strength of the biological activity; then, the obtained function f is employed as a noisy oracle to select K compounds out of a pool of possible candidates.
Arguably the cutting edge of applying AI to science is the integration and automation of these steps for the closed-loop automation of scientific research (also known as 'Robot Scientists', 'Self-Driving Labs', or 'Autonomous Discovery'). Such systems automatically originate hypotheses to explain observations (abduction/induction), devise experiments to test these hypotheses (deduction), physically run the experiments using laboratory robotics, interpret the results to change the probability of hypotheses, and then repeat the cycle [King et al., 2004; King et al., 2009; Coutant et al., 2019; Williams et al., 2015]. As the experiments are conceived and executed automatically by computer, it is possible to completely capture and digitally curate all aspects of the scientific process, making science more reproducible [King et al., 2009]. Such systems are also more resilient to pandemics: an AI-based chemistry robotic system at the University of Liverpool made UK national news by working alone through the pandemic [Burger et al., 2020].

Discipline | Name | Link | Country
Drug Discovery | Eve | https://www.chalmers.se/en/departments/bio/news/Pages/Chalmers-Robot-Scientist-ready-for-drug-discovery.aspx | Sweden
Drug Discovery | Recursion | https://www.recursion.com/ | US
Drug Discovery | Lilly Life Sciences Studio | https://investor.lilly.com/news-releases/news-release-details/eli-lilly-and-company-collaboration-strateos-inc-launch-remote | US
Chemistry | roboRXN at IBM | https://research.ibm.com/science/ibm-roborxn | Switzerland
Chemistry | AI-Chemist | https://academic.oup.com/nsr/article/9/10/nwac190/6694008 | China
Materials/Biology | Argonne Autonomous Discovery | https://www.anl.gov/autonomous-discovery/developing-a-selfdriving-laboratory-prototype | US
Materials | Robot Chemist | https://www.liverpool.ac.uk/leverhulme-research-centre/news/articles/feature-will-we-ever-have-a-robo-chemist/ | UK
Materials | Acceleration Consortium | https://acceleration.utoronto.ca/ | Canada
Bioprocessing | KIWI-lab | https://kiwi-biolab.de | Germany
Astronomy | Automated Planet Finder (APF) | https://www.ucolick.org/public/telescopes/apf.html | US

Table 1: Ten prominent examples of efforts for closed-loop scientific discovery from seven different countries and across six different disciplines.

Such systems are increasingly being applied to multiple scientific domains (ranging from quantum mechanics to astronomy, from chemistry to medicine); see Table 1. Most of these systems are designed to automate the optimization of some form of product, e.g., a drug to treat cancer, a material for a battery, or a better protein. They are also only able to execute a restricted form of experiment that can be easily automated using laboratory automation. Such constraints on the system simplify the engineering. The advantage of saving time and money using such closed-loop automation has been repeatedly demonstrated (e.g., [Williams et al., 2015]), and it has therefore already become standard in some industries.

5 Autonomy

With self-driving cars there is an agreed classification system with six levels of autonomy, ranging from fully manual to fully automated systems. A similar scheme for evaluating autonomy was proposed for AI in science [Kitano, 2021], since this too involves a transfer of responsibility from humans to machines (Table 2). This classification aims to be clear, measurable, achievable, relevant, time-bound and robust, both in the sense that assigning a level will be easy, and in the sense that the classification will prove robust over time and not require constant revision.

Level | Summary | Narrative | Example
0 | No automation | Traditional human science before the advent of computers. | -
1 | Machine assistance | The use of computers to automate an aspect of science, e.g. analysing data. | Most current applications of ML.
2 | Partial automation | An important aspect of the discovery cycle is fully automated. | AlphaFold 2; real-time weather forecasting.
3 | Conditional automation | Closed-loop automation: the full cycle of discovery is automated in a restricted domain. | See Table 1.
4 | High automation | Closed-loop automation across multiple scientific domains, with a limited ability to set its own goals. | No existing system.
5 | Full automation | All aspects of science are automated and no human intervention is required. | No existing system.

Table 2: Six levels of autonomy in scientific discovery, analogous to autonomy levels in autonomous driving.

Participants at the 1st Workshop on the Nobel Turing Challenge, organized by the Alan Turing Institute in 2020, estimated that widespread uptake of Level Two and Level Three systems would happen within five years, which is happening. They considered that Level Four systems could become widespread in the next 10 to 15 years, and Level Five in the next 20 to 30 years. In games such as chess and Go there is a continuum of ability from novices up to Grandmasters. We argue that this is also true in science, from the simple research that can be automated now, through what most human scientists can achieve, up to the ability of a Newton or an Einstein. If one accepts this, then just as in chess, it is likely that advances in technology and our understanding of science will drive the development of ever-smarter AI systems for science. The physics Nobel laureate Frank Wilczek said, 10 years ago, that in 100 years' time the best physicist will be a machine. In February 2020 a workshop was held in London to kick off the Nobel Turing Grand Challenge to develop AI Scientists: AI systems capable of making Nobel-quality scientific discoveries highly autonomously at a level comparable, and possibly superior, to the best human scientists by 2050 [Kitano, 2021].
6 Conclusions

This paper is an attempt at surveying research on automated scientific discovery, from discovering equations to autonomous discovery systems and agents. In doing so, it takes a broad perspective on the topic, which is necessary to understand the individual efforts in context. The article covers the field from its beginnings to very recent approaches, with the understanding that we still have a long way to go in putting everything together to create human-level autonomous scientists. Human-level autonomous scientists should, ultimately, be able to produce whole new theories, along with theoretical terms and measurement devices, which can be communicated to humans and interpreted in the light of other, existing theories. At this point, autonomous discovery systems are focused primarily on "closing the loop" and lab automation, and not so much on generating human-interpretable knowledge, like (differential) equations. Vice versa, computational approaches to scientific discovery, e.g., for equation discovery and symbolic regression, do not yet focus on "embodiment" in autonomous systems. Ultimately, these currently disparate efforts have to grow together. Finally, it should be noted that artificial intelligence also has a role in so-far-unexplored areas, like the design of experiments, where much human ingenuity is currently still needed.

7 Acknowledgements

This work was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), Berzelius-2021-86, Data-Driven Life Science (DDLS) - SciLifeLab, and the UK Engineering and Physical Sciences Research Council (EPSRC), grant nos. EP/R022925/2 and EP/W004801/1.

References

[Bellinger et al., 2021] Colin Bellinger et al.: Active Measure Reinforcement Learning for Observation Cost Minimization, in: Proc. of the 34th Canadian Conference on Artificial Intelligence (Canadian AI 2021), 2021.
[Brence et al., 2021] Jure Brence, Ljupčo Todorovski, Sašo Džeroski: Probabilistic Grammars for Equation Discovery, Knowledge-Based Systems, 224:107077, 2021.
[Brence et al., 2023] Jure Brence, Ljupčo Todorovski, Sašo Džeroski: Dimensionally consistent equation discovery through probabilistic attribute grammars, Information Sciences, 2023.
[Bridewell et al., 2008] Will Bridewell, Pat Langley, Ljupčo Todorovski, Sašo Džeroski: Inductive process modeling, Machine Learning, 71:1-32, 2008.
[Brunton et al., 2016] Steven L. Brunton, Joshua L. Proctor, J. Nathan Kutz: Discovering governing equations from data by sparse identification of nonlinear dynamical systems, PNAS, 113:3932-3937, 2016.
[Burger et al., 2020] Benjamin Burger et al.: A mobile robotic chemist, Nature, 583:237-241, 2020.
[Chaushevska et al., 2022] Marija Chaushevska, Ljupčo Todorovski, Jure Brence, Sašo Džeroski: Learning the probabilities in probabilistic context-free grammars for arithmetical expressions from equation corpora, in: Proc. Slovenian Conference on Artificial Intelligence, 2022.
[Chen et al., 2022] Boyuan Chen, Kuang Huang, Sunand Raghupathi, Ishaan Chandratreya, Qiang Du, Hod Lipson: Automated discovery of fundamental variables hidden in experimental data, Nature Computational Science, 2:433-442, 2022.
[Cherepnalkoski et al., 2012] Darko Čerepnalkoski, Katerina Taškova, Ljupčo Todorovski, Nataša Atanasova, Sašo Džeroski: The influence of parameter fitting methods on model structure selection in automated modeling of aquatic ecosystems, Ecological Modelling, 45:136-165, 2012.
[Cohn et al., 1996] D. A. Cohn, Z. Ghahramani, M. I. Jordan: Active Learning with Statistical Models, Journal of Artificial Intelligence Research, 4:129-145, 1996.
[Coutant et al., 2019] Anthony Coutant et al.: Closed-Loop Cycles of Experiment Design, Execution, and Learning Accelerate Systems Biology Model Development in Yeast, Proceedings of the National Academy of Sciences, 116(36):18142-18147, 2019.
[Cranmer et al., 2020] Miles D. Cranmer, Alvaro Sanchez-Gonzalez, Peter W. Battaglia, Rui Xu, Kyle Cranmer, David N. Spergel, Shirley Ho: Discovering Symbolic Models from Deep Learning with Inductive Biases, Advances in Neural Information Processing Systems 33, 2020.
[De Raedt & Kramer, 2001] Luc De Raedt, Stefan Kramer: The Levelwise Version Space Algorithm and its Application to Molecular Fragment Finding, in: Proc. of the 17th International Joint Conference on Artificial Intelligence (IJCAI 2001), 853-862, 2001.
[Du et al., 2022] Yuanqi Du, Tianfan Fu, Jimeng Sun, Shengchao Liu: MolGenSurvey: A Systematic Survey in Machine Learning Models for Molecule Design, arXiv preprint, CoRR abs/2203.14500, 2022.
[Džeroski & Petrovski, 1994] Sašo Džeroski, Igor Petrovski: Discovering dynamics with genetic programming, in: Proc. of the Seventh European Conference on Machine Learning, 347-350, Springer, 1994.
[Džeroski & Todorovski, 1993] Sašo Džeroski, Ljupčo Todorovski: Discovering dynamics, in: Proc. of the Tenth International Conference on Machine Learning, 97-103, Morgan Kaufmann, 1993.
[Ellis et al., 2021] Kevin Ellis, Catherine Wong, Maxwell I. Nye, Mathias Sablé-Meyer, Lucas Morales, Luke B. Hewitt, Luc Cary, Armando Solar-Lezama, Joshua B. Tenenbaum: DreamCoder: bootstrapping inductive program synthesis with wake-sleep library learning, in: Proc. of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI 2021), 835-850, 2021.
[Ganzert et al., 2010] Steven Ganzert, Josef Guttmann, Daniel Steinmann, Stefan Kramer: Equation Discovery for Model Identification in Respiratory Mechanics of the Mechanically Ventilated Human Lung, in: Proc. of the 13th International Conference on Discovery Science (DS 2010), 296-310, Springer, 2010.
[Garcon et al., 2022] Antoine Garcon, Julian Vexler, Dmitry Budker, Stefan Kramer: Deep neural networks to recover unknown physical parameters from oscillating time series, PLoS ONE, 17(5):e0268439, 2022.
[Guimerà et al., 2020] Roger Guimerà, Ignasi Reichardt, Antoni Aguilar-Mogas, Francesco A. Massucci, Manuel Miranda, Jordi Pallarès, Marta Sales-Pardo: A Bayesian machine scientist to aid in the solution of challenging scientific problems, Science Advances, 6:eaav6971, 2020.
[Jumper et al., 2021] John Jumper et al.: Highly accurate protein structure prediction with AlphaFold, Nature, 596:583-589, 2021.
[King et al., 2004] Ross D. King, Kenneth E. Whelan, Ffion M. Jones, Philip G. K. Reiser, Christopher H. Bryant, Stephen H. Muggleton, Douglas B. Kell, Stephen G. Oliver: Functional genomic hypothesis generation and experimentation by a robot scientist, Nature, 427, 2004.
[King et al., 2009] Ross D. King, Jem Rowland, Stephen G. Oliver, Michael Young, Wayne Aubrey, Emma Byrne, Maria Liakata, Magdalena Markham, Pinar Pir, Larisa N. Soldatova, Andrew Sparkes, Kenneth E. Whelan, Amanda Clare: The Automation of Science, Science, 324(5923):85-89, 2009.
[Kitano, 2021] Hiroaki Kitano: Nobel Turing Challenge: creating the engine for scientific discovery, npj Systems Biology and Applications, 7:29, 2021.
[Köppel et al., 2022] Marius Köppel, Alexander Segner, Martin Wagener, Lukas Pensel, Andreas Karwath, Christian Schmitt, Stefan Kramer: Learning to rank Higgs boson candidates, Scientific Reports, 12:13094, 2022.
[Koza, 1994] John R. Koza: Genetic programming as a means for programming computers by natural selection, Statistics and Computing, 4:87-112, 1994.
[Langley, 1977] Pat Langley: BACON: A Production System That Discovers Empirical Laws, in: Proc. of the 5th International Joint Conference on Artificial Intelligence (IJCAI 1977), 344, 1977.
[Langley et al., 1987] Patrick W. Langley, Herbert A. Simon, Gary Bradshaw, Jan M. Zytkow: Scientific Discovery: Computational Explorations of the Creative Process, MIT Press, 1987.
[Langley, 2021] Pat Langley: Agents of Exploration and Discovery, AI Magazine, 42(4):72-82, 2021.
[Lemos et al., 2022] Pablo Lemos, Niall Jeffrey, Miles D. Cranmer, Shirley Ho, Peter W. Battaglia: Rediscovering orbital mechanics with machine learning, arXiv preprint, CoRR abs/2202.02306, 2022.
[Levina & Bickel, 2004] Elizaveta Levina, Peter J. Bickel: Maximum Likelihood Estimation of Intrinsic Dimension, Advances in Neural Information Processing Systems 17, 777-784, 2004.
[Li et al., 2021] Zelong Li, Jianchao Ji, Yongfeng Zhang: From Kepler to Newton: Explainable AI for Science, arXiv preprint, https://doi.org/10.48550/arXiv.2111.12210, 2021.
[Petersen et al., 2021] Brenden K. Petersen, Mikel Landajuela Larma, T. Nathan Mundhenk, Claudio P. Santiago, Soo K. Kim, Joanne T. Kim: Deep Symbolic Regression: Recovering Mathematical Expressions from Data via Risk-Seeking Policy Gradients, in: Proc. of the 9th International Conference on Learning Representations (ICLR 2021), 2021.
[Pušnik et al., 2022] Žiga Pušnik, Miha Mraz, Nikolaj Zimic, Miha Moškon: Review and assessment of Boolean approaches for inference of gene regulatory networks, Heliyon, 8(8):e10222, 2022.
[Schmidt & Lipson, 2009] Michael Schmidt, Hod Lipson: Distilling Free-Form Natural Laws from Experimental Data, Science, 324(5923):81-85, 2009.
[Sparkes et al., 2010] Andrew Sparkes, Wayne Aubrey, Emma Byrne, Amanda Clare, Muhammed N. Khan, Maria Liakata, Magdalena Markham, Jem Rowland, Larisa N. Soldatova, Kenneth E. Whelan, Michael Young, Ross D. King: Towards Robot Scientists for autonomous scientific discovery, Automated Experimentation, 2:1, 2010.
[Sutton & Barto, 2018] Richard Sutton, Andrew Barto: Reinforcement Learning: An Introduction, MIT Press, 2018.
[Todorovski & Džeroski, 1997] Ljupčo Todorovski, Sašo Džeroski: Declarative bias in equation discovery, in: Proc. of the Fourteenth International Conference on Machine Learning, 376-384, Morgan Kaufmann, 1997.
[Todorovski & Džeroski, 2006] Ljupčo Todorovski, Sašo Džeroski: Integrating knowledge-driven and data-driven approaches to modeling, Ecological Modelling, 194:3-13, 2006.
[Udrescu et al., 2020] Silviu-Marian Udrescu, Andrew Tan, Jiahai Feng, Orisvaldo Neto, Tailin Wu, Max Tegmark: AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity, Advances in Neural Information Processing Systems 33, 2020.
[Washio & Motoda, 1997] Takashi Washio, Hiroshi Motoda: Discovering Admissible Models of Complex Systems Based on Scale-Types and Identity Constraints, in: Proc. of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI 1997), 810-819, 1997.
[Wigner, 2012] Eugene Paul Wigner: Philosophical Reflections and Syntheses, Springer Science & Business Media, 2012.
[Williams et al., 2015] Kevin Williams, Elizabeth Bilsland, Andrew Sparkes, Wayne Aubrey, Michael Young, Larisa N. Soldatova, Kurt De Grave, Jan Ramon, Michaela de Clare, Worachart Sirawaraporn, Stephen G. Oliver, Ross D. King: Cheaper faster drug development validated by the repositioning of drugs against neglected tropical diseases, Journal of the Royal Society Interface, 12(104):20141289, 2015.
HumanCoser: Layered 3D Human Generation via Semantic-Aware Diffusion Model

Yi Wang1 2†, Jian Ma1†, Ruizhi Shao3, Qiao Feng1, Yu-Kun Lai4, Kun Li1*
1Tianjin University, China  2Changzhou Institute of Technology, China  3Tsinghua University, China  4Cardiff University, UK
†Equal contribution. *Corresponding author.

arXiv:2408.11357v1 [cs.CV] 21 Aug 2024

Figure 1: Our method can generate layered 3D humans guided by text prompts, which are physically decoupled and structurally consistent. This allows our generated clothing to be reused and exchanged between digital avatars with different identities.

ABSTRACT

This paper aims to generate physically-layered 3D humans from text prompts. Existing methods either generate 3D clothed humans as a whole or support only tight and simple clothing generation, which limits their applications to virtual try-on and part-level editing. To achieve physically-layered 3D human generation with reusable and complex clothing, we propose a novel layer-wise dressed human representation based on a physically-decoupled diffusion model. Specifically, to achieve layer-wise clothing generation, we propose a dual-representation decoupling framework for generating clothing decoupled from the human body, in conjunction with an innovative multi-layer fusion volume rendering method. To match the clothing with different body shapes, we propose an SMPL-driven implicit field deformation network that enables the free transfer and reuse of clothing. Extensive experiments demonstrate that our approach not only achieves state-of-the-art layered 3D human generation with complex clothing but also supports virtual try-on and layered human animation. More results and the code can be found on our project page at https://cic.tju.edu.cn/faculty/likun/projects/HumanCoser.

Index Terms: 3D Human Generation, Layered Clothing, Physical Decoupling, Human Animation.

1 INTRODUCTION

The generation of 3D humans with changeable clothing plays an important role in movies, games and AR/VR. Existing methods [7, 55, 60, 54, 47, 44] only produce a unified surface encompassing both the body and clothing, leading to body-clothing coupling. This limits their ability to edit clothing and body separately, restricting detailed customization and accurate adjustments for virtual try-on, animated character design, and personalized avatar creation. In this paper, we aim to generate high-fidelity layered 3D humans whose clothing can be edited and exchanged via representation decoupling, as shown in Fig. 1.

Recently, owing to the high-quality image synthesis capability of pre-trained diffusion models [49], several methods [47, 44, 33] introduce a Score Distillation Sampling (SDS) strategy [49] to self-supervise the 3D human generation process. However, these methods ignore the diversity and self-occlusion of human shapes, which leads to inconsistencies in the generated human structures. Furthermore, most data-driven 3D avatar generation methods [2, 58, 25, 66, 11, 15] generate 3D clothed humans in a coupled manner, so clothing cannot be exchanged between arbitrary bodies. Overall, the above methods fail to ensure structural consistency of the human body and lack the capability to generate and edit bodies and clothes in a layered and flexible manner.

This paper introduces HumanCoser, a novel framework based on a physically-decoupled diffusion model.
It aims to generate representation-decoupled, animatable 3D dressed humans with consistent body structure in a layer-wise manner, guided by text. To achieve accurate layer-wise clothing representation, we propose a dual-representation decoupling framework designed to generate clothing independent from the human body. This framework is complemented by an innovative multi-layer fusion volume rendering method. HumanCoser thus effectively generates multi-layer clothing consistent with the text prompts. Moreover, to ensure accurate geometric alignment between the decoupled clothing and the body, we present a 3D implicit deformation field leveraging SMPL [30] as a clothing proxy for matching clothing with the body. Furthermore, to enhance details, we introduce a normal prediction network for smooth normals, combined with optimized spherical harmonic (SH) lighting. Hence, the proposed HumanCoser can generate reusable and intricate multi-layered dressed 3D humans that can be edited and changed separately, as shown in Fig. 1.

Our main contributions are summarized as follows:

• We propose a layered 3D human generation framework with a multi-layer representation decoupling method. To the best of our knowledge, this is the first work that makes the 3D dressed human truly decoupled physically and supports layered generation and editing of 3D dressed humans. We also introduce a decoupled shape prior for generating structurally consistent 3D content.

• We propose a dual-representation decoupling strategy to improve the semantic consistency of the generated clothing, combined with an innovative multi-layer fusion volumetric rendering approach. The strategy not only improves the semantic consistency of the clothing but is also generalizable to enhancing the 3D semantics of other wearable human outfits.

• We propose a 3D implicit deformation method based on SMPL vertex prediction to achieve the geometric matching of human bodies and clothing in an implicit manner, so that the clothing can be transferred between different human subjects.

2 RELATED WORK

Text-guided 3D Content Generation. CLIP-Forge [53] and Dream Fields [18] optimize Neural Radiance Fields (NeRFs) to generate 3D shapes by aligning the embedding of the generated image with the text embedding in the space of the image-text model CLIP. CLIP-Mesh [36] also uses CLIP to optimize meshes to represent 3D shapes. However, by directly generating images aligned with text in CLIP space, it is not possible to generate highly realistic images. Recently, diffusion models [38, 49, 51] have seen rapid growth due to their excellent performance in synthesizing high-quality images. DreamFusion [43] proposes Score Distillation Sampling (SDS) based on a pre-trained diffusion model [49] to optimize trainable NeRFs. Magic3D [29] uses a two-stage training strategy to bootstrap 3D texture networks and optimize 3D content generation. Although the above diffusion-based 3D generation models have some 3D generation capability, generating 3D humans remains a challenge for them due to the complexity of human shapes and the diversity of human poses.

Text-guided 3D Human Generation. AvatarCLIP [12] initializes the 3D human body shape via a VAE (Variational Autoencoder) encoder and then performs geometric shaping and texture generation guided by an image-text model [45]. However, since the method focuses on shaping localized structures, it struggles to generate global structures such as skirts, long hair and loose clothing.
In addition, Latent-NeRF [33] and TADA [28] both utilize pre-trained text-to-image diffusion models for 3D avatar generation. In particular, Latent-NeRF [33] employs a Sketch-Shape to constrain the generation of the diffusion model, but the results of this method lack detail due to the absence of normal and illumination optimization. TADA [28] is limited by the representation ability of the confined mesh, and thus cannot represent non-convex structures or transparent materials well. Neither of the above methods can generate 3D avatars with layer-wise bodies and clothing. DreamHuman [24] produces animatable, coupled avatars based on text and human posture. It combines a 3D human prior to generate and re-pose the generated results, but it cannot arbitrarily adjust and replace the clothes of humans while retaining human identity. AvatarCraft [20] transforms text into a 3D avatar, using a diffusion model to stylize geometry and texture, while shape and pose are controlled by a parametric human model. AvatarCraft uses a bare neural human avatar as a template: given a text prompt, it uses the diffusion model to guide the creation of the avatar by updating the template so that the geometry and texture are consistent with the text. Although AvatarCraft updates the avatar with new pose and shape parameters without training, the generated avatar hardly shows details such as loose clothing and fluffy hair. DreamWaltz [16] generates 3D digital avatars from text prompts, leveraging prior knowledge of human body shapes and poses, and facilitating animation and interactive compositions between avatars, objects, and scenes. It learns the distribution of human animations through prior knowledge of human actions, enabling the generation of plausible human animations. However, DreamWaltz's learnable human action deformation module lacks generalizability for generating multi-layered humans, hindering capabilities such as dress-up and clothing editing. DreamAvatar [3] uses SMPL for shape guidance and introduces a dual-observation-space design to optimize shape and pose jointly. It addresses the "Janus" problem and enhances facial details. However, it fails to fully consider human body occlusion information, and it also couples clothing and human body generation. Distinct from non-layered methods, HumanLiff [13] generates human bodies based on a diffusion model in a layer-by-layer manner. However, its features depend on the tri-plane features of the previous layer; this coupling of features among layers impedes the separate editing and reuse of each layer.

In summary, existing methods either generate 3D dressed humans as a whole or support only tight and simple clothing generation, which limits their applications to virtual try-on and part-level editing. In contrast, our method can generate reusable and intricate multi-layered 3D dressed humans that can be edited and changed separately.
It achieves realistic body and clothing generation by predicting normals and employing improved spherical harmonic lighting. Moreover, we ensure the semantic consistency of the generated clothing through an optimized dual-representation decoupling framework. The layered clothing can be seamlessly transferred between human bodies of different shapes using an implicit deformation network based on SMPL. We summarize the main differences between our work and related work in Tab. 1.

Method | Multilayer | Geometry (non-skin-tight) | Clothing Transfer | Reusability
AvatarCLIP [12] | ✗ | ✗ | ✗ | ✗
TADA [28] | ✗ | ✗ | ✗ | ✗
Latent-NeRF [33] | ✗ | ✓ | ✗ | ✗
HumanLiff [13] | ✓ | ✓ | ✗ | ✗
Ours | ✓ | ✓ | ✓ | ✓

Table 1: Comparison of 3D human generation methods, including layered generation, geometric complexity, clothing transfer and clothing reusability.

3 METHOD

3.1 Overview
The proposed HumanCoser is a two-stage method that generates realistic 3D humans with consistent body structures, guided by text, in a layer-wise manner. The first stage (a) generates a minimized human body, and the second stage (b) performs decoupled generation of clothing and matches the clothing with the human body. Specifically, as shown in Fig. 2, stage (a) consists of a NeRF and a ControlNet that takes SMPL skeleton conditions as inputs and generates a minimized human body in canonical space (Sec. 3.2). Stage (b) consists of a dual-representation decoupling network (Sec. 3.3) and an implicit deformation network driven by SMPL (Sec. 3.4). The dual-representation decoupling network generates disentangled clothing on the basis of the minimized human body, combined with a multi-layer fusion rendering method. Finally, the decoupled clothing is matched with the human body through the deformation network.

3.2 Canonical Body Generation
To obtain the minimized human body, we adopt a ControlNet with SMPL as conditional input and generate the human body in canonical space, as shown in Fig. 2. We first use NeRF as the representation of layered humans. The inner body and each layer of clothing are represented by a separate network:

F_θ(γ(x)) = (σ, c),   (1)

where γ(·) is the frequency encoder, and σ and c denote the density and color predicted at each sampling point x. We render the scene using the volume rendering equation [35]. The color of each layer is predicted as:

C(r) = ∑_i w_i c_i,  w_i = α_i ∏_{j<i} (1 − α_j),   (2)

where α_i = 1 − exp(−σ_i ∥x_i − x_{i+1}∥), ∥x_i − x_{i+1}∥ is the interval between samples i and i+1, w_i is the weight of the i-th sampling point, and c_i and σ_i are its predicted color and density [35].

Multi-Layer Fusion Rendering. To fuse the layered human body and clothing for rendering, we propose a multi-layer composite rendering method based on the density and weight of each sampling point (Eq. (2)), defined as:

C′(r) = ∑_{i=1}^{N} w_i^j c_i^j,  w_i^j = max(w_i^1, …, w_i^n),   (3)

where C′(r) follows the rendering formula of Eq. (2), w_i^j is the weight with the highest density among the n layer components, and c_i^j is the corresponding color. In addition, to make the generated surface normals smoother, we compute a normal loss between the predicted normal n′ and the surface normal n:

L_n = ∑_i w_i ∥n′_i − n_i∥,   (4)

where w_i is the weight of the i-th sampling point, following the definition in Eq. (2). Moreover, to regularize the normals and reduce redundant, semantically spurious artifacts, a regularization loss is added:

L_n^reg = ∑_i w_i (1 − n′_i · n_i),   (5)

where w_i is the weight of the i-th sampling point; the definitions of n′_i and n_i follow Eq. (4).
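A minimal PyTorch sketch of the per-layer compositing of Eq. (2) and the max-weight fusion of Eq. (3) might read as follows; the tensor shapes and names are our assumptions, not the authors' code:

```python
import torch

def fuse_layers(sigmas, colors, deltas):
    """Multi-layer fusion rendering (Eqs. 2-3) along one ray.

    sigmas: (L, S)    per-layer densities at S samples
    colors: (L, S, 3) per-layer radiance at each sample
    deltas: (S,)      distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-sigmas * deltas)               # (L, S) per-layer opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha[:, :-1]], dim=1), dim=1)
    w = alpha * trans                                       # (L, S) weights, Eq. (2)
    j = w.argmax(dim=0)                                     # dominant layer per sample
    idx = torch.arange(w.shape[1])
    w_star = w[j, idx]                                      # w_i^j in Eq. (3)
    c_star = colors[j, idx]                                 # c_i^j in Eq. (3)
    return (w_star[:, None] * c_star).sum(dim=0)            # fused pixel colour C'(r)

pixel = fuse_layers(torch.rand(2, 64), torch.rand(2, 64, 3), torch.full((64,), 0.03))
```

The key design point is that transmittance is accumulated within each layer independently, and the fusion happens per sample by keeping only the layer with the largest weight.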
Furthermore, we introduce the SDS loss to optimize the 3D models of the body and clothing:

∇_θ L_SDS(φ, x) = E_{t,ε} [ w(t) (ε_φ(x_t; y, t, c) − ε) ∂x/∂θ ],   (6)

where w(t) is a weighting function dependent on the time step t, x is the rendered image, y is the input text prompt, and ε is the noise added to x. To maintain consistency of the human body structure, we input the SMPL skeleton c as the conditional image. The overall loss of canonical body generation is therefore:

L_body = λ_SDS^body L_SDS^body + λ_n L_n + λ_n^reg L_n^reg,   (7)

where L_SDS^body = L_SDS(x^b; y^b, c^b), x^b is the supervised body image, y^b is the body prompt, and c^b is the SMPL skeleton condition; λ_SDS^body, λ_n, λ_n^reg are the weights attributed to each loss. More details of Sec. 3.2 are provided in the supplementary material.

3.3 Dual-Representation Decoupling
To accurately obtain the shape of clothing, we introduce a dual-representation decoupling framework (DRD) to eliminate the parts that are inconsistent with the semantics of the clothing.

Decoupling Clothing Representation. As illustrated in Fig. 2(b), the DRD model consists of a multi-layer component composition network and a clothing generation network. During the training of the clothing component at the N-th layer, we take the density of the sampling point with the largest weight in the first N−1 layers as the combination density of the first N−1 layers:

δ_bc = argmax_δ w(δ),  δ ∈ {δ_1, …, δ_{N−1}},   (8)

where w(·) is defined in Eq. (2). The final combination weight w(δ) is then calculated based on the combined density δ. We use the following loss to constrain the density of the overlapping parts of the N-th clothing layer and the first N−1 components:

L_ds^reg = ∥ w(δ_c^N) | { w(δ_c^N) > λ ∪ δ_c^N < δ_bc } ∥_2,   (9)

where w(·) is Eq. (2) considering only the density input, δ_c^N is the density of the N-th clothing component, δ_bc is the combination density of the first N−1 components, and λ is a defined threshold. Finally, as shown in Fig. 2(b), we perform a composite rendering of the N-th clothing component and the first N−1 components based on the multi-layer fusion rendering of Eq. (2):

C_comp(r) = ∑_{x_i ∈ M_c} w_i c_c(x_i) + ∑_{x_j ∈ M_bc} w_j c_bc(x_j),
M_c = { x_i | w(x_i) < λ ∪ δ(x_i) > δ(x_j) },  M_bc = M̄_c,   (10)

where C_comp(·) follows the rendering formula of Eq. (2), x is a sampling point, w(x) and δ(x) are the weight and density of x, λ is the defined threshold, c_c(x) and M_c are the predicted color and the point set for the N-th clothing component, and c_bc(x) and M_bc are the predicted color and the point set for the first N−1 components.

Dual SDS optimization. Using the above method, we obtain a rendered image composed of N components. However, directly using this result to train the clothing leads to semantic inconsistency with the clothing. As shown in Fig. 2(b), apart from utilizing an SDS loss to supervise the composite rendering results, we also employ a single volume rendering combined with the diffusion model to supervise the clothing alone. After applying this decoupling strategy, we thus obtain clothing that is disentangled from the body.
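The decoupling terms of Eqs. (8)-(10) can be illustrated with a short sketch. The thresholding and the max-weight composite below are one plausible reading of the formulas, and all shapes and names are assumed for illustration:

```python
import torch

def overlap_penalty(w_cloth, sigma_cloth, sigma_body, lam=0.5):
    # Eq. (9): penalize clothing samples that carry rendering weight (w > lam)
    # yet sit "inside" the combined body (clothing density below body density).
    mask = (w_cloth > lam) & (sigma_cloth < sigma_body)
    return (w_cloth * mask).pow(2).sum()

def composite_color(w_cloth, w_body, c_cloth, c_body):
    # Eq. (10), one plausible reading: at every sample along the ray, render
    # whichever representation dominates (Nth clothing layer vs. first N-1 layers).
    take_cloth = (w_cloth >= w_body)[..., None]       # (S, 1) selection mask
    color = torch.where(take_cloth, c_cloth, c_body)  # (S, 3)
    weight = torch.maximum(w_cloth, w_body)           # (S,)
    return (weight[..., None] * color).sum(dim=-2)    # composite pixel colour

S = 64
px = composite_color(torch.rand(S), torch.rand(S), torch.rand(S, 3), torch.rand(S, 3))
```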
Figure 2: Illustration of our framework for generating the clothes and body of a dressed human in a layered manner. (a) shows the generation of the minimized body, and (b) shows the layered generation of clothing and the matching of clothing with the body.

Additionally, we introduce a NeRF density regularization loss L_r(·) to eliminate floating artifacts. The loss function of the decoupled clothing generation stage is:

L_clothing = λ_SDS^cloth L_SDS^cloth + λ_SDS^comp L_SDS^comp + λ_ds^reg L_ds^reg + λ_r L_r,   (11)

where L_SDS^cloth = L_SDS(x^c; y^c), with x^c the supervised clothing image and y^c the clothing prompt; L_SDS^comp = L_SDS(x^cp; y^cp), with x^cp the supervised composite image and y^cp the composite prompt; λ_SDS^cloth, λ_SDS^comp, λ_ds^reg, λ_r are the weights attributed to each loss.

3.4 Matching of Clothing and Body
To finely deform the clothing shape to fit the body, we introduce the SMPL-driven implicit deformation network (SID Net), as shown in Fig. 2(b). For precise clothing editing, we use SMPL-X [41] as our clothing shape proxy and add learnable vertex offsets o to each shape proxy. We use the vertex prediction model o = F_v(v) to predict the offset o of each vertex v of the SMPL shape proxy. The SMPL-driven matching of clothing to body proceeds as follows.

Optimizing vertices. Given the body SMPL parameters (β, θ), the vertex offsets F_v : v → o and the camera parameters ρ, we render a mesh proxy of the body as a binary mask image, R_m(M_body(β, θ, o), ρ) → I_smpl^body, where R_m is a differentiable rasterizer. At the same time, we render a mesh proxy of the clothing as a binary mask image (using the SMPL model excluding the vertices of the head, hands and feet), R_m(M_cloth(β, θ), ρ) → I_smpl^cloth. Since the body mask should lie within the region where the clothing proxy mask I_mask^cloth rendered by NeRF and I_smpl^cloth are merged, we perform the optimization using the following loss:

L_match = L_huber( I_mask^cloth + I_smpl^cloth − I_smpl^body ),   (12)

where L_huber(·) [4] is a smoothed loss function. To smooth the predicted vertex offsets, we also introduce a regularization loss on the vertex offsets o:

L_offset^reg = ∥o∥_2,   (13)

where o contains the predicted vertex offsets for all vertices. We then update the vertex prediction model F_v : v → o_opt using the gradient of the L_match loss, obtaining optimized vertex offsets o_opt for refining the implicit geometry of the clothing. More details of Sec. 3.4 are provided in the supplementary material. The clothing-matching loss is:

L_matching = λ_match L_match + λ_reg L_offset^reg,   (14)

where λ_match, λ_reg are the weights attributed to each loss. In conclusion, the overall loss for the decoupled generation and matching of bodies and clothing is:

L_all = L_body + L_clothing + L_matching.   (15)
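A compact sketch of the silhouette-based matching objective of Eqs. (12)-(13), under our reading that the merged clothing masks should cover the body mask, could look as follows; the differentiable rasterization itself is assumed to happen upstream, and the mask shapes and the mean over vertex offsets are illustrative choices:

```python
import torch
import torch.nn.functional as F

def matching_loss(cloth_mask_nerf, cloth_mask_smpl, body_mask_smpl,
                  offsets, lam_reg=1.0):
    """Sketch of Eqs. (12)-(13).

    All masks are (H, W) tensors in [0, 1] produced by a differentiable
    rasterizer; offsets is the (V, 3) tensor of learnable vertex offsets o.
    """
    # Union of the NeRF clothing mask and the rendered SMPL clothing proxy.
    merged_cloth = torch.clamp(cloth_mask_nerf + cloth_mask_smpl, max=1.0)
    l_match = F.huber_loss(merged_cloth, body_mask_smpl)  # smooth silhouette term
    l_off = offsets.norm(dim=-1).mean()                   # keep offsets small, Eq. (13)
    return l_match + lam_reg * l_off

# 6890 is the SMPL vertex count, used here purely for illustration.
loss = matching_loss(torch.rand(256, 256), torch.rand(256, 256),
                     torch.rand(256, 256), torch.zeros(6890, 3, requires_grad=True))
```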
4 EXPERIMENTS

In this section, we assess the efficacy of our proposed layered human generation framework. We first provide implementation details in Sec. 4.1, followed by generation results in Sec. 4.2. Quantitative and qualitative comparisons between state-of-the-art methods and ours are presented in Sec. 4.3. To evaluate the effectiveness of the proposed modules, ablation studies are discussed in Sec. 4.4. Finally, we showcase applications of our method. Please refer to the demo for further experimental results.

4.1 Implementation Details
Hyperparameters. We use ISM [27] to compute the SDS loss with normal CFG (7.5) for all stages. The warm-up period of ISM is 1,000 iterations. (1) Canonical Body Generation: the loss weights λ_SDS^body, λ_n and λ_n^reg are 1.0, 0.01 and 0.05, respectively; the gradient scaling factor of ISM is 0.1. (2) Dual-Representation Decoupling: the loss weights λ_SDS^cloth, λ_SDS^comp, λ_ds^reg and λ_r are 1.0, 1.0, 0.05 and 2.0, respectively; the gradient scaling factor of ISM is 0.07. (3) Matching of Clothing and Body: the loss weights λ_match and λ_reg are 10.0 and 1.0, respectively.

Training Details. The overall framework is trained using the Adam optimizer, with betas of [0.9, 0.99] and learning rates of 5e−5 and 1e−3 for the dressed-human decoupling and clothing-matching stages, respectively. Training the body and clothing in the decoupling stage takes 12,000 and 8,000 iterations. Alternating training is used for the clothing, with a training ratio of 1:6 between the N-th layer and the combination of the first N layers. Training the clothing matching requires 3,000 iterations. We use a training resolution of 512 × 512 with a batch size of 2, and the whole optimization process takes three hours on a single NVIDIA RTX 4090 GPU. Further training details are available in the supplementary material.

4.2 Generated Results
We present physically-layered generation results in Fig. 3. When the same person is dressed in different clothes, our method generates 3D clothing that conforms to the body shape. For example, when a woman wears a turquoise cheongsam and then switches to a blue dress, our method generates her in the dress with a fitting body shape. Beyond all-in-one clothing, our method is capable of generating 3D humans in complex clothes, such as a man wearing a coat with pants or jeans. Notably, the generated clothes conform well to the body, including at the waist, suggesting that our physically-layered model not only accommodates various clothing changes but also ensures a good fit to the human body, resulting in a more natural appearance.

Figure 3: The decoupled generation of human body and clothing by our method. (a) clothing prompt: "A turquoise Cheongsam", (b) clothing prompt: "A deep-skyblue sleeveless sheath dress with lace trims", (c) clothing prompt: "A Duffle Coat and baggy linen pants", (d) clothing prompt: "A Car Coat and baggy jeans".

4.3 Comparison
We compare our approach with five SoTA methods. (1) AvatarCLIP [12] uses the pre-trained vision-language model CLIP to guide NeuS [56] for 3D avatar generation; (2) TADA [28] creates 3D avatars from text by using hierarchical rendering with score distillation sampling; (3) Latent-NeRF [33] introduces a sketch-shape loss based on 3D shape guidance to supervise the training; (4) AvatarFusion [14] generates avatars while simultaneously segmenting clothing from the avatar's body; (5) HumanLiff [13] first generates minimally clothed humans, represented by tri-plane features, in a canonical space and then progressively generates clothes in a layer-wise manner.

4.3.1 Quantitative Results
This section quantitatively compares the proposed method with [12, 28, 33, 14, 13]. Inspired by [22], we use a user preference metric to compare generation quality against the SoTA methods [12, 28, 33, 14].
Fig. 4 demonstrates the superior performance of our method compared to [12, 28, 33, 14] in generation quality. Additionally, we calculate the FID [10] between the views rendered from the generated 3D humans and the images produced by Stable Diffusion [50]. As shown in Tab. 2, our method achieves the lowest FID score, indicating the best generation quality. Furthermore, we adopt the CLIP score [9] to measure the compatibility of the prompts with the rendered views of the 3D humans. Tab. 2 shows that our method achieves the highest CLIP score, indicating that the human models generated by our framework are better aligned with the prompts.

Method | FID ↓ | CLIP Score ↑
AvatarCLIP [12] | 311.46 | 30.88
Latent-NeRF [33] | 329.40 | 29.82
TADA [28] | 392.61 | 25.39
AvatarFusion [14] | 375.97 | 26.96
HumanLiff [13] | 324.69 | 26.34
HumanCoser (Ours) | 298.54 | 31.61

Table 2: Quantitative comparisons with non-layered and layered methods.

Figure 4: Quantitative results. Our method and methods [12, 33, 28, 14] are evaluated using the metric of [22] to measure the visual quality of the generated 3D content; higher scores are better.

Compared to the layered-generation SoTA methods [14] and [13], our method not only achieves better generation quality but also freely performs clothing transfer and generalizable animation. Furthermore, we perform a user study comparing the human generation results of our method with those of other state-of-the-art methods [12, 28, 33]. We generated 3D humans for the different methods from 10 text prompts. Fifty volunteers (26 male, 24 female, aged between 18 and 50) were invited to rank the methods in terms of (1) geometric quality, (2) appearance quality, and (3) consistency with the text prompts, scoring each comparative indicator for each method from 1 (worst) to 5 (best). The final evaluation results are provided in Tab. 3. Our method achieves the best scores across all three metrics, indicating superior generative quality for geometry and texture based on text inputs.

Case | AvatarCLIP [12] | TADA [28] | Latent-NeRF [33] | Ours
case 1 | 2.41 / 2.82 / 2.88 | 2.58 / 2.74 / 2.34 | 3.18 / 3.22 / 4.04 | 3.72 / 4.26 / 4.29
case 2 | 3.85 / 2.79 / 2.85 | 2.24 / 2.48 / 2.72 | 4.06 / 2.86 / 3.29 | 4.53 / 4.51 / 4.16
case 3 | 3.05 / 3.07 / 2.33 | 2.47 / 2.31 / 2.42 | 3.58 / 2.27 / 3.82 | 4.61 / 3.79 / 4.63
case 4 | 2.57 / 2.51 / 3.27 | 2.78 / 2.43 / 2.51 | 3.12 / 3.76 / 2.89 | 3.08 / 3.74 / 4.53
case 5 | 3.24 / 2.46 / 2.74 | 2.54 / 2.10 / 2.03 | 3.41 / 3.54 / 3.24 | 4.66 / 3.64 / 3.79
case 6 | 2.59 / 2.41 / 3.95 | 3.02 / 2.61 / 2.49 | 3.14 / 3.57 / 3.17 | 4.16 / 3.93 / 4.89
case 7 | 2.37 / 2.60 / 2.58 | 2.67 / 1.83 / 2.17 | 3.70 / 3.43 / 3.95 | 4.68 / 4.13 / 4.23
case 8 | 2.55 / 3.11 / 2.08 | 2.57 / 2.26 / 1.90 | 2.97 / 3.65 / 3.71 | 4.32 / 3.73 / 4.64
case 9 | 2.88 / 2.93 / 3.08 | 2.51 / 2.10 / 2.33 | 3.87 / 3.81 / 3.22 | 4.47 / 4.40 / 4.11
case 10 | 3.79 / 2.40 / 2.64 | 1.92 / 2.74 / 2.29 | 2.47 / 2.79 / 2.97 | 4.37 / 3.97 / 4.36
Average | 2.93 / 2.71 / 2.84 | 2.53 / 2.36 / 2.32 | 3.35 / 3.29 / 3.43 | 4.26 / 4.01 / 4.37

Table 3: User study results (each cell: Geometry / Texture / Text consistency). We investigated user evaluations of geometric and texture quality, as well as consistency with text prompts.
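For reference, the CLIP score used above is the standard reference-free image-text compatibility measure [9]. A minimal sketch using the Hugging Face CLIP implementation might look as follows; the model choice is illustrative, and we omit CLIPScore's rescaling and clipping at zero:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between CLIP embeddings of a rendered view and its prompt."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum().item()

# Usage with a placeholder image; in practice this is a rendered view of the avatar.
score = clip_score(Image.new("RGB", (224, 224)), "A turquoise Cheongsam")
```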
4.3.2 Qualitative Results
Fig. 5 qualitatively compares our method with the text-guided 3D generation methods [12, 33, 28]. Since [12, 33, 28] are based on coupled generation, we provide a coupled generation model for comparison and render the models from multiple views. As shown in Fig. 5, although AvatarCLIP [12] generates view-consistent human bodies, it shows limitations in effectively modeling global structures such as skirts and long hair. Latent-NeRF [33] is limited in its capacity to finely generate both geometry and texture. The accuracy of TADA [28] depends on the density of the mesh, and the discrete representation affects its geometric appearance. Thus, [12, 33, 28] exhibit deficiencies, either in the representation of geometric details or in the portrayal of fine textures. In contrast, our method produces humans characterized by enhanced geometric details, including loose clothing and diverse long hair, along with finer textures.

Figure 5: Qualitative comparison with coupled generation methods [12, 28, 7]. (a) prompt: "A north American Indian chief in full regalia", (b) prompt: "A Chinese lady wearing a gauzy hanfu", (c) prompt: "A Hawaiian woman wearing a hula skirt", (d) prompt: "A French woman wearing a light blue crinoline dress".

In addition, Fig. 6 illustrates the comparison of layered 3D human generation approaches. Since AvatarFusion [14] is not capable of multi-layer generation, we use HumanLiff [13]¹ for the comparison of layered generation. HumanLiff is the work most akin to ours, employing a layer-by-layer generation approach. However, it lacks the capability to change clothes, as illustrated in the top row: HumanLiff generates a clothed human body by relying on a minimally clothed human body. Instead, our method generates the body and clothing independently, as depicted in the second row of Fig. 6, and then matches the clothing and body, as showcased in the third row. Finally, our method excels at changing and reusing clothing, as illustrated in the last row of Fig. 6. It is important to highlight that our method not only facilitates the transfer and matching of clothing across bodies of varying shapes but also enables the generation of multi-layer clothing using multi-layer fusion volume rendering.

¹HumanLiff currently does not provide an official implementation; hence we compare with the visual results presented in [13].

Figure 6: Qualitative comparison with the layered method [13].

Fig. 7 shows that the clothing can adaptively match bodies of different shapes with our method, including even extreme body shapes, e.g. the "super fat woman". Fig. 8 shows a lady wearing two layers of clothes, i.e. a dress as well as outer clothing. Two distinct views showcase the harmony and naturalness achieved by our method in multi-layer clothing.

Figure 7: Editing results for adaptive matching of clothing to different body shapes.

Figure 8: The effectiveness of multi-layer decoupled clothing.

4.4 Ablation Study
Effectiveness of Dual-Representation Decoupling Framework. To assess the effectiveness of the dual-representation decoupling framework (DRD), we investigate the impact of employing dual SDS losses on clothing generation, as depicted in Fig. 9. Our findings indicate that utilizing only a single SDS loss alongside a single volumetric rendering fails to accurately decouple the clothing from the human body and may result in incorrect clothing shapes, as shown in the red box in Fig. 9. This is because a single SDS loss supervising clothing generation leads to the production of redundant non-clothing parts. By incorporating an additional SDS loss to supervise the combined result of the human body and the clothing, we observe a significant improvement: the redundant non-clothing parts are eliminated and semantic consistency with the clothing is maintained. Consequently, the proposed dual-representation decoupling framework is validated as effective for generating intricate and semantically consistent clothing.

Figure 9: Ablation study on the effectiveness of the dual-representation decoupling framework.

Effectiveness of Implicitly Deformed Modules.
To adaptively match the decoupled clothing to different body shapes, we introduce the SMPL-driven implicit field deformation network (SID Net). As seen from the red boxes in Fig. 10, when the decoupled clothing is directly matched to different body shapes, interpenetration occurs between the clothing and the body, and the clothing does not fit tightly and naturally. Our SID Net optimizes the SMPL proxy model of the clothing to deform the clothing's implicit field to match the body, by computing the shape deviation loss between the clothing and the body. As can be seen from columns 4 and 5 of Fig. 10, arbitrarily decoupled clothing can be freely and accurately matched with bodies of different shapes, even including extreme body shapes, such as a very fat or a very thin person. These visualization results validate that our SID Net efficiently performs adaptive clothing-body matching.

Figure 10: Ablation study on the implicitly deformed modules.

Effectiveness of Optimizable Spherical Harmonic (SH) Lighting. As detailed in Sec. 3.2, to mitigate the color oversaturation stemming from the SDS loss in the diffusion model, we introduced an optimizable SH lighting component to modulate the color of the sample points. As depicted in the red box in Fig. 11, without SH lighting the color of the 3D dressed human exhibits oversaturation and the surface rendering lacks smoothness. In contrast, the blue box in Fig. 11 illustrates that integrating SH lighting enables the human model to achieve correct coloration and a smoother visual effect. This enhancement not only addresses the oversaturation but also improves the overall realism and visual fidelity of the rendered human models. SH lighting introduces subtle variations in color and shading, resulting in a more natural appearance that better aligns with real-world lighting conditions. Hence, this approach enhances the quality and believability of the generated results, providing more accurate representations of dressed human subjects.

Figure 11: Ablation study on the effectiveness of spherical harmonic (SH) lighting.

4.5 Application
Thanks to our capability of generating layered 3D humans, our method can also transfer clothing across people and enable skeleton-driven layered human animation.

Clothing Transfer. Fig. 12 evaluates the effectiveness of our model in clothing transfer by exchanging avatars' clothes (left/right). In this case, the layered avatars are generated based on different SMPL shapes θ with the same pose β. We transfer the clothing layer of the left avatar to the body layer of the right avatar and vice versa: (cloth_left → body_right, cloth_right → body_left). Fig. 12 illustrates that our model excels at adaptively shaping a match between the body and clothing layers, facilitating the transfer of the same clothing layer across body layers of different identities.

Generalizable Poses and Animations. Fig. 13 demonstrates the effectiveness of SMPL skeleton-driven layered human animation by applying complex animations and poses to the body and clothing layers.
We learn a generalizable density-weighted network by sampling poses of the SMPL model from the pre-trained VPoser model as conditional inputs to the ControlNet. This refines the SMPL-based pose deformations and supports SMPL-driven animations and complex poses without additional training.

Figure 12: The effectiveness of clothing transfer.

Figure 13: The effectiveness of pose-driven generation.

5 CONCLUSION AND LIMITATIONS

Conclusion. This paper introduces a layer-wise dressed human generation framework built upon a physically-decoupled diffusion model. Central to our approach are a dual-representation decoupling framework and a novel multi-layer fusion volumetric rendering technique. Building upon this decoupled representation, we achieve multi-layer 3D humans wearing loose-fitting clothing, whereas existing coupled methods struggle to achieve layered dressed humans. Additionally, unlike other methods, which cannot arbitrarily change and exchange clothing, we introduce an implicit deformation module, guided by the SMPL model, which allows clothing to adaptively match different body shapes. Experimental results show that our method outperforms state-of-the-art approaches by generating high-quality multi-layered 3D humans wearing complex clothing and by arbitrarily switching clothing across various body shapes.

Limitations. Given the absence of a uniform parametric clothing template, the matching loss to the body cannot be assessed through differentiable rendering with a uniform 3D proxy tailored to the generated clothing. Consequently, we opt for a 3D implicit deformation field based on SMPL-X [41] to optimize the alignment between bodies and clothing. While our method enables the fitting of the clothing to various body shapes, it may yield unnatural matching outcomes when the shapes of the body and clothing differ significantly. In the future, we will employ more accurate deformation proxies combined with object collision detection to optimize the matching of clothing and body bidirectionally, in order to achieve better quality of layered generation.

ACKNOWLEDGMENTS
This work was supported in part by the National Natural Science Foundation of China (62122058 and 62171317), and the Science Fund for Distinguished Young Scholars of Tianjin (No. 22JCJQJC00040).

REFERENCES
[1] Hugo Bertiche, Meysam Madadi, and Sergio Escalera. Cloth3D: Clothed 3D humans. In European Conference on Computer Vision, pages 344–359. Springer, 2020.
[2] Yukang Cao, Yan-Pei Cao, Kai Han, Ying Shan, and Kwan-Yee K Wong. DreamAvatar: Text-and-shape guided 3D human avatar generation via diffusion models. arXiv preprint arXiv:2304.00916, 2023.
[3] Yukang Cao, Yan-Pei Cao, Kai Han, Ying Shan, and Kwan-Yee K Wong. DreamAvatar: Text-and-shape guided 3D human avatar generation via diffusion models. arXiv preprint arXiv:2304.00916, 2023.
[4] Harvy Clyde Carver, A. L. O'Toole, and T. E. Raiford. The Annals of Mathematical Statistics. Edwards Bros., 1930.
[5] Enric Corona, Albert Pumarola, Guillem Alenya, Gerard Pons-Moll, and Francesc Moreno-Noguer. SMPLicit: Topology-aware generative model for clothed people. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11875–11885, 2021.
[6] Yao Feng, Jinlong Yang, Marc Pollefeys, Michael J Black, and Timo Bolkart. Capturing and animation of body and clothing from monocular video. In SIGGRAPH Asia 2022 Conference Papers, pages 1–9, 2022.
[7] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh R-CNN. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9785–9795, 2019.
[8] Georges Grinstein, Daniel Keim, and Matthew Ward. Information visualization, visual data mining, and its application to drug design. IEEE Visualization Course #1 Notes, October 2002.
[9] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. CLIPScore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718, 2021.
[10] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.
[11] Fangzhou Hong, Zhaoxi Chen, Yushi Lan, Liang Pan, and Ziwei Liu. EVA3D: Compositional 3D human generation from 2D image collections. arXiv preprint arXiv:2210.04888, 2022.
[12] Fangzhou Hong, Mingyuan Zhang, Liang Pan, Zhongang Cai, Lei Yang, and Ziwei Liu. AvatarCLIP: Zero-shot text-driven generation and animation of 3D avatars. arXiv preprint arXiv:2205.08535, 2022.
[13] Shoukang Hu, Fangzhou Hong, Tao Hu, Liang Pan, Haiyi Mei, Weiye Xiao, Lei Yang, and Ziwei Liu. HumanLiff: Layer-wise 3D human generation with diffusion model. arXiv preprint arXiv:2308.09712, 2023.
[14] Shuo Huang, Zongxin Yang, Liangting Li, Yi Yang, and Jia Jia. AvatarFusion: Zero-shot generation of clothing-decoupled 3D avatars using 2D diffusion. In Proceedings of the 31st ACM International Conference on Multimedia, pages 5734–5745, 2023.
[15] Yukun Huang, Jianan Wang, Ailing Zeng, He Cao, Xianbiao Qi, Yukai Shi, Zheng-Jun Zha, and Lei Zhang. DreamWaltz: Make a scene with complex 3D animatable avatars. arXiv preprint arXiv:2305.12529, 2023.
[16] Yukun Huang, Jianan Wang, Ailing Zeng, He Cao, Xianbiao Qi, Yukai Shi, Zheng-Jun Zha, and Lei Zhang. DreamWaltz: Make a scene with complex 3D animatable avatars. Advances in Neural Information Processing Systems, 36, 2024.
[17] Petra Isenberg, Florian Heimerl, Steffen Koch, Tobias Isenberg, Panpan Xu, Chad Stolper, Michael Sedlmair, Jian Chen, Torsten Möller, and John Stasko. vispubdata.org: A metadata collection about IEEE Visualization (VIS) publications. IEEE Transactions on Visualization and Computer Graphics, 23, 2017.
[18] Ajay Jain, Ben Mildenhall, Jonathan T Barron, Pieter Abbeel, and Ben Poole. Zero-shot text-guided object generation with dream fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 867–876, 2022.
[19] Ruixiang Jiang, Can Wang, Jingbo Zhang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. AvatarCraft: Transforming text into neural human avatars with parameterized shape and pose control. arXiv preprint arXiv:2303.17606, 2023.
[20] Ruixiang Jiang, Can Wang, Jingbo Zhang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. AvatarCraft: Transforming text into neural human avatars with parameterized shape and pose control. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14371–14382, 2023.
[21] Gordon Kindlmann. Semi-automatic generation of transfer functions for direct volume rendering. Master's thesis, Cornell University, USA, 1999.
[22] Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, and Omer Levy. Pick-a-Pic: An open dataset of user preferences for text-to-image generation. arXiv preprint arXiv:2305.01569, 2023.
[23] Kitware, Inc. The Visualization Toolkit User's Guide, January 2003.
[24] Nikos Kolotouros, Thiemo Alldieck, Andrei Zanfir, Eduard Bazavan, Mihai Fieraru, and Cristian Sminchisescu. DreamHuman: Animatable 3D avatars from text. Advances in Neural Information Processing Systems, 36, 2024.
[25] Nikos Kolotouros, Thiemo Alldieck, Andrei Zanfir, Eduard Gabriel Bazavan, Mihai Fieraru, and Cristian Sminchisescu. DreamHuman: Animatable 3D avatars from text. arXiv preprint arXiv:2306.09329, 2023.
[26] Marc Levoy. Display of Surfaces from Volume Data. PhD thesis, University of North Carolina at Chapel Hill, USA, 1989.
[27] Yixun Liang, Xin Yang, Jiantao Lin, Haodong Li, Xiaogang Xu, and Yingcong Chen. LucidDreamer: Towards high-fidelity text-to-3D generation via interval score matching. arXiv preprint arXiv:2311.11284, 2023.
[28] Tingting Liao, Hongwei Yi, Yuliang Xiu, Jiaxiang Tang, Yangyi Huang, Justus Thies, and Michael J Black. TADA! Text to animatable digital avatars. arXiv preprint arXiv:2308.10899, 2023.
[29] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3D: High-resolution text-to-3D content creation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 300–309, June 2023.
[30] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. SMPL: A skinned multi-person linear model. In Seminal Graphics Papers: Pushing the Boundaries, Volume 2, pages 851–866, 2023.
[31] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. SIGGRAPH Computer Graphics, 21(4):163–169, August 1987.
[32] Nelson Max. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2):99–108, June 1995.
[33] Gal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, and Daniel Cohen-Or. Latent-NeRF for shape-guided generation of 3D shapes and textures. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12663–12673, 2023.
[34] Oscar Michel, Roi Bar-On, Richard Liu, Sagie Benaim, and Rana Hanocka. Text2Mesh: Text-driven neural stylization for meshes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13492–13502, 2022.
[35] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
[36] Nasir Mohammad Khalid, Tianhao Xie, Eugene Belilovsky, and Tiberiu Popa. CLIP-Mesh: Generating textured meshes from text using pretrained image-text models. In SIGGRAPH Asia 2022 Conference Papers, pages 1–8, 2022.
[37] Alex Mohr and Michael Gleicher. Building efficient, accurate character skins from examples. ACM Transactions on Graphics (TOG), 22(3):562–568, 2003.
[38] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021.
[39] Gregory M. Nielson and Bernd Hamann. The asymptotic decider: Removing the ambiguity in marching cubes. In Proc. Visualization, pages 83–91, Los Alamitos, 1991. IEEE Computer Society.
[40] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Ex- pressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10975–10985, 2019. [41] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Ex- pressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10975–10985, 2019. 4, 9 [42] Yicong Peng, Yichao Yan, Shengqi Liu, Yuhao Cheng, Shanyan Guan, Bowen Pan, Guangtao Zhai, and Xiaokang Yang. Cagenerf: Cage- based neural radiance field for generalized 3d deformation and anima- tion. Advances in Neural Information Processing Systems, 35:31402– 31415, 2022. [43] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Milden- hall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988, 2022. 2 [44] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Milden- hall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988, 2022. 1 [45] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual mod- els from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021. 2 [46] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual mod- els from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021. [47] Amit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Nataniel Ruiz, Ben Mildenhall, Shiran Zada, Kfir Aberman, Michael Rubin- stein, Jonathan Barron, et al. Dreambooth3d: Subject-driven text-to- 3d generation. arXiv preprint arXiv:2303.13508, 2023. 1 [48] Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, and Daniel Cohen-Or. Texture: Text-guided texturing of 3d shapes. arXiv preprint arXiv:2302.01721, 2023. [49] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj¨orn Ommer. High-resolution image synthesis with latent diffu- sion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022. 1, 2 [50] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj¨orn Ommer. High-resolution image synthesis with latent diffu- sion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022. 6 [51] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text- to-image diffusion models with deep language understanding, 2022. URL https://arxiv. org/abs/2205.11487, 4. 2 [52] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neu- ral Information Processing Systems, 35:36479–36494, 2022. [53] Aditya Sanghi, Hang Chu, Joseph G Lambourne, Ye Wang, Chin-Yi Cheng, Marco Fumero, and Kamal Rahimi Malekshan. 
Clip-forge: In Proceedings of the Towards zero-shot text-to-shape generation. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18603–18613, 2022. 2 [54] Gusi Te, Xiu Li, Xiao Li, Jinglu Wang, Wei Hu, and Yan Lu. Neural capture of animatable 3d human from monocular video. In European Conference on Computer Vision, pages 275–291. Springer, 2022. 1 [55] Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2mesh: Generating 3d mesh models from single rgb images. In Proceedings of the European conference on computer vision (ECCV), pages 52–67, 2018. 1 [56] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Ko- mura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. arXiv preprint arXiv:2106.10689, 2021. 5 [57] Colin Ware. Information Visualization: Perception for Design. Mor- gan Kaufmann Publishers Inc., San Francisco, 2nd edition, 2004. [58] Zhenzhen Weng, Zeyu Wang, and Serena Yeung. Zeroavatar: Zero- arXiv preprint shot 3d avatar generation from a single image. arXiv:2305.16411, 2023. 1 [59] Geoff Wyvill, Craig McPheeters, and Brian Wyvill. Data structure for soft objects. The Visual Computer, 2(4):227–234, August 1986. [60] Yuliang Xiu, Jinlong Yang, Xu Cao, Dimitrios Tzionas, and Michael J Black. Econ: Explicit clothed humans obtained from normals. arXiv preprint arXiv:2212.07422, 2022. 1 [61] Jiale Xu, Xintao Wang, Weihao Cheng, Yan-Pei Cao, Ying Shan, Xi- aohu Qie, and Shenghua Gao. Dream3d: Zero-shot text-to-3d syn- thesis using 3d shape prior and text-to-image diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20908–20918, June 2023. [62] Tianhan Xu and Tatsuya Harada. Deforming radiance fields with cages. In European Conference on Computer Vision, pages 159–175. Springer, 2022. [63] Zhitao Yang, Zhongang Cai, Haiyi Mei, Shuai Liu, Zhaoxi Chen, Weiye Xiao, Yukun Wei, Zhongfei Qing, Chen Wei, Bo Dai, et al. Synbody: Synthetic dataset with layered human models for 3d human perception and modeling. arXiv preprint arXiv:2303.17368, 2023. [64] Kim Youwang, Kim Ji-Yeon, and Tae-Hyun Oh. Clip-actor: Text-driven recommendation and stylization for animating human In European Conference on Computer Vision, pages 173– meshes. 191. Springer, 2022. [65] Yu-Jie Yuan, Yang-Tian Sun, Yu-Kun Lai, Yuewen Ma, Rongfei Jia, and Lin Gao. Nerf-editing: geometry editing of neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18353–18364, 2022. [66] Jianfeng Zhang, Zihang Jiang, Dingdong Yang, Hongyi Xu, Yichun Shi, Guoxian Song, Zhongcong Xu, Xinchao Wang, and Jiashi Feng. Avatargen: a 3d generative model for animatable human avatars. In European Conference on Computer Vision, pages 668–685. Springer, 2022. 1 [67] Zerong Zheng, Tao Yu, Yebin Liu, and Qionghai Dai. Pamir: Paramet- ric model-conditioned implicit representation for image-based human reconstruction. IEEE transactions on pattern analysis and machine intelligence, 44(6):3170–3184, 2021.
ai_researcher
2
Empowering_Robot_Designers_A_Digital_Tool_for_Early-Stage_Social_Robot_Prototyping_and_Communication.pdf
The RoSiD Tool: Empowering Users to Design Multimodal Signals for Human-Robot Collaboration

Nathaniel Dennler¹, David Delgado¹, Daniel Zeng¹, Stefanos Nikolaidis¹, and Maja Matarić¹
¹University of Southern California, Los Angeles CA
{dennler,nikolaid,mataric}@usc.edu

arXiv:2401.03088v1 [cs.RO] 5 Jan 2024

Abstract. Robots that cooperate with humans must be effective at communicating with them. However, people have varied preferences for communication based on many contextual factors, such as culture, environment, and past experience. To communicate effectively, robots must take those factors into consideration. In this work, we present the Robot Signal Design (RoSiD) tool to empower people to easily self-specify communicative preferences for collaborative robots. We show through a participatory design study that the RoSiD tool enables users to create signals that align with their communicative preferences, and we illuminate how this tool can be further improved.

Keywords: Human-robot Interaction · Personalization · Signalling.

1 Introduction

For robots to be effective collaborative partners, they must communicate information about their current state, task completion, and knowledge. People are usually effective at using different signals during collaboration [8]; however, designing signals that allow robots to be effective remains challenging. People excel at adapting the way they communicate to their environment and collaborators [8]. For robots to be effective, they must similarly adapt to a variety of contextual factors; however, robots have the additional challenge of understanding how their own embodiment affects how people expect to interact with them [6]. To address this problem, we aim to develop a way for people to encode the important contextual factors by allowing them to rapidly design signals themselves.

This work introduces the Robotic Signal Designer (RoSiD) tool we developed based on insights from exploratory research in human-computer interaction (HCI) and preference learning research in human-robot interaction (HRI). RoSiD facilitates the design of embodied signals with three kinds of signal components used in robotic systems: visual, auditory, and kinetic. Importantly, RoSiD is an HRI tool that involves communication channels beyond HCI due to the robot's embodiment and ability to interact with the user and objects in the physical world, resulting in well-documented improvements in user engagement and task performance [5].

Fig. 1: Interfaces for the RoSiD tool. (a) Query-based interface for choosing among three signals per modality. (b) Search-based interface for browsing all options for each modality. (c) Participant using the RoSiD tool in our study.

We explored three user study hypotheses related to system use characteristics to evaluate RoSiD:
H1: Participants will rate the system as usable according to the System Usability Scale [2].
H2: Participants will spend the most time designing the first signal to learn how to use the system.
H3: Participants will benefit more from having suggested signals based on the signals other participants designed than random signals.

2 Designing RoSiD

Following previous work, we considered robot signals as multimodal behaviors that consist of visual, auditory, and kinetic components. For each type of stimulus, we collected a large dataset of viable options from public websites.
Specifically, we used 5,912 animated videos that represented the visual components, 867 sound clips that represented the auditory components, and 2,125 head motions that represented the kinetic components.¹ Based on the literature in preference learning and exploratory search [1,3,4,11], we employed two main interactions to select from these options: query-based and search-based interactions.

Query-based interactions are often used to learn user preferences in the field of human-robot interaction [11]. In these interactions, users review a small number of robot behaviors and specify the behavior they think is best-suited for a given task. The behavior that the user selects from the small set of behaviors provides information about what they would like the robot to do in its particular context. The formulation of how preferences are modeled is provided in Section 3.1. Our query-based interface is shown in Figure 1a. In this work, a single query, Q, consists of three specific videos played on Kuri's screen, sound clips played through Kuri's speakers, or motions played on Kuri's head. We include an option for the user to specify that none of the three items in the query are what they are looking for.

¹ All files are publicly available on github.

Search-based interactions are used in exploratory search contexts in human-computer interaction [4]. In these interactions, users are presented with a large number of possible options that can be filtered with key words. The order the options are presented in is important [1]. We use the preference data from the query-based interaction to inform the order of the search-based results. Our search-based interface is shown in Figure 1b.

3 Technical Approach

3.1 Understanding User Preferences

We adopt the formulation of preference learning, where preferences are represented as a linear combination of a set of features that describe a time-series, as described by Sadigh et al. [11]. Our goal is to learn the parameterization of the user's preference, ω. We evaluate how well a particular query aligns with a user's preferences (to assess H3) using the following alignment metric inspired by [11]:

\text{alignment} = \mathbb{E}\left[\max_{q \in Q}\left(\frac{\phi_q \cdot \phi_{\text{selected}}}{|\phi_q|\,|\phi_{\text{selected}}|}\right)\right]   (1)

where q represents an element in the query Q (consisting of 3 items per modality in our system), and ϕ denotes the features of the particular stimulus, with ϕ_selected representing the features of the stimulus the participant selected at the end of the experiment. The maximum alignment score is 1 and the minimum alignment score is -1.
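As a concrete illustration, the following is a minimal NumPy sketch of the alignment metric in Eq. (1). The function names and array shapes are our own assumptions for illustration; the paper does not specify an implementation.

```python
import numpy as np

def alignment(query_features, selected_features):
    """Alignment of one query Q with the user's final choice (Eq. 1).

    query_features:    (k, d) array; feature vectors phi_q of the k items in Q.
    selected_features: (d,)   array; features phi_selected of the item the
                       participant selected at the end of the experiment.
    Returns the maximum cosine similarity over the query, in [-1, 1].
    """
    norms = (np.linalg.norm(query_features, axis=1)
             * np.linalg.norm(selected_features)) + 1e-12
    cosines = query_features @ selected_features / norms
    return float(cosines.max())

def expected_alignment(queries, selected_features):
    """The expectation in Eq. (1), estimated over the queries a user saw."""
    return float(np.mean([alignment(Q, selected_features) for Q in queries]))
```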
3.2 Creating Features for Multimodal Data

Our assessment relies on the stimuli used in our experiments being represented by a vector that encapsulates the characteristics of the stimulus (i.e., ϕ in Equation 1). We chose to use a learned encoding from pretrained models, as non-linear features have been shown to be effective for preference learning [3]. All embeddings were reduced to 32 dimensions using PCA because dimensionality largely affects speed in preference learning, and the system was designed to run in real time.

Visual: To create embeddings for the visual features, we used embeddings from a pretrained CLIP model available from the transformers library [13]. Each video had a representative frame selected as the image component, and a short description used as a language component.

Auditory: Embeddings for the auditory features were generated by encoding our audio files with the pretrained VGGish model [7].

Kinetic: Embeddings for kinetic features came from a GRU model trained through a Seq2Seq task [12] on our movement data, where the series of states of the robot's head (pan, tilt, eyes) were encoded through a recurrent network, and a second recurrent network was initialized with the embedding to reproduce the original sequence.

3.3 Generating Queries from User Data

To address H3, we propose a method to generate queries Q for signal design that contain items that are more aligned with what the users ultimately choose. For each signal, we have a dataset for each modality D that contains the final items selected by the users. We base this approach on the insight that user preferences are a smaller set of all possible items in our datasets of signal components. We attempt to find clusters in preferences from the signals that users designed by using RoSiD. To do this, we partition D into k groups based on the features of the signal components, ϕ. We then randomly select an item from each of these clusters to create more meaningful suggestions. This process is outlined in Algorithm 1; a runnable sketch is given after the algorithm.

Algorithm 1 Generating queries from user data
1: Input: D, a dataset of designed signals; k, the number of items in the resultant query; cluster(D, k), a partitioning method that returns f : D → {1, 2, ..., k};
2: Output: Q, a set of k options for the user to select from when designing signals;
3: Q ← ∅; f ← cluster(D, k);
4: for i ∈ {1, 2, ..., k} do
5:     q_i ∈_R {d | d ∈ D, f(d) = i}
6:     Q ← Q ∪ q_i
7: end for
8: Return Q
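Below is a runnable sketch of Algorithm 1, using the agglomerative clustering mentioned in Section 5.3 as the cluster(D, k) partitioning method. The function signature is our own hypothetical reading of the pseudocode, not the authors' released code.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def generate_query(D, k=3, rng=None):
    """Algorithm 1: build one k-item query from previously designed signals.

    D: (n, d) array of feature vectors (phi) of the signal components that
       previous users ultimately selected for one signal/modality.
    Returns a list of k row indices into D, one drawn at random per cluster.
    """
    rng = rng or np.random.default_rng()
    # cluster(D, k): partition the designed signals into k groups.
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(D)
    query = []
    for i in range(k):
        members = np.flatnonzero(labels == i)   # {d in D | f(d) = i}
        query.append(int(rng.choice(members)))  # q_i sampled uniformly
    return query
```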
4 Design Session

In this section, we describe the details of the interactions users had with the robot while using the RoSiD tool to design the robot's signals and evaluate those signals. Our protocols were approved by the university's Institutional Review Board under #UP-23-00408.

4.1 Study Description

Participants engaged in a one-hour design session. Upon entering the experiment space, they were told that they would be designing four signals for a robot that will assist them with finding items around the experiment space. The signals consisted of three components: visual, auditory, and kinetic. Participants designed signals for a modified Mayfield Kuri robot (shown in Figure 1c). Since the robot does not have a screen or affordances for carrying items, we added an external screen to provide a salient visual component to the signalling and a backpack to hold the Raspberry Pi and power supply, with a pouch for holding objects being transported. The four signals participants designed were:

Fig. 2: Structure of the design session with approximate times for each section.

1. Idle: played every 10 seconds while the robot waits for commands, indicating that the robot is ready to accept a command.
2. Searching: played every 10 seconds while the robot searches for objects, indicating that the robot is actively searching for an item.
3. Has Item: played once, when the robot has an item in its pouch and is ready for the participant to remove the item.
4. Has Information: played once, when the robot has found an object, but the object is inaccessible. The participant can follow the robot to the location of the object to retrieve it.

Each participant was then introduced to the RoSiD interface as described in Section 4.1 and designed the four signals in a randomized order to mitigate any ordering effects. The participant was free to use the interface however they liked, for as long as they liked. Participants tended to favor either the query-based or the search-based interactions in their design process, but this was dependent on the individual. After finishing designing all four signals, the participant filled out the System Usability Scale [2].

The participant next engaged in an interaction with the robot, where the robot was piloted by an experimenter. To simulate being occupied as the robot roamed around the environment, the participant was also engaged in a word search task. To complete the word search, the participant had to ask the robot to help them search for items around the room, which had words for the word search printed on them. For example, participants were tasked to ask Kuri to find a stapler, and the stapler had the word "haptic" printed on it. The participant then located "haptic" in the word search. The time limit for this section was 10 minutes. Following the interaction, participants engaged in a semi-structured interview and were compensated with a 20 USD Amazon gift card. The entire study design is illustrated in Figure 2.

4.2 Participants

Participants were recruited from the USC student population through email, flyers, and word-of-mouth. A total of 25 participants were part of the study, with ages that ranged from 19 to 43 (median 25); participants self-declared as men (13), women (10), and genderqueer, nonbinary, or declined to state (3, aggregated for privacy; some participants belonged to multiple groups); 13 participants identified as LGBTQ+. All participants were able to create signals they liked for all four categories, and all successfully interacted with the robot to collect all the items in the word search task.

5 Results

5.1 System Usability Scores

We examined the participants' SUS scores based on recommendations from a meta-analysis of several extant systems [9]. The participants rated the system with a median score of 75 out of 100 on the SUS scale, suggesting that the system is between "good" and "excellent" on an adjective rating scale, and a letter grade of 'B', demonstrating an above-average user experience. Using a Mann-Whitney U-test, we determined that the ratings were significantly higher than a 65 of 100 on the SUS scale (U = 10.0, p = .015), indicating that our system is above average in its ease of use, supporting H1.

Fig. 3: Box plots showing the times users spent designing signals. (a) Time to design by order. (b) Time to design by signal.

5.2 Time Spent Designing Signals

We examined how long it took users in our study to design the signals. An ANOVA revealed that the time to design signals depended on the order that they were designed in (F(3, 96) = 26.549, p < .001), as illustrated in Figure 3a. Post hoc analysis showed that the only significant pairwise differences were between the first signal designed and the rest. This indicates that our system is easy to learn to use, because the time to design signals stabilized after the first designed signal, supporting H2. We also found no significant differences between the kind of signal and the time to design the signal, illustrated in Figure 3b, indicating that the signals were similarly easy to design. This implies that the particular signals we selected were easily understandable for the participants. The type of signal had little effect on the results of our analysis.
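The statistical tests in Sections 5.1 and 5.2 can be run with SciPy along the following lines. The data below are placeholders, and comparing the SUS sample against a constant benchmark array is only one plausible operationalization of the reported Mann-Whitney test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sus_scores = rng.normal(75, 10, size=25)       # placeholder: one SUS score per participant
times_by_order = [rng.normal(m, 40, size=25)   # placeholder: seconds per design position
                  for m in (320, 190, 180, 175)]

# H1: are SUS ratings significantly higher than the 65-point benchmark?
u, p_sus = stats.mannwhitneyu(sus_scores, np.full(25, 65.0), alternative="greater")

# H2: does design time depend on the order in which the signal was designed?
f, p_order = stats.f_oneway(*times_by_order)
print(f"U = {u:.1f}, p = {p_sus:.3f};  F = {f:.2f}, p = {p_order:.3g}")
```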
Fig. 4: Box plots comparing the alignment of initial queries based on random suggestions and the proposed clustered suggestions. (a) Visual components. (b) Auditory components. (c) Kinetic components.

5.3 Using Clusters to Initialize Queries

We examined how we could use prior information based on the signals collected from other users to generate queries that are more aligned with what participants ultimately chose when designing their own signals. We used a leave-one-out cross-validation setting for each participant and formed clusters from all but one participant following the process in Section 3.3. For our clustering method we used agglomerative clustering as implemented in scikit-learn [10]. We calculated the alignment score as described in Section 3.1 for the clustering method as compared to randomly selecting queries for each of the three modalities. We performed an ANOVA analysis for each of the modalities to study the effect of including other users' information on the maximum query alignment for new users.

We found that for the visual modality there is a significant main effect across query method (F(1, 3) = 44.106, p < .001), with an average increase in initial alignment of .117 across all signals when using the clustering method over randomly selecting stimuli. For the auditory modality there was a significant main effect of query method (F(1, 3) = 19.544, p < .001), with an average increase in initial alignment of .141 across all signals. For the kinetic modality there was also a significant effect of query type (F(1, 3) = 49.393, p < .001), with an average increase in initial alignment of .132 across all signal types.
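The leave-one-out comparison can be sketched as follows, reusing the alignment and generate_query functions from the earlier sketches. Sampling the random baseline from the same pool of selections is a simplifying assumption on our part; in the study, random queries could draw from the full stimulus datasets.

```python
import numpy as np

def loo_alignment(user_selections, k=3, seed=0):
    """Leave-one-out evaluation of clustered vs. random initial queries (Sec. 5.3).

    user_selections: (n_users, d) features of the component each user finally
                     selected for one signal and modality.
    Returns (mean clustered alignment, mean random alignment).
    """
    rng = np.random.default_rng(seed)
    clustered, random_ = [], []
    for i in range(len(user_selections)):
        held_out = user_selections[i]
        D = np.delete(user_selections, i, axis=0)   # all but one participant
        q = D[generate_query(D, k, rng)]            # clustered suggestion
        clustered.append(alignment(q, held_out))
        qr = D[rng.choice(len(D), size=k, replace=False)]  # random baseline
        random_.append(alignment(qr, held_out))
    return float(np.mean(clustered)), float(np.mean(random_))
```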
6 Conclusions and Future Work

In this work we developed the RoSiD tool, which enables users to design their own robot signals for collaborative tasks with robots. Our results show that users find this system easy to use and quick to learn, and that using past user data can further improve the system's usefulness. In continued work, we plan to further develop the RoSiD tool and its potential for use with other robot embodiments using the insights gained from this study. We will evaluate the improved tool by measuring performance and other behavioral metrics in a real item-finding task, as well as compare the effect of using personalized signals in contrast to using generic signalling methods.

References

1. Allen, G., Peterson, B.L., Ratakonda, D.K., Sakib, M.N., Fails, J.A., Kennington, C., Wright, K.L., Pera, M.S.: Engage!: co-designing search engine result pages to foster interactions. In: Interaction Design and Children. pp. 583–587 (2021)
2. Brooke, J.: SUS: a "quick and dirty" usability scale. Usability Evaluation in Industry 189(3), 189–194 (1996)
3. Brown, D., Coleman, R., Srinivasan, R., Niekum, S.: Safe imitation learning via fast Bayesian reward inference from preferences. In: International Conference on Machine Learning. pp. 1165–1177. PMLR (2020)
4. Chang, J.C., Hahn, N., Perer, A., Kittur, A.: SearchLens: Composing and capturing complex user interests for exploratory search. In: Proceedings of the 24th International Conference on Intelligent User Interfaces. pp. 498–509 (2019)
5. Deng, E., Mutlu, B., Mataric, M.J., et al.: Embodiment in socially interactive robots. Foundations and Trends in Robotics 7(4), 251–356 (2019)
6. Dennler, N., Ruan, C., Hadiwijoyo, J., Chen, B., Nikolaidis, S., Matarić, M.: Design metaphors for understanding user expectations of socially interactive robot embodiments. ACM Transactions on Human-Robot Interaction 12(2), 1–41 (2023)
7. Hershey, S., Chaudhuri, S., Ellis, D.P., Gemmeke, J.F., Jansen, A., Moore, R.C., Plakal, M., Platt, D., Saurous, R.A., Seybold, B., et al.: CNN architectures for large-scale audio classification. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 131–135. IEEE (2017)
8. Knoblich, G., Butterfill, S., Sebanz, N.: Psychological research on joint action: theory and data. Psychology of Learning and Motivation 54, 59–101 (2011)
9. Lewis, J.R.: The System Usability Scale: past, present, and future. International Journal of Human–Computer Interaction 34(7), 577–590 (2018)
10. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E.: Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12, 2825–2830 (2011)
11. Sadigh, D., Dragan, A.D., Sastry, S., Seshia, S.A.: Active preference-based learning of reward functions. In: Robotics: Science and Systems (2017)
12. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems 27 (2014)
13. Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., et al.: HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771 (2019)
ai_researcher
2
Beyond_Utility_Evaluating_LLM_as_Recommender.pdf
Viewpoints
DOI:10.1145/1735223.1735234

Erik Brynjolfsson, Paul Hofmann, and John Jordan

Economic and Business Dimensions
Cloud Computing and Electricity: Beyond the Utility Model
Assessing the strengths, weaknesses, and general applicability of the computing-as-utility business model.

Businesses rely no less on electricity than on IT. Yet corporations don't need a "Chief Electricity Officer" and a staff of highly trained professionals to manage and integrate electricity into their businesses. Does the historical adoption of electricity offer a useful analogy for today's innovations in cloud computing?

While the utility model offers some insights, we must go beyond this simple analogy to understand cloud computing's real challenges and opportunities. Technical issues of innovation, scale, and geography will confront managers who attempt to take advantage of offsite resources. In addition, business model challenges related to interoperability, complementarity, and security will make it difficult for a stable cloud market to emerge. An overly simplistic reliance on the utility model risks blinding us to the real opportunities and challenges of cloud computing.

The Electricity Model for Cloud Computing
Definitions vary. From a practitioner standpoint: "Cloud computing is on-demand access to virtualized IT resources that are housed outside of your own data center, shared by others, simple to use, paid for via subscription, and accessed over the Web." From an academic perspective: "Cloud computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the data centers that provide those services. … The data center hardware and software is what we will call a cloud. When a cloud is made available in a pay-as-you-go manner to the public, we call it a public cloud; the service being sold is utility computing." [1]

Both definitions imply or explicitly use the "utility" model that embeds the logic of water supply, electrical grids, or sewage systems. This model is ubiquitous. While it has important strengths, it also has major weaknesses. Hardware providers introduced the language of "utility" computing into the market. But perhaps the most rigorous and vigorous assertion of the electricity model comes from Nicholas Carr, an independent blogger, in his recent book, The Big Switch: "At a purely economic level, the similarities between electricity and information technology are even more striking. Both are what economists call general-purpose technologies. … General-purpose technologies, or GPTs, are best thought of not as discrete tools but as platforms on which many different tools, or applications, can be constructed. … Once it becomes possible to provide the technology centrally, large-scale utility suppliers arise to displace the private providers. It may take decades for companies to abandon their proprietary supply operations and all the investment they represent. But in the end the savings offered by utilities become too compelling to resist, even for the largest enterprises. The grid wins." [4]

Strengths of the Utility Model
Carr correctly highlights the concept of a general-purpose technology.
This class of technology has historically been the greatest driver of productivity growth in modern economies. GPTs contribute not only directly, but also by catalyzing myriad complementary innovations [3]. For electricity, this includes electric lighting, motors, and machinery. For IT, this includes transaction processing, ERP, online commerce, and myriad other applications and even business model innovations.

Some of the economies of scale and cost savings of cloud computing are also akin to those in electricity generation. Through statistical multiplexing, centralized infrastructure can run at higher utilization than many forms of distributed server deployment. One system administrator, for example, can tend over 1,000 servers in a very large data center, while his or her equivalent in a medium-sized data center typically manages approximately 140 [7].

By moving data centers closer to energy production, cloud computing creates additional cost savings. It is far cheaper to move photons over the fiber-optic backbone of the Internet than it is to transmit electrons over our power grid. These savings are captured when data centers are located near low-cost power sources like the hydroelectric dams of the northwest U.S.

Along with its strengths, however, the electric utility analogy also has three technical weaknesses and three business model weaknesses.

Technical Weaknesses of the Utility Model
The Pace of Innovation. The pace of innovation in electricity generation and distribution happens on the scale of decades or centuries [8]. In contrast, Moore's Law is measured in months. In 1976, the basic computational power of a $200 iPod would have cost one billion dollars, while the full set of capabilities would have been impossible to replicate at any price, much less in a shirt pocket. Managing innovative and rapidly changing systems requires the attention of skilled, creative people, even when the innovations are created by others, unlike managing stable technologies.

The Limits of Scale. The rapid availability of additional server instances is a central benefit of cloud computing, but it has its limits. In the first place, parallel problems are only a subset of difficult computing tasks: some problems and processes must be attacked with other architectures of processing, memory, and storage, so simply renting more nodes will not help. Secondly, many business applications rely on consistent transactions supported by RDBMS. The CAP Theorem says one cannot have consistency and scalability at the same time.
Ironically, even as the utility model is being touted for computing, the highly centralized approach is becoming less effective for electricity itself: an emerg- ing distributed power generation sys- tem features smaller nodes running micro-hydro, wind, micro-turbines and fuel cells. What’s more, many enterpris- es do in fact generate their own electric- ity or steam, for the same reasons they will continue to keep certain classes of may 2010 | vol. 53 | n o. 5 | com munications of the acm 33 viewpoints IT in house: reliability, strategic advan- tage, or cost visibility. Latency: Distance is Not Dead. One of the few immutable laws of physics is the speed of light. As a result, latency remains a formidable challenge. In the network realm, the demands for nearly instantaneous execution of machine-to- machine stock trades has led financial services firms to locate their data cen- ters as physically close to stock exchang- es as possible. The read/write limits of magnetic disks can only drop so far, but increased speed comes at the cost of ca- pacity: big disks are slow, and fast disks are small. For many classes of applica- tions, performance, convenience, and security considerations will dictate that computing be local. Moving data cen- ters away from their customers may save on electricity costs, but those savings are often outweighed by the costs of latency. Beyond electricity: the Business model of the cloud Important as the technical differences are between electricity and cloud com- puting, the business model differences are even more profound. Complementarities and Co-invention. Like electricity, IT is a general-purpose technology. This means that critical benefits come from the co-inventions that the basic technology makes pos- sible. It took 30 to 40 years for the full benefits of electricity to redound to America’s factories.5 Initially, assembly lines and production processes were not redesigned to take advantages of electricity: large central steam engines were simply replaced with large elec- tric motors, and then hooked up to the same old crankshafts and cogs. Only with the reinvention of the production process was the potential of electrifica- tion realized. Today, electricity has ma- tured to become a relative commodity. In contrast, computing is still in the midst of an explosion of innovation and co-invention.2 Firms that simply replace corporate resources with cloud computing, while changing nothing else, are doomed to miss the full ben- efits of the new technology. The opportunities, and risks, from IT-enabled business model innova- tion and organizational redesigns are reshaping entire industries.3 For in- stance, Apple’s transition from a per- petual license model to the pay-per-use if the utility model were adequate, the challenges to cloud computing could be solved with electricity-like solutions—but they cannot. iTunes store helped it quadruple reve- nues in four years. The tight integration between Apple’s ERP system and the billing engine handling some 10 mil- lion sales per day would have been dif- ficult, if not impossible, in the cloud. Lock-in and Interoperability. Lock-in issues with electricity were addressed long ago by regulation of monopolies, then later by legal separation of gen- eration from transmission and the creation of market structures. Markets work because electrons are fungible. 
The rotary converter that enabled in- terconnection of different generating technologies in the 1890s has no ana- log for the customer of multiple cloud vendors, and won’t anytime soon. For enterprise computing to behave like line voltage will require radically differ- ent management of data than what is on anyone’s technology roadmap. Perhaps most critically, bits of infor- mation are not electrons. Depending on the application, its engineering, and its intended use, cloud offerings will not be interchangeable across cloud pro- viders. Put more simply, the business processes supported by enterprise com- puting are not motors or light bulbs. Security. The security concerns with cloud computing have no electricity analog. No regulatory or law enforce- ment body will audit a company’s electrons, but processes related to customer data, trade secrets, and clas- sified government information are all subject to stringent requirements and standards of auditability. The typically shared and dynamic resources of cloud computing (including CPU, network- ing, and so forth) reduce control for 34 co m municat ions o f the acm | may 2010 | vol. 53 | no. 5 the user and pose severe new security issues not encountered by on-premise computing behind firewalls. conclusion If the utility model were adequate, the challenges to cloud computing could be solved with electricity-like solu- tions—but they cannot. The reality is that cloud computing cannot achieve the plug-and-play simplicity of electric- ity, at least, not as long as the pace of innovation, both within cloud comput- ing itself, and in the myriad applica- tions and business models it enables, continues at such a rapid pace. While electric utilities are held up as models of simplicity and stability, even this in- dustry is not immune from the trans- formative power of IT.8,9 Innovations like the “smart grid” are triggering fun- damental changes at a pace not seen since the early days of electrification. The real strength of cloud computing is that it is a catalyst for more innovation. In fact, as cloud computing continues to be- come cheaper and more ubiquitous, the opportunities for combinatorial innova- tion will only grow. It is true that this inev- itably requires more creativity and skill from IT and business executives. In the end, this not something to be avoided. It should be welcomed and embraced. References 1. armbrust, m. et al. a view of cloud computing. Commun. ACM 53, 4 (apr. 2010), 50–58. 2. bresnahan, t., greenstein, s., brownstone, d. and flamm, k. technical progress and co-invention in computing and in the uses of computers. Brookings Papers on Economic Activity—Microeconomics (1996), 1–83. 3. brynjolfsson, e. and saunders, a. Wired for Innovation: How IT is Reshaping the Economy. mit Press, cambridge, ma, 2010. 4. carr, n. The Big Switch: Rewiring the World, from Edison to Google. norton, new york, 2008. 5. david, P. the dynamo and the computer: an historical perspective on the modern productivity paradox. American Economic Review 80, 2 (1990), 355–361. 6. foley, J. Plug into the cloud. InformationWeek (sept. 28, 2008). 7. hamilton, J. internet-scale service efficiency. in Proceedings of the Large-Scale Distributed Systems and Middleware (LADIS) Workshop, (sept. 2008). 8. hughes, t. Networks of Power: Electrification in Western Society, 1880–1930. Johns hopkins university Press, baltimore, md, 1983. 9. Waltz, d. and king, J. Information Technology and America’s Energy Future. 
computing research association White Paper, Washington, d.c., 2009. Erik Brynjolfsson ([email protected]) is a professor at the mit sloan school and the director of the mit center for digital business in cambridge, ma. Paul hofmann ([email protected]) is a vice president at saP labs in Palo alto, ca. John Jordan ([email protected]) is a senior lecturer in the smeal college of business at Penn state university. copyright held by author.
ai_researcher
2
From_Black_Boxes_to_Actionable_Insights_A_Perspective_on_Explainable_Artificial_Intelligence_for_Scientific_Discovery.pdf
arXiv:2103.05347v2 [cs.CV] 18 Mar 2021

Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack

He Wang¹*, Feixiang He¹, Zhexi Peng², Tianjia Shao²†, Yong-Liang Yang³, Kun Zhou², David Hogg¹
¹University of Leeds, UK  ²State Key Lab of CAD&CG, Zhejiang University, China  ³University of Bath, UK
{h.e.wang, scfh, D.C.Hogg}@leeds.ac.uk, {zhexipeng, tjshao, kunzhou}@zju.edu.cn, [email protected]
*https://youtu.be/DeMkN3efp9s  †Corresponding author

Abstract

Action recognition has been heavily employed in many applications such as autonomous vehicles, surveillance, etc., where its robustness is a primary concern. In this paper, we examine the robustness of state-of-the-art action recognizers against adversarial attack, which has been rarely investigated so far. To this end, we propose a new method to attack action recognizers which rely on 3D skeletal motion. Our method involves an innovative perceptual loss which ensures the imperceptibility of the attack. Empirical studies demonstrate that our method is effective in both white-box and black-box scenarios. Its generalizability is evidenced on a variety of action recognizers and datasets. Its versatility is shown in different attacking strategies. Its deceitfulness is proven in extensive perceptual studies. Our method shows that adversarial attack on 3D skeletal motions, one type of time-series data, is significantly different from traditional adversarial attack problems. Its success raises serious concern on the robustness of action recognizers and provides insights on potential improvements.

1. Introduction

The research in adversarial attack has proven that deep learning is vulnerable to certain imperceptible perturbation on data, leading to security and safety concerns [36]; meanwhile, adversarial attack has been useful in improving the robustness of classifiers [20]. Starting from object recognition, the list of target tasks for adversarial attack has been rapidly expanding, now including face recognition [32], point clouds [45], 3D meshes [47], etc. While adversarial attack on static data (images, geometries, etc.) has been well explored, its effectiveness on time-series has only been attempted under a few settings such as videos [14, 43].
Furthermore, coordi- nated perturbations on multiple joints in only one frame can hardly work either, because in the temporal domain, simi- lar constraints apply. Any sparsity-based perturbation (on single joints or individual frames) will greatly affect the dy- namics (causing jittering or bone-length violations) and will be very obvious to an observer. One consequence is that the perturbation magnitude alone is not anymore a reliable met- ric to judge the imperceptibility of an attack, as an overall small perturbation could still break the dynamics. This is very different from existing attack tasks where the pertur- bation magnitude can be heavily relied upon. To systematically investigate the robustness of action recognizers, we propose a straightforward yet very effec- tive method, Skeletal Motion Action Recognition Attack (SMART), based on an optimization framework that explic- itly considers motion dynamics and skeletal structures. The optimization finds perturbations by balancing between clas- sification goals and perceptual distortions, formulated as classification loss and perceptual loss. Varying the classi- fication loss leads to different attacking strategies. The new perceptual loss fully utilizes the dynamics of the motions 1 and bone structures. SMART is effective in both white-box and black-box settings, on several state-of-the-art models, across a variety of datasets. Formally, we systematically investigate the vulnerabil- ity of a wide range of state-of-the-art methods under ad- versarial attack and identify their weaknesses for potential improvements. To this end, we propose a new adversarial attack method with a novel perceptual loss function captur- ing the perceptual realism and fully exploiting the motion dynamics. We also provide insights into the role of dynam- ics in the imperceptibility of the adversarial attack based on comprehensive perceptual studies, showing that it is not enough to only constrain the perturbation magnitude, which differs significantly from widely accepted approaches. 2. Related Work 2.1. Skeleton-based Action Recognition Action recognition is crucial in many applications, namely surveillance, human-robot interaction and entertain- ment. Recent advances in 3D sensing and pose estimation motivate the use of clean skeleton data to robustly classify human actions, overcoming the biases in raw RGB videos due to body occlusion, scattered background, lighting vari- ation, etc. Unlike conventional approaches that are limited to handcrafted skeletal features [38, 9, 6], recent methods taking the advantage of trained features from deep learning have gained state-of-the-art performance. Based on the rep- resentation of skeletal data, deep learning based methods can be classified into three categories, including sequence- based, image-based, and graph-based methods. Sequence-based methods represent a skeletal motion as a chronological sequence of poses, each of which con- sists of the coordinates of all the joints. Then RNN- based architecture is employed to perform the classifica- tion [7, 23, 35, 53]. Image-based methods represent a skele- tal motion as a pseudo-image, which is a 2D tensor where one dimension corresponds to time, and the other dimen- sion stacks all the joints of a single skeleton. Such rep- resentation enables CNN-based image classification to be applied to action recognition [24, 16]. 
Different from the previous two categories that mainly rely on skeleton ge- ometry represented by the joint coordinates, graph-based methods utilize graph representations to naturally consider the skeleton topology (i.e. joint connectivity) which is en- coded by bones that connect neighboring joints. Graph neural networks (GNN) are then used to recognize the ac- tions [33, 4, 25, 54, 56]. Based on the code released by the authors, we perform adversarial attacks on the two most representative categories (i.e. RNN- and GNN-based), demonstrating the vulnerability of existing methods. 2.2. Adversarial Attacks Despite their significant successes, deep neural networks are vulnerable to carefully crafted adversarial attacks as firstly identified in [36]. Delicately designed neural net- works with high performance can be easily fooled by unno- ticeable perturbations on the input data. With the concern raised, researchers have extensively investigated adversar- ial attacks on different data types, including 2D images [10, 30, 27, 46, 48], videos [44, 42], 3D shapes [21, 52, 47, 45], physical objects [18, 1, 8], graphs [5], while little attention has been paid to 3D skeletal motions. The adversarial attack in the context of action recogni- tion is much less explored. Inkawhich et al. [12] perform adversarial attacks on optical-flow based action classifiers, which is mainly inspired by image-based attacks and differs from our work in terms of the input data. The adversarial attack on skeletal motions has just been attempted recently [22, 58] (arXiv only). However, they did not investigate the imperceptibility systematically, which is crucial as shown in our perceptual studies because imperceptibility is a strong requirement on adversarial attack. In our work, we demon- strate better results using a perceptual loss that minimizes the motion derivative deviation relative to the original skele- tal motion, thereby preserving the motion dynamics which are intrinsic to actions. This is crucial in attacking highly dynamic motions such as running and jumping. We also perform a perceptual study to systematically validate the imperceptibility of the perturbed skeletal motions and the effectiveness of our choice of perceptual loss. We demonstrate successful attacks on a range of network architectures, including RNN and GNN based methods, on three datasets. Finally, we present results of three different attacking strategies, including the novel objective of plac- ing the correct action beneath the first n actions in a ranked classification, for a given n. 3. Methodology SMART is formulated as an optimization problem, where the minimizer is an adversarial sample, for a given motion, that minimizes the perceptual distortion while fool- ing the target classifier. The optimization has variants con- structed for three different attacking strategies: Anything- but Attack, Anything-but-N Attack and Specified Attack. They are used in white-box and black-box scenarios. 3.1. Optimization for Attack Given a motion q = {q0, q1, ..., qt}, where qt is the frame at time t and consists of stacked 3D joint locations, a trained classifier Φ can predict its class label yq = C(Φ(q)), where Φ is namely a deep neural network and Φ(q) is the pre- dicted distribution over class labels. C is usually a softmax function and yq is the predicted label. We aim to find a per- 2 turbed example, ˆq, for q, such as yq (cid:54)= yˆq. A common method is to find the minimal perturbation [49] through solving a constrained optimization. 
We start with the C&W formulation[2]: min Lp(q, ˆq) sub. to C(Φ(ˆq)) = c and ˆq ∈ [0, 1]n (1) where Lp is a distance function and C is a hard constraint dictating that the predicted class of ˆq (bounded in [0, 1]n) being c. However, directly solving Eq. 1 is difficult due to that C is highly non-linear [2]. So it can be relaxed by moving the hard constraint into the objective: minimize L = wLc(yˆq, c) + (1 − w)Lp(q, ˆq) (2) where Lc is a classification loss and w = 0.4. Lp is nor- mally the perturbation magnitude [2]. But we use a new perceptual loss which is explained later. Eq.2 has intuitive interpretation: there are two forces governing ˆq. Lc is the classification loss (a relaxed C in Eq.1) where we can de- sign different attacking strategies. Lp is the perceptual loss which dictates that ˆq should be visually indistinguishable from q. To optimize for ˆq, we have only one assumption: we can compute the gradient: ∂L ∂ ˆq . This way, we can com- pute ˆq iteratively by ˆqt+1 = ˆqt + (cid:15)f ( ∂L , ˆqt) where ˆqt is ˆq at ∂ ˆqt step t, f computes the updates and (cid:15) is the learning rate. We set ˆq0 = q and use Adam [17] for f . 3.2. Perceptual Loss Imperceptibility (governed by Lp in Eq.2) is a hard con- straint in adversarial attacks. It requires that human cannot distinguish easily between the adversarial samples and real data. Existing approaches on images and videos achieve im- perceptibility by constraining the pixel-wise or frame-wise perturbation magnitude measured by l norms. One major difference in our problem is motion dynamics. To fully represent the dynamics of a motion, we need the derivatives from zero-order (joint location), first-order (joint velocity) up to nth-order. One common approxima- tion is to use first n terms. When it comes to impercepti- bility, the perceived motion naturalness is vital and not all derivatives are at the same level of importance [40]. In- spired by the work in character animation [39, 41, 3], we propose a new perceptual loss: Lp(q, ˆq) = αldyn + (1 − α)lbl (3) lbl = ||Bl(q) − Bl(ˆq)||2 2 = 1 M M (cid:88) i=1 ||Bl(qi) − Bl(ˆqi)||2 2 (4) ldyn = ∞ (cid:88) n=0 βn||(qn − ˆqn)||2 2 where ∞ (cid:88) n=0 βn = 1 (5) R24×1 is the bone length vector of frame qi. Theoretically, bone lengths do not change over time. However, they do vary in the original data due to tracking errors. This is why lbl is designed to be frame-wise. ldyn is the dynamics loss. We use a strategy called derivative matching. It is a weighted (by βn) sum of the l2 distance between qn and ˆqn, where qn and ˆqn are the nth- order derivatives and can be computed by forward differ- encing. Although n goes up to infinity, in practice, we ex- plored up to n = 4, which includes joint position, velocity, acceleration, jerk and snap. After exhaustive experiments, we find that enforcing the 0th, 2nd and 4th order deriva- tives while discarding other derivatives gives good results, with the 4th derivative adding small gains. Including con- secutive derivatives (e.g. 0th, 1st and 2nd) over-constrains the system. Also, the gain of including higher order deriva- tives diminishes while incurring more computation. A good compromise is to set β0 = 0.6 and β2 = 0.4. Match- ing the 2nd-order profiles of two motions is critical. For skeletal motions, small location deviations can still gener- ate large acceleration differences, resulting in two distinc- tive motions. More often, it generates severe jittering and thus totally unnatural motions. 
An alternative way of reg- ulating the dynamics is to purely smooth the motion, by e.g. minimizing the acceleration. But it dampens highly dynamic motions such as jumping [40]. Also, consider- ing more derivatives above n = 4 makes the optimization harder to solve and over-weighs their benefits. 3.3. White-box Attack With the perceptual loss designed, varying the formula- tion of the classification loss (Lc in Eq.2) allows us to form different attacking strategies. We present three strategies. Anything-but Attack (AB) aims to fool the classifier so that yq (cid:54)= yˆq. This can be achieved by maximizing the cross entropy between Φ(q) and Φ(ˆq): Lc(q, ˆq) = −cross entropy(Φ(q), Φ(ˆq)) (6) Anything-but-N Attack (ABN) is a generalization of AB. It aims to confuse the classifier so that it has similar confidence levels in multiple classes. ABN is more suitable to confuse classifiers which rely on top N accuracy. In addi- tion, we find that it performs better in black-box attacks by transferability, which will be detailed in experiments. One naive solution is to use multiple AB losses for the top n classes, but it will make the optimization difficult and will not scale as the class number increases. Instead, we pro- pose an easier loss function, maximizing the entropy of the predicted distribution of ˆq: Lc(q, ˆq) = −Entropy(Φ(ˆq)), yq (cid:54)∈ T opN (Φ(ˆq)) (7) where α = 0.3. lbl penalizes any bone length deviations in every frame where M is the total frame number. Bl(qi) ∈ where T opN is the set of the top n class labels in the predic- tive distribution Φ(ˆq). By minimizing Lc, we actually max- 3 imize the entropy of Φ(ˆq), i.e. forcing it to be flat over all the class labels and thus reduce the confidence of the classi- fier over any particular class. We stop the optimization once the ground-truth label falls beyond the top n classes. ABN is a harder optimization problem than AB because it needs the predictive distribution to be as flat as possible. 3.3.1 Specified Attack (SA) Different from AB and ABN, sometimes it is useful to fool the classifier with a pre-defined class label. Given a fake label yˆq, we can use its class label distribution Φˆq, a one- hot vector, and minimize the cross entropy: Lc(q, ˆq) = cross entropy(Φ(q), Φ(ˆq)) (8) This is the most difficult scenario because it highly depends on the similarity between the source and target label. While turning ‘clapping over the head’ into ‘raising two hands’ is achievable with minimal visual changes, turning ‘running’ into ‘squat’ without being noticed is much harder. 3.4. Black-box Attack While the white-box attack relies on the ability to esti- mate ∂L ∂ ˆq , which requires the access to the target classifier and is not always possible, black-box attack assumes that the full knowledge of the target classifier is inaccessible. We therefore cannot directly compute ∂L ∂ ˆq . Under such cir- cumstances, we use attack-via-transferability [37]. It be- gins with training a surrogate classifier. Then adversarial samples are computed by white-box attacks on the surro- gate classifier. Finally, the adversarial samples are used to attack the target classifier in a black-box setting. In this pa- per, we do not construct our own surrogate model. Instead, we use an existing classifier as our surrogate classifier to at- tack others. In experiments, we attack several state-of-the- art models. To test the transferability and generalizability of our method, we use every model in turn as the surrogate model and attack the others. 4. 
4. Experimental Results

We first introduce the datasets and models for our experiments, followed by our white-box and black-box results. We then present our perceptual studies on imperceptibility and compare SMART with other methods. For each attack, we first use the source code shared by the authors if available, or implement the methods ourselves, and train them strictly following the protocols in their papers. Next, we test the models and collect the data samples that the trained classifiers can successfully recognize, to create our adversarial attack datasets. Finally, we compute the adversarial samples using different attacking strategies.

4.1. Datasets

We choose three widely used datasets. HDM05 [28] contains 2337 sequences for 130 actions performed by 5 non-professional actors; the 3D joint locations of the subjects are provided in each frame. MHAD [29] is captured using a multi-modal acquisition system and consists of 11 actions performed by 12 subjects, with 5 repetitions per action, resulting in 659 sequences; in each frame, the 3D joint positions are extracted based on the 3D marker trajectories. NTU60 [31] is captured by Kinect v2 and is currently one of the largest publicly available datasets for 3D action recognition. It is composed of more than 56,000 action sequences, with 60 action classes performed by 40 subjects; the 3D coordinates of the joints are provided by Kinect. Due to the huge number of samples and the large intra-class and viewpoint variations, NTU60 is very challenging and highly suitable for validating the effectiveness and generalizability of our approach. Note that we exclude Kinetics [15], a dataset also used in many papers, for two reasons: first, some older recognizers we investigate cannot achieve reasonable classification accuracy on it; second, its quality is too low to evaluate the success of the attack, as explained in Section 4.5.

4.2. Target Models

Rather than focusing only on the most recent methods, we select a range of methods (HRNN [51], ST-GCN [50], AS-GCN [19], DGNN [33], 2s-AGCN [34], MSG3D [26] and SGN [55]) and investigate their vulnerability under different scenarios. They include both RNN- and GNN-based models. We implement HRNN following the paper and use the code shared online for the rest of the methods. We also follow their protocols in data pre-processing. Specifically, we preprocess HDM05 and MHAD as in [51] (where HDM05 is grouped into 65 classes), and NTU60 as in [34]. We also map the different skeletons to a standard 25-joint skeleton as in [40].

4.3. White-box Attack

In this section, we qualitatively and quantitatively evaluate the performance of SMART. We use a learning rate between 0.005 and 0.0005 and a maximum of 300 iterations (a sketch of the resulting optimization loop is given at the end of this subsection). The settings for AB and ABN are straightforward. In SA, the number of experiments needed would be prohibitively large if we were to attack every motion with every label other than the ground truth, so we randomly select fake labels to attack. Since the number of motions attacked is large, the results are sufficiently representative. Note that this is a very strict test, as most of the motions are rather distinctive. For simplicity, we only show representative results in the paper; for more results, please refer to the supplementary materials and video.
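Putting the pieces together, the white-box attack reduces to a short optimizer loop. The sketch below is our paraphrase of the procedure (initialize $\hat{q}_0 = q$, run Adam for at most 300 steps, stop early once the attack criterion is met); the `succeeded` callback and the loss-function signatures are assumptions.

```python
import torch

def smart_attack(q, classifier, perc_loss, cls_loss, succeeded,
                 w=0.4, lr=0.005, max_iters=300):
    # q: clean motion; classifier: motion -> logits; succeeded: early-exit test.
    q_hat = q.clone().requires_grad_(True)          # q_hat_0 = q
    opt = torch.optim.Adam([q_hat], lr=lr)          # plays the role of f
    for _ in range(max_iters):
        opt.zero_grad()
        loss = w * cls_loss(classifier(q_hat)) + (1 - w) * perc_loss(q, q_hat)
        loss.backward()
        opt.step()
        if succeeded(q_hat):                        # e.g. predicted label flipped
            break
    return q_hat.detach()
```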
4.3.1 Attack Results

We show the quantitative results of AB in Table 1 Left. High success rates are universally achieved across different datasets and target models, demonstrating the generalizability of SMART. For an adversarial attack, it is not surprising if the before-attack and after-attack labels are semantically similar, e.g. from drinking water to eating. With SMART, a variety of examples are found where the after-attack labels are significantly different from the original ones. Due to the space limit, we leave the details to the supplementary video and materials and only give a couple of examples here. In HDM05, high confusion is found between turn L (turn left) and walk rightRC (walk sideways, to the right, feet crossing over alternately front/back) in HRNN. Similarly, in NTU, high confusion is found between standing up (from sitting) and wear a shoe in 2s-AGCN. These labels have completely different semantics and involve different body parts and motion patterns. Moreover, this kind of confusion is observed across all datasets and models.

Table 1. Success rate (%). Left: Anything-but (AB) attack. Mid: Anything-but-N (ABN) attack, reported as AB3/AB5 for n = 3 (AB3) and n = 5 (AB5). Right: Specified Attack (SA).

AB attack:
| Model   | HDM05 | MHAD  | NTU   |
| HRNN    | 99.56 | 100   | 100   |
| ST-GCN  | 100   | 99.96 | 99.57 |
| AS-GCN  | 97.43 | 92.84 | 99.36 |
| DGNN    | 92.51 | 94.46 | 96.09 |
| 2s-AGCN | 100   | 95.97 | 99.18 |
| mean    | 97.9  | 96.65 | 98.84 |

ABN attack (AB3/AB5):
| Model   | HDM05       | MHAD        | NTU         |
| HRNN    | 100/100     | 100/100     | 99.84/99.62 |
| ST-GCN  | 93.30/90.28 | 76.86/70.5  | 95.86/91.32 |
| AS-GCN  | 91.46/82.83 | 42.07/22.34 | 91.18/82.47 |
| DGNN    | 93.55/86.32 | 87.54/74.27 | 98.73/97.62 |
| 2s-AGCN | 83.40/75.2  | 55.9/32.08  | 100/100     |
| mean    | 92.34/86.93 | 72.47/59.84 | 97.12/94.21 |

SA attack:
| Model   | HDM05 | MHAD  | NTU   |
| HRNN    | 49.17 | 57.41 | 67.19 |
| ST-GCN  | 100   | 66.93 | 74.95 |
| AS-GCN  | 99.48 | 40.18 | 64.62 |
| DGNN    | 99.99 | 96.13 | 97.26 |
| 2s-AGCN | 100   | 97.53 | 96.72 |
| mean    | 89.73 | 71.64 | 80.15 |

We show the ABN results in Table 1 Mid, in two variations, AB3 and AB5, as generalizations of AB. They are well suited to attacking classifiers evaluated with top-N accuracy. ABN is a harder problem than AB, with AB5 harder than AB3, and hence has a lower success rate. In terms of datasets, MHAD is the hardest for ABN because it has only 11 classes, as opposed to 65 and 60 in the other two: excluding the ground-truth label from the top 5 out of 11 classes is much more challenging than out of 65 or 60.

Table 1 Right shows the SA results. SA is the most difficult setting because randomly selected class labels often come from significantly different action classes. Although it might be easy to confuse the model between 'deposit' and 'grab', it is extremely difficult to do so for 'jumping' and 'wear-a-shoe'. Even under such circumstances, SMART still succeeds in more than 70% of cases on average, with multiple tests above 96% and some even achieving 100%.

Performance. The major computational cost comes from the gradient estimation, which depends on the target model because it requires back-propagation. We run a maximum of 300 iterations. Each iteration takes on average 0.102s, 0.267s, 0.419s, 0.275s and 0.738s on HRNN, ST-GCN, AS-GCN, DGNN and 2s-AGCN respectively, on an Nvidia GTX 1080Ti (DGNN and 2s-AGCN) and a TitanXp (HRNN, ST-GCN and AS-GCN).

4.4. Black-box Attack

In the black-box setting, we attack the NTU dataset. Since we need a surrogate model to fool the target models, we first use 2s-AGCN as the surrogate model to attack DGNN, AS-GCN, MSG3D and SGN. The results are shown in Table 2.

Table 2. Success rate (%) of the AB black-box attack, using 2s-AGCN as the surrogate.
| DGNN  | AS-GCN | MSG3D | SGN   |
| 98.10 | 98.37  | 3.08  | 97.75 |

We notice that SMART succeeds on all target models except MSG3D, which indicates that not all target models are equally easy to fool by the transferred black-box attack. To investigate this further, we use three models (AS-GCN, DGNN and 2s-AGCN), in turn take every model as the surrogate, and produce adversarial examples using AB and AB5. Results are shown in Table 3.

Table 3. Success rate (%) of the black-box attack, shown as AB (AB5). Rows are surrogate models; columns are target models.
| Surrogate \ Target | DGNN          | AS-GCN        | 2s-AGCN       |
| DGNN               | n/a           | 10.90 (12.97) | 90.6 (90.99)  |
| AS-GCN             | 7.24 (7.63)   | n/a           | 91.17 (91.99) |
| 2s-AGCN            | 98.10 (98.96) | 98.37 (98.46) | n/a           |

AB5 results are in general better than AB. We speculate that there are two factors. First, the predictive class distribution of AB5 is likely to be flatter than that of AB. The flatness improves transferability because a target model with similar decision boundaries will also produce a similarly flat predictive distribution, and is thus more likely to be fooled. Besides, since the ground-truth label is pushed away from the top 5 classes in the surrogate model, it is also likely to be far from the top in the target model. We also notice that the transferability is not universally successful: DGNN and AS-GCN cannot easily fool one another, while 2s-AGCN can fool and be fooled by both of them. Since transferability can be described by distances between decision boundaries [37], our speculation is that 2s-AGCN's boundary structure overlaps significantly with both DGNN and AS-GCN, whereas the other two overlap little. The theoretical reason is hard to identify, as formal analysis of transferability has only just emerged for static data [37, 57]; the theoretical analysis of time-series data is beyond the scope of this paper and is left for future work. A minimal sketch of the transfer pipeline itself is given below.
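The sketch below is our paraphrase of attack-via-transferability, assuming `attack_fn` is a white-box attack such as the Adam loop above and that both models map a motion to logits. Since the attack datasets only contain samples the classifiers recognize correctly, the clean prediction is used as a proxy for the ground-truth label.

```python
def transfer_attack(surrogate, target, motions, attack_fn):
    # Black-box attack via transferability [37]: craft samples on a white-box
    # surrogate, then replay them against the unseen target model.
    fooled = 0
    for q in motions:
        q_hat = attack_fn(q, surrogate)                      # white-box step
        if target(q_hat).argmax() != target(q).argmax():     # label flipped
            fooled += 1
    return fooled / len(motions)                             # transfer success rate
```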
4.5. Perceptual Study

One key difference between SMART and existing work is that we employ both numerical accuracy and rigorous perceptual studies to evaluate the success of attacks. Imperceptibility is a requirement for any adversarial attack: all the successes shown above would be meaningless if the attack were noticeable to humans. To evaluate imperceptibility, qualitative visual comparisons can be used for image-based attacks, but rigorous perceptual studies are needed for complex data [47], as numerical success can always be achieved by sacrificing imperceptibility. This is especially the case for motions. The necessity of perceptual studies also restricts us from using noisy datasets (e.g. Kinetics [15]), because subjects are unable to identify perturbations in side-by-side comparisons amid the excessive jittering and tracking errors in the original data.

We conduct three user studies (Deceitfulness, Naturalness and Indistinguishability). Since our sample space is huge (7 models × 3 datasets × 3 attacking strategies), we choose the most representative setting: the adversarial samples under AB in HDM05 and MHAD. The NTU dataset is only used in visual evaluation, not in the perceptual study, due to motion jittering in the original data (see the video for details). In total, we recruited 41 subjects (aged between 18 and 37). Details are in the supplementary materials.

Deceitfulness. In each user study, we randomly choose 100 motions with the ground-truth and the after-attack labels for 100 trials.
In each trial, the video is played for 6 seconds and then the subject is asked to choose which label best describes the motion, with no time limit. This tests whether SMART visually changes the semantics of the motion, and also whether people can distinguish actions by observing skeletal motions alone.

Naturalness. Since unnatural motions can easily be identified as the result of an attack, we perform ablation tests on different loss-term combinations. We design four settings: l2, l2-acc, l2-bone and SMART. l2 uses only the l2 norm of the joint perturbation, as is widely used in existing image/video/mesh attacks; l2-acc is l2 plus the acceleration loss; l2-bone is l2 plus the bone-length loss; and SMART is the proposed perceptual loss. We first show static poses in Figure 1; motion comparisons are available in the supplementary video.

Figure 1. Visual comparison between different losses. Highlighted spine areas in the same frame show key visual differences.

Visually, SMART is the best. Even from static poses, one can easily see the artifacts caused by joint displacements. The spinal joints are the most obvious: the joint displacements cause unnatural zig-zag bending in l2, l2-acc and l2-bone, which is even more obvious in motion. Next, we conduct perceptual studies. In each study, we randomly select 50 motions and make two trials per motion. The first includes one motion attacked by SMART and one randomly selected from l2, l2-acc and l2-bone; the second includes two motions randomly drawn from l2, l2-acc and l2-bone. The first trial evaluates our results against the alternatives, and the second reveals the impact of the different perceptual loss terms. In each of the 100 trials, two motions are played together for 6 seconds, twice, and the subject is then asked to choose which motion looks more natural, or to indicate that they cannot tell the difference, with no time limit.

Indistinguishability. In this study, we conduct a very strict test of whether users can tell that a motion has been perturbed at all. In each experiment, 100 pairs of motions are randomly selected. In each trial, the left motion is always the original and the user is told so; the right one can be the original (sensitivity) or attacked (perceivability). Each video is played for 6 seconds, then the user is asked whether the right motion is a changed version of the left, with no time limit. This user study serves two purposes: perceivability directly tests the indistinguishability of the attack, while sensitivity screens out subjects who tend to give random choices. Most users are able to recognize whether two motions are the same (close to 100% accuracy), but a few answer more randomly; we discard the data of any user who falls below 80% accuracy on the sensitivity test.

4.5.1 Results

The success rate of Deceitfulness is 93.32% overall, which means that most of the time SMART does not visually change the semantics of the motions. Broken down by dataset, SMART achieves 86.77% on HDM05 and 96.38% on MHAD. This also shows that most of the time people can tell actions apart by observing skeletal motions, even for similar actions. Next, Figure 2 Left shows the results of Naturalness: users' preferences over the different losses are SMART > l2-acc > l2 > l2-bone, so SMART leads to the most natural results, as expected.

Figure 2. Left: Normalized user preference on Naturalness (our: SMART; bone: l2-bone; acc: l2-acc); the vertical axis is the percentage of user preference. Right: the mean (top) and standard deviation (bottom) of the joint-wise deviations of SMART and l2.

Finally, we conduct the Indistinguishability test. The final results are 81.9% on average, 80.83% on HDM05 and 83.97% on MHAD.
Note that this is a side-by-side comparison and thus very harsh: the users are asked to find any visual difference at all. To avoid situations where motions are too fast for differences to be spotted (e.g. kicking and jumping motions), we also play the motions three times more slowly than the original. Even under such a harsh test, humans still cannot spot any difference most of the time.

4.6. Classifier Robustness under SMART Attack

After rigorously confirming the effectiveness of SMART across datasets and models, we analyze the results to investigate the vulnerability of the target models. We start by looking at which joints or joint groups are attacked the most. Initially, if some joints tend to be attacked together, the correlations between the joint perturbations should be high, so we compute the Pearson correlations of the joint perturbations, shown in Figure 3 Left. Although some locally high correlations can be found (e.g. between joints 2 and 3, 6 and 7, 9 and 10, 20 and 21), they are not universal; please see other results in the supplementary material. Next, we hypothesized that the attack behavior might be class-dependent, i.e. depend on the actions; however, after computing the joint perturbation correlations per action, no consistent or obvious pattern is found either.

Finally, we find that the displacement-speed and displacement-acceleration correlations reveal a consistent description of the vulnerability, shown in Figure 3 Mid and Right. These correlations are computed between the joint displacements and the original velocities and accelerations, respectively. They reveal the joint vulnerability: the higher the speed/acceleration, the more the joint is attacked (shown by the high values along the main diagonal). In addition, they reveal some consistent across-joint correlations (shown by red boxes). The joints in a red box belong to one part of the body (the four limbs and the trunk); these joints normally have high within-group correlations in motions, and coordinated attacks on them easily fool the action recognizers.

Figure 3. 2s-AGCN on HDM05: displacement-displacement correlations (Left), displacement-speed correlations (Middle) and displacement-acceleration correlations (Right).

The analysis suggests that joints with high velocity and acceleration, and in particular joint groups with high within-group correlations, are important features in the target models, because these joints are attacked the most. Most of the tested models are very sensitive to perturbations of these features, which raises a big concern. Meanwhile, the analysis also suggests that reducing a classifier's sensitivity to these features will increase its resistance to adversarial attack. To this end, one possible solution is to induce noise around the perturbation gradient during training, instead of the purely white noise used by many methods. Another possibility is to introduce semantic descriptors (e.g. characterizing a waving motion as one hand moving side-to-side above the head) that are not sensitive to small changes in these raw features.

Dynamics in Attack Imperceptibility. To investigate the role of dynamics compared with joint-only perturbation, we conduct further analysis on SMART-vs-l2, where users prefer SMART to l2. We first compute their respective joint-wise deviations from the original motions, shown in Figure 2 Right. The perturbations of SMART are in general higher than those of l2 and have larger standard deviations, yet the users still choose SMART over l2. This indicates that, with proper exploitation of dynamics, larger perturbations can generate even more desirable results. This is somewhat surprising and significantly different from static data (e.g. images), where it is believed that the perturbation magnitude is tightly tied to imperceptibility [11]. It also suggests that classifiers could use perturbations on the dynamics to make training more robust, which is complementary to the aforementioned suggestion of inducing noise around the perturbation gradient. The correlation diagnostic used above is sketched below.
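For readers who want to reproduce this diagnostic, the NumPy sketch below computes a displacement-speed correlation matrix in the spirit of Figure 3 Middle; the exact frame alignment and the aggregation over motions are our assumptions.

```python
import numpy as np

def displacement_speed_corr(q, q_hat):
    # q, q_hat: (frames, joints, 3) arrays of the original / attacked motion.
    disp = np.linalg.norm(q_hat - q, axis=-1)[1:]         # per-joint displacement
    speed = np.linalg.norm(np.diff(q, axis=0), axis=-1)   # forward-difference speed
    num_joints = q.shape[1]
    corr = np.empty((num_joints, num_joints))
    for i in range(num_joints):
        for j in range(num_joints):
            # correlate joint i's displacement with joint j's original speed
            corr[i, j] = np.corrcoef(disp[:, i], speed[:, j])[0, 1]
    return corr   # a strong diagonal means faster joints are attacked more
```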
4.7. Comparison

To show that SMART is an effective tool for attack analysis, we compare SMART with IAA [13] and CIASA [22]. As there are two competing factors (attack success vs. imperceptibility), we fix one and compare the other. The success rate is largely governed by the clipping threshold of the perturbation magnitude in IAA and CIASA, and is hence easily tunable, while user studies on imperceptibility are expensive. We therefore tune IAA and CIASA to achieve similar success rates, then conduct perceptual studies for comparison. Specifically, we conduct the AB attack on HDM05 and the Indistinguishability test, as AB is also used in both papers. Each experiment includes 120 pairs of motions, evenly sampled from the original motions and the SMART, IAA and CIASA results (30 motions each). In each trial, the left motion is the original motion while the right one is either the original motion, a SMART sample, an IAA sample or a CIASA sample. Results are shown in Table 4.

Table 4. Success rate in attack (upper) and Indistinguishability (lower). The attack success rates are the best results for SMART, IAA and CIASA.

Attack success rate:
| Method | HRNN   | ST-GCN | 2s-AGCN |
| IAA    | 100%   | 98.12% | 98.75%  |
| CIASA  | 99.57% | 99.57% | 99.56%  |
| SMART  | 99.18% | 98.77% | 98.98%  |

Indistinguishability:
| Method | HRNN   | ST-GCN | 2s-AGCN |
| IAA    | 42.22% | 36.67% | 32.22%  |
| CIASA  | 90.00% | 87.5%  | 90.00%  |
| SMART  | 80.83% | 35.33% | 49.33%  |

While the attack success rates of the three methods are similar, SMART in general generates more indistinguishable adversarial samples than IAA and CIASA do. We notice that most failures of IAA and CIASA are caused by broken motion dynamics and are therefore easily perceivable. This is understandable: IAA does not consider dynamics and thus generates jittering motions, while CIASA uses GANs to govern the motion quality, which can only generate plausible motions, not imperceptible samples. Details can be found in the supplementary materials.

5. Discussion

Imperceptibility is vital in adversarial attack. When it comes to skeletal motions, perceptual studies are essential because there are no widely accepted metrics that fully reflect perceived realism, naturalness or quality. In addition, they help us uncover a unique feature of attacking skeletal motions: losses based solely on perturbation magnitude are often overly conservative, because they are mainly designed for attacking static data and cannot fully utilize the dynamics. Next, forming the joint deviation as a hard constraint [22] via clipping is not the best strategy: the threshold needs to be manually tuned and varies with the data. Besides, our perceptual study shows that larger perturbations can be used if the dynamics are exploited properly.
SMART is a straightforward but surprisingly effective attack method across datasets, models, attack strategies and harsh perceptual studies. The simplicity of SMART raises an alarming concern for current action recognition research, as it does not require complex computation to attack the state-of-the-art models. Through analysing SMART's behavior, we identified one key cause of their vulnerability: the over-sensitivity to joints with high velocity and acceleration, which we hope will help future research improve recognition robustness.

6. Conclusion and Future Work

We demonstrated the vulnerability of several state-of-the-art action recognizers under adversarial attack. To this end, we proposed a new method, SMART, to attack action recognizers based on 3D skeletal motions. Through comprehensive qualitative and quantitative evaluations, we showed that SMART generalizes across multiple state-of-the-art models on various benchmark datasets. Moreover, SMART is versatile, since it can deliver both white-box and black-box attacks with multiple attacking strategies. Finally, SMART is deceitful, as verified in extensive perceptual studies. Based on SMART, we revealed possible causes of the vulnerability of several state-of-the-art models. In the future, we would like to theoretically investigate why the transferability varies between different models under the black-box attack, and to investigate how to systematically resist adversarial attacks.

Acknowledgements: We thank Qun-Ce Xu and Kai-Wen Hsiao for their help with the perceptual study. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 899739 (CrowdDNA), EPSRC (EP/R031193/1), NSFC China (No. 61772462, No. U1736217), RCUK grant CAMERA (EP/M023281/1, EP/T014865/1) and the 100 Talents Program of Zhejiang University.

References

[1] Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. Synthesizing robust adversarial examples. arXiv, abs/1707.07397, 2017.
[2] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57, 2017.
[3] Wenheng Chen, He Wang, Yi Yuan, Tianjia Shao, and Kun Zhou. Dynamic future net: Diversified human motion generation. In Proceedings of the 28th ACM International Conference on Multimedia (MM '20), pages 2131–2139, 2020.
[4] Ke Cheng, Yifan Zhang, Xiangyu He, Weihan Chen, Jian Cheng, and Hanqing Lu. Skeleton-based action recognition with shift graph convolutional network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[5] Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. Adversarial attack on graph structured data. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 1115–1124. PMLR, 2018.
[6] M. Devanne, H. Wannous, S. Berretti, P. Pala, M. Daoudi, and A. Del Bimbo. 3-D human action recognition by shape analysis of motion trajectories on Riemannian manifold. IEEE Transactions on Cybernetics, 45(7):1340–1352, 2015.
[7] Yong Du, Wei Wang, and Liang Wang. Hierarchical recurrent neural network for skeleton based action recognition.
In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1110–1118, 2015.
[8] Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song. Robust physical-world attacks on machine learning models. arXiv, abs/1707.08945, 2017.
[9] B. Fernando, E. Gavves, M. José Oramas, A. Ghodrati, and T. Tuytelaars. Modeling video evolution for action recognition. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5378–5387, 2015.
[10] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples, 2014.
[11] Qian Huang, Isay Katsman, Horace He, Zeqi Gu, Serge J. Belongie, and Ser-Nam Lim. Enhancing adversarial example transferability with an intermediate level attack. CoRR, abs/1907.10823, 2019.
[12] Nathan Inkawhich, Matthew Inkawhich, Yiran Chen, and Hai Li. Adversarial attacks for optical flow-based action recognition classifiers. arXiv, abs/1811.11875, 2018.
[13] H. Ismail Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P. Muller. Adversarial attacks on deep neural networks for time series classification. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8, 2019.
[14] F. Karim, S. Majumdar, and H. Darabi. Adversarial attacks on time series. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1, 2020.
[15] Will Kay, João Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. The Kinetics human action video dataset. CoRR, abs/1705.06950, 2017.
[16] Q. Ke, M. Bennamoun, S. An, F. Sohel, and F. Boussaid. A new representation of skeleton sequences for 3D action recognition. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4570–4579, 2017.
[17] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv, abs/1412.6980v9, 2014.
[18] Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv, abs/1607.02533, 2016.
[19] Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, and Qi Tian. Actional-structural graph convolutional networks for skeleton-based action recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[20] Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. Defense against adversarial attacks using high-level representation guided denoiser. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[21] Hsueh-Ti Derek Liu, Michael Tao, Chun-Liang Li, Derek Nowrouzezahrai, and Alec Jacobson. Beyond pixel norm-balls: Parametric adversaries using an analytically differentiable renderer. In International Conference on Learning Representations, 2019.
[22] Jian Liu, Naveed Akhtar, and Ajmal Mian. Adversarial attack on skeleton-based human action recognition. arXiv, abs/1909.06500, 2019.
[23] Jun Liu, Amir Shahroudy, Dong Xu, and Gang Wang. Spatio-temporal LSTM with trust gates for 3D human action recognition. In Computer Vision – ECCV 2016, pages 816–833, 2016.
[24] Mengyuan Liu, Hong Liu, and Chen Chen. Enhanced skeleton visualization for view invariant human action recognition. Pattern Recognition, 68(C):346–362, 2017.
[25] Ziyu Liu, Hongwen Zhang, Zhenghao Chen, Zhiyong Wang, and Wanli Ouyang. Disentangling and unifying graph convolutions for skeleton-based action recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[26] Ziyu Liu, Hongwen Zhang, Zhenghao Chen, Zhiyong Wang, and Wanli Ouyang. Disentangling and unifying graph convolutions for skeleton-based action recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[27] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2574–2582, 2016.
[28] M. Müller, T. Röder, M. Clausen, B. Eberhardt, B. Krüger, and A. Weber. Documentation mocap database HDM05. Technical Report CG-2007-2, Universität Bonn, 2007.
[29] F. Ofli, R. Chaudhry, G. Kurillo, R. Vidal, and R. Bajcsy. Berkeley MHAD: A comprehensive multimodal human action database. In 2013 IEEE Workshop on Applications of Computer Vision (WACV), pages 53–60, Jan 2013.
[30] Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. arXiv, abs/1511.07528, 2015.
[31] Amir Shahroudy, Jun Liu, Tian-Tsong Ng, and Gang Wang. NTU RGB+D: A large scale dataset for 3D human activity analysis. In IEEE Conference on Computer Vision and Pattern Recognition, June 2016.
[32] Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 23rd ACM SIGSAC Conference on Computer and Communications Security, 2016.
[33] Lei Shi, Yifan Zhang, Jian Cheng, and Hanqing Lu. Skeleton-based action recognition with directed graph neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition, pages 7912–7921, June 2019.
[34] Lei Shi, Yifan Zhang, Jian Cheng, and Hanqing Lu. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[35] Sijie Song, Cuiling Lan, Junliang Xing, Wenjun Zeng, and Jiaying Liu. An end-to-end spatio-temporal attention model for human action recognition from skeleton data. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI'17), pages 4263–4270, 2017.
[36] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv, abs/1312.6199, 2014.
[37] Florian Tramèr, Nicolas Papernot, Ian J. Goodfellow, Dan Boneh, and Patrick D. McDaniel. The space of transferable adversarial examples. arXiv, abs/1704.03453, 2017.
[38] R. Vemulapalli, F. Arrate, and R. Chellappa. Human action recognition by representing 3D skeletons as points in a Lie group. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 588–595, 2014.
[39] He Wang, Edmond S. L. Ho, and Taku Komura. An energy-driven motion planning method for two distant postures. IEEE Transactions on Visualization and Computer Graphics, 21(1):18–30, 2015.
[40] H. Wang, E. S. L. Ho, H. P. H. Shum, and Z. Zhu. Spatio-temporal manifold learning for human motions via long-horizon modeling. IEEE Transactions on Visualization and Computer Graphics, pages 1–1, 2019.
[41] He Wang, Kirill A. Sidorov, Peter Sandilands, and Taku Komura. Harmonic parameterization by electrostatics. ACM Transactions on Graphics (TOG), 32(5):155, 2013.
[42] Jue Wang and Anoop Cherian. Learning discriminative video representations using adversarial perturbations. In Computer Vision – ECCV 2018, pages 716–733, 2018.
[43] Xingxing Wei, Jun Zhu, and Hang Su. Sparse adversarial perturbations for videos. arXiv, abs/1803.02536, 2018.
[44] Xingxing Wei, Jun Zhu, Sha Yuan, and Hang Su. Sparse adversarial perturbations for videos. In AAAI, 2018.
[45] Chong Xiang, Charles Qi, and Bo Li. Generating 3D adversarial point clouds. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 9136–9144, June 2019.
[46] Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. Generating adversarial examples with adversarial networks. In IJCAI, pages 3905–3911, 2018.
[47] Chaowei Xiao, Dawei Yang, Bo Li, Jia Deng, and Mingyan Liu. MeshAdv: Adversarial meshes for visual recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6898–6907, 2019.
[48] Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. Spatially transformed adversarial examples. In International Conference on Learning Representations, 2018.
[49] Han Xu, Yao Ma, Haochen Liu, Debayan Deb, H. S. Liu, Jiliang Tang, and Anil Jain. Adversarial attacks and defenses in images, graphs and text: A review. International Journal of Automation and Computing, 17:151–178, 2020.
[50] Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI, 2018.
[51] Yong Du, W. Wang, and L. Wang. Hierarchical recurrent neural network for skeleton based action recognition. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1110–1118, June 2015.
[52] Xiaohui Zeng, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi-Keung Tang, and Alan L. Yuille. Adversarial attacks beyond the image space. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4302–4311, 2019.
[53] P. Zhang, C. Lan, J. Xing, W. Zeng, J. Xue, and N. Zheng. View adaptive neural networks for high performance skeleton-based human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1963–1978, 2019.
[54] Pengfei Zhang, Cuiling Lan, Wenjun Zeng, Junliang Xing, Jianru Xue, and Nanning Zheng. Semantics-guided neural networks for efficient skeleton-based human action recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[55] Pengfei Zhang, Cuiling Lan, Wenjun Zeng, Junliang Xing, Jianru Xue, and Nanning Zheng. Semantics-guided neural networks for efficient skeleton-based human action recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[56] Xikun Zhang, Chang Xu, and Dacheng Tao. Context aware graph convolution for skeleton-based action recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[57] Chenxiao Zhao, P. Fletcher, Mixue Yu, Yaxin Peng, Guixu Zhang, and Chaomin Shen. The adversarial attack and detection under the Fisher information metric. Proceedings of the AAAI Conference on Artificial Intelligence, 33:5869–5876, 2019.
[58] Tianhang Zheng, Sheng Liu, Changyou Chen, Junsong Yuan, Bangmin Li, and Kui Ren. Towards understanding the adversarial vulnerability of skeleton-based action recognition. arXiv, abs/2005.07151, 2020.
Focused ReAct: Improving ReAct through Reiterate and Early Stop

Shuoqiu Li, Carnegie Mellon University, [email protected]
Han Xu, University of Illinois at Urbana-Champaign, [email protected]
Haipeng Chen, William & Mary, [email protected]

arXiv:2410.10779v1 [cs.AI] 14 Oct 2024

Abstract

Large language models (LLMs) have significantly improved their reasoning and decision-making capabilities, as seen in methods like ReAct. However, despite its effectiveness in tackling complex tasks, ReAct faces two main challenges: losing focus on the original question and becoming stuck in action loops. To address these issues, we introduce Focused ReAct, an enhanced version of the ReAct paradigm that incorporates reiteration and early stop mechanisms. These improvements help the model stay focused on the original query and avoid repetitive behaviors. Experimental results show accuracy gains of 18% to 530% and a runtime reduction of up to 34% compared to the original ReAct method.

1 Introduction

Recent advancements in large language models (LLMs) have enabled more sophisticated techniques for reasoning and decision-making. One such technique, the ReAct framework (Reason+Act), has gained popularity for its dual approach of alternating between reasoning and action (Yao et al., 2023). This combination allows ReAct to excel in handling complex tasks by better adapting to dynamic environments (Wang et al., 2024). Despite its strengths in general question-answering (QA), ReAct sometimes falls short in delivering accurate results, as demonstrated in Figure 1. When confronted with lengthy or intricate questions, the model, paired with the ReAct framework, may lose track of the original question, leading to incorrect responses. Additionally, the ReAct method can get trapped in repetitive action loops, continuously performing the same action without progressing toward a solution. To address these challenges, we introduce Focused ReAct, an enhanced version of the original ReAct method specifically designed for QA tasks.

Figure 1: The yellow highlighted region illustrates where the model loses focus on the original question, while the red highlighted region depicts repeated actions that result in a failed response. The recovered trace reads:

Question: Howard Van Doren Shaw was the designer of which 1917 Progressive Era community?
Thought: 1917 is the year of the community, so I need to search Howard Van Doren Shaw and Progressive Era.
Action: Search [Howard Van Doren Shaw]
Observation: ...
Thought: 1900 is the year of the remodel, so I need to search Marktown next.   [loses focus: yellow region]
Action: Search [Marktown]
Observation: ...
Thought: 1917 is the year of the community, so I need to search Marktown next.
Action: Search [Marktown]   [repeated action: red region]
...
Evaluation: False

2 Methodology

This section outlines the two core features of the Focused ReAct method: reiterate and early stop.

2.1 Reiterate to Solve Context Loss

As indicated in the introduction, the first challenge stems from the extended reasoning process, where the relevance of the original question diminishes as additional reasoning and actions are appended. To resolve this, the concept of reiterate is introduced: the original question is restated at the beginning of each reasoning step in the ReAct cycle. This technique is illustrated in the green-highlighted region of Figure 2. By reiterating the original question at each step, the model continually emphasizes the user's query, preventing it from being overshadowed by the increasingly long context that ReAct tends to create. This simple yet effective strategy mitigates the context-dilution problem illustrated in Figure 1, ensuring that the output remains aligned with the user's request even in complex or multi-step tasks; a sketch of the mechanism follows below.
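The snippet below sketches how the reiteration could be wired into the prompt-construction step; the template strings are illustrative stand-ins, not the authors' exact prompts.

```python
def build_reasoning_prompt(question: str, history: list[str]) -> str:
    # Focused ReAct's "reiterate": restate the original question at the start
    # of every reasoning step so it is never diluted by the growing
    # thought/action/observation history.
    reiteration = f'The original question is: "{question}"\n'
    return reiteration + "\n".join(history) + "\nThought:"
```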
2.2 Early Stop to Prevent Action Repetition

The second challenge, as outlined in the introduction, occurs when the model gets caught in repetitive loops, generating the same response without progressing toward the correct answer. To tackle this, we propose an early stop mechanism. It assumes that by the time a duplicate action occurs, sufficient information has been gathered. When the program detects a repeated action, it triggers a termination request (highlighted in red in Figure 2), instructing the model to generate a final answer based on the existing information. This approach prevents unnecessary repetition and helps the QA process arrive at an accurate response more efficiently; a control-loop sketch follows after Figure 2.

Figure 2: The QA process by Focused ReAct for the same question, which applies reiteration (highlighted in yellow) and early stop (highlighted in red) to resolve the context loss and the repeated action issue. The recovered trace reads:

Question: Howard Van Doren Shaw was the designer of which 1917 Progressive Era community?
Thought: 1917 is the year of the community, so I need to search Howard Van Doren Shaw and Progressive Era.
Action: Search [Howard Van Doren Shaw]
Observation: I found the following information in order to answer "Howard Van Doren Shaw was the designer of which 1917 Progressive Era community?" ...
Thought: Hereby I have provided enough information. Please give out the answer to "Howard Van Doren Shaw was the designer of which 1917 Progressive Era community?" with the format of ...
Action: Finish [Marktown]
Evaluation: True
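Combining both mechanisms gives a compact control loop, sketched below; `agent.step`, `step.observe` and `agent.finalize` are hypothetical interfaces standing in for the model call, tool execution and answer extraction, and `build_reasoning_prompt` is the reiteration helper sketched earlier.

```python
def run_focused_react(agent, question, max_steps=10):
    history, seen_actions = [], set()
    for _ in range(max_steps):
        step = agent.step(build_reasoning_prompt(question, history))
        if step.action in seen_actions:
            # Early stop: a duplicate action suggests that enough information
            # has been gathered, so ask the model to answer immediately.
            return agent.finalize(question, history)
        seen_actions.add(step.action)
        history += [f"Thought: {step.thought}",
                    f"Action: {step.action}",
                    f"Observation: {step.observe()}"]
    return agent.finalize(question, history)
```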
3 Experimentation

We evaluate Focused ReAct against the ReAct baseline using the Gemma 2 2B (Team et al., 2024), Phi-3.5-mini 3.8B (Abdin et al., 2024) and Llama 3.1 8B (Dubey et al., 2024) models. The implementation uses the PyTorch and Transformers libraries¹, with experiments conducted on a single NVIDIA L4 GPU with 24GB of memory. The dataset consists of 150 QA tasks randomly selected from HotPotQA (Yang et al., 2018). We measure accuracy as the ratio of correctly answered tasks to the total number of tasks, while runtime is recorded for the completion of each task.

¹Our code implementation and experiments are available at https://github.com/wmd3i/Focused-ReAct.

Table 1 presents the accuracy comparison between the vanilla ReAct and Focused ReAct across the Gemma 2, Phi-3.5, and Llama 3.1 models. Focused ReAct demonstrates an 18%-530% improvement in accuracy.

Table 1: Accuracy Comparison of ReAct vs. Focused ReAct
| Model             | ReAct | Focused ReAct | abs./rel. diff |
| Gemma 2 2B        | 2.0%  | 12.6%         | +10.6 / 530%   |
| Phi-3.5-mini 3.8B | 22.0% | 26.0%         | +4.0 / 18%     |
| Llama 3.1 8B      | 14.0% | 23.3%         | +9.3 / 66%     |

Table 2 summarizes the average runtime and standard deviation (std) for both the original ReAct and Focused ReAct methods. Models with fewer parameters show a 34% reduction in runtime, while models with larger parameter counts exhibit no significant decrease. This discrepancy may be attributed to the fact that smaller models, with weaker reasoning capabilities, benefit more from the Focused ReAct optimizations; larger models are more robust at maintaining context and performing deeper reasoning, which may reduce the relative impact of Focused ReAct's efficiency gains, so the runtime benefits are less pronounced than for smaller models.

Table 2: Runtime Comparison (Average and Std) for ReAct vs. Focused ReAct
| Model             | ReAct        | Focused ReAct | abs./rel. diff |
| Gemma 2 2B        | 11.68±2.66s  | 7.68±2.41s    | -4.0 / 34%     |
| Phi-3.5-mini 3.8B | 23.23±8.42s  | 22.50±11.19s  | -0.73 / 3%     |
| Llama 3.1 8B      | 24.10±23.48s | 23.12±25.35s  | -0.98 / 4%     |

4 Conclusion

This paper identifies two common issues with the ReAct method in QA: losing focus on the original question during extended reasoning and becoming stuck in repetitive action loops. To overcome these problems, we propose Focused ReAct, which incorporates reiteration and early stop to improve upon the ReAct framework. Compared to the original ReAct method, the new approach achieves accuracy improvements between 18% and 530%, along with a reduction in runtime of up to 34%. For future work, we plan to extend Focused ReAct to a broader range of tasks and scenarios, evaluate its generalizability and robustness, and explore techniques to further accelerate its performance (Xu et al., 2024).

References

Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. 2024. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118.

Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2024. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345.

Han Xu, Jingyang Ye, Yutong Li, and Haipeng Chen. 2024. Can speculative sampling accelerate ReAct without compromising reasoning quality? In The Second Tiny Papers Track at ICLR 2024.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations.
Preprint

UNLEASHING REASONING CAPABILITY OF LLMS VIA SCALABLE QUESTION SYNTHESIS FROM SCRATCH

Yuyang Ding, Xinyu Shi, Xiaobo Liang, Juntao Li*, Qiaoming Zhu, Min Zhang
Soochow University
{yyding23,xyshi02,xbliang3}@stu.suda.edu.cn
{ljt,qmzhu,minzhang}@suda.edu.cn

arXiv:2410.18693v1 [cs.CL] 24 Oct 2024

ABSTRACT

The availability of high-quality data is one of the most important factors in improving the reasoning capability of LLMs. Existing works have demonstrated the effectiveness of creating more instruction data from seed questions or knowledge bases. Recent research indicates that continually scaling up data synthesis from strong models (e.g., GPT-4) can further elicit reasoning performance. Though promising, the open-source community still lacks high-quality data at scale and scalable data synthesis methods with affordable costs. To address this, we introduce ScaleQuest, a scalable and novel data synthesis method that utilizes "small-size" (e.g., 7B) open-source models to generate questions from scratch, without the need for seed data with complex augmentation constraints. With the efficient ScaleQuest, we automatically constructed a mathematical reasoning dataset consisting of 1 million problem-solution pairs, which are more effective than existing open-source datasets. It can universally increase the performance of mainstream open-source models (i.e., Mistral, Llama3, DeepSeekMath, and Qwen2-Math), achieving 29.2% to 46.4% gains on MATH. Notably, simply fine-tuning the Qwen2-Math-7B-Base model with our dataset can even surpass Qwen2-Math-7B-Instruct, a strong and well-aligned model trained on closed-source data, as well as proprietary models such as GPT-4-Turbo and Claude-3.5 Sonnet.¹

Figure 1: Left: Results of different models on MATH (7B-scale models vs. proprietary models and 70B-scale models), where -ScaleQuest denotes ours. Right: Results of Llama3-8B fine-tuned on publicly available datasets constructed by different methods.

*Juntao Li is the corresponding author.
¹Code, data, and models are publicly available: https://github.com/yyDing1/ScaleQuest.

1 INTRODUCTION

How to improve the reasoning capabilities of Large Language Models (LLMs) has attracted significant attention. The success of recent advanced models, such as OpenAI o1 and Claude-3.5, heavily depends on access to extensive, diverse, and high-quality reasoning datasets. However, the proprietary nature of the data presents a significant barrier to the open-source community. Recent works have highlighted data synthesis as a promising approach (Ntoutsi et al., 2020) to address data scarcity for instruction tuning (Inan et al., 2023). As recent works have disclosed that crafting the right questions is crucial for eliciting the reasoning capabilities of LLMs (Yu et al., 2023a; Shah et al., 2024), the core of reasoning data synthesis lies in creating large-scale and novel questions.

Previous efforts in reasoning data synthesis have demonstrated the effectiveness of leveraging powerful language models to generate instructions. We categorize these approaches into two types: question-driven approaches and knowledge-driven approaches. Question-driven methods include question rephrasing (Yu et al., 2023a), evol-instruct (Xu et al., 2023; Luo et al., 2023; Zeng et al., 2024), question back-translation (Lu et al., 2024), or providing few-shot examples (Mitra et al., 2024).
These methods are limited in data diversity, as the generated problems closely resemble the seed questions, with only minor modifications such as added conditions or numerical changes. This lack of diversity hampers their scalability potential. To improve question diversity, recent knowledge-driven works (Huang et al., 2024b) scale question synthesis by constructing knowledge bases (Li et al., 2024b) or concept graphs (Tang et al., 2024) and sampling key points (Huang et al., 2024a) from them to generate new questions. Nevertheless, both types of approaches commonly rely on strong models, like GPT-4, to synthesize new questions, and the high API costs make it impractical to generate large-scale data. As a result, despite these advancements, the open-source community still faces a shortage of high-quality data at scale and of cost-effective synthesis methods.

To meet this requirement, we explore a scalable, low-cost method for data synthesis. We observe that using problem-solving models to directly synthesize reasoning questions, as explored in Yu et al. (2023b) and Xu et al. (2024), falls short in synthesizing reasoning data, as shown in Figure 1 (see the Llama3-8B-Magpie results). Accordingly, we propose a novel, scalable, and cost-effective data synthesis method, ScaleQuest, which first introduces a two-stage question-tuning process consisting of Question Fine-Tuning (QFT) and Question Preference Optimization (QPO) to unlock the question generation capability of problem-solving models. Once fine-tuned, these models can generate diverse questions by sampling from a broad search space, without the need for additional seed questions or knowledge constraints. The generated questions can be further refined through a filtering process focusing on language clarity, solvability, and appropriate difficulty. Moreover, we introduce an extra reward-based filtering strategy to select high-quality responses.

We generated data based on two lightweight, open-source models, DeepSeekMath-7B-RL (Shao et al., 2024) and Qwen2-Math-7B-Instruct (Yang et al., 2024a), producing a final dataset of 1 million question-answer pairs. As shown in Figure 1, our synthetic dataset boosts performance by 29.2% to 46.4% across four major open-source models: Mistral-7B (Jiang et al., 2023), Llama3-8B (Dubey et al., 2024), DeepSeekMath-7B (Shao et al., 2024), and Qwen2-Math-7B (Yang et al., 2024a). Compared with other publicly available datasets such as MetaMath (Yu et al., 2023a), DART-Math (Tong et al., 2024), and NuminaMath (Li et al., 2024c), our approach demonstrates great scalability in both in-domain and out-of-domain evaluation. In terms of in-domain evaluation, our method outperforms existing high-quality open-source datasets, achieving better results with the same amount of data. For out-of-domain evaluation, the performance of our synthetic dataset continues to show promising trends as the volume of training data increases, indicating significant potential for further improvements through ongoing data scaling.

2 SCALEQUEST: SCALING QUESTION SYNTHESIS FROM SCRATCH

In this section, we first explain the motivation and process of our question generation method (section 2.1). Then, we introduce how to train a question generator via Question Fine-Tuning (section 2.2) and Question Preference Optimization (section 2.3). Next, we use the question generator to generate math questions, followed by a filtering process (section 2.4). Finally, we describe the response generation process (section 2.5). The overview of our method is illustrated in Figure 2; a compressed pseudocode view of the pipeline follows below.

Figure 2: Overview of our ScaleQuest method.
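Before the individual stages, the wiring-only sketch below shows how they compose; every callable argument is a placeholder for one stage described in sections 2.1-2.5, not actual ScaleQuest code.

```python
def scalequest_pipeline(train_generator, sample_questions, filters, pick_best_response):
    # Wiring-only sketch of Section 2; each callable stands in for one stage.
    def run(solver_models, seed_problems):
        # QFT + QPO turn problem solvers into question generators (2.2, 2.3).
        generators = [train_generator(m, seed_problems) for m in solver_models]
        # Sample new questions from scratch via prefix prompting (2.1).
        questions = [q for g in generators for q in sample_questions(g)]
        # Language, solvability and difficulty filtering (2.4).
        for f in filters:
            questions = f(questions)
        # Best-of-n response selection by reward model score (2.5).
        return [(q, pick_best_response(q)) for q in questions]
    return run
```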
2.1 QUESTION GENERATION FROM SCRATCH

The question generation process involves providing only a few prefix tokens from an instruction template (e.g., "<|begin▁of▁sentence|>User:") in question generation. A fine-tuned causal language model, which has learned to generate responses based on question-answer pairs (e.g., "<|begin▁of▁sentence|>User: {Question} Assistant: {Response}"), could potentially be leveraged to generate questions directly (Xu et al., 2024). This is because, during instruction tuning, the model is trained using a causal mask, where each token only attends to preceding tokens, ensuring that the hidden states evolve based on past context without future-token influence. However, during instruction tuning, the actual loss is calculated only on the response:

$$L = -\sum_{i=1}^{n} \log P(y_i \mid X, y_{<i}), \tag{1}$$

where $X = \{x_1, x_2, \ldots, x_m\}$ denotes the question and $Y = \{y_1, y_2, \ldots, y_n\}$ denotes the response. Since $P(x_i \mid x_{<i})$ is inherently modeled by the architecture but never directly supervised by this loss, we need to activate the model's capability for question generation.

2.2 QUESTION FINE-TUNING (QFT)

To activate the model's question generation capability, we first perform Question Fine-Tuning (QFT), where we train the problem-solving model on a small set of problems. To ensure that the generator stops after producing a question and does not continue generating a response, we add an end-of-sentence token at the end of each question. We use approximately 15K problems (without solutions), obtained by mixing the training sets of the GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021) datasets, as training samples, and train DeepSeekMath-7B-RL (Shao et al., 2024) and Qwen2-Math-7B-Instruct (Yang et al., 2024a) on them. The purpose of utilizing these problems is to activate the model's question-generation capability rather than to make the model memorize them. A sketch of the resulting prefix-based sampling is given below.
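A minimal sketch of this sampling step with the Hugging Face Transformers API is given below; the model path, decoding hyperparameters and the exact prefix string are our assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def sample_questions(model_name="path/to/qft-model", n_questions=8):
    # After QFT, questions are sampled by prompting with only the chat-template
    # prefix; the EOS token appended during QFT stops generation at the end of
    # the question, so no seed question is needed.
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    prefix = tok("<|begin_of_sentence|>User: ", return_tensors="pt")
    out = model.generate(**prefix, do_sample=True, temperature=1.0, top_p=0.95,
                         max_new_tokens=256, num_return_sequences=n_questions)
    return [tok.decode(o, skip_special_tokens=True) for o in out]
```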
(exclude those divisible by 30.)......unsolvable!DiversitySimple!SolvabilityDifficultyTraining Question GeneratorsQuestion GenerationDiversitySolvabilityDifficultyFinal Data Construction 2M Synthetic Questions ScaleQuest Datasets (1M) Difficulty Sampling Solvability Filtering Language Filtering 1M Hight-Quality Questions Answer Generation Reward Filtering0.00.20.40.60.81.0024681012DensityGSM8KMATH0.00.20.40.60.81.0Difficulty Score0.00.51.01.52.02.5DensityQwen2-QFT-GSM8KQwen2-QFT-MATH Preprint question distribution, we used a difficulty classifier, which maps a question into a difficulty score (details in Section 2.4). We performed QFT based on Qwen2-Math-7B (Yang et al., 2024a), then used the two QFT models, Qwen2-QFT-GSM8K and Qwen2-QFT-MATH, to synthesize 10K ques- tions. The difficulty distribution of these four datasets is shown in Figure 3. We found that the gen- erated questions separately differed from both GSM8K and MATH, yet they both converged toward the same distribution. Additionally, the QFT model, trained on English questions, demonstrated the ability to generate a substantial number of questions in other languages. Both phenomena sug- gest that the QFT process enhances the model’s question-generation capabilities without leading to overfitting the training data. 2.3 QUESTION PREFERENCE OPTIMIZATION (QPO) The model is able to generate meaningful and di- verse questions after QFT, but the quality is still not high enough, as shown in Figure 2. This is reflected in two aspects: (1) solvability: the math problem should have appropriate constraints and correct an- swers, and (2) difficulty: the model needs to learn from more challenging problems, yet some of the generated questions are still too simple. To address these two aspects, we applied Question Preference Optimization (QPO). We first used the model after QFT to generate 10K questions. Then, we optimized these samples using an external LLM, focusing primarily on solvability and difficulty. We found that simultaneously opti- mizing both posed a challenge for the LLMs. There- fore, for each sample, we randomly selected one of the two optimization directions, prioritizing either solvability or difficulty. The optimization prompts can be found in Figure 9 and 10. The optimized questions, denoted as yw, are treated as preferred data, while the original questions before optimization, denoted as yl, are considered dispreferred data. We modified the loss for Direct Preference Optimization (DPO) (Rafailov et al., 2024) formu- lation to fit our approach: Figure 4: The solvability and difficulty of the raw questions generated by the QFT model and the optimized ones. LQPO(πθ; πref) = −E(yw,yl)∼D (cid:20) (cid:18) log σ β log πθ(yw) πref(yw) − β log (cid:19)(cid:21) . πθ(yl) πref(yl) (2) The question optimization process placed significant demands on the model’s ability to follow complex instructions. We experimented with two question optimization models: Qwen2-Math-7B- Instruct and GPT-4o-mini. To evaluate improvements in solvability and difficulty, we used GPT-4o, with the prompts for this evaluation provided in Figure 11 and 12. The results are shown in Figure 4. In terms of solvability, Qwen2-Math-7B-Instruct proved inadequate for this task, as the optimized questions resulted in decreased solvability. A possible reason for this is the model’s insufficient ability to follow instructions accurately, resulting in many answers that fail to meet the specified op- timization constraints. 
2.4 QUESTION FILTERING

After the QFT and QPO phases, we obtained two question generators: DeepSeekMath-QGen and Qwen2-Math-QGen. The generated questions still contain some minor issues, primarily related to language, solvability, and difficulty. To address these challenges, we applied the following filtering steps:

Language Filtering. The question generator models still produce a substantial number of math questions in other languages, accounting for approximately 20%. Since our focus is on English math questions, we removed non-English questions by identifying questions containing non-English characters and filtering out those samples.

Solvability Filtering. Although QPO effectively enhances the solvability of generated questions, some questions remain nonsensical. This is primarily due to (1) poorly constrained questions, where missing conditions, redundant conditions, or logical inconsistencies occur, and (2) questions that do not yield meaningful outcomes (e.g., answers involving the number of people should result in a non-negative integer). To filter out such samples, we used Qwen2-Math-7B-Instruct to evaluate whether the question is meaningful and whether the conditions are sufficient. The prompts used for the solvability check are provided in Figure 11.

Difficulty Sampling. We measure the difficulty of a question using the fail rate (Tong et al., 2024): the proportion of incorrect responses when sampling n responses for a given question. This metric aligns with the intuition that harder questions tend to result in fewer correct responses. Following Tong et al. (2024), we used DeepSeekMath-7B-RL as the sampling model to evaluate the difficulty of each question in the training sets of GSM8K and MATH, obtaining the fail rate for each question as its difficulty score. We then used this data to train a difficulty scorer. Specifically, we built upon DeepSeekMath-7B-Base and added a classification head on top of the model's hidden state. The difficulty score d is computed and optimized as:

d = W h_l + b,    L = \frac{1}{N} \sum_{i=1}^{N} (y_i - d_i)^2,    (3)

where W and b are the weights and bias of the classification head, h_l represents the last hidden state of the sequence, and d_i is the predicted difficulty score for the i-th question. The loss function L is the mean squared error (MSE), where y_i represents the true difficulty score for the i-th question. We then used the scorer to predict the difficulty of each synthetic question and sampled based on the question's difficulty. Specifically, we filtered out a portion of the questions generated by DeepSeekMath-QGen that were overly simple. In contrast, the difficulty distribution of Qwen2-Math-QGen was more balanced, so no sampling was necessary.
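Eq. (3) amounts to a linear regression head on the backbone's final hidden state. The sketch below mirrors that setup with random stand-in features, since the paper does not show the actual DeepSeekMath-7B-Base integration; the hidden size is an assumption.

```python
import torch
import torch.nn as nn

class DifficultyScorer(nn.Module):
    """Linear head d = W h_l + b from Eq. (3), trained with MSE against fail rates."""

    def __init__(self, hidden_size: int = 4096):  # assumed hidden size
        super().__init__()
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, last_hidden_state: torch.Tensor) -> torch.Tensor:
        # last_hidden_state: (batch, hidden_size) features of the sequence's final token.
        return self.head(last_hidden_state).squeeze(-1)

scorer = DifficultyScorer()
h = torch.randn(8, 4096)      # stand-in for backbone hidden states
fail_rate = torch.rand(8)     # ground-truth difficulty: fail rate in [0, 1]
loss = nn.functional.mse_loss(scorer(h), fail_rate)
loss.backward()
print(float(loss))
```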
2.5 RESPONSE GENERATION WITH REWARD FILTERING

Prior efforts to guarantee the quality of solutions include two approaches: (1) rejection sampling (Yuan et al., 2023), where large language models (LLMs) generate multiple responses (specifically, reasoning paths) for each instruction, and only reasoning paths that lead to the correct answer are preserved as solutions (Tong et al., 2024); and (2) when the correct answer is unavailable, majority voting (Huang et al., 2024a), which selects the answer that appears most frequently across multiple reasoning paths and retains those paths as the solutions. We instead use the reward model score as a metric for evaluating the quality of responses, considering its broader applicability: there is often no single correct answer in other reasoning tasks such as code generation and tool planning. Specifically, for each question, we generate 5 solutions and select the solution with the highest reward model score as the preferred solution. In our experiments, we use InternLM2-7B-Reward (Cai et al., 2024) as our reward model.
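Operationally, this reward-filtering step is just an argmax over a handful of sampled solutions; in the sketch below, reward_fn is a hypothetical placeholder for a call to the actual reward model.

```python
from typing import Callable, List, Tuple

def pick_best_solution(question: str,
                       solutions: List[str],
                       reward_fn: Callable[[str, str], float]) -> Tuple[str, float]:
    """Best-of-n filtering: return the solution with the highest reward-model score.

    reward_fn(question, solution) -> float stands in for the reward model
    (InternLM2-7B-Reward in the paper); its interface here is an assumption.
    """
    scored = [(reward_fn(question, s), s) for s in solutions]
    best_score, best_solution = max(scored, key=lambda t: t[0])
    return best_solution, best_score

# Toy reward that prefers responses ending in an explicit boxed answer.
toy_reward = lambda q, s: 1.0 if "\\boxed{" in s else 0.0
candidates = ["... so the result is 7.", "... hence the answer is \\boxed{7}."]
print(pick_best_solution("What is 2 + 5?", candidates, toy_reward))
```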
3 EXPERIMENT

3.1 EXPERIMENTAL SETUP

Training Problem Designers. Our question synthesis process relies on two problem designer models, DeepSeekMath-QGen and Qwen2-Math-QGen, which were trained using QFT (Section 2.2) and QPO (Section 2.3) based on DeepSeekMath-7B-RL (Shao et al., 2024) and Qwen2-Math-7B-Instruct (Yang et al., 2024a), respectively. During the QFT stage, both models are trained on a mixed training subset of GSM8K and MATH problems, containing a total of 15K problems. We trained for only 1 epoch, considering that training for more epochs might cause the models to overfit the training problems and negatively impact the diversity of generated questions. We also used sequence packing (Krell et al., 2021) to accelerate training. In the QPO stage, we use 10K preference pairs for training, with a learning rate of 5e-7 and a batch size of 128.

Table 1: Main results on four mathematical reasoning benchmarks. Bold marks the best score within the respective base model. The baselines use different synthesis models, such as GPT-4, GPT-4-Turbo, GPT-4o, DeepSeekMath, and Qwen2-Math; if multiple models are used, only the latest released one is marked. More details concerning these datasets are shown in Table 5.

Model | Synthesis Model | GSM8K | MATH | College Math | Olympiad Bench | Average

Teacher Models in Data Synthesis
GPT-4-0314 | - | 94.7 | 52.6 | 24.4 | - | -
GPT-4-Turbo-24-04-09 | - | 94.5 | 73.4 | - | - | -
GPT-4o-2024-08-06 | - | 92.9 | 81.1 | 50.2 | 43.3 | 66.9
DeepSeekMath-7B-RL | - | 88.2 | 52.4 | 41.4 | 19.0 | 49.3
Qwen2-Math-7B-Instruct | - | 89.5 | 73.1 | 50.5 | 37.8 | 62.7

General Base Model
Mistral-7B-WizardMath | GPT-4 | 81.9 | 33.3 | 21.5 | 8.6 | 36.3
Mistral-7B-MetaMath | GPT-3.5 | 77.7 | 28.2 | 19.1 | 5.8 | 32.7
Mistral-7B-MMIQC | GPT-4 | 75.7 | 36.3 | 24.8 | 10.8 | 36.9
Mistral-7B-MathScale | GPT-3.5 | 74.8 | 35.2 | 21.8 | - | -
Mistral-7B-KPMath | GPT-4 | 82.1 | 46.8 | - | - | -
Mistral-7B-DART-Math | DSMath-7B-RL | 81.1 | 45.5 | 29.4 | 14.7 | 42.7
Mistral-7B-NuminaMath | GPT-4o | 82.1 | 49.4 | 33.8 | 19.4 | 46.2
Mistral-7B-ScaleQuest | Qwen2-Math-7B-Ins | 88.5 | 62.9 | 43.5 | 26.8 | 55.4
Llama3-8B-MetaMath | GPT-3.5 | 77.3 | 32.5 | 20.6 | 5.5 | 34.0
Llama3-8B-MMIQC | GPT-4 | 77.6 | 39.5 | 29.5 | 9.6 | 39.1
Llama3-8B-DART-Math | DSMath-7B-RL | 81.1 | 46.6 | 28.8 | 14.5 | 42.8
Llama3-8B-NuminaMath | GPT-4o | 77.2 | 50.7 | 33.2 | 17.8 | 44.7
Llama3-8B-ScaleQuest | Qwen2-Math-7B-Ins | 87.9 | 64.4 | 42.8 | 25.3 | 55.1

Math-Specialized Base Model
DeepSeekMath-7B-Instruct | - | 82.7 | 46.9 | 37.1 | 14.2 | 45.2
DeepSeekMath-7B-MMIQC | GPT-4 | 79.0 | 45.3 | 35.3 | 13.0 | 43.2
DeepSeekMath-7B-KPMath-Plus | GPT-4 | 83.9 | 48.8 | - | - | -
DeepSeekMath-7B-DART-Math | DSMath-7B-RL | 86.8 | 53.6 | 40.7 | 21.7 | 50.7
DeepSeekMath-7B-Numina-Math | GPT-4o | 75.4 | 55.2 | 36.9 | 19.9 | 46.9
DeepSeekMath-7B-ScaleQuest | Qwen2-Math-7B-Ins | 89.5 | 66.6 | 47.7 | 29.9 | 58.4
Qwen2-Math-7B-MetaMath | GPT-3.5 | 83.9 | 49.5 | 39.9 | 17.9 | 47.8
Qwen2-Math-7B-DART-Math | DSMath-7B-RL | 88.6 | 58.8 | 45.4 | 23.1 | 54.0
Qwen2-Math-7B-Numina-Math | GPT-4o | 84.6 | 65.6 | 45.5 | 33.6 | 57.3
Qwen2-Math-7B-ScaleQuest | Qwen2-Math-7B-Ins | 89.7 | 73.4 | 50.0 | 38.5 | 62.9

Question Generation. The two question generation models were then utilized to generate a total of 2 million questions, 1 million from each model. During this process, we set the maximum generation length to 512, the temperature to 1.0, and top-p to 0.99. To ensure quality, we applied a question filtering pipeline (Section 2.4) involving language filtering, solvability filtering, and difficulty sampling. This process refined the dataset, leaving approximately 1M questions to form the final question pool: 400K from DeepSeekMath-QGen and 600K from Qwen2-Math-QGen.

Response Generation. Based on the problems, we synthesized responses (Section 2.5) using Qwen2-Math-7B-Instruct (Yang et al., 2024a). In this process, we set the maximum generation length to 2048, with a temperature of 0.7 and top-p of 0.95. We use chain-of-thought prompting (Wei et al., 2022) to synthesize solutions, vLLM (Kwon et al., 2023) to accelerate generation, and Ray (Moritz et al., 2018) to deploy distributed inference. For each problem, we sampled 5 solutions and selected the one with the highest reward score as the final response. The final dataset consists of 1 million problem-solution pairs.

Instruction Tuning. We conducted instruction tuning on the synthetic problems and solutions using two general base models, Mistral-7B (Jiang et al., 2023) and Llama3-8B (Dubey et al., 2024), as well as two math-specialized base models, DeepSeekMath-7B (Shao et al., 2024) and Qwen2-Math-7B (Yang et al., 2024a). All models were fine-tuned for 3 epochs in our experiments unless specified otherwise. We used a linear learning rate schedule with a 3% warm-up ratio, reaching a peak of 5e-5 for Llama3 and DeepSeekMath and 1e-5 for the other models, followed by cosine decay to zero.
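The schedule just described (linear warm-up over the first 3% of steps to the peak learning rate, then cosine decay to zero) can be written as a single multiplier function; the optimizer, parameter, and step count below are placeholders.

```python
import math
import torch

def warmup_cosine(step: int, total_steps: int, warmup_ratio: float = 0.03) -> float:
    """LR multiplier: linear warm-up for the first 3% of steps, cosine decay to zero after."""
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        return step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

total_steps = 1000  # placeholder; depends on dataset size, batch size, and epochs
params = [torch.zeros(1, requires_grad=True)]   # placeholder parameters
optimizer = torch.optim.AdamW(params, lr=5e-5)  # peak LR used for Llama3/DeepSeekMath
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: warmup_cosine(step, total_steps))
```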
Evaluation and Metrics. We assessed the fine-tuned models' performance across four datasets of increasing difficulty. Along with the widely used GSM8K (elementary level) and MATH (competition level), we included two more challenging benchmarks: College Math (Yuan et al., 2023) (college level) and Olympiad Bench (He et al., 2024) (Olympiad level). For evaluation, we employed the script from Tong et al. (2024) to extract final answers and determine correctness by comparing answer equivalency. The generated outputs were all in the form of natural-language Chain-of-Thought (CoT) reasoning (Wei et al., 2022) produced through greedy decoding, with no tool integration, and we report zero-shot pass@1 accuracy.

Compared Baselines. The main point of comparison is data synthesis methods, including: (1) WizardMath (Luo et al., 2023), which proposes a reinforced Evol-Instruct method; (2) MetaMath (Yu et al., 2023a), which introduces three types of question bootstrapping; (3) MMIQC (Liu & Yao, 2024), which proposes an iterative question composing method; (4) Orca-Math (Mitra et al., 2024), which augments existing datasets using an Agent-Instruct method; (5) KPMath (Huang et al., 2024a), which utilizes inherent topics and key points to synthesize problems; and (6) MathScale (Tang et al., 2024), which builds a concept graph to generate new questions. In addition, we also include other large math corpora: (7) DART-Math (Tong et al., 2024), which enhances the response generation process through difficulty-guided rejection sampling; and (8) Numina-Math (Li et al., 2024c), which collects a large corpus by combining existing synthetic data with real-world datasets. More details of these datasets are shown in Table 5. We found that different scripts yielded varying evaluation results. To ensure consistency, we evaluated all released models using the same evaluation scripts. For methods without available results or released models, we retrained the models using their publicly available data.
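As a rough illustration of the answer-matching procedure, the sketch below is a heavily simplified stand-in for the evaluation script of Tong et al. (2024), which performs much richer symbolic equivalence checking.

```python
import re
from typing import Optional

def extract_answer(text: str) -> Optional[str]:
    """Pull the final answer from a CoT response: prefer \\boxed{...}, else 'The answer is ...'."""
    boxed = re.findall(r"\\boxed\{([^{}]*)\}", text)
    if boxed:
        return boxed[-1].strip()
    tail = re.search(r"[Tt]he answer is\s*:?\s*([^\n.]+)", text)
    return tail.group(1).strip() if tail else None

def pass_at_1(predictions, references) -> float:
    """Zero-shot pass@1 under greedy decoding: fraction of normalized exact matches."""
    norm = lambda s: (s or "").replace(" ", "").rstrip(".").lower()
    hits = sum(norm(extract_answer(p)) == norm(r) for p, r in zip(predictions, references))
    return hits / max(1, len(references))

print(pass_at_1(["... so the answer is \\boxed{31/14}."], ["31/14"]))  # prints 1.0
```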
3.2 MAIN RESULTS

ScaleQuest significantly outperforms others. Table 1 presents the results. ScaleQuest significantly outperforms previous synthetic methods, with average performance improvements ranging from 5.6% to 11.5% over the prior state-of-the-art (SoTA) on both general base models and math-specialized foundation models. Qwen2-Math-7B-ScaleQuest achieved a zero-shot pass@1 accuracy of 73.4 on the MATH benchmark, matching the performance of GPT-4-Turbo. For out-of-domain tasks, Qwen2-Math-7B-ScaleQuest outperformed its teacher model, Qwen2-Math-7B-Instruct, with scores of 89.7 on the GSM8K benchmark, 73.4 on the MATH benchmark, and 38.5 on the Olympiad benchmark. It is important to highlight that Qwen2-Math-7B-Instruct has undergone Group Relative Policy Optimization (GRPO) (Shao et al., 2024), utilizing the powerful reward model Qwen2-Math-RM-72B (Yang et al., 2024a), while our model is only an instruction-tuned version. To ensure a fair comparison with other baselines, we have only applied supervised fine-tuning (SFT) in this work, leaving the preference tuning process for future work.

ScaleQuest scales well with increasing data. We also explored the scalability of our dataset. We used our constructed dataset along with publicly available datasets, including MetaMath (Yu et al., 2023a), DART-Math (Tong et al., 2024), and Numina-Math (Li et al., 2024c). We trained the model using Llama3-8B and observed how its performance scaled with increasing data size. The results are presented in Figure 1. For the in-domain evaluation (MATH), our method demonstrates high data efficiency, achieving superior results with the same amount of data. In out-of-domain evaluations (Olympiad Bench), it also shows strong scalability, continuing to improve even as other datasets reach their limits. A limited question set leads to constrained improvements in model performance, as demonstrated by the results of DART-Math, which relies on a small number of questions and generates numerous correct answers through rejection sampling. Limited questions face a scalability ceiling, as the lack of diversity in the question set restricts further performance growth. Our results further demonstrate that diverse questions support sustained performance growth, emphasizing the need for broader and more varied question generation.

3.3 ABLATION STUDY

Ablation on each sub-method. To validate the effectiveness of each of our sub-methods, including QFT, QPO, and reward filtering, we conducted an ablation study. We evaluated the quality of the questions generated by the models across three dimensions: solvability, difficulty, and performance in instruction tuning. To assess solvability and difficulty, we used GPT-4o-mini as the evaluation model, with the prompts provided in Figures 11 and 12. For difficulty evaluation, we calculated the dataset's average difficulty score based on ratings for each question: "very easy" is rated as 20 points, "easy" as 40 points, "medium" as 60 points, "hard" as 80 points, and "very hard" as 100 points. The results are shown in Figure 5.

Figure 5: A comparison of the synthetic dataset generated by the raw instruct model, the model after QFT, the model after QPO, and the final dataset after applying reward filtering. The evaluation covers question solvability, difficulty, and instruction tuning effectiveness on Llama3-8B.

The "raw model" refers to using the instruct model to directly generate instructions and responses, as done in Xu et al. (2024). To ensure fairness, we also generated 1M question-response pairs using their method based on Qwen2-Math-7B-Instruct, which were used to train Llama3-8B. After applying QFT and QPO, the model's performance improved across all three evaluation dimensions, demonstrating the effectiveness of our approach. Furthermore, by filtering for solvable questions and applying reward filtering to the responses, the quality of our dataset increased, resulting in significant improvements across all four evaluation benchmarks.

Question matters for data synthesis. To directly compare the question quality of our constructed data with other open-source datasets, we used the same model, Qwen2-Math-7B-Instruct, to generate responses and fine-tuned DeepSeekMath-7B on the resulting synthetic datasets. As shown in Table 2, using the same response generation method, our model outperformed other synthetic datasets like MetaMath and OrcaMath, highlighting the high quality of our questions. NuminaMath also demonstrated competitive performance, largely because many of its questions are drawn from real-world scenarios. This further highlights that question quality is crucial for synthetic data.

Multiple question generators enhance data diversity. We use two models as question generators, DSMath-QGen and Qwen2-Math-QGen, which are based on DeepSeekMath (Shao et al., 2024) and Qwen2-Math (Yang et al., 2024a), respectively. To explore the impact of using multiple question generators, we compared the effects of using data synthesized by a single generator versus a mix of data from both. We fixed the total dataset size at 400K and used it to fine-tune Mistral-7B. As shown in Table 3, we found that the mixed data outperformed the data generated by either single generator. A possible explanation for this improvement is the increased data diversity. In fact, we observed that DSMath-QGen tends to generate simpler, more real-world-oriented questions, while Qwen2-Math-QGen produces more challenging, theory-driven ones. Recognizing the potential of multiple question generators, we plan to incorporate more of them as part of our future work.
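For concreteness, the mixed 400K set can be reproduced with a few lines. The equal 200K/200K split and the fixed seed below are our assumptions, since the paper only states that the total size was fixed at 400K.

```python
import random

def mix_question_pools(dsmath_pool, qwen2_pool, total=400_000, seed=0):
    """Build a mixed training set by uniform subsampling from both generators' pools."""
    rng = random.Random(seed)
    half = total // 2  # assumed 50/50 split between the two generators
    mixed = rng.sample(dsmath_pool, half) + rng.sample(qwen2_pool, total - half)
    rng.shuffle(mixed)
    return mixed
```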
3.4 COST ANALYSIS

The data synthesis process was conducted on a server with 8 A100-40G-PCIe GPUs. We summarize our overall costs in Table 4. Generating 1 million data samples required only 522.9 GPU hours (approximately 2.7 days on an 8-GPU server), with an estimated cost of $680.8 for cloud server rental (pricing from https://lambdalabs.com/service/gpu-cloud). This is only about 10% of the cost of generating the same data using GPT-4o, demonstrating that our data generation method is significantly more cost-effective.

Table 2: We directly compared the question quality of different open-source datasets. To ensure consistency, all responses were generated using Qwen2-Math-7B-Instruct.

Questions Source | Response Synthesis Model | GSM8K | MATH | College Math | Olympiad Bench | Average
MetaMath | Qwen2-Math-7B-Instruct | 84.5 | 53.8 | 40.1 | 22.1 | 50.1
OrcaMath | Qwen2-Math-7B-Instruct | 84.2 | 53.7 | 40.5 | 23.7 | 50.5
NuminaMath | Qwen2-Math-7B-Instruct | 86.0 | 65.9 | 46.1 | 30.2 | 57.1
ScaleQuest | Qwen2-Math-7B-Instruct | 89.5 | 66.6 | 47.7 | 29.9 | 58.4

Table 3: The performance of Mistral-7B-v0.1 fine-tuned on ScaleQuest-DSMath, ScaleQuest-Qwen2-Math, and a mix of both. In this setup, the instructions for ScaleQuest-DSMath and ScaleQuest-Qwen2-Math were generated by DSMath-QGen and Qwen2-Math-QGen, respectively. We fixed the training data size at 400K and found that the mixed data resulted in the greatest improvement.

Synthetic Dataset | # Samples | GSM8K | MATH | College Math | Olympiad Bench | Average
ScaleQuest-DSMath | 400K | 87.6 | 52.2 | 39.8 | 19.4 | 49.8
ScaleQuest-Qwen2-Math | 400K | 86.8 | 56.1 | 39.6 | 18.7 | 50.3
Mixed | 400K | 87.8 | 58.0 | 40.1 | 22.2 | 52.0

4 RELATED WORK

4.1 MATHEMATICAL REASONING

Solving math problems is regarded as a key measure of the reasoning ability of LLMs. Recent advancements in mathematical reasoning for LLMs, including models like OpenAI o1, Claude-3.5, Gemini (Reid et al., 2024), DeepSeekMath (Shao et al., 2024), InternLM2-Math (Cai et al., 2024), and Qwen2.5-Math (Yang et al., 2024b), have spurred the development of various approaches to improve the reasoning capabilities of LLMs on math-related tasks. To strengthen these capabilities, researchers have focused on areas such as prompting techniques (Chia et al., 2023; Chen et al., 2023; Zhang et al., 2023), data construction for pretraining (Lewkowycz et al., 2022; Azerbayev et al., 2023; Zhou et al., 2024; Shao et al., 2024) and instruction tuning (Luo et al., 2023; Yue et al., 2023), tool-integrated reasoning (Chen et al., 2022; Gao et al., 2023; Gou et al., 2023; Wang et al., 2023; Yue et al., 2024; Yin et al., 2024; Zhang et al., 2024), and preference tuning (Ma et al., 2023; Luong et al., 2024; Shao et al., 2024; Lai et al., 2024). Our work primarily focuses on math data synthesis for instruction tuning.
4.2 DATA SYNTHESIS FOR MATH INSTRUCTION TUNING

High-quality reasoning data, particularly well-crafted questions, is in short supply. Prior efforts have mostly started with a small set of human-annotated seed instructions and expanded them through few-shot prompting. We categorize them into two types: question-driven augmentation and knowledge-driven augmentation. Previous works focus on enhancing seed questions by introducing additional constraints or numerical changes to increase the reasoning steps required. For instance, WizardMath (Luo et al., 2023) uses a series of operations to increase the complexity of questions and answers with GPT-3.5. MetaMath (Yu et al., 2023a) enhances the questions in GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021) by rewriting them in various ways, such as through semantic rephrasing, self-verification, and backward reasoning. Xwin-Math (Li et al., 2024a) and MMIQC (Liu & Yao, 2024) further explore the scalability of such synthetic data. However, these methods face a diversity challenge, as few-shot prompting often results in new instructions that are too similar to the original seed questions (Li et al., 2024b). To increase diversity, recent works have focused on knowledge-driven data synthesis, where world knowledge is summarized from the seed questions and used to generate synthetic datasets (Didolkar et al., 2024; Shah et al., 2024). MathScale (Tang et al., 2024) extracts math concepts from seed questions and then generates math reasoning data. KPMath (Huang et al., 2024a) begins by extracting topics and key points from seed problems using a labeling model, and samples multiple topics and key points for instruction synthesis. There are other methods for enhancing dataset quality as well. DART-Math (Tong et al., 2024) focuses on enhancing the quality of responses by using rejection sampling to generate multiple correct answers for each query from GSM8K and MATH. In contrast, Numina-Math (Li et al., 2024c) improves its dataset by collecting more real-world and synthetic data, then reformatting (Fan et al., 2024) the responses using GPT-4o. Such high-quality data can be integrated with our constructed dataset, resulting in an improved data mix for more effective instruction tuning.

Table 4: Cost analysis of the entire data synthesis process. We also estimated the cost of generating the same number of tokens using the proprietary models GPT-4 and GPT-4o for comparison.

Phase | Type | # Samples | GPU hours | Cost ($)
QFT: Training DSMath-QFT | Train | 15K | 2.0 | 2.6
QFT: Training Qwen2-Math-QFT | Train | 15K | 1.9 | 2.5
QPO: Generate Questions | Infer | 10K×2 | 0.4 | 0.5
QPO: Construct Preference Data | API | 10K×2 | - | 6.2
QPO: QPO Training | Train | 10K×2 | 6.6 | 8.5
Data Synthesis: Question Generation | Infer | 2M | 38.4 | 49.5
Data Synthesis: Solvability & Difficulty Check | Infer | 2M | 110.6 | 142.7
Data Synthesis: Response Generation | Infer | 1M×5 | 251.0 | 323.8
Data Synthesis: Reward Scoring | Infer | 1M×5 | 112.0 | 144.5
Total | - | 1M | 522.9 | 680.8
GPT-4 cost (generating the same number of tokens) | - | - | - | 24,939.5
GPT-4o cost (generating the same number of tokens) | - | - | - | 6,115.9

5 CONCLUSION

In this work, we propose ScaleQuest, a novel data synthesis framework that unlocks the ability of smaller open-source models to independently generate large-scale, high-quality reasoning data from scratch at a low cost. By training the problem-solving models on a small subset of questions, we effectively activate their question-generation capabilities. We also introduce a response enhancement method.
With these techniques, we successfully developed a fully synthetic math reasoning dataset consisting of 1 million question-answer pairs. Using this dataset, we fine-tuned the models and achieved remarkable improvements, with gains ranging from 29.2% to 46.4% compared to the base models. The fine-tuned 7B model, Qwen2-Math-7B-ScaleQuest, outperforms all competitors in the 7B-70B range and even surpasses proprietary models such as GPT-4-Turbo and Claude-3.5-Sonnet. Due to time and cost constraints, there are several areas where our approach can be further optimized: for instance, leveraging more powerful, larger problem-solving models such as Qwen2.5-Math-72B-Instruct (Yang et al., 2024b) for question and response generation, using advanced models such as GPT-4o for constructing preference data for Question Preference Optimization, and further scaling up the generation of synthetic data. Each stage of our process has significant room for improvement. In this paper, we have demonstrated the potential of this framework, laying the groundwork for future enhancements. Furthermore, despite the progress made in this work, several limitations remain to be addressed. In our future research, we will concentrate on the following areas:

• Large-scale and diverse high-quality data: This work chooses mathematical reasoning as a case study to demonstrate the effectiveness of our method. In the future, we will focus on broader and more complex tasks such as science and competitive programming. Additionally, future research will aim to continuously scale data synthesis to explore the scaling laws for synthetic data and seek a more efficient approach to scaling data generation.

• Self-improvement capability: Our experiments demonstrate the model's self-improvement capability, meaning that it can generate data of higher quality than its original training set. This is evident as Qwen2-Math-7B-ScaleQuest slightly outperforms Qwen2-Math-7B-Instruct. To further explore the upper bounds of self-improvement, our future research will focus on synthesizing preference-tuning data to better align the LLMs.

REFERENCES

Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model for mathematics. arXiv preprint arXiv:2310.10631, 2023.

Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Zehui Chen, Zhi Chen, Pei Chu, et al. InternLM2 technical report. arXiv preprint arXiv:2403.17297, 2024.

Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, and Jianshu Chen. Skills-in-context prompting: Unlocking compositionality in large language models. arXiv preprint arXiv:2308.00304, 2023.

Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022.

Yew Ken Chia, Guizhen Chen, Luu Anh Tuan, Soujanya Poria, and Lidong Bing. Contrastive chain-of-thought prompting. arXiv preprint arXiv:2311.09277, 2023.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Aniket Didolkar, Anirudh Goyal, Nan Rosemary Ke, Siyuan Guo, Michal Valko, Timothy Lillicrap, Danilo Rezende, Yoshua Bengio, Michael Mozer, and Sanjeev Arora. Metacognitive capabilities of LLMs: An exploration in mathematical problem solving. arXiv preprint arXiv:2405.12205, 2024.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

Run-Ze Fan, Xuefeng Li, Haoyang Zou, Junlong Li, Shwai He, Ethan Chern, Jiewen Hu, and Pengfei Liu. Reformatted alignment. arXiv preprint arXiv:2402.12219, 2024.

Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. PAL: Program-aided language models. In International Conference on Machine Learning, pp. 10764-10799. PMLR, 2023.

Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen, et al. ToRA: A tool-integrated reasoning agent for mathematical problem solving. arXiv preprint arXiv:2309.17452, 2023.

Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. OlympiadBench: A challenging benchmark for promoting AGI with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874, 2021.

Yiming Huang, Xiao Liu, Yeyun Gong, Zhibin Gou, Yelong Shen, Nan Duan, and Weizhu Chen. Key-point-driven data synthesis with its enhancement on mathematical reasoning. arXiv preprint arXiv:2403.02333, 2024a.

Yinya Huang, Xiaohan Lin, Zhengying Liu, Qingxing Cao, Huajian Xin, Haiming Wang, Zhenguo Li, Linqi Song, and Xiaodan Liang. Mustard: Mastering uniform synthesis of theorem and proof data. arXiv preprint arXiv:2402.08957, 2024b.

Hakan Inan, Kartikeya Upasani, Jianfeng Chi, Rashi Rungta, Krithika Iyer, Yuning Mao, Michael Tontchev, Qing Hu, Brian Fuller, Davide Testuggine, et al. Llama Guard: LLM-based input-output safeguard for human-AI conversations. arXiv preprint arXiv:2312.06674, 2023.

Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7B. arXiv preprint arXiv:2310.06825, 2023.

Mario Michael Krell, Matej Kosec, Sergio P Perez, and Andrew Fitzgibbon. Efficient sequence packing without cross-contamination: Accelerating large language models without impacting performance. arXiv preprint arXiv:2107.02027, 2021.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.

Xin Lai, Zhuotao Tian, Yukang Chen, Senqiao Yang, Xiangru Peng, and Jiaya Jia. Step-DPO: Step-wise preference optimization for long-chain reasoning of LLMs. arXiv preprint arXiv:2406.18629, 2024.

Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. Advances in Neural Information Processing Systems, 35:3843-3857, 2022.

Chen Li, Weiqi Wang, Jingcheng Hu, Yixuan Wei, Nanning Zheng, Han Hu, Zheng Zhang, and Houwen Peng. Common 7B language models already possess strong math capabilities. arXiv preprint arXiv:2403.04706, 2024a.
Haoran Li, Qingxiu Dong, Zhengyang Tang, Chaojun Wang, Xingxing Zhang, Haoyang Huang, Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang, et al. Synthetic data (almost) from scratch: Generalized instruction tuning for language models. arXiv preprint arXiv:2402.13064, 2024b.

Jia Li, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Huang, Kashif Rasul, Longhui Yu, Albert Q Jiang, Ziju Shen, et al. NuminaMath: The largest public dataset in AI4Maths with 860k pairs of competition math problems and solutions. Hugging Face repository, 2024c.

Haoxiong Liu and Andrew Chi-Chih Yao. Augmenting math word problems via iterative question composing. arXiv preprint arXiv:2401.09003, 2024.

Zimu Lu, Aojun Zhou, Houxing Ren, Ke Wang, Weikang Shi, Junting Pan, Mingjie Zhan, and Hongsheng Li. MathGenie: Generating synthetic data with question back-translation for enhancing mathematical reasoning of LLMs. arXiv preprint arXiv:2402.16352, 2024.

Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. WizardMath: Empowering mathematical reasoning for large language models via reinforced Evol-Instruct. arXiv preprint arXiv:2308.09583, 2023.

Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, and Hang Li. ReFT: Reasoning with reinforced fine-tuning. arXiv preprint arXiv:2401.08967, 2024.

Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan, Pengfei Liu, Yang You, and Hongxia Yang. Let's reward step by step: Step-level reward model as the navigators for reasoning. arXiv preprint arXiv:2310.10080, 2023.

Arindam Mitra, Hamed Khanpour, Corby Rosset, and Ahmed Awadallah. Orca-Math: Unlocking the potential of SLMs in grade school math. arXiv preprint arXiv:2402.14830, 2024.

Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I Jordan, et al. Ray: A distributed framework for emerging AI applications. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), pp. 561-577, 2018.

Eirini Ntoutsi, Pavlos Fafalios, Ujwal Gadiraju, Vasileios Iosifidis, Wolfgang Nejdl, Maria-Esther Vidal, Salvatore Ruggieri, Franco Turini, Symeon Papadopoulos, Emmanouil Krasanakis, et al. Bias in data-driven artificial intelligence systems: an introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3):e1356, 2020.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024.

Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024.

Vedant Shah, Dingli Yu, Kaifeng Lyu, Simon Park, Nan Rosemary Ke, Michael Mozer, Yoshua Bengio, Sanjeev Arora, and Anirudh Goyal. AI-assisted generation of difficult math questions. arXiv preprint arXiv:2407.21009, 2024.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li, Yu Wu, and Daya Guo. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
Zhengyang Tang, Xingxing Zhang, Benyou Wang, and Furu Wei. MathScale: Scaling instruction tuning for mathematical reasoning. arXiv preprint arXiv:2403.02884, 2024.

Yuxuan Tong, Xiwen Zhang, Rui Wang, Ruidong Wu, and Junxian He. DART-Math: Difficulty-aware rejection tuning for mathematical problem-solving. arXiv preprint arXiv:2407.13690, 2024.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11), 2008.

Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang, Linqi Song, Mingjie Zhan, and Hongsheng Li. MathCoder: Seamless code integration in LLMs for enhanced mathematical reasoning. arXiv preprint arXiv:2310.03731, 2023.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837, 2022.

Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. WizardLM: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.

Zhangchen Xu, Fengqing Jiang, Luyao Niu, Yuntian Deng, Radha Poovendran, Yejin Choi, and Bill Yuchen Lin. Magpie: Alignment data synthesis from scratch by prompting aligned LLMs with nothing. arXiv preprint arXiv:2406.08464, 2024.

An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024a.

An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024b.

Shuo Yin, Weihao You, Zhilong Ji, Guoqiang Zhong, and Jinfeng Bai. MuMath-Code: Combining tool-use large language models with multi-perspective data augmentation for mathematical reasoning. arXiv preprint arXiv:2405.07551, 2024.

Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. MetaMath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023a.

Weichen Yu, Tianyu Pang, Qian Liu, Chao Du, Bingyi Kang, Yan Huang, Min Lin, and Shuicheng Yan. Bag of tricks for training data extraction from language models. In International Conference on Machine Learning, pp. 40306-40320. PMLR, 2023b.

Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825, 2023.

Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. MAmmoTH: Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653, 2023.

Xiang Yue, Tuney Zheng, Ge Zhang, and Wenhu Chen. MAmmoTH2: Scaling instructions from the web. arXiv preprint arXiv:2405.03548, 2024.

Weihao Zeng, Can Xu, Yingxiu Zhao, Jian-Guang Lou, and Weizhu Chen. Automatic instruction evolving for large language models. arXiv preprint arXiv:2406.00770, 2024.

Beichen Zhang, Kun Zhou, Xilin Wei, Xin Zhao, Jing Sha, Shijin Wang, and Ji-Rong Wen. Evaluating and improving tool-augmented computation-intensive math reasoning. Advances in Neural Information Processing Systems, 36, 2024.
Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew Chi-Chih Yao. Cumulative reasoning with large language models. arXiv preprint arXiv:2308.04371, 2023.

Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, and Yuntian Deng. WildChat: 1M ChatGPT interaction logs in the wild. arXiv preprint arXiv:2405.01470, 2024.

Kun Zhou, Beichen Zhang, Jiapeng Wang, Zhipeng Chen, Wayne Xin Zhao, Jing Sha, Zhichao Sheng, Shijin Wang, and Ji-Rong Wen. JiuZhang3.0: Efficiently improving mathematical reasoning by training small data synthesis models. arXiv preprint arXiv:2405.14365, 2024.

A ADDITIONAL DATA STATISTICS

Filtering process. The entire data generation process is illustrated in Figure 6. After using the two question generators to produce 2 million questions from scratch, we performed a filtering process, including language filtering, solvability checks, and difficulty sampling. These steps filtered out 20.1%, 19.4%, and 9.2% of the samples, respectively, resulting in a final question set of 1 million questions. In the subsequent response generation process, we filtered out responses without answers by checking for key phrases such as "The answer is" or "\boxed{}". This step eliminated a negligible portion of the samples, as most of the filtered questions were solvable and did not pose any confusion for the response generation model.

Figure 6: Overview of our filtering process.

Dataset Coverage. We analyze the dataset coverage through two aspects. (1) Problem topic coverage, such as algebra and geometry. Following Huang et al. (2024a), we use GPT-4o to categorize the topics of the given questions, with the prompt illustrated in Figure 13. Figure 7 presents the results. We found that the topics cover the major areas of mathematics, such as arithmetic, algebra, geometry, and others. (2) Embedding space analysis. Following Zhao et al. (2024) and Xu et al. (2024), we first compute the input embeddings of the questions and then project them into a two-dimensional space using t-SNE (Van der Maaten & Hinton, 2008). We included only real-world datasets, such as GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021), and NuminaMath (Li et al., 2024c) (which contains a small portion of synthetic questions). As shown in Figure 8, our synthetic data closely resembles the real-world questions.

Figure 7: Topic distribution of our generated dataset (Arithmetic 31.1%, Algebra 25.1%, Geometry 15.2%, Number Theory 7.3%, Combinatorics 4.7%, Probability 3.6%, Trigonometry 2.4%, Others 10.6%).

Figure 8: t-SNE plot of our dataset, together with GSM8K, MATH, and NuminaMath.

Safety Analysis. We used Llama3-8B-Guard (Inan et al., 2023) as a discriminator model to detect any unsafe elements in the data. After sampling 10K instances from the 1 million samples, we found that only 0.1% were flagged as unsafe.

Generated Examples. We sampled several generated examples from our datasets, as shown in Figures 16, 17 and 18. The generated math problems are of high quality, driving effective learning.
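For readers who wish to reproduce the embedding-space analysis above, the following sketch uses sentence-transformers and scikit-learn; the embedding model is an assumption, since the paper does not name the one it used.

```python
# Assumes: pip install sentence-transformers scikit-learn matplotlib
from sentence_transformers import SentenceTransformer
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

questions = [
    "How many different 3-digit positive integers are divisible by 8?",
    "Compute cos 330 degrees.",
    "Find the sum of the first three terms of the arithmetic series with first term 5.",
    "A fair coin is tossed 50 times; what is the probability of at least 25 heads?",
]
labels = ["Ours", "MATH", "Ours", "GSM8K"]  # illustrative dataset-of-origin labels

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = model.encode(questions)
xy = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(embeddings)

for (x, y), label in zip(xy, labels):
    plt.scatter(x, y, label=label)
plt.legend()
plt.savefig("tsne_questions.png")
```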
Table 5: Comparison between our constructed dataset and previous datasets.

Dataset | Size | Synthesis Model | Public
WizardMath (Luo et al., 2023) | 96K | GPT-4 | ✗
MetaMath (Yu et al., 2023a) | 395K | GPT-3.5-Turbo | ✓
MMIQC (Liu & Yao, 2024) | 2294K | GPT-4 & GPT-3.5-Turbo & Human | ✓
Orca-Math (Mitra et al., 2024) | 200K | GPT-4-Turbo | ✓
Xwin-Math (Li et al., 2024a) | 1440K | GPT-4-Turbo | ✗
KPMath-Plus (Huang et al., 2024a) | 1576K | GPT-4 | ✗
MathScale (Tang et al., 2024) | 2021K | GPT-3.5 & Human | ✗
DART-Math (Tong et al., 2024) | 585K | DeepSeekMath-7B-RL | ✓
Numina-Math (Li et al., 2024c) | 860K | GPT-4 & GPT-4o | ✓
ScaleQuest | 1000K | DeepSeekMath-7B-RL & Qwen2-Math-7B-Instruct | ✓

B PROMPTS

Prompts for Problem Solvability Optimization

Please act as a professional math teacher. Your goal is to create high quality math word problems to help students learn math. You will be given a math question. Please optimize the Given Question and follow the instructions.
To achieve the goal, please follow the steps:
# Please check that the given question is a math question and write detailed solution to the Given Question.
# Based on the problem-solving process, double check the question is solvable.
# If you feel that the given question is not a meaningful math question, rewrite one that makes sense to you. Otherwise, modify the Given question according to your checking comment to ensure it is solvable and of high quality.
# If the question can be solved with just a few simple thinking processes, you can rewrite it to explicitly request multiple-step reasoning.
You have five principles to do this:
# Ensure the optimized question only asks for one thing, be reasonable and solvable, be based on the Given Question (if possible), and can be answered with only a number (float or integer). For example, DO NOT ask, 'what is the amount of A, B and C?'.
# Ensure the optimized question is in line with common sense of life. For example, the amount someone has or pays must be a positive number, and the number of people must be an integer.
# Ensure your student can answer the optimized question without the given question. If you want to use some numbers, conditions or background in the given question, please restate them to ensure no information is omitted in your optimized question.
# Please DO NOT include solution in your question.
Given Question: problem
Your output should be in the following format:
CREATED QUESTION: [your created question]
VERIFICATION AND MODIFICATION: [solve the question step-by-step and modify it to follow all principles]
FINAL QUESTION: [your final created question]

Figure 9: The prompts used to optimize the solvability of questions for QPO training.

Prompts for Problem Difficulty Optimization

You are a Math Problem Rewriter that rewrites the given #Problem# into a more complex version. Please follow the steps below to rewrite the given "#Problem#" into a more complex version.
Step 1: Please read the "#Problem#" carefully and list all the possible methods to make this problem more complex (to make it a bit harder for well-known AI assistants such as ChatGPT and GPT4 to handle). Note that the problem itself might be erroneous, and you need to first correct the errors within it.
Step 2: Please create a comprehensive plan based on the #Methods List# generated in Step 1 to make the #Problem# more complex. The plan should include several methods from the #Methods List#.
Step 3: Please execute the plan step by step and provide the #Rewritten Problem#. #Rewritten Problem# can only add 10 to 20 words into the "#Problem#".
Step 4: Please carefully review the #Rewritten Problem# and identify any unreasonable parts. Ensure that the #Rewritten Problem# is only a more complex version of the #Problem#. Just provide the #Finally Rewritten Problem# without any explanation and step-by-step reasoning guidance.
Please reply strictly in the following format:
Step 1 #Methods List#:
Step 2 #Plan#:
Step 3 #Rewritten Problem#:
Step 4 #Finally Rewritten Problem#:
#Problem#: Problem

Figure 10: The prompts used to optimize the difficulty of questions for QPO training.

Prompts for Problem Solvability Check

Please act as a professional math teacher. Your goal is to determine if the given problem is a valuable math problem. You need to consider two aspects: 1. The given problem is a math problem. 2. The given math problem can be solved based on the conditions provided in the problem (You can first try to solve it and then judge its solvability). Please reason step by step and conclude with either 'Yes' or 'No'.
Given Problem: Problem

Figure 11: The prompts used to check the solvability of questions.

Prompts for Difficulty Classification

# Instruction
You first need to identify the given user intent and then label the difficulty level of the user query based on the content of the user query.
## User Query
```
Input
```
## Output Format
Given the user query, in your output, you first need to identify the user intent and the knowledge needed to solve the task in the user query. Then, rate the difficulty level of the user query as very easy, easy, medium, hard, or very hard.
Now, please output the user intent and difficulty level below in a json format by filling in the placeholders in []:
```
{{
"intent": "The user wants to [....]",
"knowledge": "To solve this problem, the models need to know [....]",
"difficulty": "[very easy/easy/medium/hard/very hard]"
}}
```

Figure 12: The prompts used to judge the difficulty level of questions.

Prompts for Topic Classification

As a mathematics education specialist, please analyze the topics of the provided question and its answer. Specific requirements are as follows:
1. You should identify and categorize the main mathematical topics involved in the problem. If knowledge from non-mathematical fields is used, it is classified into Others - xxx, such as Others - Problem Context.
2. You should put your final answer between <TOPIC> and </TOPIC>.
---
Question: Compute cos 330°.
Answer: We know that 330° = 360° - 30°. Since cos(360° - θ) = cos θ for all angles θ, we have cos 330° = cos 30°. Since cos 30° = √3/2, we can conclude that cos 330° = √3/2.
Analysis: <TOPIC>Trigonometry - Cosine Function</TOPIC>
---
Question: Question
Answer: Answer
Analysis:

Figure 13: The prompts used for topic classification.

Examples for Solvability Optimization

Problems 1 (Before Optimization): There are 10 survivors in an emergency room. Each survivor is either a child, a woman, or a man. If there are 4 men and 3 times as many women as men, how many children are there?
Problems 1 (After Optimization): There are 10 survivors in an emergency room. Each survivor is either a child, a woman, or a man. If there are 4 men and an equal number of women as men, how many children are there?
Problems 2 (Before Optimization): How many sides does a polygon have if it is a regular polygon?
Problems 2 (After Optimization): How many sides does a regular polygon have if each interior angle is 120 degrees?
Problems 3 (Before Optimization): Find the sum of the first three terms of this series.
Problems 3 (After Optimization): Calculate the sum of the first three terms of the arithmetic series where the first term is 5 and the common difference is 3.

Figure 14: Three examples for solvability optimization by GPT-4o-mini.

Examples for Difficulty Optimization

Problems 1 (Before Optimization): How many 4-digit positive integers are there?
Problems 1 (After Optimization): How many 4-digit positive integers can be formed using non-repeating digits where the sum of these digits must be even, and the integers fall within the range of 1000 to 9999?
Problems 2 (Before Optimization): The average of 15 numbers is 32. An additional number is then added to the list, and the new average of the 16 numbers is 34. What number was added to the list?
Problems 2 (After Optimization): The average of 15 positive integers is 32, but one integer fluctuates to 30 before adding a new number. After adding this new number, the average of the 16 integers becomes 34. Calculate the added number and find the standard deviation of all 16 integers, considering their ascending order.
Problems 3 (Before Optimization): A fair coin is tossed 50 times, what is the probability of getting heads at least 25 times?
Problems 3 (After Optimization): A fair coin is tossed 50 times; what is the probability of obtaining heads at least 25 times, and can you also calculate the expected number, variance, and standard deviation of heads while determining the likelihood that the total number of heads exceeds 30?

Figure 15: Three examples for difficulty optimization by GPT-4o-mini.

ScaleQuest Example 1

QUESTION: My orthogonal projection matrix P projects onto the plane with equation x + y + z = 0. Find the matrix P.

SOLUTION: To find the orthogonal projection matrix P that projects onto the plane x + y + z = 0, we can follow these steps:
1. Identify the normal vector to the plane: the plane x + y + z = 0 has normal vector n = (1, 1, 1)^T.
2. Express the projection matrix in terms of the normal vector:
P = I - \frac{n n^T}{n^T n},
where I is the 3 × 3 identity matrix and n^T is the transpose of n.
3. Calculate n^T n = 1^2 + 1^2 + 1^2 = 3.
4. Calculate n n^T, which is the 3 × 3 matrix with every entry equal to 1.
5. Substitute these into the projection matrix formula:
P = I - \frac{1}{3} \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}.
6. Perform the matrix subtraction. Therefore, the orthogonal projection matrix P is
P = \begin{pmatrix} 2/3 & -1/3 & -1/3 \\ -1/3 & 2/3 & -1/3 \\ -1/3 & -1/3 & 2/3 \end{pmatrix}.

Figure 16: An example from our constructed dataset.

ScaleQuest Example 2

QUESTION: Suppose a, b are positive real numbers such that a + b = 1. (a) Minimize 1/a + 1/b; (b) minimize a^2 + b^2.

SOLUTION: To solve these problems, we will use the method of Lagrange multipliers and the AM-GM inequality.
Part (a): Minimize 1/a + 1/b given a + b = 1.
First, we use the method of Lagrange multipliers. Define the function f(a, b) = 1/a + 1/b and the constraint g(a, b) = a + b - 1 = 0.
The Lagrangian is
L(a, b, λ) = \frac{1}{a} + \frac{1}{b} + λ(a + b - 1).
Taking the partial derivatives and setting them to zero, we get:
∂L/∂a = -1/a^2 + λ = 0, so λ = 1/a^2;
∂L/∂b = -1/b^2 + λ = 0, so λ = 1/b^2;
∂L/∂λ = a + b - 1 = 0.
Since λ = 1/a^2 and λ = 1/b^2, we have a^2 = b^2, and hence a = b (since a, b > 0). Substituting a = b into the constraint a + b = 1 gives 2a = 1, so a = b = 1/2. Now we calculate
1/a + 1/b = 2 + 2 = 4.
Thus, the minimum value of 1/a + 1/b is 4.
Part (b): Minimize a^2 + b^2 given a + b = 1.
We use the identity for the square of a sum: (a + b)^2 = a^2 + b^2 + 2ab. Given a + b = 1, we have 1 = a^2 + b^2 + 2ab, so a^2 + b^2 = 1 - 2ab. To minimize a^2 + b^2, we need to maximize ab. Using the AM-GM inequality,
\frac{a + b}{2} ≥ \sqrt{ab}, so \sqrt{ab} ≤ \frac{1}{2} and ab ≤ \frac{1}{4}.
The maximum value of ab is 1/4, which occurs when a = b = 1/2. Substituting ab = 1/4 into the expression for a^2 + b^2 gives
a^2 + b^2 = 1 - 2 · (1/4) = 1/2.
Thus, the minimum value of a^2 + b^2 is 1/2.

Figure 17: An example from our constructed dataset.

ScaleQuest Example 3

QUESTION: Compute \sum_{n=1}^{\infty} \frac{4^n + 5^n + 6^n}{12^n}.

SOLUTION: We can split the sum into three separate geometric series:
\sum_{n=1}^{\infty} \frac{4^n + 5^n + 6^n}{12^n} = \sum_{n=1}^{\infty} \left(\frac{1}{3}\right)^n + \sum_{n=1}^{\infty} \left(\frac{5}{12}\right)^n + \sum_{n=1}^{\infty} \left(\frac{1}{2}\right)^n.
For |r| < 1, the geometric series \sum_{n=1}^{\infty} r^n sums to r/(1 - r). Therefore:
1. \sum_{n=1}^{\infty} (1/3)^n = (1/3)/(2/3) = 1/2;
2. \sum_{n=1}^{\infty} (5/12)^n = (5/12)/(7/12) = 5/7;
3. \sum_{n=1}^{\infty} (1/2)^n = (1/2)/(1/2) = 1.
Adding these three results over the common denominator 14:
1/2 + 5/7 + 1 = 7/14 + 10/14 + 14/14 = 31/14.
Thus, the sum is 31/14.

Figure 18: An example from our constructed dataset.
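As an editorial sanity check on Example 3 (not part of the paper itself), the partial sums of the series can be evaluated numerically and compared against 31/14:

```python
# Numerically verify that sum_{n>=1} (4^n + 5^n + 6^n) / 12^n converges to 31/14.
partial = sum((4**n + 5**n + 6**n) / 12**n for n in range(1, 60))
print(partial)   # 2.2142857142857... (the tail beyond n = 60 is negligible)
print(31 / 14)   # 2.2142857142857144
```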
ai_researcher
2
Collaborative_idea_exchange_and_material_tinkering_influence_families’_creative_engineering_practices_and_products_during_engineering_programs_in_informal_learning_environments.pdf
Empowering Learner-Centered Instruction: Integrating ChatGPT Python API and Tinker Learning for Enhanced Creativity and Problem-Solving Skills

Yun-Cheng Tsai, National Taiwan Normal University, Taipei 10610, Taiwan, R.O.C.

Abstract. The ChatGPT Python API plays a crucial role in promoting Learner-Centered Instruction (LCI) and aligns with the principles of Tinker Learning, allowing students to discover their own learning strategies. LCI emphasizes the importance of active, hands-on learning experiences and encourages students to take responsibility for their learning journey. By integrating the ChatGPT Python API into the educational process, students can explore various resources, generate new ideas, and create content in a more personalized manner. This innovative approach enables students to engage with the learning material more deeply, fostering a sense of ownership and motivation. As they work through the Creative Learning Spiral, students develop essential skills such as critical thinking, problem-solving, and creativity. The ChatGPT Python API is a valuable tool for students to explore different solutions, evaluate alternatives, and make informed decisions, all while encouraging self-directed learning. In Tinker Learning environments, integrating the ChatGPT Python API empowers students to experiment and iterate, allowing them to find the learning strategies that best fit their individual needs and preferences. This personalized approach helps students become more confident in their abilities, leading to greater academic success and long-term skill development. By leveraging the capabilities of the ChatGPT Python API, educational institutions can create a more engaging, supportive, and dynamic learning environment. This approach aligns with the principles of Learner-Centered Instruction and Tinker Learning, promoting a culture of curiosity, exploration, and creativity among students while preparing them for the challenges of a fast-paced, ever-changing world.

Keywords: ChatGPT Python API · Tinker Learning · Learner-Centered Instruction · Creative Learning Spiral

1 Introduction

Learner-Centered Instruction (LCI) fosters active learning experiences and empowers students to take charge of their educational journey [21]. This instructional approach emphasizes hands-on exploration, problem-solving, and collaboration as essential for knowledge construction [15]. LCI cultivates a culture of curiosity, exploration, and creativity, equipping students to face a rapidly evolving, dynamic world [6]. The Cone of Learning model posits that the most effective learning occurs through firsthand experiences, supplemented by practicing with the material, hearing about it, and lastly, reading about it [5]. The Tinker Learning approach advocates for learners to build their knowledge through hands-on exploration and discovery [18]. Grounded in the notion that students learn optimally by experimenting with and manipulating learning materials instead of receiving direct instructions [12], this approach closely aligns with the Cone of Learning, underscoring firsthand experience and practice as pivotal to effective learning [8]. The Creative Learning Spiral, a five-step process, guides learners through creative problem-solving [17], fostering creativity and promoting out-of-the-box thinking during challenges [19]. ChatGPT is a natural language processing technology developed by OpenAI that aims to generate fluent and coherent text.
OpenAI first opened its language-model API to the public in June 2020 and released a dedicated ChatGPT API in March 2023, allowing developers and researchers to harness the power of ChatGPT easily. ChatGPT is a commercial product released by OpenAI, and using the ChatGPT API is subject to their terms of service and policies [16]. We have extensively utilized the ChatGPT API in our classroom for cross-disciplinary text mining and programming teaching. This approach has proved effective in increasing students' sense of achievement and in breaking through the sample-size limitations of qualitative analysis. By harnessing the power of ChatGPT, students can engage in deep thinking through the practical application of programming skills, leading to a more profound understanding of the subject matter. Moreover, the ChatGPT API has enabled us to explore new avenues in text analysis and language processing, and it has provided students with the opportunity to learn cutting-edge technologies and techniques. Through this teaching methodology, students have developed a strong foundation in programming, critical thinking, and problem-solving, equipping them with skills essential for success in today's rapidly evolving technological landscape. As such, using the ChatGPT API in the classroom is a promising pedagogical approach that can lead to significant educational benefits for educators and students.

Incorporating resources such as the ChatGPT Python API, this paper delineates a process involving imagination, creation, play, sharing, and feedback reception. The strategy aligns with the Tinker Learning approach and the Cone of Learning, emphasizing the significance of hands-on exploration, experimentation, and trial-and-error in learning [4]. LCI is compatible with educational models like the Cone of Learning and Tinker Learning, which accentuate firsthand experiences and practical application in learning processes [8]. These three models underscore the importance of active engagement with learning through experimentation and hands-on activities. They concur that students learn most effectively when actively involved and encouraged to explore and construct new knowledge or skills [14]. Each model offers a distinct perspective on learning, focusing on retention [7], hands-on activities [18], and creativity cultivation [17], respectively. By understanding LCI's advantages and its alignment with models such as the Cone of Learning and Tinker Learning, educators can make well-informed decisions regarding the most efficacious classroom-management strategies for their students' success [11].

The structure of this paper is as follows: in Section 2, we review previous research on computer science education and learner sourcing. In Section 3, we describe our research methods and research questions. The results are presented in Section 4 and discussed in Section 5. Finally, we conclude the article in Section 6.

2 Literature Review

2.1 Learner-Centered Instruction (LCI)

Over the past ten years, Learner-Centered Instruction (LCI) has gained increasing prominence in education [22]. The World Economic Forum's "Education 4.0" initiative, released in May 2022, emphasizes the importance of a learner-centered teaching model that utilizes technology and innovation to equip learners with diverse skills to navigate the challenges of the fourth industrial revolution [20]. The Cone of Learning model illustrates levels of learning based on retention and difficulty.
It suggests that learners retain more information when actively learning instead of passively receiving information [9]. Learner-centered instruction, which puts the learner at the center of the learning process and allows them to be actively engaged, can therefore combine this model with new learning strategies. This approach encourages learners to take ownership of their learning and to develop higher-order thinking skills such as analysis, synthesis, evaluation, and application. By incorporating problem-based, collaborative, and inquiry-based learning into the curriculum, learners can engage in meaningful and authentic learning experiences that result in a deeper understanding and retention of the material [1][20].

2.2 Tinker Learning

Research has shown that learner-centered approaches and related teaching strategies, such as the Tinker Learning method proposed by Seymour Papert, a professor at MIT, are effective in improving learning outcomes [13][3]. Tinker Learning is a learner-centered instruction strategy that incorporates the Cone of Learning concept to create a new approach to learning [10]. The Cone of Learning theory suggests that people retain more information when actively involved in the learning process rather than passively receiving information. By combining these two concepts, Tinker Learning allows learners to actively explore and experiment with new ideas and concepts, leading to a deeper understanding and retention of the material. Through this process, learners can construct their own knowledge and develop their "learning how to learn" and 4C (creativity, critical thinking, communication, and collaboration) skills [2].

2.3 Creative Learning Spiral

The Tinker Learning approach, also known as the Creative Learning Spiral, involves a process of playful exploration, creation, and iteration in which students develop their knowledge through repeated cycles of imagining, creating, executing, and completing projects, as well as sharing and receiving feedback [17]. This process is similar to children's play, in which they experiment with different combinations and configurations, seeking new ideas and expressing their creativity through exploration and experimentation.

Fig. 1. Creative Learning Spiral by Mitchel Resnick

The Creative Learning Spiral is a teaching strategy that guides students through the process of creative thinking, from idea generation to implementation. As shown in Fig. 1, it involves the following five steps:

1. IMAGINE: imagining new possibilities.
2. CREATE: trying out new ideas and exploring different solution paths.
3. PLAY: making adjustments through playful experimentation.
4. SHARE: sharing creations and receiving feedback.
5. REFLECT: reflecting on the experience and creatively expressing ideas.

Students can build up their knowledge base in a spiral fashion by repeatedly attempting to develop new ideas, create, explore through play, and share and receive feedback. This approach helps students understand how creative ideas are generated and develops their skills as creative thinkers.

3 Methods

The paper's approach to teaching students the skills of using the ChatGPT Python API is based on the principles of Tinker Learning, an approach developed by MIT Professor Seymour Papert. The key principle of Tinker Learning is that it emphasizes providing learners with "sufficient learning environments" in which they can construct their own knowledge. The role of the teacher shifts from simply lecturing and demonstrating fixed examples to serving as a coach or mentor, and the classroom becomes more like a "swimming pool or sports field."
The following process is an example: a method for analyzing highly cited Blockchain in Education-related papers from 2019-2023 using ChatGPT and LDA.

1. Data Collection: We collected highly cited Blockchain in Education-related papers published from 2019-2023 through academic search engines such as Google Scholar, Microsoft Academic, and Semantic Scholar.
2. ChatGPT Python API: We utilized the ChatGPT Python API, which exposes a language generation model, to generate summaries for each paper. These summaries provided a brief understanding of the content of each paper.
3. LDA: We applied Latent Dirichlet Allocation (LDA), a topic modeling algorithm, to identify the main topics discussed in the papers. We set the number of topics and ran the model to obtain each topic's main word list and corresponding weights.
4. Analysis: The LDA results were analyzed to identify the topics discussed in the highly cited Blockchain in Education-related papers. We also cross-checked the paper summaries to confirm the topics.
5. Conclusion: We used the ChatGPT Python API and LDA to analyze highly cited Blockchain in Education-related papers from 2019-2023. Our analysis identified the main topics discussed in these papers, which will help to inform future research in this area.

Data Collection → ChatGPT Python API → LDA → Analysis → Conclusion
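As a concrete illustration of steps 2 and 3, the sketch below chains the two tools together. It is a minimal example rather than the course's actual code: it assumes the pre-1.0 openai Python package (ChatCompletion interface) and scikit-learn, and the model name, prompt wording, paper texts, and topic count are all illustrative placeholders.

```python
# Sketch: summarize papers with the ChatGPT API (step 2), then run LDA on the
# summaries (step 3). Adjust the API calls for newer versions of the openai SDK.
import openai
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

openai.api_key = "YOUR_API_KEY"  # placeholder

def summarize(paper_text: str) -> str:
    """Ask ChatGPT for a short summary of one paper."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user",
                   "content": "Summarize this paper in 3 sentences:\n" + paper_text}],
    )
    return response["choices"][0]["message"]["content"]

papers = ["...full text of paper 1...", "...full text of paper 2..."]  # step 1 output
summaries = [summarize(p) for p in papers]

# Fit LDA on a bag-of-words representation of the summaries.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(summaries)
lda = LatentDirichletAllocation(n_components=5, random_state=0)  # 5 topics, illustrative
lda.fit(counts)

# Step 4: inspect the top-weighted words of each topic.
words = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-8:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```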
The length of the arrows in our teaching strategy represents the time it takes for both the teacher and student to progress to the next stage. With the help of ChatGPT, we aim to reduce the time spent on pre-processing text during qualitative analysis, allowing for more discussion and deep thinking after visualizing the data. We believe this approach can foster autonomous learning and analytical thinking skills among students, enabling them to take ownership of their learning experience. By utilizing the ChatGPT Python API, educators can empower students to delve deeper into their research topics and reach their full potential. This, combined with Tinker Learning's learner-centered teaching strategies, can help students acquire the critical "how to learn" skills necessary for success in a rapidly evolving technological landscape. To implement this approach in our classroom, we use live coding and debugging techniques to help students build their knowledge step by step as they practice writing programming code to solve real-world problems. We also use the GitHub version control platform to track students' code submissions and changes, allowing them to identify problems, design solutions, plan actions, collect data, make decisions, and present their work. In this way, Tinker Learning transforms the traditional teacher-centered classroom into a more learner-centered one.

We found that the programming languages students learn in school may differ from those they encounter in the workforce. For example, in 2000, university students were mainly learning the C language, as Python had not yet become a mainstream teaching language. However, through self-learning, we have become proficient in Python and can teach students how to apply it to problems that interest them. This is because we acquired the "how to learn" skills through our C language experience. We have been thinking for seven years about how to incorporate the skills of "how to learn" into a teaching strategy that helps students "learn how to learn." It was not until we read Lifelong Kindergarten: Cultivating Creativity through Projects, Passion, Peers, and Play by Mitchel Resnick that we realized the core spirit of Tinker Learning's teaching strategies: good education is not just about how well teachers can teach, but about providing learners with a "sufficient learning environment" in which to construct their own body of knowledge. By drawing an analogy between "learners constructing their own body of knowledge" and "how to learn" skills, we have found that Tinker Learning may be the solution we have been seeking. We are now interested in exploring how we can use Tinker Learning's teaching strategies to enable students to acquire the skills of "how to learn."

Fig. 2 shows how we use the ChatGPT Python API, which offers a solution to the character limit present in some user-interface windows, allowing large amounts of text to be analyzed through loops and def functions (a sketch of this pattern follows below). This feature has the potential to save time previously spent on the tedious task of preprocessing text. By freeing up this time, students can focus on the more critical tasks of analysis, discussion, and data visualization. This approach encourages autonomous learning and fosters analytical thinking skills, empowering students to take ownership of their learning experience. By utilizing the ChatGPT API, educators can give students the tools to delve deeper into their research topics and reach their full potential.

Fig. 2. The example of Methods for Analyzing Highly Cited Blockchain in Education Related Papers from 2019-2023 Using ChatGPT and LDA.
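A minimal sketch of that loop-and-def pattern, again assuming the pre-1.0 openai package; the chunk size, prompt, and input file name are illustrative placeholders, not the course's actual settings.

```python
# Sketch: bypass the chat window's character limit by splitting a long text
# into chunks and sending each chunk to the API in a loop.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def analyze_chunk(chunk: str) -> str:
    """Send one chunk of text to ChatGPT and return its analysis."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative
        messages=[{"role": "user",
                   "content": "Extract the key themes from this text:\n" + chunk}],
    )
    return response["choices"][0]["message"]["content"]

with open("corpus.txt", encoding="utf-8") as f:  # hypothetical input file
    text = f.read()

chunk_size = 3000  # characters per request; tune to the model's context window
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
results = [analyze_chunk(c) for c in chunks]
print("\n\n".join(results))
```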
3.1 Learning How to Learn

The goal of this teaching is "learning how to learn" and the 4Cs: critical thinking, communication, collaboration, and creativity. No one can honestly know how the future will change, and any assumption may be far from how things actually turn out. Therefore, what schools should give students now is "learning how to learn" and the 4Cs. More broadly, the curriculum should emphasize general information skills that can be integrated into daily life and, most importantly, the ability to adapt, learn new things, maintain mental balance in unfamiliar situations, and find appropriate solutions to problems. In such a world, "less is more." Teachers do not need to teach students more information. Students must understand data, judge what is essential, and combine bits of information to form a holistic worldview. Technical hands-on courses using programming languages as the development tool are the best way to approach such a world and achieve the 4Cs. Like a brain bookshelf, after the teacher gives students the minimum essential tools, the rest of the knowledge and ability development is for students to build through hands-on work. Good education is not just about making teachers teach well but about providing a "sufficient learning environment" for learners to construct their knowledge. We found that Tinker Learning was the answer we had been looking for. We want to find out how we acquired the skills of "how to learn" through this project and to use the teaching strategies of Tinker Learning to enable students to develop the skills of "how to learn."

3.2 Tinker Learning, Live Coding, and Live Debugging

The goal of teaching is "learning how to learn" and the 4Cs. Because no one can honestly know how the future will change, and any assumption may be far from reality, schools should focus on teaching students "how to learn" and the 4Cs: critical thinking, communication, collaboration, and creativity. The curriculum should emphasize general information skills that can be integrated into daily life and, most importantly, the ability to adapt, learn new things, maintain mental balance in unfamiliar situations, and find appropriate solutions to problems. In such a world, "less is more." Instead of overwhelming students with information, it is more critical for them to understand data, judge what information is essential, and combine bits of information to form a holistic view of the world. Technical, hands-on courses using programming languages as development tools provide an excellent approach to achieving these goals.

Tinker Learning, Live Coding, and Live Debugging are the teaching methods used in this course. Tinker Learning involves a process of playful exploration, creation, and iteration. It is similar to children's play, where students experiment with different combinations and configurations, seeking new ideas and expressing their creativity through exploration and experimentation. This process helps students develop their knowledge through repeated cycles of imagining, creating, executing, and completing projects, as well as sharing and receiving feedback. Live Coding and Live Debugging involve students writing code to solve real-world problems, practicing identifying problems, designing projects, planning actions, collecting data, solving problems, and making decisions. These methods transform the traditional teacher-centered classroom into a more learner-centered environment, creating a "co-learning and co-creation" atmosphere at the school, where the role of the teacher shifts from simply presenting fixed examples to serving as a coach. Using GitHub, teachers can help students track code submissions and changes, fostering the development of cross-disciplinary professionals with practical implementation skills. Tinker Learning promotes active, hands-on learning and encourages students to take ownership of their learning process through live coding, debugging, and tracking code submissions and changes via the GitHub platform.

3.3 Implementation of the Study

We wrote a program to analyze the records of all students on GitHub, following the research framework. The teacher should guide the students to find a problem in their daily lives that interests them and discuss how to use the tools learned that week to solve the problem. The teacher should demonstrate how they used specific techniques and tools to solve the problem and how these techniques can be extended. Through the Tinker Learning teaching strategy, students can try new ideas, explore different paths to solve problems, make adjustments, imagine new possibilities, and creatively express their views. The Creative Learning Spiral process allows students to understand how creativity develops from idea to implementation and to become practitioners of creative thinking. By repeating the process of trying to imagine, create, execute, and complete creations through play, sharing, and receiving feedback, students' knowledge is built up step by step like a spiral.

Fig. 3. Our Creative Learning Spiral for teaching students the skills of the ChatGPT Python API.

Fig. 3 shows our Creative Learning Spiral for teaching students the skills of the ChatGPT Python API, which incorporates the following five steps:
1. IMAGINE: The ChatGPT Python API can assist students in exploring and refining their ideas by providing them with relevant information and insights based on their input.
2. CREATE: The ChatGPT Python API can be used as a tool to help students develop their projects by generating text and language models based on their requirements.
3. PLAY: Students can use the ChatGPT Python API to analyze and understand large amounts of text data, identify patterns and relationships, and draw insights from their findings. By working with the API, students can develop their analytical skills and gain a deeper understanding of the subject matter.
4. SHARE: GitHub can serve as a powerful platform for students to work together on coding projects, share code snippets, and provide feedback to one another. By incorporating the ChatGPT Python API in their projects, students can collaborate on analyzing large amounts of text data and gain insights from their findings.
5. REFLECT: By receiving feedback and reviews from their peers and instructors, students can reflect on what they have learned and how they can improve their skills in the future. The ChatGPT Python API can provide them with a valuable tool for identifying areas for improvement and refining their problem-solving approach.

This approach emphasizes active engagement and experimentation as key components of effective learning and uses the Creative Learning Spiral process to guide students through creatively solving problems.

4 Results

This section explores how students can demonstrate their proficiency and practical implementation skills by creating a portfolio of finished projects on their GitHub accounts. By analyzing the progress of their projects over time, we can track their growth and learning throughout the course. To achieve this, we obtained all students' GitHub accounts and, using action research and case study methods, observed their weekly learning progress.

4.1 Participants' GitHub

Our approach involves comparing and analyzing changes in the students' code from the initial blank framework to the progress made during their first assignments and the final completed content. We can infer how they construct their knowledge by repeatedly verifying the students' assignments. The GitHub link containing the records of all 44 out of 45 students who made progress in the first three assignments through the tutorials is available at https://reurl.cc/7jzggN. Only two students did not receive full marks in the second assignment, and six students did not receive full marks in the first assignment. Fig. 4 shows all participants' performance in the assignments, which is available in GitHub sub-sheets, including HW1-HW3, HW4-HW5, and the final project. These assignments aim to confirm students' skills in data visualization, integrating programmatic skills, and exploring large amounts of data on the internet. We use their GitHub accounts to document their growth and progress over time, ensuring they have a work portfolio demonstrating their skills upon completing the course. Python is recommended as a beginner-friendly programming language due to its intuitive syntax, availability of resources and libraries, cross-platform compatibility, and numerous use cases. The ChatGPT Python API and GitHub can promote critical thinking, collaboration, creativity, and communication skills in a more learner-centered approach.
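One way such GitHub-based progress tracking could be automated is sketched below, using GitHub's public REST commits endpoint via the requests package. The student account and repository names are hypothetical, and this is an illustration of the idea rather than the course's actual tooling; note that unauthenticated requests are rate-limited and paginated.

```python
# Sketch: count each student's commits per ISO week from a GitHub repository,
# as one possible way to track learning progress over the course.
from collections import Counter
from datetime import datetime
import requests

def weekly_commit_counts(owner: str, repo: str) -> Counter:
    """Tally commits per ISO week using GitHub's public commits endpoint."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    commits = requests.get(url, params={"per_page": 100}, timeout=30).json()
    counts = Counter()
    for c in commits:
        # Commit dates arrive as ISO 8601 strings such as "2023-03-01T12:00:00Z".
        date = datetime.fromisoformat(c["commit"]["author"]["date"].rstrip("Z"))
        year, week, _ = date.isocalendar()
        counts[f"{year}-W{week:02d}"] += 1
    return counts

# Hypothetical student repository.
for week, n in sorted(weekly_commit_counts("student1", "hw-portfolio").items()):
    print(week, n)
```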
4.2 Students' Learning Performance and Evaluation

All student assignments and projects must be submitted to the GitHub version control platform, with five assignments and one mini-hackathon project, to build the habit of turning ideas into work. GitHub's automatic version-control mechanism makes every submission traceable.

Fig. 4. The performance of all participants in all assignments.

1. Assignment 1, worth 15 points, is designed to use Sets and to apply intersection, union, and difference operations, allowing students to choose the dataset problem they want to solve for the first assignment. With the ChatGPT Python API, students can use natural language processing to analyze and understand their chosen dataset and generate insights and recommendations based on the data.
2. Assignment 2, worth 15 points, is designed to confirm that all students are proficient in using JSON and Python dict features to solve problems involving semi-structured data, analyzing a mix of structured and semi-structured data, and to verify that students can apply all syntax flexibly to the problem they are trying to solve. With the ChatGPT Python API, students can generate natural language descriptions of their solutions and analyze large amounts of text data to gain insights into the structure and organization of the data.
3. Assignment 3, worth 15 points, is designed to confirm that all students understand the inductive logic of text structure and can quickly batch-process large amounts of repetitive key-content extraction through data regularization, and to ensure that all students can successfully use web-crawling skills to extract large amounts of data of interest for analysis and application in their projects. With the ChatGPT Python API, students can use natural language processing to extract key content and insights from large amounts of text data, allowing them to analyze and organize the data quickly and efficiently.
4. Assignment 4, worth 15 points, is designed to confirm that all students can use data visualization and related analysis tools to visually present large amounts of data of interest to them and perform in-depth interpretation and analysis. With the ChatGPT Python API, students can generate natural language descriptions of their visualizations and gain insights into the patterns and relationships in the data.
5. Assignment 5, worth 15 points, is designed to confirm that all students can integrate the programmatic skills built in the previous four assignments, along with textual exploration skills, to present and explore in depth a large amount of internet data that interests them. With the ChatGPT Python API, students can generate natural language descriptions of their findings and insights and collaborate with others on their analysis and exploration of the data.
6. The final project is worth 100 points and accounts for 25% of the total grade. It was designed to confirm that all students could integrate the skills acquired over the previous ten weeks, take a global view, and explore the text of a large amount of internet data of interest, along with co-occurrence network analysis skills. Students design a user-experience solution that incorporates all the acquired development skills into the problem they want to solve and the audience they want to serve, and visually represent the flow of use.
With the ChatGPT Python API, students can generate natural language descriptions of their user-experience solution and gain insights into its effectiveness based on user feedback and analysis of user behavior.

We can use the ChatGPT Python API in assignments and projects to provide students with personalized and adaptive learning experiences. The API can generate natural language responses, analyze text data, and provide targeted feedback and support. Additionally, it can be used to develop customized learning materials for students based on their learning styles. Incorporating the ChatGPT API promotes a learner-centered approach to learning and helps students build their problem-solving skills.

Table 1. Course evaluation. Forty-one students participated in the classroom feedback.

                                                  Very Much in Line   Still Meets   Disagree
The course syllabus is arranged appropriately           80.0%            12.5%        7.5%
Teaching stimulates interest in learning                75.0%            15.0%       10.0%
Teachers teach from the heart                           82.5%            10.0%        7.5%
Good interaction between teachers and students          87.5%            12.5%        0.0%
The evaluation method is reasonable                     77.5%            17.5%        5.0%

Table 1 shows that students felt this teaching method stimulated their interest in learning and allowed good interaction between teachers and students in the classroom. Students can track each other's changes and progress after submitting their code, giving the most realistic picture of learning effectiveness. Live interactive video recordings of the classroom allow us to analyze how students actually work on the tasks and let students describe their own progress and changes. With the Tinker Learning teaching mode thus confirmed, students can construct their body of knowledge step by step with the assistance of Live Coding and Live Debugging.

5 Discussion

This section discusses several key ideas related to the teaching approach and learning process, such as Tinker Learning, Learner-Centered Instruction, the Cone of Learning, and the Creative Learning Spiral. These concepts emphasize hands-on, experiential learning and encourage students to explore and discover concepts independently. By implementing various learning methods, such as collaborative learning, hands-on projects, a mini-hackathon, flipped teaching, live video recordings of teaching operations, and program examples, students can demonstrate their implementation strategies for applying technology to educational training. The ChatGPT Python API has significant potential to enhance the learning experience for students and promote learner-centered instruction. By leveraging natural language processing, machine learning, and text analysis, educators can provide students with a more personalized and adaptive learning experience. The ChatGPT API can generate natural language responses, allowing more engaging and interactive exchanges between students and the program. This can be especially beneficial for students who may require additional support or who learn at a different pace from their peers. Additionally, the ChatGPT API can analyze large amounts of text data, providing educators with insights into students' understanding of the subject matter and identifying areas where students may struggle. Using ChatGPT, educators can provide personalized support and guidance, generate personalized learning materials, and offer targeted feedback to help students improve.
Overall, the ChatGPT Python API offers a powerful tool for educators to promote learner-centered instruction and provide students with a more personalized and adaptive learning experience, ultimately leading to more effective learning outcomes.

6 Conclusion

In our programming course, we have leveraged the ChatGPT Python API to enhance students' sense of accomplishment and promote deeper thinking through qualitative analysis. By introducing problem situations and demonstrating how to solve them using the Python language and its packages, students have gained hands-on experience and practical skills that they can use to solve similar problems. We have also implemented the Tinker Learning teaching strategy, which encourages students to actively participate in writing code and constructing their own body of knowledge. Using the ChatGPT API, students have explored new avenues in text analysis and language processing, enabling them to analyze larger samples and gain deeper insights into the subject matter. This has led to a significant increase in students' sense of accomplishment and motivation to continue learning. By encouraging students to think about how to apply the demonstrated tools to real-life problems, we have promoted the development of the 4C skills: critical thinking, communication, collaboration, and creativity. Using the ChatGPT API, students have expanded their problem-solving skills and developed a deeper understanding of the subject matter. We can verify their growth and progress by analyzing their records on GitHub. Our teaching methodology has helped students develop the ability to "learn how to learn" and build their own body of knowledge, leading to a more profound understanding of programming concepts and principles. Overall, the ChatGPT API has played a crucial role in enhancing our teaching strategy and promoting students' sense of accomplishment and motivation. By incorporating problem-solving strategies and encouraging active participation, we have developed students' 4C skills and equipped them with valuable skills that will serve them well in their future careers.

References

1. Behiye Akcay. Problem-based learning in science education. Journal of Turkish Science Education, 6(1):28–38, 2009.
2. Anabela C. Alves, Celina P. Leão, Francisco Moreira, and Senhorinha Teixeira. Project-based learning and its effects on freshmen social skills in an engineering program. Human Capital and Competences in Project Management, 10, 2018.
3. Paul Baker, Costas Gabrielatos, Majid Khosravinik, Michał Krzyżanowski, Tony McEnery, and Ruth Wodak. A useful methodological synergy? Combining critical discourse analysis and corpus linguistics to examine discourses of refugees and asylum seekers in the UK press. Discourse & Society, 19(3):273–306, 2008.
4. Jerome S. Bruner. The act of discovery. Harvard Educational Review, 1961.
5. Edgar Dale. Audio-Visual Methods in Teaching. Dryden Press, 1946.
6. Paul K. Duncan and David Kember. How Learning Happens: Seminal Works in Educational Psychology and What They Mean in Practice. Routledge, 2019.
7. Fergus Dwyer. Edgar Dale's pyramid of learning in medical education: a literature review. Medical Teacher, 32(11):e366–e367, 2010.
8. David A. Kolb. Experiential Learning: Experience as the Source of Learning and Development. Prentice-Hall, 1984.
9. Rose M. Marra, David H. Jonassen, Betsy Palmer, and Steve Luft. Why problem-based learning works: Theoretical foundations. Journal on Excellence in College Teaching, 25, 2014.
10. Victoria J. Marsick and Karen Watkins. Informal and Incidental Learning in the Workplace (Routledge Revivals). Routledge, 2015.
11. Robert J. Marzano. The Art and Science of Teaching: A Comprehensive Framework for Effective Instruction. ASCD, 2007.
12. Seymour Papert. Mindstorms: Children, Computers, and Powerful Ideas. Basic Books, 1980.
13. Seymour Papert. Why school reform is impossible (with commentary on O'Shea's and Koschmann's reviews of "The Children's Machine"). 1997.
14. Jean Piaget. Science of Education and the Psychology of the Child. Grossman, 1970.
15. Michael Prince. Does active learning work? A review of the research. Journal of Engineering Education, 93(3):223–231, 2004.
16. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 2019.
17. Mitchel Resnick. Give P's a Chance: Projects, Peers, Passion, Play. Constructionism, 2014.
18. Mitchel Resnick. Lifelong Kindergarten: Cultivating Creativity through Projects, Passion, Peers, and Play. MIT Press, 2017.
19. Robert J. Sternberg. Wisdom, Intelligence, and Creativity Synthesized. Cambridge University Press, 2003.
20. Yun-Cheng Tsai. The value chain of education metaverse. arXiv preprint arXiv:2211.05833, 2022.
21. Maryellen Weimer. Learner-Centered Teaching: Five Key Changes to Practice. John Wiley & Sons, 2013.
22. Kaya Yilmaz. Social studies teachers' views of learner-centered instruction. European Journal of Teacher Education, 31(1):35–53, 2008.
ai_researcher
1
Primed_Design_Activities_Scaffolding_Young_Designers_During_Ideation.pdf
Visibility predicts priming within but not between people: a cautionary tale for studies of cognitive individual differences.

Frederic Boy & Petroc Sumner
School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff, CF10 3AT, United Kingdom

Accepted for publication in the Journal of Experimental Psychology: General (07/2013). This article may not exactly replicate the final version published in the APA journal. It is not the copy of record.

Abstract: With resurgent interest in individual differences in perception, cognition and behavioural control, as early indicators of disease, endophenotypes, or a means to relate brain structure to function, behavioural tasks are increasingly being transferred from within-subject settings to between-group or correlational designs. The assumption is that where we know the mechanisms underlying within-subject effects, these effects can be used to measure individual differences in those same mechanisms. However, between-subject variability can arise from an entirely different source from that driving within-subject effects, and here we report a clear-cut demonstration of this. We examined the debated relationship between the visibility of a masked-prime stimulus and the direction of priming it causes (positive or reversed). Such reversal of priming has been hypothesized to reflect an automatic inhibitory mechanism that controls partially activated responses and allows behavioural flexibility. Within subjects, we found an unambiguous systematic transition from reversed priming to positive priming as prime visibility increased, replicated seven times and using different stimulus manipulations. However, across individuals there was never a relationship between prime discrimination ability and priming. Specifically, these data resolve the controversial debate on visibility and reversed priming, indicating that they arise from independent processes relying on partially shared stimulus signals. More generally, they stand as an exemplar case in which variance between individuals arises from a different source from that produced by stimulus manipulations.

1. Introduction

Psychology has always contained within it a division between two approaches (e.g. Hull, 1945; Cronbach, 1957). One seeks to assess and explain differences between individuals, normally via correlational methods, while the other investigates basic cognitive processes with experiments that treat individual differences as nuisance variation. Cronbach (1957) hoped that 'the two disciplines of psychology' would converge and integrate because, he argued, 'kept independent, they can give only wrong answers or no answers at all regarding certain important problems' (p. 673). It is likely that Cronbach would be disappointed with the degree of integration achieved more than 50 years later, but a new impetus for integration is now being driven from a direction that Cronbach might not have anticipated: psychological medicine, imaging and genetics, where there is increasing use of behavioural tasks from experimental psychology to measure individual differences in perception, cognition and behavioural control.
To take three examples, there is the hope in psychiatric genetics of finding cognitive 'endophenotypes' - heritable and stable differences in cognitive mechanisms associated with psychiatric illness (Gottesman & Gould, 2003); there is growing endeavour in brain imaging to relate individual differences in structure, such as white matter connectivity, to differences in function; and there is accelerating interest in ageing, which is most easily studied cross-sectionally rather than longitudinally.

The problem for integration. In these examples, it appears to be commonly assumed that where a task has been used successfully to reveal and investigate specific cognitive mechanisms through specific manipulations of stimuli or conditions, that task can then be simply used to measure how people differ in those mechanisms. In other words, that established within-subject phenomena will have easily interpreted translation to individual differences. Unfortunately this is not true. Variance between individuals can arise from an entirely different source from that driving, and therefore studied using, within-subject effects (e.g. Borsboom, Kievit, Cervone & Hood, 2009). As we will show below, this can happen not just for complex individual differences such as IQ or personality, but even when a task is supposed to tap a much more basic mechanism, where it is intuitive to assume that individual differences would come from that same basic mechanism.

The theoretical basis for the difficulty in integrating correlational and experimental approaches is that, 'barring perhaps the most basic laboratory tasks for which assumptions like ergodicity or measurement invariance over individuals might be taken to hold true, any theory on intra-individual processes is compatible with any theory of inter-individual differences' (Borsboom, Kievit, Cervone & Hood, 2009, p. 19). In other words, without additional simplifying assumptions, it is not possible to infer anything about within-person dependence from between-person comparisons, and vice versa. This problem is illustrated in Figure 1, which shows four hypothetical examples of how within-subject variance may or may not align with between-subject variance.

[Figure 1: panels A-D; axes: Measure 1 (x) vs. Measure 2 (y).]

Figure 1: Schematic illustration of the theoretical independence of within- and between-subject variance through four examples. A) Within each individual (different gray tones), measurements 1 and 2 covary positively with each other, while individuals' means (white diamonds) covary negatively across the sample. This situation may appear counter-intuitive, but we are familiar with it in many circumstances with bidirectional causality: e.g. clothing thickness will correlate positively with feelings of warmth if clothing is manipulated within individuals; but across individuals, people who on average feel colder are likely to put thicker clothes on. B) Within each individual, measurements 1 and 2 are positively correlated, and individuals' means also covary positively across the sample, as intuitively expected in many situations. C) Within each individual, measurements 1 and 2 are not correlated, but individuals' means covary positively across the sample. D) Within each individual, measurements 1 and 2 are correlated (positively in this example), but the means do not covary across the sample.

Multiply-determined traits vs. basic mechanisms
Counterintuitive relationships like those depicted in Figure 1 do not require that the sources of individual differences are entirely independent from those driving within-subject effects. They might still arise if there are too many potential interacting mechanisms contributing to individual variability, of which only some have been exploited and understood through experimental manipulations (Borsboom, Kievit, Cervone & Hood, 2009). This problem of relating multiply determined individual differences to within-subject experimental manipulations has been most commonly discussed with regard to multifactorial traits such as personality and IQ (Cronbach, 1957; Borsboom, Kievit, Cervone & Hood, 2009). However, as alluded to in Borsboom et al.'s quote ('barring perhaps the most basic laboratory tasks'), the simpler the behaviour, the more likely it is to be strongly associated with one particular cognitive mechanism. In this case, it might be true that individual differences in task performance reflect the same underlying mechanism as studied in the experimental literature. Historically, Hull (1945) expressed a related view when he argued that there are common laws governing behaviour across individuals and even species. If an experimentally derived behavioural 'law' or equation - for example, relating stimulus strength to behavioural outcome - has variables (such as stimulus strength) and 'constants', then Hull argued that while experimentalists manipulate the 'variables', it is in the 'constant' parameters that individuals and species will differ. Those parameters would represent 'primary' individual differences, which then combine and interact to produce the 'secondary' differences measured in overt behaviour. It would be almost impossible to derive these 'laws' (i.e. understand basic cognitive mechanisms and derive functional models) from only studying individual differences. However, if you already know the laws/mechanisms from experimentation, and have tasks that can specify the parameters for different individuals, then you could understand individual differences with respect to these mechanisms. This, as we perceive it, is the basic hope and implicit assumption of the various ventures to use individual differences in well-known cognitive tasks to relate brain structure, clinical symptoms, or genetic variation to basic cognitive mechanisms. However, we are aware of no explicit investigation of whether this implicit assumption holds even for tasks that are considered to tap the most basic mechanisms. It is possible that individual variance still arises from a combination of sources, and that the main variability is independent from the mechanisms that are experimentally manipulated. Here, we focus on the relationship between the visibility of a prime stimulus and the effect it has on motor control.

The case of visibility and motor priming

It has become widely accepted that our 'voluntary' purposeful actions should rather be regarded as an interaction between processes occurring within and without the scope of conscious awareness (e.g., Aglioti, DeSouza, & Goodale, 1995; Boy, Husain, & Sumner, 2010; Neumann & Klotz, 1994). An important and perennial question arises from this framework: what is the relationship, if any, between our conscious awareness of a stimulus and the way in which that stimulus influences motor plans?
A practical way to investigate non-conscious motor influences has been implemented in the masked priming paradigm (e.g. Leuthold & Kopp, 1998). Generally, prime stimuli speed responses to subsequent 'target' stimuli if they are associated with the same response (compatible) and slow responses if prime and target are associated with different responses (incompatible). This positive compatibility effect (PCE) has been taken to demonstrate that a prime can partially activate the response associated with it, even though the participant had no intention of responding to the prime and may not even have perceived it (note that for priming to occur, the participant must be intending to respond to the target). In some masked priming paradigms, a counterintuitive negative compatibility effect (NCE) has been measured, such that responses are faster and more accurate for incompatible primes than for compatible primes (for reviews, see Eimer & Schlaghecken, 2003; Sumner, 2007). Most interestingly for the present purpose, in the initial studies of the NCE, the direction of priming appeared to depend on whether the prime was above or below perceptual discrimination threshold: visible primes produced PCEs while invisible primes produced NCEs (Eimer & Schlaghecken, 2002; Klapp & Hinkley, 2002).

There are theoretical reasons to expect a strong relationship between prime visibility and priming effects. Firstly, the original theory of the NCE (Eimer & Schlaghecken, 1998, 2002, 2003) proposed that the very role of the inhibitory mechanism indexed by the NCE was to suppress weak motor activation evoked by previously learnt stimulus-response associations, unless the stimulus reached conscious awareness. Presumably stimuli that reach awareness are more likely to be behaviourally relevant than those that do not, and so it might be efficient to allow positive motor priming by stimuli we are conscious of, but to suppress it when we are not conscious of its source. More generally, even if there were not any causal connection between visibility and the priming effect, we would expect a strong relationship if the same cascaded visual processing underlies both. We assume that some visual information gets through to motor areas regardless of whether it has reached conscious threshold - this is what causes subliminal motor priming - and we also assume that as the representation strength of visual information increases, it is more likely to be consciously perceived. If we further suppose, not unreasonably, that increases in representation strength will also have a systematic effect on priming, then we predict there would generally be a strong correlation between priming and visibility even without any direct causal connection between them.

Perception separate from action?

Alternatives to the above theoretical frameworks are proposals that pathways subserving non-conscious processes are distinct from those serving conscious awareness, or, relatedly, that pathways linking vision to action are distinct from those leading to perceptual experience (Milner & Goodale, 1995). It is also possible that differences in temporal dynamics, rather than simply anatomical pathway, constitute an important distinction between information driving priming and information supporting conscious perception. For example, priming may be mainly driven by an initial transient forward sweep of information, while awareness is supported by subsequent sustained recurrent processing (Bompas & Sumner, 2008; Lamme & Roelfsema, 2000).
These 'dissociation' accounts do not require a correlation between visibility and priming. As mentioned above, for the NCE paradigm, a strong relationship between visibility and the direction of priming was initially reported: invisible primes produced NCEs while visible primes produced PCEs (Eimer & Schlaghecken, 2002; Klapp & Hinkley, 2002). However, this relationship soon became controversial: NCEs were also found to occur when prime discrimination was above chance (e.g. Klapp, 2005; Klapp & Hinkley, 2002; Lleras & Enns, 2005; Mattler, 2006; Sumner, Tsai, Yu, & Nachev, 2006), and conversely, PCEs could occur with invisible primes (e.g. Lleras & Enns, 2006). It was also found that teaching subjects to discriminate the prime stimuli through extensive practice did not alter the NCE, which appears to rule out a causal relationship (Schlaghecken, Blagrove, & Maylor, 2008; we have replicated this in unpublished work). However, none of these results necessarily mean there is no relationship between visibility and the NCE. Learning to perform better on prime discrimination presumably relies on learning about the subtle clues present around the time of the onset of the mask. One way to do this would be to learn what to attend to. Such attentional learning might not transfer to make primes more visible in the masked priming blocks, because here the participant is instructed to ignore the primes and must pay attention to the target. Moreover, as outlined above, there does not need to be a causal role for visibility in reversed priming for us to expect a behavioural relationship between the two - it would emerge if common visual processing leads to visibility on the one hand and priming on the other. With regard to findings that in some experiments visible primes can produce NCEs while in other experiments invisible primes can produce PCEs, this overturns a categorical distinction between invisible and visible primes, but it does not mean there is no relationship when other factors are held constant. Indeed, it is now generally accepted that there are at least two mechanisms that can contribute to NCEs, depending on the type of mask used (Boy, Clarke, & Sumner, 2008; Jaskowski, 2008; Klapp, 2005; Schlaghecken & Eimer, 2004; Sumner, 2008), and these may differ in their relationship to prime visibility. Overall, there are many indications in the literature that the more visible the prime, the more likely it is to produce a PCE, and conversely, NCEs are most easily produced with less visible or invisible primes (Klapp, 2005; Klapp & Hinkley, 2002; Lleras & Enns, 2004, 2005; Schlaghecken & Eimer, 2006; Sumner et al., 2006). However, like the counter-examples, these indications are not always free from other factors known to influence priming, such as differences in the mask.

Experimental manipulation of prime visibility vs. individual differences

Here we investigate two related questions. Firstly, is there a consistent relationship between average prime visibility, as measured by prime discrimination performance, and average priming effect when prime visibility is manipulated in different ways (prime contrast, prime-mask interval, mask contrast, mask density)? It is plausible that the relationship could depend on the way the stimuli are manipulated, because the different manipulations have different impacts on the initial visual burst of activity produced by the prime and on the subsequent interaction of the prime activity and the mask activity.
For example, if priming is primarily driven by the initial feed-forward sweep of prime activity, it may be more directly influenced by manipulating the contrast of the prime than by manipulating the contrast of the mask, even though both affect prime visibility. Secondly, across participants, is there a systematic relationship between an individual's priming effect and how well they can discriminate the prime? We would expect this to be the case if there is a causal relationship between visibility and priming, or if priming and visibility depend on the same perceptual representations, as discussed above. More generally, with the resurgence of interest in individual differences in perception and cognition, it seems to be a common assumption that established within-subject phenomena will have easily interpreted translation to individual differences. But as discussed at the beginning of the Introduction, a correlation between priming and visibility across people does not necessarily follow from a relationship between average priming and average visibility produced by manipulating the stimuli (question 1). Several previous studies have assumed a correlation in their methodology, because they have used an individual's prime discrimination ability to set the 'appropriate' prime strength for that participant in priming blocks (e.g. Boy & Sumner, 2010). This approach appeared to 'work', but was never directly compared to using the same prime strength for all participants. In other experiments, both the presence (Eimer & Schlaghecken, 2002; Klapp & Hinkley, 2002) and absence of a correlation has been reported (Hermens, Sumner, & Walker, 2010), but whenever the correlation has been measured with a single set of stimuli - i.e. with one particular prime strength - it could be misleading due to the biphasic nature of the priming effect. We will explain this in due course, along with the approach taken to overcome this problem, based on Eimer and Schlaghecken (2002).

2. Methods

Overview of experiments

We present five new experiments and reanalyses of two previous ones, making seven sets of data with which to answer the two questions set out above. Experiments 1 and 2 use prime duration and prime brightness, respectively, to manipulate visibility. The previous data sets, from Sumner et al. (2006), used the same manipulations but with fewer degrees of visibility, and thus represent replications of Experiments 1 and 2. For the logic of exposition they are therefore labelled as Experiments 3 and 4. The original purpose of these previous experiments was to investigate the effect of attention on priming, but since the attention effect is not at stake here, we average over the attentional manipulation. Experiments 5 and 6 manipulate the visibility of the prime via the properties of the mask, modulating its brightness or its density, respectively. Experiment 7 manipulates prime brightness like Experiments 2 and 4, but implements a different way of determining prime visibility, exactly following Eimer and Schlaghecken (2002).

a. Participants (all experiments)

62 participants (48 women; age 18–38) from Cardiff University participated in the five new experiments (respectively 11, 10, 10, 11 & 20 participants). Details about participants for Experiments 3 and 4 are to be found in Sumner et al. (2006). All self-reported having normal or corrected-to-normal vision and no history of brain damage, and all were right-handed.
b. Apparatus (all experiments)

Stimulus presentation was performed by a PC-controlled Cambridge Research Systems (CRS) Visage® connected to a 21" Sony GDM-F520 Trinitron monitor. Stimulus presentation was synchronized with the screen refresh rate of 100 Hz, and timings were controlled and measured by the CRS clock and thus not subject to the errors produced by normal PC operating systems. Manual responses were collected using a CRS-CB6 button box.

c. Masked-priming task

The protocol is given in full for Experiment 1, and deviations from this in the other experiments are detailed below. Participants made speeded responses with a left- or right-hand key press (counterbalanced) to right and left arrows (1° x 1.5°), which occurred in random order and were located at 0.5° from fixation, in a random direction from fixation (see Fig. 2-B). A fixation cross was visible at the center of the screen at the beginning of each trial. The primes were identical to either one or the other target, but were presented for various durations between 10 and 60 ms (in steps of 10 ms), and appeared within 0.5° of fixation (i.e., in the same vicinity as the target, but not in an identical location on any trial). In all conditions the prime was followed by a 100 ms mask of 2.2° x 2.2°, constructed of 36 randomly orientated lines, excluding any orientation closer than ±5° to the orientations present in the prime and target stimuli. A new mask was constructed on each trial but always appeared in the same place, centred on fixation. All trials had a long mask-target SOA of 150 ms. The background was dark grey (10 cd/m2) and other stimuli were light grey: fixation cross, primes, targets and masks were 60 cd/m2. 480 trials were presented in a random order (60 trials for each prime duration), with brief breaks every 60 trials. Participants were not informed about the different prime durations. The stimulus sequence is illustrated in Fig. 2-B.

[Figure 2: A) general template for the prime detection tasks (prime, mask, temporally unconstrained response); B) general template for the masked-prime tasks (prime, mask, target, speeded response); C) the four ways to modulate prime visibility: prime duration, 10-60 ms (Expts 1 & 3); prime brightness, 20-160 cd/m2 (Expts 2, 4 & 7); mask brightness, 160-20 cd/m2 (Expt 5); mask density, 36-2 lines (Expt 6).]

Figure 2: A) Illustration of the stimulus sequence in the prime detection tasks. B) Illustration of the stimulus sequence in the masked-priming tasks. C) Illustration of the four ways to alter prime visibility, with primes getting stronger or masks getting weaker from top to bottom. Because prime visibility is changed through the modulation of the physical characteristics of the prime or the mask, we will refer to changes in prime 'strength'.

d. Prime identification tasks

In Experiments 1 to 6, prime visibility thresholds were assessed individually before or after each masked-priming task using a procedure of constant stimuli (240 trials in total, testing each of the prime or mask conditions in randomized order). The participants' task was to guess the identity of the prime (forced choice). For each experiment, the stimulus sequence and timing were identical to the masked priming protocol, but with the target omitted (Fig. 2-A). In Experiment 7, instead of using a method of constant stimuli to
In Experiment 7 instead of using a method of constant stimuli to FIGURE*2*Boy*&*Sumner* assess prime visibility, we used a 2-up 1-down staircase procedure following Eimer and Schlaghecken (2002). The staircase continued until four consecutive staircase reversals and data from the staircase were then used to plot the individual’s prime visibility   10   psychometric function. Discussion of the procedure (particularly why our procedure does not contain the target stimulus) can be found in a previous publication (Boy & Sumner, 2010). e. Experiment 2 The procedure was identical to Exp. 1 except that we fixed the duration of the prime at 40 ms and manipulated its brightness to cross the perceptual threshold. Six levels of brightness were selected (20, 40, 60, 80, 120 & 160 cd/m2) and picked in a randomly shuffled order on each trial (see Fig 2-B). f. Experiments 3 & 4 The datasets termed experiment 3 and 4 are re-analyses of the data collected by Sumner et al. (2006). As their dataset also contained an attentional cue manipulation that is not of interest for the present purpose, we collapsed the validly and invalidly cued conditions. Experiment 3 reanalyzes the data in their second experiment for primes lasting 20, 30, 40, 50 and 60 ms. Experiment 4 reanalyzes their third experiment in which they used 4 levels of prime brightness (20, 40, 80 & 160 cd/m2). Note that in both these experiments, presentation was blocked (256 trials for each prime duration or each prime brightness), rather than randomly shuffled, as in Experiments 1 and 2 above. g. Experiments 5 & 6 In these experiments, prime duration was fixed at 40 ms and the manipulation affected the brightness of the mask (values between 20 & 160 cd/m2) or its density (mask composed of 2, 4, 9, 16, 25, 36 lines, at random in any of the grid positions, see Figure 2- B). On each trial, one of the six levels of mask brightness (Expt. 5) or mask density (Expt. 6) was selected in a randomly shuffled order (a total of 80 trials for each of the 6 levels of brightness or density). Other details are as in Expt 1. h. Experiment 7 The masked-priming task in Experiment 7 was identical to Experiment 2 where prime brightness was manipulated (in 6 levels, 20, 60, 60, 80, 120 & 160 cd/m2). The difference resides in the prime identification task, which utilized a staircase procedure (see above). We also ran more participants.   11   3. Results a. Average compatibility effects For average compatibility effect (CE, average reaction time to incompatible trials - average reaction time to compatible trials), the results are unambiguous and simply stated. We find a clear transition from negative CEs for weak primes to positive CEs for stronger primes in all experiments (all Fs > 15, all ps < .0001, see Fig 3). The pattern is independent of the way prime strength was manipulated. Since visibility (prime discrimination) also rises systematically with these manipulations of the stimuli, strong correlations occur in each experiment between CEs and prime discrimination for each stimulus condition for each participant (all rs > .56, ps < .001; see Fig 4). However, as we shall see below, these correlations are entirely driven by the manipulation of stimulus strength, and not at all by individual differences in discrimination ability. 
[Figure 3, panels A–G: average compatibility effect (ms, −50 to 50) plotted against mean prime visibility (%) for Experiments 1–7. Main effects of prime strength: Experiment 1, F(5,55) = 26.3, p < .0001; Experiment 2, F(5,60) = 26.1, p < .0001; Experiment 3, F(4,36) = 18.2, p < .0001; Experiment 4, F(3,27) = 15.2, p < .0001; Experiment 5, F(5,55) = 25.5, p < .0001; Experiment 6, F(5,50) = 25.7, p < .0001; Experiment 7, F(5,95) = 97.7, p < .0001.]
Figure 3. (A:G): Average compatibility effect as a function of mean prime visibility for each level of prime visibility for all seven experiments (error bars represent the inter-individual standard error of the mean). A spline fit connects the data points (we have no theoretical basis for any particular curve fit). As in Eimer and Schlaghecken (2002), the dotted vertical line on each plot indicates the 66% prime discrimination threshold along the prime visibility gradient as determined in the prime identification tasks.

[Figure 4, panels A–G: scatter plots of compatibility effect (ms) against prime identification (%) for Experiments 1–7. Linear correlations: Experiment 1, r = 0.73746; Experiment 2, r = 0.74233; Experiment 3, r = 0.66373; Experiment 4, r = 0.664; Experiment 5, r = 0.5642; Experiment 6, r = 0.70179; Experiment 7, r = 0.82581; all ps < .0001.]
Figure 4. (A:G): Scatter plots of the compatibility effect against prime discrimination. The coefficient of correlation of the linear regression is presented (along with its p-value).
b. Correlations between priming and visibility across individuals
Approach: If priming is zero with no prime, negative with an intermediate-strength prime and positive with a strong prime, there are two places in this relationship where it is weakly negative: with very weak primes, or with stronger primes that are not quite strong enough to produce positive priming. Measuring the simple correlation between discrimination and priming could therefore be misleading, because subjects showing the same level of priming could actually be at different points on the biphasic relationship between prime strength and priming effect. To get around this problem, we followed Eimer and Schlaghecken (2002) and took the approach of measuring both discrimination and priming effect for multiple prime strengths. Then, for each participant, the discrimination threshold was extracted from the psychometric function of discrimination performance against prime strength, and the priming transition point (negative to positive) was extracted from the curve of CE against prime strength (see Fig 5). In other words, for each individual, we found the prime strength for which discrimination accuracy was 75%, and the prime strength for which negative priming turned to positive priming (the zero crossing). If prime visibility is related to priming, these two measures are expected to positively correlate.

[Figure 5, panels A–C: A) prime discrimination performance (%) as a function of prime strength, varying either prime duration, prime brightness, mask density or mask brightness; B) masked priming – compatibility effect (ms) as a function of prime strength, with facilitation (PCE) above zero and inhibition (NCE) below; C) schematic scatter, across six example subjects, of prime strength at 75% discrimination against prime strength at the CE crossing point, contrasting a positive correlation with a weak or no correlation.]
Figure 5. Illustration of the data processing chain used for testing the correlation between individuals’ priming effect and how well they see the primes. A) Prime discrimination: derivation of the prime strength at which each subject shows 75% discrimination of the prime stimuli (which corresponds to guessing in 50% of trials). B) Masked priming: derivation of the prime strength for which each subject’s compatibility effect goes from negative to positive. C) Correlation between these two measures.

In seven experiments we found only one hint of a positive correlation (Experiment 1, r = .28, p = .38). In all six other experiments, correlation coefficients ranged from −.21 to −.07 (all ps = NS, Fig 6). Of course, null results are difficult to be sure of. One possibility, especially with relatively small N, is that one or two outlying values can destroy the statistical correlation even though a true correlation might exist. To assess the likelihood of this, we used jackknife estimates, which reassuringly showed that none of the correlation coefficients is likely to be affected by a large bias; on average, the “true” coefficient of correlation was probably misestimated by not more than ±0.09 (Table 1, row 2).
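The two individual-level measures (the 75% discrimination threshold and the CE zero crossing) and the jackknife check can be sketched as follows. This is a hedged Python illustration: simple linear interpolation between measured points stands in for whatever curve fitting was actually used, and the data layout is an assumption.

```python
import numpy as np

def crossing_point(x, y, criterion):
    """x-value at which y first crosses `criterion`, found by linear
    interpolation between adjacent measured points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    for i in range(len(y) - 1):
        lo, hi = y[i], y[i + 1]
        if (lo - criterion) * (hi - criterion) <= 0 and lo != hi:
            t = (criterion - lo) / (hi - lo)
            return x[i] + t * (x[i + 1] - x[i])
    return np.nan  # curve never crosses the criterion

def jackknife_bias_of_r(x, y):
    """Leave-one-out jackknife estimate of the bias of Pearson's r:
    bias ~ (n - 1) * (mean of leave-one-out r's - full-sample r)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r_full = np.corrcoef(x, y)[0, 1]
    loo = [np.corrcoef(np.delete(x, i), np.delete(y, i))[0, 1]
           for i in range(n)]
    return r_full, (n - 1) * (np.mean(loo) - r_full)

# Per participant: threshold  = crossing_point(strengths, accuracy, 75.0)
#                 transition = crossing_point(strengths, ce, 0.0)
# The vectors of thresholds and transitions are then correlated across
# participants, and jackknife_bias_of_r checks that no single
# participant drives the resulting r.
```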
Another possibility is that any given subsample of a population will, by chance, fail to show a correlation even though one exists in the whole population. We used simulation to estimate the chance of this happening in all seven experiments for the N we used in each experiment. Only one experiment out of seven found a numerically positive r-value (and even that was statistically non-significant). We therefore estimated the chance that, despite there being a real between-subject correlation between visibility and priming, we would obtain a positive r-value only once across all seven experiments. To do this we assume that, despite any noise in our variable measures, if we had measured a sufficiently large number of participants we would have revealed a correlation if one exists. For exposition, let us assume it would be r = 0.3. We therefore simulate a large population (100 000 data points) with a correlation of r = 0.3. We then simulate each experiment by randomly selecting the same number of points as we had participants in that experiment, and we calculate the r-value we obtain with this subsample. We repeat this for each of the seven experiments, and then count the number of positive and negative r-values obtained. We repeat this procedure 100 000 times, to obtain the probability that only one (or fewer) of our seven experiments would give an r-value above zero, if the real r-value is, for example, 0.3. We repeated this for ‘real’ r-values from 0 to 1. For ‘real’ r-values of 0.4 and above, the simulation produced zero occurrences of our data pattern in 100 000 iterations. For a real value of r = 0.3 there were 16/100000, for a real value of r = 0.2 there were 174/100000, for r = 0.1 there were 1352/100000, and for r = 0 there were 5914/100000. In other words, the probability of getting our results for a real r > 0.3 is very low, and the probability for a real r = 0.2 is about 30 times (5914/174) lower than if the real r-value is zero (for real r = 0.1 it is about 4-5 times lower than for no correlation). From this we conclude that the likelihood of there being any sizable correlation (r > 0.2) between visibility and priming is very small, given our data.

We also checked whether the results were specific to extracting the discrimination threshold using 75% performance. They were not; using either a 66% or a 70% threshold also produced no hint of a positive correlation for Experiments 2 to 7 (−0.18 < r < 0.09). The r-value for Experiment 1 stayed close to the estimate at 75% performance (.24 and .27, respectively). This is important because participants can differ in the slope as well as the position of their psychometric functions, and thus their rank order can change if we use different criteria for what their conscious threshold is. Note that at 75% discrimination accuracy, participants are ‘seeing’ (or basing their answers on information) 50% of the time, since pure guessing on the remaining trials would produce 25% correct answers to accompany the 25% incorrect answers. At 66% accuracy, they are ‘seeing’ the stimulus 32% of the time (100 − 34 × 2). Thus the range of visibility levels we have used for the analysis spans 32%-50% seen targets, which we believe appropriately reflects the transition from subliminal to supraliminal. Note that the NCE to PCE transitions are also in this region (Figure 3).
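The simulation described above can be sketched in a few lines of Python. This is a minimal illustration, not the original script: the per-experiment sample sizes are inferred from the ANOVA degrees of freedom in Figure 3 (an assumption), sampling with replacement from the large population is used as a negligible approximation, and the iteration count is reduced for speed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Participants per experiment, inferred from the error dfs in Fig. 3
# under a repeated-measures design (an assumption for illustration).
NS = [12, 13, 10, 10, 12, 11, 20]

def prob_at_most_one_positive(true_r, n_iter=10_000, pop_size=100_000):
    """Probability that at most one of the seven sample r-values is
    positive, given a population correlation of `true_r`."""
    cov = [[1.0, true_r], [true_r, 1.0]]
    pop = rng.multivariate_normal([0.0, 0.0], cov, size=pop_size)
    hits = 0
    for _ in range(n_iter):
        positives = 0
        for n in NS:
            # With replacement: negligibly different from without,
            # given the 100 000-point population.
            idx = rng.integers(0, pop_size, size=n)
            if np.corrcoef(pop[idx, 0], pop[idx, 1])[0, 1] > 0:
                positives += 1
        hits += positives <= 1
    return hits / n_iter

# prob_at_most_one_positive(0.3) comes out tiny, whereas
# prob_at_most_one_positive(0.0) is on the order of a few percent,
# mirroring the 16/100000 vs 5914/100000 pattern reported above.
```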
In case the correlation is better reflected by visibility performance near chance levels for the very weakest primes, we also correlated the earliest points on the psychometric functions (where they cross the y-axis) with the CE transition points. Of the seven experiments, we only obtained hints of a positive correlation for Experiments 1 and 7 (.34 and .23, respectively; ps = NS). Finally, we performed a further analysis that does not rely on estimating correlations at all (see section c).

[Figure 6, panels A–G, one row per experiment: Column 1 shows each participant’s prime identification function (% correct identification against the manipulated variable: prime duration, prime brightness, mask brightness or mask density); Column 2 shows each participant’s compatibility effect (ms) against the same variable; Column 3 shows the scatter plot of the zero-CE crossing point against the 75%-identification crossing point, with the corresponding R value for each experiment (panels C and D use data from Sumner et al., 2006).]
Figure 6. (A:G): Individual data for the seven experiments. Column 1: Prime strength at 75% detection thresholds in the prime detection task. Column 2: Prime strength at the zero-CE. Column 3: Scatter plot of detection threshold against priming transition point. The red arrows in row A show examples of how these values are derived (also refer to Figure 5).

                       Expt. 1   Expt. 2   Expt. 3   Expt. 4   Expt. 5   Expt. 6   Expt. 7
Pearson's r             .245     -.1061    -.105     -.0616    -.2167    -.1605    -.1671
Jackknife bias est.    -0.002    -0.003    -0.06     -0.09     -0.01      0.06      0.015
Z-score diff. to E&S    0.26      2.07      2.06      1.97      2.43      2.30      2.35
p-value                 .441      .047      .048      .057      .021      .029      .024
Bootstrap p-value       .104      .037      .031      .042      .0056     .029      .021

Table 1. Rows 1 & 2: Pearson’s coefficients of correlation and their jackknife bias estimates for the seven experiments. Rows 3 & 4: Results of the Fisher’s Z-score difference test comparing correlations obtained in the seven experiments to that calculated for Eimer and Schlaghecken’s (2002) datasets. Row 5: P-values derived from the bootstrapping approach. See sections b and d of the results for details.

c. Better and worse discriminators
As a further analysis to test whether there is any effect of discrimination ability on priming, we plotted average CE curves for two groups of participants in each experiment – those with above-median discrimination scores, and those with below-median scores (we did this based on the individual 66% threshold values).
Although this approach is more blunt than the approaches taken above, it has two advantages: it includes all the participants, whereas above we could not include participants if their CE curve did not cross from negative to positive; and it presents a clear visualization of whether the priming curve depends on prime visibility, without the need for a further, less intuitive, analysis step (see Figure 7). We found no hint of any effect of visibility on the CE curves.

[Figure 7, panels A–G: average compatibility effect (ms) for the “Best” and “Worse” discriminator groups plotted against the manipulated variable in each experiment – prime duration (Experiments 1 and 3), prime brightness (Experiments 2, 4 and 7), mask brightness (Experiment 5) and mask density (Experiment 6).]
Figure 7. (A:G): Average CE curves for participants demonstrating best and worse discrimination in each experiment – those with above-median discrimination scores (red, labeled “Best”) and those with below-median scores (blue, labeled “Worse”). A spline fit connects the data points (we have no theoretical basis for any particular curve fit). If the priming effect became positive only as an individual was better able to discriminate the prime, we would expect the blue curve to be shifted to the right relative to the red curve, because worse discriminators would need more powerful primes (relative to the mask) to produce positive priming. This is clearly not the case.

d. Direct comparison to Eimer & Schlaghecken (2002)
Eimer & Schlaghecken (2002) used the same framework as we have done, but unlike us, they found positive correlations between discrimination threshold and CE crossing point in two experiments, which manipulated prime strength using prime duration and mask density. To make a direct comparison with our data, we pooled together the data from the two experiments by Eimer & Schlaghecken (2002) by Z-scoring them, and obtained a correlation of r = .67 (p < .0007) for this new combined set of data. We also transformed the r-values from our experiments into Z-scores (through the r-to-z transformation method defined in Fisher, 1915): in five out of seven experiments, our correlation differed significantly from that of Eimer and Schlaghecken (see Table 1, 3rd & 4th rows). Further, because there is possible bias in this parametric comparison, introduced by distribution distortion in small samples, we also used a data-driven bootstrapping approach. We derived a distribution of r-values for each experiment (and for the combined data in Eimer and Schlaghecken, 2002) by resampling the data (with replacement) 10 000 times. The overlapping area under any two distributions then gives the p-value for the null hypothesis that the two r-values do not differ. These are given in Table 1, 5th row, for the comparisons of each of our experiments with the combined data of Eimer and Schlaghecken’s two experiments.
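Both comparisons reported in Table 1 (rows 3 and 4, and row 5) can be sketched as follows – an illustrative Python version: the parametric test is the standard Fisher r-to-z formula, and the bootstrap returns the resampled r distribution whose overlap with another such distribution approximates the reported p-values.

```python
import math
import numpy as np

def fisher_z_test(r1, n1, r2, n2):
    """Fisher r-to-z test for two independent correlations;
    returns the Z statistic and its two-tailed p-value."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 1.0 - math.erf(abs(z) / math.sqrt(2.0))  # equals 2 * (1 - Phi(|z|))
    return z, p

def bootstrap_r(x, y, n_boot=10_000, rng=None):
    """Distribution of Pearson r values from resampling the
    (x, y) pairs with replacement."""
    rng = rng or np.random.default_rng(0)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    rs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        rs[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return rs
```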
In Experiments 1-6 we used a method of constant stimuli to provide the psychometric function for estimating prime visibility. Eimer & Schlaghecken (2002) used a staircase procedure. In case this might make a difference to participants’ behaviour, Experiment 7 used a staircase procedure. The results were indistinguishable from Experiments 2-6, and significantly different from Eimer & Schlaghecken (2002) (see Table 1), suggesting that the exact method of assessing visibility is not responsible for our failure to find a correlation.

4. Discussion
Our results lead to both specific conclusions about the disputed relationship between visibility and reversed masked priming, and more general conclusions about how relationships that are clearly apparent across stimulus manipulations can be entirely absent across individual differences. The latter issue is of general interest for the growing use of cognitive and sensorimotor tasks to study individual differences.

Implications for the study of individual differences
Despite the theoretical problems, outlined in the Introduction, of relating individual differences to experimental manipulations (e.g. Borsboom, 2006), in practice there are many examples of cognitive tasks that were developed within the sphere of within-subject designs being widely employed in the study of individual differences. This trend is increasing with the rise of genetics and the search for cognitive endophenotypes of psychiatric disorders, for early markers of dementia/cognitive impairment, and the development of ever more sophisticated brain imaging techniques that are analyzed at the individual level, rather than using group averages (voxel-based morphometry, diffusion tensor imaging, magnetic resonance spectroscopy, dynamic causal modeling, etc.). With respect to these growing fields, our results can be taken as an exemplar cautionary tale: even relationships between simple tasks and basic cognitive constructs that are well worked out in the realm of controlled stimulus manipulations and within-subject designs may not transfer to individual differences. Just as has been statistically pointed out (e.g. Borsboom, 2006; Borsboom et al., 2009), and as illustrated in Figure 1, the inter-individual variance can have an entirely different source from that produced by stimulus manipulations. The problem is likely to arise due to the multiple factors contributing to individual differences, even in simple tasks. For example, various ‘inhibition’ tasks have been employed to study self-control and impulsivity, such as the antisaccade, Stroop or stop-signal tasks. For these tasks, it is generally assumed that individual differences in performance reflect basic differences in inhibition ability, but this simple conclusion is undermined by the fact that performance tends to correlate only very poorly across tasks supposed to measure the same inhibition ability (Barch, Braver, Carter, Poldrack, & Robbins, 2009; Cothran & Larsen, 2008; Cyders & Coskunpinar, 2011; Friedman & Miyake, 2004; Schachar, 2011). Thus, just as with complex traits such as IQ and personality, individual differences in relatively simple behavioural tasks may also be too multiply determined to be easily matched to the cognitive mechanisms revealed by experimental manipulation. This is not to say that there is no point attempting to relate individual differences in cognition to the mechanisms explored through experimental manipulations.
Rather, we argue that the endeavor will be more fruitful if approached with the understanding that the relationship will be complex and tricky to work out, rather than with the assumption that individual differences automatically reflect the same mechanisms studied by within-subject manipulations in the same task. We recognise that this implicit assumption is very hard to avoid, and we have made it ourselves previously. Even Borsboom et al. (2009), in their excellent exposition of the statistical problem of relating inter- and intra-individual variance, appear to conflate between- and within-participant effects at one point (pp. 24-26, where a study of differences in chess playing between expert and non-expert groups is used to support a conclusion about gaining expertise within individuals). That even the best statisticians succumb occasionally appears to confirm the deeply intuitive and appealing nature of such conclusions – which may sometimes be correct. But given their intuitiveness, the utmost vigilance will be required to work out when such assumptions are unfounded. This can be supported by the adoption, where appropriate, of statistical techniques that allow within- and between-participant variance to be analysed simultaneously (e.g. multilevel modeling and nested designs; see Snijders & Bosker, 2012; Kliegl et al., 2011). Furthermore, different sources of inter- and intra-individual variance can be exploited to form more stringent tests of theory. For example, if X is hypothesized to cause Y, then there should be a systematic relationship between X and Y in both intra- and inter-individual variance. If both variances are assumed to come from the same source, this might be seen merely as a replication, which would not encourage researchers to assess both.

The relationship between prime visibility and the NCE
The role of visibility for the ‘inhibitory’ component of masked priming – the NCE – has been disputed for a decade (for reviews, see Eimer & Schlaghecken, 2003; Sumner, 2007). In essence, there have been two questions at stake: firstly, whether there is any systematic relationship between prime strength/visibility and the direction of priming; secondly, whether there is a causal connection between awareness and the occurrence (or not) of motor inhibition. Initial studies found that primes presented below the threshold of conscious visibility were categorically associated with NCEs whereas visible primes were associated with PCEs (Eimer & Schlaghecken, 2002; Klapp & Hinkley, 2002). Other studies implied that the transition from NCE to PCE seemed to occur in a continuous manner: as the prime got more visible, priming got more positive (Klapp, 2005; Schlaghecken & Eimer, 2006; Sumner et al., 2006). However, some studies found – and some authors strongly argued for – no association between prime visibility and the direction of priming (Jaskowski, Bialunska, & Verleger, 2007; Lleras & Enns, 2004; Verleger, Jaskowski, Aydemir, van der Lubbe, & Groen, 2004). There have often been difficulties of interpretation because prime visibility is normally confounded with changes to the stimuli – such as the masks – which are thought to have their own effects on priming (Jaskowski, 2008; Jaskowski et al., 2007; Lleras & Enns, 2004; Verleger et al., 2004) and, conversely, studies aiming to investigate different masks have often been confounded by visibility differences. Schlaghecken et al.
(2008) circumvented this problem by changing prime discrimination through perceptual learning, without changing the stimulus. Improved prime discrimination (‘visibility’) did not correspond to any change in the priming effect, providing the strongest evidence yet against a causal role for visibility. However, it remained possible that discrimination had improved through participants learning to attend better to the small cues available to guide prime discrimination. Such learning might not have transferred to masked-priming blocks because participants now had to ignore the primes and attend to the target.

Our approach was to test whether any relationship consistently held across different types of stimulus manipulation, and across individual ability. We found a clear systematic relationship between visibility (prime strength/mask weakness) and the direction of priming in every dataset (Figs 3, 4, 7). Since this held for four different ways to manipulate visibility (prime duration, prime brightness, mask brightness and mask density), it is unlikely simply to reflect one type of stimulus characteristic. It appears to be a more general product of the relative strengths of prime and mask. Thus, in answer to the first question – is there a systematic relationship? – we conclude that under most types of prime or mask manipulations, there is a strong relationship between prime visibility and priming.

To further test the causal hypothesis, we took advantage of an alternative source of variance in visibility – individual differences. As in the perceptual learning approach of Schlaghecken et al. (2008), this is also not confounded by stimulus changes. Here we found, across all seven datasets, no hint of any positive correlation between priming and an individual’s ability to discriminate the primes (Figs 6 & 7), supporting the conclusion of Schlaghecken et al. (2008) that there is no causal influence. We used converging approaches to ascertain whether this was simply due to lack of power or outliers in the data (jackknifing, simulation, bootstrapping, direct comparisons to the previous study of Eimer and Schlaghecken, 2002, and grouping the CE data by a median split of the discrimination data, Fig 7). We cannot explain why our results differ from those of Eimer and Schlaghecken (though we speculate below), but we can appeal to the weight of evidence (seven datasets here, plus the evidence from Schlaghecken et al., 2008, vs. two experiments in Eimer and Schlaghecken, 2002). Further, it is essential to note that our framework for understanding the data, spelled out in the following sections, does not disallow correlations from occurring – indeed, we have shown they clearly occur for within-subject stimulus manipulations – but when they do occur, they do not reflect a direct causal linkage.

Implications for theories of the NCE
Our results are inconsistent with the original theory of the NCE (Eimer & Schlaghecken, 1998; Klapp & Hinkley, 2002), because it contained a causal role for prime visibility, proposing that automatic motor inhibition occurred as a result of the initial prime-related motor activation failing to reach awareness. The results are consistent with two later theories, which both emphasize the importance of the mask stimulus, but do not envisage a causal connection between prime visibility and the NCE.
The ‘object updating’ (or ‘active mask’, or ‘mask-induced priming’; Lleras & Enns, 2004, 2005; Verleger et al., 2004) account suggested that the NCE was caused not by motor inhibition, but by positive priming in an unexpected direction due to prime-mask interaction. When the mask contains elements of both possible primes, the prime-mask sequence can also be considered as a sequence of both primes presented overlaid, with a brief temporal separation. The ‘prime’ that appears second (i.e. the new elements of the mask) could then reverse any priming associated with the prime that appeared first (i.e. the actual ‘prime’). In this case, either increasing the perceptual strength of the first prime, or decreasing the perceptual strength of the second prime (our mask manipulations), would be expected to increasingly favour positive priming over reversed priming, creating the systematic relationship we found. However, previous studies have shown that the object updating account is very unlikely to explain the NCE with the type of mask stimuli we employ, which are not made up of overlapping prime stimuli (Sumner, 2007). Therefore, we turn to the second mask-related theory.

The mask-triggered inhibition account (Boy et al., 2008; Jaskowski et al., 2007) shared the element of automatic motor inhibition with the original theory of Eimer and Schlaghecken, but, like the object updating/active mask theory (Lleras & Enns, 2004, 2005; Verleger et al., 2004), it also focused on the mask. It proposed that the inhibition must be triggered by a second stimulus that occurs after the prime – in this case the mask. New behaviourally relevant stimulus onsets are proposed to elicit inhibition of motor activity associated with the previous stimulus – an automatic version of the bridge telling the engine room ‘hold that last command, new information received…’. In this case, it is clear that weaker masks might progressively weaken the triggered inhibition, and thus make an NCE less likely. Why stronger primes should also make the NCE less likely requires a bit more discussion, since the NCE has been found to strongly mirror the positive priming effect measured at shorter prime-target intervals (Boy & Sumner, 2010). We might therefore expect that stronger primes would lead to stronger positive priming and also stronger inhibition. That this is not the case implies that the inhibition mechanism may be limited in strength: once the initial positive deflection of motor activation becomes too strong, it cannot be fully reversed. This explanation is consistent with the arguments of Lingnau and Vorberg (2005), who pointed out that the occurrence of a PCE does not mean inhibition is absent – just that inhibition was insufficient to reverse the initial activity. After all, it is plausible that the functional role of such inhibition would not be to reverse the direction of motor balance, but to return it towards baseline. It may be that only within a tight range of parameters in an artificial laboratory situation do we find that the elicited inhibition over-compensates for the initial activation, creating the NCE. Note that, for the sake of clarity, we have chosen to speak simply in terms of the relative strength of inhibition and activation mechanisms; we could also envisage that manipulations of the prime and mask differentially affect their response profiles across time.
Implications for the dissociation of visibility and sensorimotor mechanisms
The lack of correlation across subjects, accompanied by the clear relationship across stimulus manipulation, allows us to go further than simply selecting a theory of the NCE that does not require a causal connection between visibility and the NCE. Even the mask-triggered inhibition account (and the object updating account) would, at first sight, predict that if the relationship is present across stimulus manipulations, it should be present across participants too. A participant who shows greater prime discrimination ability presumably has, in some way, a stronger representation of the prime relative to the mask than a participant who cannot discriminate the prime with the same stimulus settings. In other words, we assume that the direction of priming is caused by the relative strengths of activation and inhibition processes, which are in turn related to the relative strengths of prime and mask signals in the visual system. If we further assume that prime discrimination performance also reflects the relative strength of prime and mask signals in that person’s visual system, then we should find the correlation between visibility and priming across participants as well as across stimulus manipulations (Fig 8-A). That we do not tells us that inter-subject variance in discrimination must arise mainly from a different source than inter-stimulus variance in discrimination.

The simplest solution to this would be that visibility and priming rely on entirely separate processes in different brain regions, which share only some initial early visual stage (Fig 8-B). Such a conception of separate routes for processing different aspects of a stimulus is not uncommon in psychology, and echoes the famous dissociation between the processing of visual information for perception and for action (Milner & Goodale, 1995). If stimulus manipulations affect the shared visual stages, that would cause the relationship we reliably found. If inter-subject variability in visibility and priming arises mainly not in the early visual mechanisms, but separately in the further processes supporting awareness or motor output, then there would be no correlation, just as we found. We speculate that the main locus of individual variation is not fixed, and will depend on study parameters and the idiosyncrasies of participants. If in some studies a substantial portion of inter-individual variance happens to arise from the shared visual stages, a correlation will be found between visibility and priming, just as reported by Eimer and Schlaghecken (2002).

Anatomical separation is not required, however, to explain our results, and neither is a view that some visual processes are ‘for’ perception while others are ‘for’ action. We believe that all vision and perception is, in some sense, for action, but there are different degrees of temporal immediacy between visual and motor mechanisms (Bompas and Sumner, 2008). For example, the temporal distinction between a feedforward sweep and a subsequent phase of recurrent processing (e.g., Lamme & Roelfsema, 2000) could also provide our dissociation between stimulus manipulations and cross-participant correlation. Rapid motor priming and inhibition are presumably triggered by the feedforward sweep, while conscious perception is thought to rely on recurrent processing (Fig 8-B). Indeed, this is the main explanation for how subliminal priming is possible at all.
Individual differences in recurrent processing need not correlate with individual differences in the feedforward sensorimotor sweep, leading to no correlation between visibility and priming. However, if stimuli are manipulated, then both feedforward and recurrent phases are necessarily affected, and hence a systematic relationship between visibility and priming occurs.

[Figure 8, panels A and B: A) non-dissociated processes for visibility and priming – activity levels in several interdependent visual, further, and motor processes influence both visibility and priming, predicting a positive correlation across individuals between the prime strength at 75% discrimination and the prime strength at the CE crossing point; B) independent processes for visibility and priming, predicting a weak or no correlation between the same two measures.]
Figure 8: A) Even without a direct causal influence of visibility on priming, if the ability to identify the prime and the level of positive or reversed priming arise from the strength of representations in the same cascaded processing pathway, a correlation is expected. However, B) if the processes are separate, either anatomically or temporally, no correlation between them is expected.

A final twist in the tale/tail of the relationship
There is a final complication in the relationship between prime strength and the NCE, which recent evidence suggests is in fact consistent with the conclusions of this paper. When masked primes have been presented in the periphery, or when primes at fixation have been degraded, a PCE, not an NCE, has occurred (Schlaghecken & Eimer, 2000, 2002, 2006). In other words, not only do strong primes produce PCEs, so do very weak primes (at least under some circumstances), and NCEs occur only for a band in between. To explain this, Schlaghecken & Eimer (2002) invoked a threshold mechanism by which inhibition is not triggered unless the initial prime-related activation is sufficiently strong (though still sub-motor threshold). Lingnau & Vorberg (2005) put this issue to the test, systematically varying prime eccentricity, prime size and mask-target SOA, and argued that, rather than there being a threshold below which inhibition does not occur, primes that leave a weaker, smaller cortical representation could simply produce weaker inhibition with a slower time course. Interestingly, none of our seven datasets showed any hint of this PCE for the weakest primes. It is possible that none of our primes were weak enough, relative to the masks and targets we used. Additionally, it is likely that there is not a simple weakness metric for primes that differ on various dimensions. Although all our manipulations here behaved in effectively the same way with respect to the peri-threshold NCE-to-PCE transition, it remains possible that the PCE seen previously for very weak primes does not occur for all ways of making a prime weak. Previously, the PCE for weak central primes with around 150 ms mask-target SOA (as we had here) was produced by adding noise to very brief primes (Schlaghecken and Eimer, 2002). None of our manipulations emulated this procedure, and most of our primes were 40 ms long.
When primes were shorter than this, they were high contrast, and none were presented in the context of noise. Just as we find here that prime visibility does not straightforwardly predict the transition from NCE to PCE for stronger primes, a simple ‘perceptual weakness’ metric may not predict whether weak primes produce a PCE or not. Consistent with this, Budnik, Bompas & Sumner (2013) recently reported that even when equated for visibility, peripheral and central primes still produced opposite priming effects. This indicates that there is no simple metric of perceptual strength between fovea and periphery that both predicts priming and is reflected by discrimination performance. Rather, there seems to be a distinction between a prime’s ability to reach conscious awareness (which Budnik et al. (2013) called ‘perceptual strength’) and its ability to elicit motor activation and inhibition (their ‘sensorimotor strength’). Such a distinction is fully consistent with Figure 8-B, and with our finding that individuals’ ability to see primes does not predict their priming effects.

Conclusions
We have found a reliable systematic relationship between prime visibility and the direction of priming when stimulus properties of the prime or mask are manipulated, but we have also shown that this was accompanied by a consistent lack of correlation across participants. We conclude that the relationship across stimulus manipulation occurs due to the relative impacts of prime and mask signals on motor activation and inhibition, consistent with the mask-triggered inhibition account of the NCE. Individual variance in discrimination ability must arise from a different source, probably the recurrent processes that support awareness. In a more general context, the clear coexistence of correlation across stimuli, but not across people – and thus the fact that the respective variances must arise from different sources – has cautionary implications for the interpretation of even relatively simple cognitive tasks in the study of individual differences.

References
Aglioti, S., DeSouza, J., & Goodale, M. A. (1995). Size-contrast illusions deceive the eye but not the hand. Current Biology, 5(6), 679-685.
Barch, D. M., Braver, T. S., Carter, C. S., Poldrack, R. A., & Robbins, T. W. (2009). CNTRICS final task selection: executive control. Schizophrenia Bulletin, 35, 115-135.
Bompas, A., & Sumner, P. (2008). Sensory sluggishness dissociates saccadic, manual, and perceptual responses: an S-cone study. Journal of Vision, 8(8):10, 1-13.
Borsboom, D. (2006). The attack of the psychometricians. Psychometrika, 71(3), 425-440. doi:10.1007/s11336-006-1447-6
Borsboom, D., Kievit, R. A., Cervone, D., & Hood, S. B. (2009). The two disciplines of scientific psychology, or: The disunity of psychology as a working hypothesis. In J. Valsiner, P. C. M. Molenaar, M. C. D. P. Lyra, & N. Chaudhary (Eds.), Dynamic Process Methodology in the Social and Developmental Sciences. Heidelberg: Springer.
Boy, F., Clarke, K., & Sumner, P. (2008). Mask stimulus triggers inhibition in subliminal visuomotor priming. Exp Brain Res, 190(1), 111-116.
Boy, F., Husain, M., & Sumner, P. (2010). Unconscious inhibition separates two forms of cognitive control. Proc Natl Acad Sci U S A, 107(24), 11134-11139.
Boy, F., & Sumner, P. (2010). Tight coupling between positive and reversed priming in the masked prime paradigm. Journal of Experimental Psychology: Human Perception and Performance, 36(4), 892-905.
Budnik, U., Bompas, A.
& Sumner, P. (2013). Perceptual strength is different from sensorimotor strength: evidence from the centre-periphery asymmetry in masked priming. Quarterly Journal of Experimental Psychology, 66(1), 15-22.
Cothran, D. L., & Larsen, R. (2008). Comparison of inhibition in two timed reaction tasks: The color and emotion Stroop tasks. Journal of Psychology, 142, 373-385.
Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671-684.
Cyders, M. A., & Coskunpinar, A. (2011). Measurement of constructs using self-report and behavioral lab tasks: Is there overlap in nomothetic span and construct representation for impulsivity? Clinical Psychology Review, 31, 965-982.
Eimer, M., & Schlaghecken, F. (1998). Effects of masked stimuli on motor activation: behavioral and electrophysiological evidence. J Exp Psychol Hum Percept Perform, 24(6), 1737-1747.
Eimer, M., & Schlaghecken, F. (2002). Links between conscious awareness and response inhibition: evidence from masked priming. Psychonomic Bulletin & Review, 9(3), 514-520.
Eimer, M., & Schlaghecken, F. (2003). Response facilitation and inhibition in subliminal priming. Biological Psychology, 64(1), 7-26.
Fisher, R. A. (1915). Frequency distribution of the values of the correlation coefficient in samples of an indefinitely large population. Biometrika, 10(4), 507-521.
Friedman, N. P., & Miyake, A. (2004). The relations among inhibition and interference cognitive functions: A latent variable analysis. Journal of Experimental Psychology: General, 133, 101-135.
Gottesman, I. I., & Gould, T. D. (2003). The endophenotype concept in psychiatry: etymology and strategic intentions. Am J Psychiatry, 160(4), 636-645.
Hermens, F., Sumner, P., & Walker, R. (2010). Inhibition of masked primes as revealed by saccade curvature. Vision Research, 50(1), 46-56.
Hull, C. L. (1945). The place of innate individual and species differences in a natural-science theory of behavior. Psychological Review, 52, 55-60.
Jaskowski, P. (2008). The negative compatibility effect with nonmasking flankers: A case for mask-triggered inhibition. Consciousness and Cognition, 17(3), 765-777.
Jaskowski, P., Bialunska, A., & Verleger, R. (2007). Mask- and distractor-triggered inhibitory processes in the priming of motor responses: An EEG study. Psychophysiology, 45(1), 70-85.
Klapp, S. T. (2005). Two versions of the negative compatibility effect: comment on Lleras and Enns (2004). J Exp Psychol Gen, 134(3), 431-435; author reply 436-440.
Klapp, S. T., & Hinkley, L. B. (2002). The negative compatibility effect: unconscious inhibition influences reaction time and response selection. J Exp Psychol Gen, 131(2), 255-269.
Kliegl, R., Wei, P., Dambacher, M., Yan, M., & Zhou, X. (2011). Experimental effects and individual differences in linear mixed models: estimating the relationship between spatial, object, and attraction effects in visual attention. Frontiers in Psychology, 1, Article 238. doi:10.3389/fpsyg.2010.00238
Lamme, V. A., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23, 571-579.
Leuthold, H., & Kopp, B. (1998). Mechanisms of priming by masked stimuli: Inferences from event-related potentials. Psychological Science, 9(4), 263-269.
Lingnau, A., & Vorberg, D. (2005). The time course of response inhibition in masked priming. Percept Psychophys, 67(3), 545-557.
Lleras, A., & Enns, J. T. (2004). Negative compatibility or object updating?
A cautionary tale of mask-dependent priming. J Exp Psychol Gen, 133(4), 475-493.
Lleras, A., & Enns, J. T. (2005). Updating a cautionary tale of masked priming: Reply to Klapp (2005). J Exp Psychol Gen, 134(3), 436-440.
Lleras, A., & Enns, J. T. (2006). How much like a target can a mask be? Geometric, spatial, and temporal similarity in priming: a reply to Schlaghecken and Eimer (2006). J Exp Psychol Gen, 135(3), 495-500.
Mattler, U. (2006). On the locus of priming and inverse priming effects. Percept Psychophys, 68(6), 975-991.
Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. Oxford: Oxford University Press.
Neumann, O., & Klotz, W. (1994). Motor responses to non-reportable, masked stimuli: Where is the limit of direct parameter specification? In C. Umiltà & M. Moscovitch (Eds.), Attention and Performance XV: Conscious and nonconscious information processing (pp. 123-150). Cambridge, MA: MIT Press.
Schachar, R. J., Forget-Dubois, N., Dionne, G., Boivin, M., & Robaey, P. (2011). Heritability of response inhibition in children. Journal of the International Neuropsychological Society, 17, 238-247.
Schlaghecken, F., Blagrove, E., & Maylor, E. A. (2008). No difference between conscious and nonconscious visuomotor control: Evidence from perceptual learning in the masked prime task. Consciousness and Cognition, 17, 84-93.
Schlaghecken, F., & Eimer, M. (2000). A central-peripheral asymmetry in masked priming. Perception & Psychophysics, 62(7), 1367-1382.
Schlaghecken, F., & Eimer, M. (2002). Motor activation with and without inhibition: evidence for a threshold mechanism in motor control. Perception & Psychophysics, 64(1), 148-162.
Schlaghecken, F., & Eimer, M. (2004). Masked prime stimuli can bias “free” choices between response alternatives. Psychon Bull Rev, 11(3), 463-468.
Schlaghecken, F., & Eimer, M. (2006). Active masks and active inhibition: A comment on Lleras and Enns (2004) and on Verleger, Jaśkowski, Aydemir, van der Lubbe, and Groen (2004). Journal of Experimental Psychology: General, 135(3), 484-494. doi:10.1037/0096-3445.135.3.484
Snijders, T. A. B., & Bosker, R. J. (2012). Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling (2nd ed.). London: Sage Publishers.
Sumner, P. (2007). Negative and positive masked-priming – implications for motor inhibition. Advances in Cognitive Psychology, 3(1-2), 317-326.
Sumner, P. (2008). Mask-induced priming and the negative compatibility effect. Experimental Psychology, 55(2), 133-141.
Sumner, P., Tsai, P. C., Yu, K., & Nachev, P. (2006). Attentional modulation of sensorimotor processes in the absence of perceptual awareness. Proc Natl Acad Sci USA, 103(27), 10520-10525.
Verleger, R., Jaskowski, P., Aydemir, A., van der Lubbe, R. H., & Groen, M. (2004). Qualitative differences between conscious and nonconscious processing? On inverse priming induced by masked arrows. J Exp Psychol Gen, 133(4), 494-515.
ai_researcher
2
What_Really_is_Commonsense_Knowledge.pdf
arXiv:2411.03964v1 [cs.CL] 6 Nov 2024

What Really is Commonsense Knowledge?
Quyet V. Do, Junze Li, Tung-Duong Vuong, Zhaowei Wang, Yangqiu Song, Xiaojuan Ma
Department of Computer Science and Engineering, HKUST, Hong Kong SAR, China
{vqdo, junze.li, tdvuong, zwanggy}@connect.ust.hk, {yqsong, mxj}@cse.ust.hk

Abstract
Commonsense datasets have been well developed in Natural Language Processing, mainly through crowdsourced human annotation. However, there are debates on the genuineness of commonsense reasoning benchmarks. Specifically, a significant portion of instances in some commonsense benchmarks do not concern commonsense knowledge. That problem would undermine the measurement of the true commonsense reasoning ability of evaluated models. Davis (2024) suggested that the problem originated from a blurry concept of commonsense knowledge, as distinguished from other types of knowledge. To demystify all of the above claims, in this study, we survey existing definitions of commonsense knowledge, ground them into the three frameworks for defining concepts (Murphy, 2004), and consolidate them into a multi-framework unified definition of commonsense knowledge (the so-called consolidated definition). We then use the consolidated definition for annotations and experiments on the CommonsenseQA and CommonsenseQA 2.0 datasets to examine the above claims. Our study shows that there exists a large portion of non-commonsense-knowledge instances in the two datasets, and a large performance gap on these two subsets, where Large Language Models (LLMs) perform worse on commonsense-knowledge instances.

1 Introduction
Commonsense datasets have been well developed in Natural Language Processing since the last decade. As commonsense data is known to be implicit, almost all commonsense datasets are constructed through crowdsourced human annotation instead of relying on automated dataset construction processes. These commonsense datasets serve as valuable resources to augment AI models in various aspects, such as text generation (Zhou et al., 2021; Ilievski et al., 2021b), visual reasoning (Zellers et al., 2019a), or building more capable knowledge models for further downstream applications (Yu et al., 2022; Hwang et al., 2021; Wang et al., 2023a), as well as benchmarks to evaluate the reasoning capability of AI models (Talmor et al., 2019; Zhang et al., 2020; Bhagavatula et al., 2020; Talmor et al., 2022; Fang et al., 2023a).

However, there are debates on the quality of commonsense datasets, especially when they serve as evaluation benchmarks. Davis (2024) argued that many prevalent commonsense datasets are flawed in the sense that they contain a significant portion of instances which do not concern commonsense knowledge but other types of knowledge, namely common, encyclopedic, and expert knowledge (collectively called referenced knowledge in this work). For example, in the CommonsenseQA 2.0 dataset (Talmor et al., 2022), which consists of Yes/No questions (or assertions), the instance “A male seahorse cannot give birth” (Answer: no) presents common biology knowledge; meanwhile, “Electrons are smaller than mesons” (Answer: no) is certainly an encyclopedic fact. As it has been widely discussed that language models excel in memory or retrieval tasks while still struggling in reasoning tasks (Bang et al., 2023; Goldberg, 2023; Huang and Chang, 2023), the flaw in commonsense datasets would undermine the measurement of the true commonsense reasoning ability of evaluated models.
Davis (2024) suggested that the problem originated from a blurry concept of commonsense knowledge, as distinguished from other types of knowledge. Due to that blurry concept, both annotators and researchers working on commonsense may not be aware of the genuineness problem of commonsense datasets. Indeed, according to our literature review, all works on commonsense have their own ways of describing the concept. However, the description in each is not adequate or comprehensive enough to educate outsiders, or even insiders, of this research field about the concept and its difference from other relevant concepts, such as referenced knowledge. Also, through the lens of concept definition theory (Murphy, 2004), we posit that each existing work on commonsense provides only a limited number of features, so the concept of commonsense is not systematically and comprehensively depicted. Regarding these problems, Murphy (2004) suggested combining a general description, examples, and a list of features of the concept to form a complete definition.

Motivated by that research gap, in this work, we consolidate the definition of commonsense as follows. Firstly, leveraging the descriptions of commonsense and referenced knowledge from previous works, we attempt to distinguish commonsense knowledge from referenced knowledge. We provide a table of representative cases for each concept to show (or, more subjectively, assume) the fundamental and subtle difference between these two concepts. Based on the descriptions and examples, we systematically propose a list of multi-aspect binary-value features that characterize commonsense, referenced knowledge, and their difference. We then validate the significance of the features through an empirical study on the CommonsenseQA (Talmor et al., 2019) and CommonsenseQA 2.0 (Talmor et al., 2022) datasets. Overall, we observe that 1) whether we can obtain the knowledge through our own experience/observation and 2) whether the knowledge is only mutual belief are the most significant features for distinguishing commonsense from referenced knowledge.

Given the consolidated definition, we analyze the portion of instances of commonsense and referenced knowledge in the development sets of the CommonsenseQA and CommonsenseQA 2.0 datasets, as well as the performance of Large Language Models (LLMs) such as Gemini-Pro, ChatGPT, LLaMa2-7B, and Mixtral-8x7B on the commonsense-knowledge subsets (which consist of instances requiring commonsense knowledge to answer) and the referenced-knowledge subsets from these datasets. Aligned with the claims that motivate this work, we observe a large portion of referenced knowledge in the two datasets (0.27 ± 0.09 for CommonsenseQA and 0.56 ± 0.1 for CommonsenseQA 2.0; 95% confidence intervals), and a large performance gap (4 to 7 points of accuracy) between these two subsets, where LLMs perform worse on commonsense-knowledge instances, suggesting that commonsense reasoning tasks, or reasoning tasks in general, are more challenging than memory-retrieval tasks.

The organization of this paper is as follows. In section 2, we discuss related works (especially background knowledge on the frameworks for defining a concept in Murphy (2004)). In section 3, we show a comprehensive survey of the definitions of commonsense knowledge by grounding relevant previous works into the three aforementioned definition frameworks, and then provide a table of representative cases. After that, we describe the list of features and the validation procedure.
Finally, in Section 4, we apply the consolidated definition to demystify the relevant claims which motivate this work.

2 Background and Related Works

Commonsense. Research on commonsense started in the last century and early 2000s, with foundational works such as Davis (1990); Lenat (1995); McCarthy (2002); Liu and Singh (2004). They laid the first building blocks for the definition, meaning, and practical application of commonsense knowledge in the field of language processing. Recent years have witnessed a drastic development of research on commonsense knowledge and reasoning, including resources and benchmarks, with a wide range of formats (free text, knowledge graphs, knowledge bases, etc.), topics (concept taxonomy, geographical tradition, daily inference, etc.), evaluation tasks (abductive reasoning, question answering, or generation), as well as semantic dimensions (physical, linguistic, textual worlds, etc.) (Ilievski et al., 2021a). Despite the number of works on commonsense, in Section 3 we show that the definition of commonsense in existing works is not comprehensive, which possibly leads to the quality problem of commonsense datasets.

Definition Frameworks. As introduced in The Big Book of Concepts (Murphy, 2004), there are three major frameworks for how we understand and organize the world around us through the notion of concepts. These frameworks motivate four main Views of concepts: the Classical View, Prototype View, Exemplar View, and Knowledge View. In layman's terms, a View of a concept is a theoretical way to define the concept.

The ideal framework (Barsalou, 1985), motivating the Knowledge View (Murphy and Medin, 1985), views each concept as a part of our knowledge and understanding of the world, in which we do not learn concepts in isolation. It describes how each concept fits into other parts of our lives, e.g., what its meaning is, how to use it, why people created it, etc. For example, through the ideal framework, weapon is defined as a thing "designed or used for inflicting bodily harm or physical damage".

The feature framework, which underlies the Classical and Prototype Views, represents a concept by a list of its most typical features. In item categorization, one may examine the similarity of the item to the feature list; then, for every feature the item has, the item gets "credit" for the feature's weight, accumulating into the typicality of the item w.r.t. the concept. Features are in fact not necessarily disjoint in terms of semantics. For example, weapon can be characterized by three features: "can do harm", "made of metal", and "sharp". In a more complex setting (e.g., schemata), the value of each feature can be nominal or continuous. However, in this work, we only consider a simple setting with binary-value features.

In contrast, the exemplar framework, as the core of the Exemplar View, represents a concept through a collection of specific instances or exemplars that exemplify the concept. This framework emphasizes the role of individual examples in categorization and facilitates greater flexibility in category boundaries. Overall, each framework concerns one level of abstraction of a concept, and one framework alone is likely not adequate to represent the concept.
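As a concrete illustration of the feature framework's "credit" accumulation described above, the minimal sketch below scores an item against a weighted feature list. The weights and the feature sets of the two items are illustrative assumptions, not values from Murphy (2004).

# Illustrative typicality scoring under the feature framework.
WEAPON_FEATURES = {"can do harm": 0.6, "made of metal": 0.25, "sharp": 0.15}

def typicality(item_features, concept_features):
    # Accumulate the weight ("credit") of every concept feature the item has.
    return sum(w for f, w in concept_features.items() if f in item_features)

print(typicality({"can do harm", "sharp"}, WEAPON_FEATURES))  # knife -> 0.75
print(typicality({"made of metal"}, WEAPON_FEATURES))         # spoon -> 0.25

A higher score marks the item as a more typical instance of the concept, which is the sense in which the Prototype View grades category membership.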
Analysis with Categorization. By nature, human knowledge is expanded through the separation of concepts for deeper study. Many developments in different fields, e.g., AI (Huang and Chang, 2023; Sun et al., 2023; Wang et al., 2023b), cognitive science (Genon et al., 2018), medicine (Stein et al., 2013), etc., follow this pattern to offer a deeper understanding of existing problems and better solutions. Recently, in the field of AI, more and more effort has been put into this type of analytical work. For example, to study the design bias of datasets and models in terms of their creators' identity and background, Santy et al. (2023) introduced a framework to quantify the alignment of human subjects of different demographic categories with datasets' labels and model predictions. Likewise, Fang et al. (2023b) systematically define distinct kinds of bias in event temporal reasoning, then study knowledge conflicts arising from mismatches between the actual temporal relations of events and the prior biases learned by the model. Similarly, in this work, we define commonsense as distinguished from referenced knowledge to better understand how LLMs perform on instances of each type of knowledge.

Terminology. To avoid ambiguity, we explain some terminology used in this work. The word "commonsense" refers to commonsense knowledge (e.g., Mountains are high), and "non-commonsense" refers to knowledge of another type (e.g., Washington D.C. is the capital of the US) rather than incorrect commonsense knowledge (e.g., PersonX likely goes to sleep when he is hungry). Furthermore, the "knowledge type of an instance" refers to the type of knowledge used to do the corresponding task with the instance.

3 Commonsense vs. Referenced Knowledge

3.1 "Commonsense" in Previous Works

A summary of our literature review of previous works on commonsense in terms of the three definition frameworks (ideal, feature, and exemplar) is shown in Table 1. In the table, we show representative works from three groups: 1) decade-old foundational or theoretical works, 2) works from the Allen Institute for Artificial Intelligence (AI2), which has made commonsense reasoning a major focus of many of its research projects, as sponsored by DARPA (Davis, 2024), and 3) works from a variety of research groups.

In general, except for the decade-old foundational or theoretical works, all other works that we surveyed (Sap et al., 2019a; Talmor et al., 2019; Sap et al., 2019b; Zellers et al., 2018, 2019b; Hwang et al., 2021; Lourie et al., 2021; Forbes et al., 2020; Onoe et al., 2021; Zhang et al., 2023; Richardson and Heck, 2023; Madaan et al., 2022; Sun et al., 2022; Maharana and Bansal, 2022; Lu et al., 2023; Yin et al., 2022; Porada et al., 2022; Zhou et al., 2022; Qasemi et al., 2022; Zhou et al., 2020) do not provide any defining feature of the concept "commonsense". They rather provide a minimal description (varying from one paragraph to one sentence) as a domain-specific ideal of commonsense. That is a common phenomenon in reporting, as humans tend not to fully re-express a term defined previously. Indeed, the aforementioned works rely on previous works for reference about the concept. Interestingly, we observed that almost all works in groups 2 and 3 refer the concept "commonsense" back to Liu and Singh (2004), and a majority of works in group 3 refer it back to or cite works in group 2.
Table 1 (Prev. Work | Topic | Ideal | Feature | Exemplar):

Lenat (1995) (CYC)
  Topic: General commonsense resource.
  Ideal: "... Stating them [commonsense assertions] to another person, aloud or in print, would likely be confusing or insulting ..."
  Feature: "Such assertions are unlikely to be published in textbooks, dictionaries, magazines, or encyclopedias, even those designed for children."
  Exemplar: "The sun is very bright"; ".. People cannot walk from one [Central Park] to the other [the Golden Gate Bridge] in fifteen minutes .."

McCarthy (2002)
  Topic: General description of common sense ability.
  Ideal: ".. Common sense involves certain abilities to decide what to do to achieve goals .. We may call the human ability to take facts into account common sense reasoning ability .."
  Feature: ".. The facts that describe the consequences of events the actor doesn't control, are the most important common sense knowledge .."
  Exemplar: "If you see a six-foot tall person holding a two-foot tall person in his arms, and you are told that they are father and son, you do not have to ask which is which"

Davis and Marcus (2015)
  Topic: Theory.
  Ideal: "Commonsense reasoning almost always involves plausible reasoning; that is, coming to conclusions that are reasonable given what is known, but not guaranteed to be correct"
  Feature: "Piecemeal commonsense knowledge (e.g. specific facts) is relatively easy to acquire, but often of little use, because of the long-tail phenomenon discussed above"
  Exemplar: "London is in the south of England."; "When objects collide they usually make a noise"

Davis (2024)
  Topic: Survey on commonsense resources and benchmarks.
  Ideal: "Common sense supports reasoning"; "Commonsense knowledge can be distinguished from referenced knowledge, encyclopedic knowledge and expert knowledge"
  Feature: "Common sense is largely sensible"; "Commonsense knowledge is not book learning, explicitly taught in schools."
  Exemplar: "You have to be awake to eat"; "You can usually see people's noses, but not their hearts"; etc.

Liu and Singh (2004) (ConceptNet)
  Topic: General commonsense resource.
  Ideal: "To the AI community it [commonsense] is used in a technical sense to refer to the millions of basic facts and understandings possessed by most people"
  Feature: ".. Such [commonsense] knowledge is typically omitted from social communications, such as text."
  Exemplar: "A lemon is sour"; "To open a door, you must usually first turn the doorknob"

Sap et al. (2019a) (ATOMIC)
  Topic: Daily inferential knowledge.
  Ideal: "[Commonsense reasoning is] about what might have happened just before, what might happen next as a result, and how different events are chained through causes and effects"
  Feature: n/a
  Exemplar: "X repels Y's attack because X wanted to protect herself"; "PersonX makes PersonY's coffee thus PersonX [want to] adds cream and sugar"

Talmor et al. (2019) (CommonsenseQA)
  Topic: Commonsense resources and benchmarks.
  Ideal: "When humans answer questions, they capitalize on their common sense and background knowledge about spatial relations, causes and effects, scientific facts and social conventions."
  Feature: n/a
  Exemplar: "When Simon heard the lawn mower, he was probably outdoors and situated at street level"

Sap et al. (2019b) (SocialIQA)
  Topic: Social and emotional intelligence.
  Ideal: "Social and emotional intelligence [as commonsense knowledge] enables humans to reason about the mental states of others and their likely actions"
  Feature: n/a
  Exemplar: "Alex spilled the food she just prepared all over the floor and it made a huge mess. Alex will want to mop up."

Onoe et al. (2021) (CREAK)
  Topic: Entity understanding and commonsense inference.
  Ideal: "These concepts [commonsense about everyday scenarios (physical, social, etc.) and factual knowledge about entities] overlap in a set of inferences involving entities that we call entity commonsense."
  Feature: n/a
  Exemplar: "If you're good at a skill you can teach others how to do it"

Zhang et al. (2023) (CIKQA)
  Topic: Commonsense benchmarks.
  Ideal: "Understanding human language requires both the language knowledge (e.g., grammar and semantics) and world knowledge, which can be further divided into factual and commonsense knowledge (Katz and Fodor, 1963)."
  Feature: n/a
  Exemplar: "I drank from the water fountain. The cause of this was I was thirsty."; "The fish ate the worm. It was hungry. That means the fish was hungry."
Table 1: Literature review of previous works on commonsense. We directly quote descriptions about commonsense for credibility. Text in [] completes the quotes. N/a means "does not exist, or is copied/adapted from previous work". For works concerning QA tasks, we convert instances from the QA format to free-text format.

In terms of the five mentioned decade-old foundational or theoretical works, each paper did define commonsense using all definition frameworks, yet each posed at most one or two features of commonsense knowledge, while the rest of the description of commonsense falls into the ideal or exemplar definition frameworks. This supports our claim about the lack of a systematic and comprehensive list of defining features of commonsense knowledge. Furthermore, while the ideals about commonsense in these works (except Davis (2024)) represent commonsense as indistinguishable from basic facts, even including referenced knowledge (Halpern and Moses, 1990) such as law, regulation, convention, natural science, etc., the feature and exemplar views of these works (except Liu and Singh (2004) and McCarthy (2002), respectively) seem not to represent such referenced knowledge. That means Liu and Singh (2004) and McCarthy (2002) treat commonsense as general basic knowledge which is indistinguishable from referenced knowledge, while the three other works (Lenat, 1995; Davis and Marcus, 2015; Davis, 2024) to some extent treat commonsense as different from referenced knowledge. Given that almost all other surveyed works follow the definition of commonsense in Liu and Singh (2004), we conclude that almost all previous works on commonsense do not consider the difference between commonsense and referenced knowledge.

At this point, a researcher has two directions regarding this problem. One is to continue treating commonsense and referenced knowledge the same, as "the boundaries between these categories are extremely vague and it would be wasted effort to try to make them precise ... [then] we are just studying the use of world knowledge generally, or reasoning generally" (Davis, 2024). The other is to consider the subtle difference between commonsense and referenced knowledge and close this theory gap by de-blurring the boundary between them. In this work, we incline toward the latter. Appealing to reporting bias, we argue that referenced knowledge is inherently reported more frequently (even long-tail referenced knowledge will be reported) and more consistently (e.g., laws must be reported with integrity) than commonsense knowledge.

3.2 Descriptions and Examples

We give a summary of assertions that concern either commonsense or referenced knowledge in Table 2 (with references to previous works), as the mutual ground between us and readers on which to base our proposal of a list of binary-value features that distinguishes commonsense from referenced knowledge.
In terms of the definition frameworks of concepts, this summary lies between and bridges the exemplar and feature frameworks. It not only grounds the definition of the concepts at the instance level, but also summarizes instances into representatives of the two concepts, which provide cues for the features by which we can discriminate the two concepts.

Table 2 (Aspect | Commonsense | Referenced):

Linguistics
  Commonsense: N/A.
  Referenced: Definition or linguistic meaning of a word. Reference: Davis (2024). Example: "If a person is trying to keep something in their hand, they should hold it". (Elaboration: "hold" means "keep something in hand", hence this is the definition of the word.)

Characteristics of Objects / Entities
  Commonsense: Non-defining characteristics of general objects. Reference: McCarthy (2002). Example: "Mountains are high". (Elaboration: It is not necessarily the case that a general mountain is high, but it is sensibly true; yet the highness of a mountain is dependent on the observer.)
  Referenced: Characteristics of a specific entity. Reference: Halpern and Moses (1990). Example: "The Fuji Mountain is 3,776 metres high". (Elaboration: The Fuji Mountain is a specific entity, thus the statement is verifiable.) Also, defining characteristics of general objects. Reference: derived from the word "referenced". Example: "A mountain is a large natural elevation of the earth's surface rising abruptly from the surrounding level". (Elaboration: This is the definition of "mountain" from Google Translate.)

Facts, Actions, States, and Events
  Commonsense: Implicit mutual belief (which is rarely conventionalized). Reference: Liu and Singh (2004); Davis (2024). Example: "At a funeral, a person would be sad". (Elaboration: It is social etiquette to empathize with the loss of the family, but there is no written-down convention of this etiquette.) Also, derivatives of facts which are not written down and not always true. Reference: Davis and Marcus (2015). Example: "The weather is warm because it is sunny". (Elaboration: It is likely that sunlight increases the temperature, yet there are many meteorological factors that affect temperature.)
  Referenced: Specific law, regulation, convention, or written-down or by-the-book knowledge. Reference: Lenat (1995). Example: "Pistol is not prohibited in California". (Elaboration: This statement expresses a regulation in the State of California.) Also, daily, encyclopedic, or scientific facts. Reference: Davis (2024). Example: "The sun provides energy in 2 forms: heat and light". (Elaboration: This statement expresses a scientific fact, which is proven.)

Table 2: Summary of assertions that concern either commonsense or referenced knowledge. Each collection of instances is referred to previous works (Reference) and illustrated by an assertion (Example) and its elaboration (Elaboration).

3.3 Features of Two Knowledge Types

To discriminate the two concepts, we dive deep into multiple aspects of knowledge, such as:
• Acquisition: where the knowledge is acquired,
• Content + Representation: which topics and objects the knowledge regards,
• Scope + Context + Evaluation: to what extent people agree with or accept the knowledge (here, we do not consider "knowing", as it is subjective).

For each of these aspects, we inherit viewpoints from previous works and propose extra viewpoints as binary-value features to judge whether an instance concerns commonsense or referenced knowledge. Depending on the feature, having the feature (i.e., having value 1) makes the instance inclined toward commonsense or toward referenced knowledge (i.e., the knowledge type tendency of the feature). These features are summarized in Table 3.

It can be observed that the features are not mutually disjoint in terms of semantics (even features within the same aspect). Also, this list may not be complete. However, as a first attempt toward a comprehensive definition of commonsense, we only work on the aforementioned aspects and features. The features are referenced or self-observed but all deemed subjective; thus, the statistical significance of these features in determining which knowledge is commonsense is unclear. Therefore, in the next subsection, we conduct a study on the CommonsenseQA and CommonsenseQA 2.0 datasets for examination. We explain our choice of datasets in Appendix B.
Table 3 (Aspect | Feature | Notation | Tendency | Reference):

Acquisition
  a_regular: Knowledge can be obtained through regular public channels (education, research, official mass media). Tendency: R. Reference: Lenat (1995).
  a_self: Knowledge is often obtained through self-directed personal experience (before applying the knowledge or learning it through public channels). Tendency: CS. Reference: complement of a_regular.

Content + Representation
  cr_social: Knowledge regards human daily interaction and activities, usage of human-created objects, etc. Tendency: CS. Reference: (Sap et al., 2019a) and complement of cr_stem.
  cr_stem: Knowledge regards STEM theory, nature, history, occasions, etc. Tendency: R. Reference: Davis (2024).
  cr_spec: Knowledge regards a specific (named, identifiable) place, entity, object, method, number, etc. Tendency: R. Reference: Halpern and Moses (1990).

Scope + Context + Evaluation
  sce_prov: Knowledge is logically/empirically/clinically certified or proven. Tendency: R. Reference: derived from the word "referenced".
  sce_conv: Knowledge is from law, convention, definition, scripts, or linguistics. Tendency: R. Reference: Lenat (1995); Davis (2024).
  sce_heur: Knowledge is of mutual subjective belief and observation. Tendency: CS. Reference: Liu and Singh (2004); Davis (2024).

Table 3: The list of features we consider to distinguish commonsense from referenced knowledge. "Tendency" stands for the tendency of knowledge type; CS and R denote commonsense and referenced knowledge, respectively.
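To make the feature list concrete, the sketch below encodes two assertions from Table 2 as binary feature vectors under Table 3's notation. The specific 0/1 assignments are our own illustrative reading, not gold annotations from the study.

FEATURES = ["a_regular", "a_self", "cr_social", "cr_stem",
            "cr_spec", "sce_prov", "sce_conv", "sce_heur"]

# "Mountains are high": self-acquired, heuristic mutual belief -> commonsense.
mountains_are_high = dict(zip(FEATURES, [0, 1, 0, 0, 0, 0, 0, 1]))

# "The Fuji Mountain is 3,776 metres high": regular channels, STEM-related,
# specific entity, proven -> referenced.
fuji_is_3776_m = dict(zip(FEATURES, [1, 0, 0, 1, 1, 1, 0, 0]))

print([f for f, v in mountains_are_high.items() if v])  # fired features
print([f for f, v in fuji_is_3776_m.items() if v])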
3.4 Significance of Selected Features

Expert Annotation. We recruit three expert annotators who are postgraduate research students with at least one year of research experience on the topic of commonsense. We randomly sample 100 instances from the development sets of CommonsenseQA and CommonsenseQA 2.0, then ask the three annotators to: 1) first, annotate the knowledge type of each instance (either commonsense or referenced), and 2) then, after a week (according to the forgetting curve, only 10% of new things remain in humans' memory after a week), annotate the knowledge type tendency of each instance with respect to each feature. The aggregated values of the knowledge type and features are the majority among the three annotations.

Quality Control. In terms of quality control, we provide a full set of instructions, including the motivation of this work and a summary of the existing definition frameworks of commonsense as distinguished from referenced knowledge (Tables 1, 2), with an extended set of examples. We carry out a 3-hour training and Q&A session to familiarize the annotators with the tasks. We let the annotators annotate the knowledge type before the binary values of the features in order to mitigate any confounding factors other than the true relationship between them: based on heuristics, we argue that if annotators worked on the features before the knowledge type, they would use the information from the features to label the knowledge type, which would incur a trivial dependency. Overall, for both datasets, the average Cohen's Kappa scores w.r.t. the knowledge type and each feature are all greater than 0.4, which indicates at least moderate agreement. Among these data fields, the knowledge type and the cr_spec feature of CommonsenseQA have the highest kappa, 0.703 and 0.726, respectively.

Feature Significance Measurement. Through regression models and decision tree models, we measure the statistical significance of the features in determining which knowledge is commonsense. We set the knowledge type as the target, while the features are attributes. In terms of regression models, we use linear models instead of logistic models, because 1) we can convert logistic models back to linear models through a logarithm of the target, and 2) linear models offer more numerically stable values of estimators and statistics. We apply the Backward Elimination Procedure (BEP) with the AIC metric to select the best model, whose features we assume to be the most significant in determining the knowledge type. In terms of the decision tree model, we fit the model (with a train/test ratio of 8:2) in two settings: one includes all features, the other includes the top 5 significant features (the features remaining in the 5-feature model of the BEP).

The obtained results show that, for CommonsenseQA, a_self, sce_convention, and sce_heuristic are the most significant features (with p-values of at most 0.1), while the decision tree achieves 95% prediction accuracy with a depth of 3, and the top 2 splits are based on a_self and sce_heuristic. Likewise, for CommonsenseQA 2.0, a_regular, a_self, and sce_heuristic are the most significant features (with p-values of 0, 0.117, and 0, respectively), and the decision tree achieves 85% accuracy, with the top 2 split nodes based on a_regular and sce_heuristic. We notice a difference in the significance of each feature in determining the knowledge type w.r.t. different datasets, or data distributions in general, which is understandable. However, features such as a_self and sce_heuristic are commonly significant, suggesting that we can use these features to quickly generalize to other data distributions and judge the knowledge type of new instances. In fact, a value of 1 for the features a_self and sce_heuristic indicates the implicitness of a knowledge instance, as opposed to the explicitness of referenced knowledge. We leave details about the regression models and decision tree models to Appendix B.

4 Analysis of Commonsense Datasets

4.1 Fraction of Non-commonsense Instances in Commonsense Datasets

Using the annotations of the knowledge type of 100 instances in subsection 3.4, we estimate the 95% confidence interval of the proportion p of non-commonsense instances in the CommonsenseQA and CommonsenseQA 2.0 datasets. As the label is binary, we treat the knowledge type of an instance as a random variable with a binomial distribution, where the probability of the referenced knowledge type (i.e., value 1) is p. (In fact, the knowledge type can take the form of typicality with a continuous value, and it should follow a Gaussian distribution; however, it is difficult to quantify typicality as a continuous value.) By our computation (using an online statistics calculator), the proportions p of non-commonsense instances in the CommonsenseQA and CommonsenseQA 2.0 datasets are 0.27 ± 0.09 and 0.56 ± 0.1, which suggests a genuineness problem in the two datasets.
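The intervals above can be reproduced with a normal-approximation (Wald) interval for a binomial proportion, as in the minimal sketch below; the paper reports only that an online statistics calculator was used, so the exact method is an assumption.

import math

def wald_interval(successes, n, z=1.96):
    # 95% confidence interval for a binomial proportion p = successes / n.
    p = successes / n
    return p, z * math.sqrt(p * (1 - p) / n)

p, m = wald_interval(27, 100)  # referenced-knowledge instances, CommonsenseQA
print(f"{p:.2f} +/- {m:.2f}")  # 0.27 +/- 0.09
p, m = wald_interval(56, 100)  # referenced-knowledge instances, CommonsenseQA 2.0
print(f"{p:.2f} +/- {m:.2f}")  # 0.56 +/- 0.10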
We also consider other well-known commonsense datasets: WSC (Levesque et al., 2012) (one of the first commonsense benchmarks), HellaSwag (Zellers et al., 2019b) (included in HELM Classic (Liang et al., 2023)), and aNLI (Bhagavatula et al., 2020) (abductive reasoning). By the nature of the data in WSC, every instance expresses a specific situation with coreference resolution as the task, which requires implicit (linguistic) knowledge without any proven evidence. Thus, these instances are deemed to be "derivatives of facts which are not written down and not always true" (Table 2), which are assumed to be commonsense. That means the portion of non-commonsense-knowledge instances in WSC is insignificant. Likewise, aNLI concerns daily situations; thus, the dataset is also likely genuine. We proof-checked this heuristic by observing 50 random samples from each dataset, and the results were as expected. Meanwhile, HellaSwag is constructed from two datasets, ActivityNet (Krishna et al., 2017) and WikiHow (Koupaee and Wang, 2018). While instances from ActivityNet are as situational as those in aNLI, instances from WikiHow are not always commonsense but expert or specialized long-tail engineering knowledge. We evaluated 50 random samples with sourceid WikiHow, and observed that approximately half of them are non-commonsense. Overall, according to our consolidated definition of commonsense, there are so-called commonsense datasets with a large portion of non-commonsense instances, while other commonsense datasets are genuinely commonsense. Nonetheless, our propositions are limited to the knowledge type, not the tasks of the benchmarks. That means our theory has not rejected the possibility that these benchmarks nevertheless concern commonsense reasoning.

4.2 LLMs' Performance on Commonsense- and Referenced-Knowledge Instances

For both CommonsenseQA and CommonsenseQA 2.0, we aim to compare the accuracy of LLMs on two subsets: one consisting of commonsense-knowledge instances, the other consisting of referenced-knowledge instances. Following prior analysis work (Santy et al., 2023), we scale the annotation of knowledge type to 300 instances for CommonsenseQA and CommonsenseQA 2.0.

Model:               Gemini   ChatGPT   LLaMa2   Mixtral
CommonsenseQA
  Commonsense         75.53    75.10     61.37    72.10
  Referenced          80.59    76.11     68.65    79.10
CommonsenseQA 2.0
  Commonsense         70.90    66.36     43.63    60.90
  Referenced          74.73    64.21     47.36    67.89

Table 4: LLMs' performance with the Accuracy metric.

We employ four LLMs, namely Gemini-Pro, ChatGPT (Jun 2023 version), LLaMa2-7B-chat, and Mixtral-8x7B, as they were available, stable, and four of the most capable models at the time we conducted our experiments. We set temperature T = 0 and use zero-shot prompts for all generation. The prompt instruction for each task is adapted from HELM.
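A minimal sketch of this evaluation loop for the ChatGPT endpoint is shown below. The prompt wording and the answer-parsing rule are assumptions; the paper states only that the instructions are adapted from HELM, that T = 0, and that prompting is zero-shot.

from openai import OpenAI

client = OpenAI()
dev_subset = [  # illustrative stand-ins for annotated development-set instances
    {"question": "A male seahorse cannot give birth", "answer": "no"},
    {"question": "Electrons are smaller than mesons", "answer": "no"},
]

def answer_yes_no(assertion):
    # Zero-shot CommonsenseQA 2.0-style query at temperature 0.
    prompt = f"Answer with yes or no only.\nStatement: {assertion}\nAnswer:"
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

correct = sum(answer_yes_no(x["question"]) == x["answer"] for x in dev_subset)
print(correct / len(dev_subset))  # accuracy on one knowledge-type subset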
The results are shown in Table 4. We notice a significant performance gap (varying from 4 to 7 points of accuracy) between the performance of the LLMs, except ChatGPT, on the two subsets of both datasets. This suggests that tasks involving commonsense knowledge (or, in a possibly overclaimed term, commonsense reasoning tasks) are more challenging than tasks involving referenced knowledge, which concern memory retrieval. Regarding ChatGPT, we argue that because OpenAI collects conversation data to further train its models, and a lot of commonsense benchmarks are used to evaluate ChatGPT, ChatGPT is well trained on verbalized commonsense knowledge.

5 Conclusion

In this work, we demystify claims regarding the genuineness of commonsense datasets. We survey and consolidate existing definitions of commonsense knowledge through the three frameworks for defining concepts. We then use the consolidated definition to show that there exists a large portion of non-commonsense knowledge in CommonsenseQA and CommonsenseQA 2.0. There is also a large performance gap on the two subsets of commonsense and referenced knowledge in the two datasets, where LLMs perform worse on commonsense-knowledge instances. Although we do not long for perfect commonsense datasets, our work aims to raise awareness of the genuineness problem of commonsense datasets. To the NLP community in general, we call for theoretical works on the subfields of NLP research which deal with unclear concepts. That would facilitate a better understanding of the underlying problems for the NLP community.

Limitation

This paper works on the definition of commonsense as distinguished from referenced knowledge (the cover term for common, encyclopedic, and expert knowledge) and, based on that, demystifies claims concerning commonsense. Due to limited human resources, only a few datasets are empirically studied, and the obtained results are unavoidably subjective to a certain degree. Further study at a larger scale is expected to examine the generalizability of the insights drawn from this work. Also, this work discusses the blurry concept of commonsense from the perspectives of annotators and researchers working on commonsense, which possibly leads to the genuineness problem of commonsense datasets; however, there is no empirical study attempting to clarify the cause. Experiments with researchers from various research groups and crowdsourced annotators (e.g., on Amazon Mechanical Turk) are preferred to make the arguments in this work more convincing.

Ethical Statements

This work provides a (more) comprehensive literature review of the concept "commonsense" by examining and experimenting with many commonsense datasets and benchmarks. Thus, this work shares the same ethical issues as these previous works. By our inspection, all sampled data instances do not contain any private information about specific entities (e.g., a person or company). We carried out human expert annotation, where annotators were fairly paid according to the minimum wage requirements of the local government.

In another aspect, the study of LLMs' performance on data subsets of different knowledge types involves the use of Gemini (gemini-pro), ChatGPT (gpt-3.5-turbo-0613), LLaMa2 (llama-2-7b-chat-hf), and Mixtral (mixtral-8x7b-instruct). Except for LLaMa2, which is deployed on a local server, the other three LLMs are called via APIs provided by GoogleAI, OpenAI, and Fireworks.AI, respectively. Thus, the same risks from LLM research apply to this work (Bender et al., 2021).
References

Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, and Pascale Fung. 2023. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. In Proceedings of IJCNLP-AACL, pages 675–718, Nusa Dua, Bali. Association for Computational Linguistics.

L. W. Barsalou. 1985. Ideals, central tendency, and frequency of instantiation as determinants of graded structure in categories. J. Exp. Psychol. Learn. Mem. Cogn., 11(4):629–654.

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, New York, NY, USA. ACM.

Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In International Conference on Learning Representations.

Ernest Davis. 1990. Representations of Commonsense Knowledge. Elsevier.

Ernest Davis. 2024. Benchmarks for automated commonsense reasoning: A survey. ACM Comput. Surv., 56(4):1–41.

Ernest Davis and Gary Marcus. 2015. Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun. ACM, 58(9):92–103.

Tianqing Fang, Quyet V. Do, Sehyun Choi, Weiqi Wang, and Yangqiu Song. 2023a. CKBP v2: An expert-annotated evaluation set for commonsense knowledge base population. ArXiv, abs/2304.10392.

Tianqing Fang, Zhaowei Wang, Wenxuan Zhou, Hongming Zhang, Yangqiu Song, and Muhao Chen. 2023b. Getting sick after seeing a doctor? Diagnosing and mitigating knowledge conflicts in event temporal reasoning.

Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social Chemistry 101: Learning to reason about social and moral norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 653–670, Online. Association for Computational Linguistics.

Sarah Genon, Andrew Reid, Robert Langner, Katrin Amunts, and Simon B. Eickhoff. 2018. How to characterize the function of a brain region. Trends Cogn. Sci., 22(4):350–364.

Yoav Goldberg. 2023. Some remarks on large language models.

Joseph Y. Halpern and Yoram Moses. 1990. Knowledge and common knowledge in a distributed environment. J. ACM, 37(3):549–587.

Jie Huang and Kevin Chen-Chuan Chang. 2023. Towards reasoning in large language models: A survey. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1049–1065, Toronto, Canada. Association for Computational Linguistics.

Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2021. (COMET-)ATOMIC 2020: On symbolic and neural commonsense knowledge graphs. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 6384–6392. AAAI Press.

Filip Ilievski, Alessandro Oltramari, Kaixin Ma, Bin Zhang, Deborah L. McGuinness, and Pedro Szekely. 2021a. Dimensions of commonsense knowledge.

Filip Ilievski, Jay Pujara, and Hanzhi Zhang. 2021b. Story generation with commonsense knowledge graphs and axioms. In Workshop on Commonsense Reasoning and Knowledge Bases.

Mahnaz Koupaee and William Yang Wang. 2018. WikiHow: A large scale text summarization dataset.

Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos.

Douglas B. Lenat. 1995. CYC. Commun. ACM, 38(11):33–38.

Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd schema challenge. KR'12, pages 552–561. AAAI Press.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2023. Holistic evaluation of language models.

H. Liu and P. Singh. 2004. ConceptNet, a practical commonsense reasoning tool-kit. BT Technol. J., 22(4):211–226.

Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Unicorn on Rainbow: A universal commonsense reasoning model on a new multitask benchmark. AAAI.

Yujie Lu, Weixi Feng, Wanrong Zhu, Wenda Xu, Xin Eric Wang, Miguel Eckstein, and William Yang Wang. 2023. Neuro-symbolic procedural planning with commonsense prompting.

Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. 2022. Language models of code are few-shot commonsense learners. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1384–1403, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Adyasha Maharana and Mohit Bansal. 2022. On curriculum learning for commonsense reasoning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 983–992, Seattle, United States. Association for Computational Linguistics.

John McCarthy. 2002. What is common sense.

Gregory L. Murphy. 2004. The Big Book of Concepts. A Bradford Book. Bradford Books, Cambridge, MA.

Gregory L. Murphy and Douglas L. Medin. 1985. The role of theories in conceptual coherence. Psychol. Rev., 92(3):289–316.

Yasumasa Onoe, Michael J. Q. Zhang, Eunsol Choi, and Greg Durrett. 2021. CREAK: A dataset for commonsense reasoning over entity knowledge. OpenReview.

Ian Porada, Alessandro Sordoni, and Jackie Cheung. 2022. Does pre-training induce systematic inference? How masked language models acquire commonsense knowledge. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4550–4557, Seattle, United States. Association for Computational Linguistics.

Ehsan Qasemi, Filip Ilievski, Muhao Chen, and Pedro Szekely. 2022. PaCo: Preconditions attributed to commonsense knowledge. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6781–6796, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Christopher Richardson and Larry Heck. 2023. Commonsense reasoning for conversational AI: A survey of the state of the art.

Sebastin Santy, Jenny Liang, Ronan Le Bras, Katharina Reinecke, and Maarten Sap. 2023. NLPositionality: Characterizing design biases of datasets and models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9080–9102, Toronto, Canada. Association for Computational Linguistics.
Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019a. ATOMIC: An atlas of machine commonsense for if-then reasoning. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 3027–3035. AAAI Press.

Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019b. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463–4473, Hong Kong, China. Association for Computational Linguistics.

Dan J. Stein, Crick Lund, and Randolph M. Nesse. 2013. Classification systems in psychiatry. Curr. Opin. Psychiatry, 26(5):493–497.

Kai Sun, Yifan Ethan Xu, Hanwen Zha, Yue Liu, and Xin Luna Dong. 2023. Head-to-tail: How knowledgeable are large language models (LLMs)? A.k.a. will LLMs replace knowledge graphs?

Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, and Claire Cardie. 2022. Improving machine reading comprehension with contextualized commonsense knowledge. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8736–8747, Dublin, Ireland. Association for Computational Linguistics.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics.

Alon Talmor, Ori Yoran, Ronan Le Bras, Chandra Bhagavatula, Yoav Goldberg, Yejin Choi, and Jonathan Berant. 2022. CommonsenseQA 2.0: Exposing the limits of AI through gamification.

Weiqi Wang, Tianqing Fang, Baixuan Xu, Chun Yi Louis Bo, Yangqiu Song, and Lei Chen. 2023a. CAT: A contextualized conceptualization and instantiation framework for commonsense reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13111–13140, Toronto, Canada. Association for Computational Linguistics.

Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023b. How far can camels go? Exploring the state of instruction tuning on open resources.

Da Yin, Hritik Bansal, Masoud Monajatipoor, Liunian Harold Li, and Kai-Wei Chang. 2022. GeoMLAMA: Geo-diverse commonsense probing on multilingual pre-trained language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2039–2055, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Changlong Yu, Hongming Zhang, Yangqiu Song, and Wilfred Ng. 2022. CoCoLM: Complex commonsense enhanced language model with discourse relations. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1175–1187.

Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019a. From recognition to cognition: Visual commonsense reasoning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93–104, Brussels, Belgium. Association for Computational Linguistics.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019b. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.

Hongming Zhang, Yintong Huo, Yanai Elazar, Yangqiu Song, Yoav Goldberg, and Dan Roth. 2023. CIKQA: Learning commonsense inference with a unified knowledge-in-the-loop QA paradigm. In Findings of the Association for Computational Linguistics: EACL 2023, pages 114–124, Dubrovnik, Croatia. Association for Computational Linguistics.

Hongming Zhang, Xinran Zhao, and Yangqiu Song. 2020. WinoWhy: A deep diagnosis of essential commonsense knowledge for answering Winograd schema challenge. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5736–5745, Online. Association for Computational Linguistics.

Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, and Dilek Hakkani-Tur. 2021. Commonsense-focused dialogues for response generation: An empirical study. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 121–132.

Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, and Dilek Hakkani-Tur. 2022. Think before you speak: Explicitly generating implicit commonsense knowledge for response generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1237–1252, Dublin, Ireland. Association for Computational Linguistics.

Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan Huang. 2020. Evaluating commonsense in pre-trained language models. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9733–9740. AAAI Press.

A Discussion on Knowledge Types

A.1 General

Although we aim to discriminate commonsense from referenced knowledge, we are not strict in the discrimination (and it is almost impossible to do it perfectly). It is undeniable that the knowledge type varies between different people, and even between different timestamps for one person. We name this the circulation of knowledge, or the journey of a piece of knowledge from unknown to known and popular. A person may learn a popular piece of knowledge from official mass media directly, or learn a piece of knowledge from observation (and thus treat it as commonsense); the knowledge may then be conventionalized to become referenced knowledge. A piece of referenced knowledge that a person does not know may be commonsense for them.

Regarding our decision to group common, expert, and encyclopedic knowledge into the same category: we observe the dynamics of these knowledge types through time. An expert or encyclopedic piece of knowledge can be popularized, making it more "common" to the public.
Also, a piece of common knowledge for one person may be expert knowledge for others, as different people have different expertise. In fact, by our annotation on CommonsenseQA, the proportion of encyclopedic/expert knowledge is approximately 5%. Therefore, we merge them into one group, named referenced knowledge, as such knowledge is true by nature or conventionalized, and it needs a reference for its validity.

Furthermore, we want to relate commonsense and referenced knowledge in our work to other concepts. Regarding data frequency or distribution, as aforementioned, commonsense tends to be more "long-tail" than referenced knowledge. In terms of reasoning, commonsense likely exists in, and is necessary for, abductive reasoning, while referenced knowledge is associated with logical reasoning. Likewise, commonsense knowledge represents a probabilistic world, while referenced knowledge grounds a deterministic world.

B Further Discussion and Supplementary Materials

B.1 Choices of Datasets and Annotation Objective

There are several prevalent (textual) commonsense datasets such as ATOMIC, SocialIQA, SocialChemistry101, etc.; however, we choose CommonsenseQA and CommonsenseQA 2.0 because, from observations in a previous work and by ourselves, they contain a substantial amount of non-commonsense data. Also, the tasks of the two datasets are different; thus, we can study how the contributions of, and correlations between, the tendencies in the criteria and the knowledge type differ.

In terms of the annotation objective, as an assertion may carry several pieces of knowledge, we determine the knowledge type of an assertion based on the top non-grammatical division of the corresponding dependency tree, or the division which supports the answer for the task. In case an instance is not accurate (the accuracy of the CommonsenseQA dataset is not close to 100%, as admitted by its authors) or not widely known, we assume it is correct and widely known, as per the standard data properties for annotation.
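The annotation objective above can be made concrete with a dependency parse. The sketch below uses spaCy to expose the top (root-level) divisions of an assertion; the choice of spaCy and of which relations count as purely grammatical are illustrative assumptions, not the authors' tooling.

import spacy

nlp = spacy.load("en_core_web_sm")
GRAMMATICAL = {"det", "punct", "aux", "case", "mark", "cc"}

def top_divisions(assertion):
    # Return the root plus the spans of its non-grammatical child subtrees.
    doc = nlp(assertion)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")
    parts = [" ".join(t.text for t in child.subtree)
             for child in root.children if child.dep_ not in GRAMMATICAL]
    return [root.text] + parts

print(top_divisions("A male seahorse cannot give birth"))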
B.2 Evaluation of Features

We show the agreement of the annotations of the features' values in Table 5.

[Table 5: Agreement of annotations of features' values and knowledge type: pairwise Cohen's Kappa scores (A01, A12, A20) and the average Kappa (Avg) for the knowledge type and each feature, on CommonsenseQA and CommonsenseQA 2.0. A{i}{i+1 mod 3} denotes the Cohen's Kappa of the annotations of the (i+1)-th and (i+2)-th annotators; Avg denotes the average Cohen's Kappa of the three annotators. All averages exceed 0.4; the highest are 0.703 (knowledge type) and 0.726 (cr_spec), both on CommonsenseQA.]

Next, we show the statistics of the features' significance via the full-feature regression models as well as the decision trees of CommonsenseQA and CommonsenseQA 2.0, respectively, in Figure 1.

[Figure 1: Details of the full-feature regression model as well as the decision tree of CommonsenseQA and CommonsenseQA 2.0. The figure shows the fitting call model = ols(f"y ~ {' + '.join(features + interactions)}", df).fit() together with the resulting OLS summaries. In the readable summaries, the full-feature model for CommonsenseQA attains R-squared 0.882, with a_self (coef. -0.2788, p = 0.019) and sce_heuristic (coef. -0.5715, p = 0.005) as the significant predictors; the full-feature model for CommonsenseQA 2.0 attains R-squared 0.671, with a_regular (coef. 0.4663, p < 0.001) and sce_heuristic (coef. -0.4012, p < 0.001) as the significant predictors.]
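A minimal sketch of the Backward Elimination Procedure with AIC described in subsection 3.4, built around the ols(...) call visible in Figure 1, is given below. The feature names follow Figure 1, with y = 1 for referenced knowledge; the stopping rule (stop when no single-feature removal lowers AIC) is our reading of a standard BEP, as the paper does not spell it out.

from statsmodels.formula.api import ols

FEATURES = ["a_regular", "a_self", "cr_social", "cr_stem",
            "cr_specific", "sce_proven", "sce_convention", "sce_heuristic"]

def backward_eliminate(df, feats):
    current = list(feats)
    best_aic = ols(f"y ~ {' + '.join(current)}", df).fit().aic
    while len(current) > 1:
        # Try dropping each feature; keep the drop that most lowers AIC.
        trials = {g: ols(f"y ~ {' + '.join(c for c in current if c != g)}",
                         df).fit().aic
                  for g in current}
        drop, aic = min(trials.items(), key=lambda kv: kv[1])
        if aic >= best_aic:
            break
        current.remove(drop)
        best_aic = aic
    return current

# df: a pandas DataFrame with one row per annotated instance, the binary
# feature columns above, and the target column y.
# print(backward_eliminate(df, FEATURES))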
B.3 Better Guideline for Annotation

Based on the consolidated definition of commonsense, one may rely on the proximity of a new instance to the representative cases in Table 2 to decide its knowledge type. Meanwhile, for uncertain cases, one can work with the feature list in Table 3. Note that the significance of the features may vary according to the data distribution; thus, a pilot study of the significance of the features is preferred for better judgment. Nonetheless, 1) whether we can obtain the knowledge by our own experience/observation and 2) whether the knowledge is only mutual belief are the most significant features for identifying commonsense as distinct from referenced knowledge.

B.4 Portion of Non-Commonsense Instances

We are aware of the p-hacking problem with sampled data; thus, we extend the annotation of knowledge type on CommonsenseQA to its whole development set, which consists of 1221 instances, and recompute the confidence interval to prove the representativeness of our sampled data. We get 0.2153 ± 0.0231, which, as expected, does not vary much in terms of the lower bound in comparison to the previously computed confidence interval. Also, as some may argue there is subjectivity in the annotation of basic knowledge which lies between commonsense and referenced knowledge, we analyze the portion of encyclopedic and expert knowledge, which is certainly non-commonsense. Considering the whole development set of CommonsenseQA, approximately 5% of the instances concern encyclopedic and expert knowledge.

B.5 Evaluation of LLMs on Two Subsets

We want to treat the accuracy as the mean of random variables from two populations (i.e., the two subsets), whose random variables are indicator functions, each representing whether an LLM's answer to a task instance is correct, and conduct a two-sample t-test. However, the distribution of our data is binomial, which does not satisfy the data-normality assumption of the test. Therefore, we only compare the accuracies in a straightforward manner.
ai_researcher
3
Improving_Performance_of_Commercially_Available_AI_Products_in_a_Multi-Agent_Configuration.pdf
Improving Performance of Commercially Available AI Products in a Multi-Agent Configuration

Cory Hymel, Research, Crowdbotics, Berkeley, CA
Sida Peng, Research, Microsoft, Redmond, CA
Kevin Xu, Engineering, GitHub, San Francisco, CA
Charath Ranganathan, Engineering, Crowdbotics, Berkeley, CA

Abstract—In recent years, with the rapid advancement of large language models (LLMs), multi-agent systems have become increasingly capable of practical application. At the same time, the software development industry has had a number of new AI-powered tools developed that improve the software development lifecycle (SDLC). Academically, much attention has been paid to the role of multi-agent systems in the SDLC. And, while single-agent systems have frequently been examined in real-world applications, we have seen comparatively few real-world examples of publicly available commercial tools working together in a multi-agent system with measurable improvements. In this experiment we test context sharing between Crowdbotics PRD AI, a tool for generating software requirements using AI, and GitHub Copilot, an AI pair-programming tool. By sharing business requirements from PRD AI, we improve the code suggestion capabilities of GitHub Copilot by 13.8% and the developer task success rate by 24.5%, demonstrating a real-world example of commercially available AI systems working together with improved outcomes.

Keywords—AI, LLM, Multi-Agent, Software Development

I. INTRODUCTION

The growing field of AI, and more specifically of large language models (LLMs), has seen impacts across numerous domains [1, 2, 3]. Single-agent systems are powered by one language model that performs all the reasoning, planning, and tool execution on its own [4]. Multi-agent systems, which are systems consisting of multiple autonomous entities having different information and/or diverging interests [5], have shown promise in improving the capabilities of AI models to perform complex tasks [6, 7, 8, 9, 10]. While single-agent architectures excel when problems are well defined and feedback from other agent personas or the user is not needed, multi-agent architectures tend to thrive when collaboration and multiple distinct execution paths are required [11], as in software development.

The software development industry has recently seen a significant influx of AI-powered tools designed to enhance the software development lifecycle (SDLC). These tools, ranging from code completion assistants to requirements engineering platforms, promise to boost productivity and streamline workflows. The performance of single-agent, standalone LLM-based code generation tools has been extensively studied using benchmarks [12, 13]. Multi-agent systems have likewise been studied in academia, with a large body of literature on their application in software development [14, 15, 16]. However, thus far, there have been no controlled studies of commercially available AI systems working together in a multi-agent model.

In this experiment, we tested two commercially available AI tools, Crowdbotics PRD AI and GitHub Copilot, in an experimental multi-agent setup where software project requirements generated by PRD AI are shared with GitHub Copilot's neighboring-tab context model. With this additional business context, we expected GitHub Copilot's "code suggestion" feature to improve and developers using PRD AI + GitHub Copilot to succeed more frequently.
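As a rough illustration of the configuration under test, the sketch below mimics the context-sharing step: requirements retrieved from PRD AI are written into a workspace file that can sit in a neighboring editor tab for Copilot. Both functions and the file name are purely hypothetical placeholders; neither product exposes the API shown here.

from pathlib import Path

def fetch_prd_tasks(project_id):
    # Hypothetical stand-in for retrieving the PRD AI task list.
    return ["Add password reset flow", "Send reset email with expiring token"]

def seed_copilot_context(project_dir, tasks):
    # Copilot's suggestion model draws on neighboring open tabs, so exposing
    # business requirements as a file in the workspace makes them available
    # as context once that file is open in the editor.
    lines = ["# Business requirements (generated by PRD AI)"]
    lines += [f"# - {t}" for t in tasks]
    (project_dir / "requirements_context.py").write_text("\n".join(lines) + "\n")

seed_copilot_context(Path("."), fetch_prd_tasks("demo-project"))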
AI in Requirements Engineering

Requirements engineering (RE) is fundamental to successful software development. RE comprises the systematic approach to defining, documenting, and maintaining requirements throughout a project's lifecycle. As Nuseibeh and Easterbrook [17] note, RE ensures that the final product aligns with user needs and business objectives. Over the years, many AI techniques have been employed to represent and analyze requirements, ranging from knowledge representation and reasoning in the 1980s to the use of natural language (NL) processing, machine learning, and deep learning since the 2000s [18, 19]. The majority of these ML/DL methods are based on supervised learning, requiring large amounts of labeled training data not readily available in the RE space [20, 21]. LLMs, powered by deep learning algorithms and large training corpora, offer significant benefits during the RE process [22]. LLMs' ability to access and generate natural language responses has applications across the RE process, from requirements elicitation, specification extraction, and refinement, to generating solution concepts and system architectures [23]. Recent studies [24] have shown that, using just a few prompts, LLMs were able to generate better extraction results than existing techniques such as Jdoctor [25] and DocTer [26].

B. Background: Crowdbotics PRD AI

Crowdbotics PRD AI is a tool that specializes in generating robust product requirements documents ("PRD") for software projects by leveraging LLMs. The platform uses Azure OpenAI's GPT 4.0 model with a progressive composition model and a retrieval augmented generation (RAG) framework on top of it. The tool is capable of producing extensive requirements that include artifacts such as epics, user stories, user personas, acceptance criteria, technical recommendations, and others. We used PRD AI to generate a PRD and retrieved a small task list that was used by participants as task requirements and by the GitHub Copilot model for context seeding.

C. AI in Code Generation

Designing and writing software code has traditionally been reserved for humans, given the complexity and size of context needed to perform adequately. Recently, LLMs have shown remarkable abilities in code generation when given a distinctive task with adequate requirements defined [12, 27, 28, 29]. Code generation powered by LLMs can be used in a number of ways, from end-to-end code generation to test generation, snippet generation, code suggestion generation, code completion generation, and others. Research in both academia and industry settings foreshadows a significant impact on software engineering, with LLMs acting as code generators boosting developer productivity [30, 31, 32, 33].

D. Background: GitHub Copilot

GitHub Copilot (GHC) is a pair-programming tool with multiple features such as inline suggestions, chat capabilities, and others. GHC is powered by a distinct production version of OpenAI's generative AI model, Codex [12, 34]. GHC itself has been shown to give a large productivity boost to developers, in some cases completing tasks up to 55.8% faster than those not using it [34]. During this study, we focused on measuring only the inline code suggestion functionality of GHC, which at the time of writing has, on average, a 27% acceptance rate [35] (Fig 1).

E.
Background: Multi-Agent Systems

In the field of AI, an agent often refers to an artificial entity that is able to perceive its "surroundings", make autonomous decisions, and take actions based on those decisions [36]. The concept and study of "Agent", "Autonomous Agent", "AI Agent", and "Multi-Agent" has been ongoing for decades [37, 38, 39, 40, 41, 42]. Agents can be represented and used in many different ways, from chatbots and copilots to complex autonomous systems [43]. LLMs' remarkable ability to reason, together with their non-deterministic design, has made them ideal candidates for creating multi-agent-based systems. Compared to systems using a single LLM-powered agent, multi-agent systems offer advanced capabilities by 1) specializing LLMs into various distinct agents, each with different capabilities, and 2) enabling interactions among these diverse agents to simulate complex real-world environments effectively [44]. This ability to perform complex problem-solving across multiple disciplines has made LLM multi-agent systems attractive adaptations to the software development lifecycle [45, 46].

F. Background: Multi-Agent Systems in Software Development

Recent research has explored the application of multi-agent systems leveraging multiple specialized large language models (LLMs) in software development to improve coherence and correctness in software engineering tasks compared to single-agent systems. MetaGPT introduces a framework that incorporates human workflows and standardized operating procedures into LLM-based multi-agent collaborations, addressing the challenge of cascading hallucinations in complex tasks [45]. Similarly, ChatDev presents a virtual software development company that utilizes LLMs throughout the entire development process, dividing it into distinct stages with specialized software agents [46]. More recently, in the commercial space, Devin was released and claimed to be the world's first "AI programmer" [47]; however, its ability to handle large projects has yet to be empirically supported.

II. RESEARCH CONTRIBUTION

While there has been extensive research in academia on the application of multi-agent systems to increase their capabilities, and on commercially available "black box" multi-agent systems, there is little to no data on real-world, commercially available, independent models working together in a multi-agent framework. This research study was contextualized from Peng et al.'s [34] research, which demonstrated that GitHub Copilot can increase developer productivity by 55.8% on average, in conjunction with Vaithilingam et al.'s [13] findings that Copilot's suggestions sometimes lack alignment with specific project requirements. By providing GitHub Copilot, an AI code generation tool, with additional requirements context through Crowdbotics PRD AI, an AI requirements generation tool, we demonstrate commercially available products working together as a multi-agent system with improved overall performance.

III. EXPERIMENT DESIGN

We conducted a controlled experiment to measure the change in the code suggestion acceptance rate of GHC when developers performing a coding task were given shared business requirements from PRD AI. The experiment began on August 6, 2024, and concluded on October 11, 2024, with 101 developers participating.

A. Participant Selection and Grouping

Participants were sourced from three software development providers: Nagarro, Tkxel, and Upwork. Upon agreement, contracts and data-sharing agreements were sent to participants following vendor policies.
In general, the following baseline requirements were applied when searching for possible participants: (Location) Global, (Programming Language) Python, (Skill Level) Intermediate to Advanced, (Additional Constraint) Knowledge of FastAPI. We chose to include a global pool of talent to ensure diverse backgrounds and expand the possible participant pool as much as possible. Python was chosen for two reasons. First, Python is a widely known language, making the participant pool larger. Second, Codex, which powers GitHub Copilot, has shown excellent capabilities in Python programming tasks [12]. Participant skill level was self-selected to be between intermediate and advanced, with all participants currently working in the software development space. Finally, we leveraged FastAPI [48] as a pre-installed tool, where existing knowledge of the API was assumed, to ensure candidates were able to complete the programming task within the 4-hour timeframe.

Upon contract completion, participants were randomly split into three groups, with each group being provided unique instructions. The three groups were defined as follows:

Group 1 (Control Group): Developers will use VS Code without GitHub Copilot and with starter requirements located in a separate document.

Group 2 (Copilot Group): Developers will use VS Code with GitHub Copilot enabled, and with starter requirements located in a separate document.

Group 3 (Copilot Enhanced Group): Developers will use VS Code with GitHub Copilot enabled, and with starter requirements pre-seeded in a neighboring tab within VS Code.

B. Task Definition

The project task was designed to be of moderate difficulty to ensure it was challenging yet still achievable in the maximum time frame of 4 hours. Participants were compensated based on completion and not by total hours, thereby incentivizing them to complete the assignment as quickly as possible [49]. A brief description of the task is listed below. The full description given to participants can be found in the Appendix.

Task Description: You are being asked to develop a backend using FastAPI for a (simplified) magazine subscription service. This backend service would expose a REST API that enables users to:
1. Register, login, and reset their passwords.
2. Retrieve a list of magazines available for subscription. This list should include the plans available for each magazine and the discount offered for each plan.
3. Create a subscription for a magazine.
4. Retrieve, modify, and delete their subscriptions.

C. Experiment Procedure and Data Collection

Participants were given a set amount of time (4 hours max) to complete the task to ensure consistency across all three groups. Experiment user flow:

1. Introduction and Onboarding: Participants were given access to a different Notion page according to the group they were assigned, which gave them directions to set up their project, along with FAQs, rules, and methods to submit their code upon completion.

2. Environment Setup: We used a combination of GitHub Classroom and GitHub Codespaces to preconfigure a web-based IDE environment per group. Participants were given a link depending on which group they were assigned, which took them to GitHub Classroom (Fig 2). From there, a new repository was automatically generated with pre-seeded "boilerplate" code. This boilerplate code was identical across all three groups. Next, they were instructed to create a new Codespace, which generates a virtual IDE in their browser (Fig 3).

3.
Task Execution: Participants performed the assigned tasks within the 4-hour timeframe. Participants were asked to ensure they had a complete 4-hour working block available to complete the task.

4. Task Completion: When done, participants were instructed to run pre-provided local unit tests (Fig 4) to ensure the completeness of the task. After this was completed, they were instructed to submit and push their code to the repository.

5. Task Validation: Upon submission to the repository, an automated compiler with test cases checked the validity of the work and ensured completion (Fig 5).

Telemetry data was tracked by GitHub Copilot and reported for study data analysis. The following quantitative data metrics were reported:

Data Point            | Description
task_acceptance       | The number of participants that successfully completed the task in the 4-hour timeframe and were able to pass unit tests.
suggestion_acceptance | The percentage of code suggestions from GHC that were accepted by participants.

Fig. 5. Telemetry data points collected for study.

IV. RESULTS

A total of 101 participants enrolled in the study, of which 99 participated; 2 became unresponsive shortly after signing the contract and were dismissed as per provider policies. The remaining participants were split into three groups of the following composition: Control Group 32, Copilot Group 35, and Copilot Enhanced Group 32.

A. Task Acceptance

Of the participants in the Control Group, 11 failed to complete the task. In the Copilot Group, 10 failed to complete the task. In the Copilot Enhanced Group, 3 failed to complete the task. Using linear regression modeling, we found that the Copilot Enhanced Group was 24.5% more likely to pass the test than the Control Group. The Copilot Enhanced Group was also 14.8% more likely to pass the test than participants in the Copilot Group.

            | Estimate | SE       | tStat   | pValue
(Intercept) | 0.62963  | 0.090556 | 6.9529  | 3.5159e-09
Treatment_1 | 0.097643 | 0.12211  | 0.79966 | 0.42717

Fig. 6. Copilot vs Control, Test Passed

            | Estimate | SE       | tStat  | pValue
(Intercept) | 0.62963  | 0.079783 | 7.8918 | 1.0399e-10
Treatment_1 | 0.24537  | 0.10833  | 2.265  | 0.027335

Fig. 7. Copilot Enhanced vs Control, Test Passed

B. Suggestion Acceptance

In measuring code suggestion rates, we measured the amount of suggested code provided inline by GitHub Copilot as a ratio of suggestions shown to those accepted. Our hypothesis was that sharing business model context from PRD AI in a neighboring tab with GitHub Copilot would improve the code suggestion rate. This hypothesis was found to be true: using PRD AI with GitHub Copilot improved the code suggestion acceptance rate by 13.8 percentage points, from 27% to 40.8%.

            | Estimate | SE       | tStat  | pValue
(Intercept) | 0.23657  | 0.051156 | 4.6246 | 1.9157e-05
Treatment_1 | 0.13882  | 0.072908 | 1.904  | 0.061475

Fig. 8. Copilot fraction acceptance

This represents a substantial improvement (+51.1%) over the base code suggestion acceptance rate (27%) of GitHub Copilot users [35]. GHC's code suggestion model uses data from open tabs in an IDE; while these tabs usually contain code, in this experiment we included business requirements in the neighboring tab that the suggestion model could index against, resulting in improvements. This simple study shows that model context sharing can have a significant impact on commercially available platforms.

There was also a difference in the number of code suggestions shown between the Copilot Group and the Copilot Enhanced Group, with the Copilot Group being shown 257.06 suggestions on average and the Copilot Enhanced Group being shown 127.09 on average.
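To make the reported group comparison concrete, the following is a minimal sketch of a linear probability model of the kind described above, with pass/fail outcomes rebuilt from the reported group sizes and failure counts. The variable names and the use of Python's statsmodels are illustrative; this is not the authors' analysis code, and the exact reported estimates suggest their model differed in minor details.

```python
import numpy as np
import statsmodels.api as sm

# Pass/fail outcomes rebuilt from the reported counts:
# Control: 32 participants, 11 failed; Copilot Enhanced: 32, 3 failed.
passed = np.concatenate([
    np.ones(21), np.zeros(11),   # Control group
    np.ones(29), np.zeros(3),    # Copilot Enhanced group
])
treatment = np.concatenate([np.zeros(32), np.ones(32)])  # 1 = Enhanced

# Linear probability model: passed ~ intercept + treatment dummy.
model = sm.OLS(passed, sm.add_constant(treatment)).fit()
print(model.params)  # treatment coefficient ~0.25, in line with the
                     # reported 24.5 percentage-point difference
```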
While fewer suggestions were shown to the Copilot Enhanced Group, they were of higher quality, given the increased acceptance rate. It is also reasonable to suggest that fewer suggestions were shown because participants in the Copilot Enhanced Group arrived at the natural stopping point of "this code works" in fewer prompts.

C. Study Limitations

This study's focus on a single, specific coding task within a 4-hour time limit, while necessary for experimental control, may not fully reflect the diversity and extended duration of real-world development scenarios. Additionally, individual variations in programming expertise and prior experience with AI coding assistants could have influenced the results, despite our efforts to control for skill level. The study also assumes that the context provided by PRD AI was uniformly beneficial, which may not always be the case in real-world scenarios where context quality and relevance can vary. Lastly, our research does not address the long-term effects of using such integrated AI tools on developer skills and practices.

V. DISCUSSION

Our findings in this paper show the potential for commercial model enhancement in a multi-agent setup. Combining PRD AI's business context with GitHub Copilot led to a substantial 13.8 percentage point increase in code suggestion acceptance rates. This improvement is particularly significant, as it represents a 51.1% increase over the baseline acceptance rate of 27% for typical GitHub Copilot users. This significant improvement underscores the importance of context in AI-assisted coding, suggesting that future AI software development tools could benefit greatly from incorporating broader project context through other tooling providers. Participants using the Copilot Enhanced setup were 24.5% more likely to complete the assigned task successfully compared to the Control Group, and 14.8% more likely to complete the task compared to the Copilot Group. These results indicate that context-enhanced AI tools could lead to substantial productivity gains in real-world software development scenarios. In fact, the overall candidate success rate in this experiment shows the significant benefit of creating commercially available AI tooling that is capable of integrating and sharing with other providers.

This study demonstrates the significant potential of context sharing between commercially available AI tools to enhance their overall performance. As outlined in the Introduction, a significant amount of research has focused on single-agent architecture, with new work in multi-agent configurations constrained to either academia or closed systems. We have yet to see commercially available platforms openly work to create accessible, model-to-model collaborative systems. Based on our findings in this study, we hope to promote a more open, collaborative AI future.

ACKNOWLEDGMENT

We would like to thank Nagarro for their close partnership in helping identify and manage study participants quickly; Darcy Jacobsen from The Wednesday Group for continuous proofing and editing before publication; and all the amazing developers that participated in the study. Thank you.

REFERENCES

[1] A. Vaswani et al., Attention Is All You Need. 2017. arXiv:1706.03762.
[2] T. B. Brown et al., Language Models are Few-Shot Learners. 2020. arXiv:2005.14165.
[3] S.
Bubeck et al., Sparks of Artificial General Intelligence: Early experiments with GPT-4. 2023. [Online]. Available: https://arxiv.org/abs/2303.12712
[4] T. Masterman, S. Besen, M. Sawtell, and A. Chao, "The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling: A Survey," ArXiv, vol. abs/2404.11584, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:269187633
[5] Y. Shoham and K. Leyton-Brown, Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. USA: Cambridge University Press, 2008.
[6] G. Li, H. A. A. K. Hammoud, H. Itani, D. Khizbullin, and B. Ghanem, CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society. 2023. [Online]. Available: https://arxiv.org/abs/2303.17760
[7] C. Qian et al., "Experiential Co-Learning of Software-Developing Agents," in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Aug. 2024, pp. 5628–5640. doi: 10.18653/v1/2024.acl-long.305.
[8] J. S. Park, J. O'Brien, C. J. Cai, M. R. Morris, P. Liang, and M. S. Bernstein, "Generative Agents: Interactive Simulacra of Human Behavior," 2023. doi: 10.1145/3586183.3606763.
[9] W. Zhou et al., Agents: An Open-source Framework for Autonomous Language Agents. 2023. [Online]. Available: https://arxiv.org/abs/2309.07870
[10] W. Chen et al., AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors. 2023. [Online]. Available: https://arxiv.org/abs/2308.10848
[11] T. Masterman, S. Besen, M. Sawtell, and A. Chao, "The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling: A Survey," ArXiv, vol. abs/2404.11584, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:269187633
[12] M. Chen et al., Evaluating Large Language Models Trained on Code. 2021. [Online]. Available: https://arxiv.org/abs/2107.03374
[13] P. Vaithilingam, T. Zhang, and E. L. Glassman, "Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models," 2022. doi: 10.1145/3491101.3519665.
[36] Z. Xi et al., The Rise and Potential of Large Language Model Based Agents: A Survey. 2023. [Online]. Available: https://arxiv.org/abs/2309.07864
[37] N. R. Jennings, K. Sycara, and M. Wooldridge, "A Roadmap of Agent Research and Development," Autonomous Agents and Multi-Agent Systems, vol. 1, pp. 7–38, 1998. https://doi.org/10.1023/A:1010090405266
[38] N. R. Jennings and M. Wooldridge, "Applying Agent Technology," Applied Artificial Intelligence, vol. 9, no. 4, pp. 357–369, 1995. doi: 10.1080/08839519508945480.
[39] S. Franklin and A. Graesser, "Is it an agent, or just a program?: A taxonomy for autonomous agents," in Intelligent Agents III: Agent Theories, Architectures, and Languages, 1997, pp. 21–35.
[40] C. Castelfranchi, "Modelling social action for AI agents," Artificial Intelligence, vol. 103, no. 1, pp. 157–182, 1998. doi: https://doi.org/10.1016/S0004-3702(98)00056-3.
[41] J. Ferber, Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, 1st ed. USA: Addison-Wesley Longman Publishing Co., Inc., 1999.
[42] L. Panait and S. Luke, "Cooperative Multi-Agent Learning: The State of the Art," Autonomous Agents and Multi-Agent Systems, vol. 11, pp. 387–434, 2005. https://doi.org/10.1007/s10458-005-2631-2
[43] J. Ruan et al., TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage. 2023. [Online]. Available: https://arxiv.org/abs/2308.03427
[44] T.
Guo et al., Large Language Model based Multi-Agents: A Survey of Progress and Challenges. 2024. [Online]. Available: https://arxiv.org/abs/2402.01680
[45] S. Hong et al., MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework. 2024. [Online]. Available: https://arxiv.org/abs/2308.00352
[46] C. Qian et al., ChatDev: Communicative Agents for Software Development. 2024. [Online]. Available: https://arxiv.org/abs/2307.07924
[47] S. Wu, "Introducing Devin, the first AI software engineer," Mar. 12, 2024. https://www.cognition.ai/blog/introducing-devin
[48] FastAPI. (n.d.). https://fastapi.tiangolo.com/
[49] G. Smitizsky, W. Liu, and U. Gneezy, "On the value(s) of time: Workers' value of their time depends on mode of valuation," Proceedings of the National Academy of Sciences, vol. 118, no. 34, p. e2105710118, 2021. doi: 10.1073/pnas.2105710118.
[14] Z. Du et al., Multi-Agent Software Development through Cross-Team Collaboration. 2024. [Online]. Available: https://arxiv.org/abs/2406.08979
[15] J. Li, Q. Zhang, Y. Yu, Q. Fu, and D. Ye, More Agents Is All You Need. 2024. [Online]. Available: https://arxiv.org/abs/2402.05120
[16] Z. Liu, Y. Zhang, P. Li, Y. Liu, and D. Yang, Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization. 2023. [Online]. Available: https://arxiv.org/abs/2310.02170
[17] B. Nuseibeh and S. Easterbrook, "Requirements engineering: a roadmap," in Proceedings of the Conference on The Future of Software Engineering, 2000, pp. 35–46. doi: 10.1145/336512.336523.
[18] F. Dalpiaz and N. Niu, "Requirements Engineering in the Days of Artificial Intelligence," IEEE Software, vol. 37, no. 4, pp. 7–10, Jul. 2020. doi: 10.1109/MS.2020.2986047.
[19] J. N. och Dag, B. Regnell, V. Gervasi, and S. Brinkkemper, "A linguistic engineering approach to large-scale requirements management," IEEE Softw., vol. 22, no. 1, pp. 32–39, 2005. doi: 10.1109/MS.2005.1.
[20] F. Dalpiaz, D. Dell'Anna, F. B. Aydemir, and S. Çevikol, "Requirements Classification with Interpretable Machine Learning and Dependency Parsing," in 2019 IEEE 27th International Requirements Engineering Conference (RE), 2019, pp. 142–152. doi: 10.1109/RE.2019.00025.
[21] A. Sainani, P. R. Anish, V. Joshi, and S. Ghaisas, "Extracting and Classifying Requirements from Software Engineering Contracts," in 2020 IEEE 28th International Requirements Engineering Conference (RE), 2020, pp. 147–157. doi: 10.1109/RE48521.2020.00026.
[22] B. Wei, Requirements are All You Need: From Requirements to Code with LLMs. 2024. [Online]. Available: https://arxiv.org/abs/2406.10101
[23] L. Belzner, T. Gabor, and M. Wirsing, Large Language Model Assisted Software Engineering. 2023.
[24] D. Xie et al., Impact of Large Language Models on Generating Software Specifications. 2023. [Online]. Available: https://arxiv.org/abs/2306.03324
[25] A. Blasi et al., "Translating code comments to procedure specifications," in Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis, 2018, pp. 242–253. doi: 10.1145/3213846.3213872.
[26] D. Xie et al., "DocTer: documentation-guided fuzzing for testing deep learning API functions," in Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis, 2022, pp. 176–188. doi: 10.1145/3533767.3534220.
[27] J. Austin et al., Program Synthesis with Large Language Models. 2021. [Online]. Available: https://arxiv.org/abs/2108.07732
[28] R.
Li et al., StarCoder: may the source be with you! 2023. [Online]. Available: https://arxiv.org/abs/2305.06161
[29] Y. Wei, Z. Wang, J. Liu, Y. Ding, and L. Zhang, Magicoder: Empowering Code Generation with OSS-Instruct. 2024. [Online]. Available: https://arxiv.org/abs/2312.02120
[30] T. B. Brown et al., Language Models are Few-Shot Learners. 2020. [Online]. Available: https://arxiv.org/abs/2005.14165
[31] S. Zhang et al., OPT: Open Pre-trained Transformer Language Models. 2022. [Online]. Available: https://arxiv.org/abs/2205.01068
[32] OpenAI et al., GPT-4 Technical Report. 2024. [Online]. Available: https://arxiv.org/abs/2303.08774
[33] Q. Zheng et al., CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Benchmarking on HumanEval-X. 2024. [Online]. Available: https://arxiv.org/abs/2303.17568
[34] S. Peng, E. Kalliamvakou, P. Cihon, and M. Demirer, The Impact of AI on Developer Productivity: Evidence from GitHub Copilot. 2023. [Online]. Available: https://arxiv.org/abs/2302.06590
[35] A. Ziegler et al., "Measuring GitHub Copilot's Impact on Productivity," Commun. ACM, vol. 67, no. 3, pp. 54–63, Feb. 2024. doi: 10.1145/3633453.

Fig. 1. Example of code suggestions from GitHub Copilot
Fig. 2. GitHub Classroom participants accessed
Fig. 3. Browser-based IDE (Codespace) from GitHub
Fig. 4. Example of local tests passing

APPENDIX: PROJECT REQUIREMENTS

# Requirements for Magazine Subscription Service

You are being asked to develop a backend using FastAPI for a (simplified) magazine subscription service. This backend service would expose a REST API that enables users to:
1. Register, login, and reset their passwords.
2. Retrieve a list of magazines available for subscription. This list should include the plans available for each magazine and the discount offered for each plan.
3. Create a subscription for a magazine.
4. Retrieve, modify, and delete their subscriptions.

## Data Models Overview

### Magazine
A magazine that is available for subscription. Includes metadata about the magazine such as the name, description, and a base_price (which is the price charged for a monthly subscription). The base_price is a numerical value and must be greater than zero.

### Plan
Plans to which users can subscribe their magazines. There are 4 plans available in the system as described below. A Plan object has the following properties: a title, a description, a renewalPeriod, a discount (a percentage, expressed as a decimal, for this plan; e.g. a discount of 0.1 means a 10% discount), and a tier. The tier is a numerical value that represents the level of the plan. The higher the tier, the more expensive the plan. The renewalPeriod is a numerical value that represents the number of months in which the subscription would renew. Renewal periods CANNOT be zero. For example, a renewalPeriod of 3 means that the subscription renews every 3 months. The 4 plans that you must support are given below.

#### Silver Plan
- title: "Silver Plan"
- description: "Basic plan which renews monthly"
- renewalPeriod: 1
- tier: 1
- discount: 0.0

#### Gold Plan
- title: "Gold Plan"
- description: "Standard plan which renews every 3 months"
- renewalPeriod: 3
- tier: 2
- discount: 0.05

#### Platinum Plan
- title: "Platinum Plan"
- description: "Premium plan which renews every 6 months"
- renewalPeriod: 6
- tier: 3
- discount: 0.10

#### Diamond Plan
- title: "Diamond Plan"
- description: "Exclusive plan which renews annually"
- renewalPeriod: 12
- tier: 4
- discount: 0.25

### Subscription
A Subscription tracks which Plan is associated with which Magazine for a specific User.
The subscription also tracks the price at renewal for that magazine and the next renewal date. A User can have only one Subscription for a specific Magazine and Plan at a time. The Subscription object has the following properties: user_id, magazine_id, plan_id, price, renewal_date, and is_active. The price at renewal is calculated as the base_price of the magazine discounted by the discount of the plan. For example, if the base price of the magazine is 100 and the plan discount is 0.10, the price will be 90. The price is a numerical value and must be greater than zero.

For record-keeping purposes, subscriptions are never deleted. If a user cancels a subscription to a magazine, the corresponding is_active attribute of that Subscription is set to False. Inactive subscriptions are never returned in the response when the user queries their subscriptions.

## Business Rules
1. Subscriptions can be modified before the expiry of the subscription period. For example, if a user has subscribed to a magazine with a Silver Plan and decides to upgrade to a Gold Plan, the Silver Plan subscription is deactivated and a new subscription is created with a new renewal date for the Gold Plan that the user has chosen.
2. If a user modifies their subscription for a magazine, the corresponding subscription is deactivated and a new subscription is created with a new renewal date depending on the plan that is chosen by the user. For this purpose, assume that there is no proration of funds and no refunds are issued.

Fig. 5. Example Autograder test passing
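To illustrate the data model this appendix describes, here is a minimal Python sketch of the Plan object and the renewal-price rule, using the field names given above; it is illustrative only and is not the boilerplate or graded solution participants received.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    title: str
    description: str
    renewal_period: int   # months between renewals; cannot be zero
    tier: int
    discount: float       # e.g. 0.10 means a 10% discount

def price_at_renewal(base_price: float, plan: Plan) -> float:
    """The magazine's base_price discounted by the plan's discount."""
    if base_price <= 0:
        raise ValueError("base_price must be greater than zero")
    if plan.renewal_period <= 0:
        raise ValueError("renewalPeriod cannot be zero")
    return base_price * (1 - plan.discount)

gold = Plan("Gold Plan", "Standard plan which renews every 3 months",
            renewal_period=3, tier=2, discount=0.05)
print(price_at_renewal(100.0, gold))   # 95.0
```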
ai_researcher
2
Evaluating_Understanding_and_Improving_Constrained_Text_Generation_for_Large_Language_Models.pdf
Towards a comprehensive assessment of the book impact by integrating multiple evaluation sources

Qingqing Zhou (1), Chengzhi Zhang (2, *)
1. Department of Network and New Media, Nanjing Normal University, Nanjing 210023, China
2. Department of Information Management, Nanjing University of Science and Technology, Nanjing 210094, China

Abstract. The surge in the number of books published makes manual evaluation methods (e.g. peer review) difficult to apply efficiently. The use of books' citations and alternative evaluation metrics (e.g. library holdings, social media mentions, book reviews) can assist manual evaluation and reduce the cost of evaluation. However, most existing evaluation research was based on a single evaluation source with coarse-grained analysis, which may produce incomprehensive or one-sided evaluation results of book impact. Meanwhile, relying on a single resource for book assessment carries the risk that evaluation results cannot be obtained at all due to a lack of evaluation data, especially for newly published books. Hence, this paper measured book impact based on an evaluation system constructed by integrating multiple evaluation sources. Specifically, we conducted finer-grained mining on the multiple evaluation sources, including books' internal evaluation resources (e.g. books' contents) and external evaluation resources (e.g. books' reviews, books' citations and books' usages). Various technologies (e.g. topic extraction, sentiment analysis, text classification) were used to extract corresponding evaluation metrics from the internal and external evaluation resources. Then, expert evaluation combined with the analytic hierarchy process was used to integrate the evaluation metrics and construct a book impact evaluation system. Finally, the reliability of the evaluation system was verified by comparison with the results of expert evaluation, and detailed and diversified evaluation results were obtained. The experimental results reveal that different evaluation resources measure books' impacts from different dimensions, and that the integration of multiple evaluation data can assess books more comprehensively. Meanwhile, the book impact evaluation system can provide personalized evaluation results according to users' evaluation purposes. In addition, disciplinary differences should be considered when assessing books' impacts.

Keywords: book impact assessment; multiple evaluation sources; review mining; citation context analysis; depth and breadth analysis

Citation information: Qingqing Zhou, Chengzhi Zhang. Towards a comprehensive assessment of the book impact by integrating multiple evaluation sources. Journal of Informetrics, 2021, 15(3): 101195. https://doi.org/10.1016/j.joi.2021.101195

* Corresponding author: Chengzhi Zhang, E-mail: [email protected].

1 Introduction

With the rapid development of the Internet and digitalization, people's reading and evaluation modes for books are also changing. Literature databases, social media and e-commerce websites provide many new evaluation sources for book impact evaluation (Azer, 2019; Torres-Salinas et al., 2014). Meanwhile, progress in digital storage and natural language processing technologies provides technical support for measuring book impact. Therefore, the impact evaluation of books is no longer limited to traditional evaluation metrics, such as peer reviews or citation frequencies.
Massive alternative evaluation sources can be analyzed to detect more evaluation metrics (e.g. purchase intentions, citation functions) and thus overcome the shortcomings of traditional metrics, such as high cost or time consumption (Torres-Salinas et al., 2017b; Zuccalá & Leeuwen, 2014). Hence, multiple evaluation resources are currently used to assess the impacts of books, including book contents (Mooney & Roy, 2000), book reviews (Chevalier & Mayzlin, 2006), book citations (Gorraiz et al., 2014b), book usages (Calhoun, 2011), etc. These book-related evaluation resources can reflect the impacts of books from different dimensions, and provide supplementary information for evaluation research in the corresponding dimensions.

However, most existing research was based on a single evaluation resource. The shortcomings of such an evaluation method are obvious, as the chosen evaluation resource may be absent for some books, especially newly published books. For example, of the 2739 books analyzed in (Kousha & Thelwall, 2016), only 84% had Google citations, 29% had Amazon reviews, and 7% had Mendeley bookmarks. Of the 15928 books assessed in (Kousha et al., 2017), only 73.8% had Google citations, 34.6% had Wikipedia citations, and 14.1% had Goodreads reviews. Meanwhile, totally different or even contradictory evaluation results may be obtained by choosing different evaluation resources. For example, Sentiment Analysis and Opinion Mining by Bing Liu has been cited more than 5000 times in Google Scholar, while it has been discussed only about 10 times on Amazon. The scientific integration of evaluation resources can not only solve these problems, but also provide comprehensive evaluation results for users without prior evaluation knowledge or without an obvious preference for a particular evaluation dimension, so as to help users quickly obtain the evaluation conclusions they need (Torres-Salinas et al., 2017a). Hence, finer-grained mining of multiple evaluation resources and the integration of the corresponding evaluation results are necessary.

This paper synthesized multi-source evaluation data and integrated the metrics extracted from these sources to construct a multi-level, multi-dimensional evaluation metric system for assessing books' comprehensive impacts. The experimental results indicate that the integration of multiple evaluation sources can detect detailed evaluation information and meet users' personalized evaluation demands.

2 Related works

Currently, various resources are used to evaluate books' impacts. In this section, we describe two types of evaluation resources, namely books' external resources and internal resources.

Many external evaluation resources of books are used to evaluate the impacts of books, such as book reviews, book citations and book usages. Book reviews reflect users' direct attitudes towards books (Zhang et al., 2019). Scholars analyze books' quality and evaluate the value of books for scientific research with academic reviews (Gorraiz et al., 2014a; Zuccalá et al., 2014). For example, Kousha and Thelwall (2015) and Zhou and Zhang (2020b) measured books' impacts based on academic reviews from Choice and confirmed the validity of academic reviews for book impact evaluation.

Social media and e-commerce users post online reviews to express opinions on books' prices, papers, appearances, etc. (Kousha & Thelwall, 2016).
Online reviews from Amazon (Zhou et al., 2016) and Goodreads (Kousha et al., 2017; Maity et al., 2018) have been widely analyzed to identify the impacts of books in different languages.

Citations of books are commonly used to assess books' impacts (Butler et al., 2017), and multiple citation databases provide extensive citation data for impact evaluation. Scopus (Zuccalá & Cornacchia, 2016), Web of Science Core Collection (Gorraiz et al., 2014b; Tsay et al., 2016), Google Scholar (Thelwall & Abrizah, 2014) and Microsoft Academic (Kousha & Thelwall, 2018) are effective evaluation resources. Meanwhile, the Chinese Social Science Citation Index (Su et al., 2014) and the Chinese Book Citation Index (Ye, 2014) were designed and developed for evaluating the impacts of Chinese books. Books' citation literatures (the publications that cite them) can also be systematically used as indicators of books' impacts. Zhou and Zhang (2020a) conducted fine-grained analysis of books' citation literatures to assess books' wider impacts. Meanwhile, citation contexts about books in citation literatures reveal researchers' citation intentions and attitudes towards books. McCain and Salvucci (2006) mined 574 citation contexts about The Mythical Man-Month to evaluate its impact. Zhou and Zhang (2019) analyzed 2288 citation contexts about 370 books and then assessed the impacts of these books.

With the development of Web 2.0, many alternative evaluation resources are mined and used for measuring books' use impact. Library holdings (White & Zuccalá, 2018), library loans (Cabezas-Clavijo et al., 2013), publisher prestige (Donovan & Butler, 2007), syllabus mentions (Kousha & Thelwall, 2008) and social media mentions (Batooli et al., 2016; Oberst, 2017) were extracted and analyzed to measure books' impacts from different aspects.

The above evaluation resources, and the metrics extracted from them, are mainly based on books' external information. However, the shortcomings of this external information cannot be ignored: some books may not be commented on or cited, and the lack of evaluation data may result in the failure of evaluation. Hence, book impact assessment based on books' internal information is necessary. As the internal information of a book, analysis of the book content, especially the full-text content, can reflect the quality of the book directly. However, due to the difficulty of obtaining books' contents, evaluation of books based on full texts is rare. Books' tables of contents are summaries of books' contents, so researchers have used tables of contents to measure books' impacts in the content dimension (Poulsen, 1996; Zhang & Zhou, 2020).

In conclusion, massive metrics extracted from various sources have proved useful for book impact assessment. The extracted metrics include both frequency-level metrics (e.g. citation frequencies and library holdings) and content-level metrics (e.g. metrics from reviews, citation contexts or tables of contents). Frequency-level metrics can provide intuitive evaluation results, but their shortcomings are obvious: researchers cannot detect users' real reactions to books (e.g. whether users will recommend or buy books) or identify the applicable populations of books. Content-level metrics can overcome the shortcomings of frequency-level metrics and reflect impact dimensions different from frequency information. In other words, metrics derived from different sources cannot replace each other, but may play a complementary role.
Integrating the existing evaluation resources reasonably and effectively to obtain books' comprehensive impacts is therefore of great significance. Hence, this paper aims to integrate multi-source evaluation data to construct an evaluation system, so as to provide more detailed and comprehensive information for meeting the evaluation needs of different categories of users.

3 Research questions

Little research thus far has assessed book impacts based on a multi-source evaluation system constructed by integrating multiple resources, which may ignore book impacts in some dimensions and then lead to a decline in the accuracy and practicability of evaluation results. Hence, the present study fills the gap by addressing the following research questions:

RQ1. Which metrics reflect book impact more strongly?
RQ2. Can the impacts of books be evaluated better by integrating multiple evaluation resources?
RQ3. Are there disciplinary differences in book impact assessment?

4 Methodology

4.1 Framework

The primary purpose of this paper is to assess books' comprehensive impacts by integrating multiple evaluation resources. We collect book evaluation resources along the internal and external dimensions of books. The internal evaluation resource is book content-related information, while the external evaluation resources include book review-, citation- and usage-related information. By mining and analyzing these resources (e.g. sentiment analysis, topic analysis), we can extract evaluation metrics of book impact and construct a book impact evaluation system. Then, we calculate the weights and scores of each metric in the evaluation system, so as to obtain the impact results of books. In addition, we compare our evaluation results with scores assigned by experts to verify the reliability of the assessment system. The overall framework is summarized in Figure 1.

[Figure 1. Framework of book impact assessment based on multiple sources: evaluation sources (book contents, book reviews, book citations, book usages) feed evaluation metric extraction and evaluation system construction; metric weights and scores are then calculated to produce book impact scores, which are compared with expert evaluation scores.]

4.2 Evaluation source collection

This paper collects multiple evaluation resources to evaluate book impact from the internal and external dimensions of books, including book contents, reviews, citation information and usage information. These resources can directly reflect the attitudes and opinions of users related to book impacts (or users who pay attention to book impact evaluation), such as authors, public readers, scholars and related institutions.

With the rapid development of e-commerce, people are more used to buying books online, generating massive book reviews. These reviews express users' opinions on books and reveal their sentiment tendencies towards various aspects of books. Effective mining of reviews can identify users' purchase intentions and preferences. Meanwhile, online reviews are popular, massive, measurable and easy to access, and can therefore be used as an important resource to evaluate the impact of books (Zhou et al., 2016). Hence, for book reviews, we firstly matched the Chinese discipline category (Standardization Administration of China, 2009) with the book categories provided by Amazon (https://www.amazon.cn/gp/book/all_category) to identify book disciplines (as the evaluation objects in this paper are Chinese books). Five disciplines were identified, including Computer Science, Literature, Law, Medicine and Sport Science.
Then, we collected Amazon reviews of books in the five disciplines in July 2017, and got 642258 reviews of 57627 books.

Books' tables of contents are summaries of the books by their authors, abstracting the books' contents. Users can make a preliminary judgment on the contents of books by browsing the tables of contents (TOCs for short). Therefore, books' TOCs can be used to reflect the impacts of books in terms of content. Hence, TOCs of the 57627 books were collected from Amazon simultaneously for extracting content-related metrics.

Books' citation-related information includes books' citation frequencies and citation literatures (literatures that cited the books). We extracted books' citation frequencies and citation literatures from Baidu Scholar (http://xueshu.baidu.com/, one of the largest academic platforms in the world with more than 1.2 billion academic resources) with a crawler, by matching the titles, authors and publication years of books, in August 2017. Then, citation frequencies and citation literatures (including titles, publication years, full texts) of 9757 books were collected (55467 of 65224 books had no citations). Meanwhile, we extracted citation contexts from the citation literatures of books manually. Due to the high cost of manual annotation, we selected 500 books from the 9757 books according to the ratios of different citation frequencies. As some citation literatures have no citation mark in the text, we got 2288 citation contexts of 370 books. Each citation context contains five sentences, namely the citation content and the two sentences before and after the citation content.

[Figure 2. The process of data collection: books commented by users (57627 books); books cited by scholars (9757 of 57627 books); books with citation contexts extracted manually (370 of 9757 books); books collected by libraries (370 of 370 books); confirmed book set (370 books).]

Table 1. Data statistics of books in five disciplines

Disciplines      | #TOCs | #reviews | #citations | #citation contexts | #library holdings
Computer Science | 63    | 2742     | 385        | 284                | 234
Literature       | 76    | 2891     | 404        | 548                | 237
Law              | 80    | 1530     | 450        | 614                | 201
Medicine         | 90    | 1879     | 506        | 585                | 371
Sport Science    | 61    | 1652     | 332        | 257                | 202
Total            | 370   | 10694    | 2077       | 2288               | 1245

Book usage information includes books' sales and library holdings. Due to Amazon's privacy policies, we could not obtain the specific sale numbers of books in bulk. In this paper, we extracted book sale information from Amazon by matching the ISBNs of books, as Amazon provides sale ranking information on the book detail pages. We collected books' library holding information from WorldCat.org (OCLC). Finally, we obtained multi-dimensional evaluation information for 370 Chinese books (published from 1985 to 2016). The process of data collection is shown in Figure 2. Data statistics are shown in Table 1.

4.3 Construction of evaluation metric system for book impact

We constructed the evaluation system of book impact with four resources: book contents, book reviews, book citations and book usages. We firstly conducted data mining on the multiple evaluation resources, including multi-granularity sentiment analysis, depth and breadth analysis, and citation context analysis, so as to obtain the corresponding evaluation metrics. Then, an impact evaluation system was obtained, validated by domain experts.
4.3.1 Impact assessment metrics from book contents

This paper analyzed books' TOCs to measure book impacts in the dimension of book contents. Specifically, we conducted topic analysis on books' TOCs with LDA (Latent Dirichlet Allocation) to calculate books' depth and breadth (Hoffman et al., 2010; Pons-Porrata et al., 2007). We held that books introducing fewer topics tend to be more insightful, while books with more uniform topic distributions should get higher breadth scores (Zhang & Zhou, 2020). We thus got two evaluation metrics, TOC depth and TOC breadth, as shown in Figure 3. TOC depth refers to the depth of book contents reflected in the book's TOC, while TOC breadth refers to the breadth of book contents reflected in the book's TOC. The two metrics are computed by equations (1) and (2).

[Figure 3. Impact assessment metrics from book contents: TOC depth ($S\_TOCdepth_i$) and TOC breadth ($S\_TOCbreadth_i$).]

$S\_TOCdepth_i = \dfrac{1}{\#TOCtopics_i / \#pages_i}$   (1)

$S\_TOCbreadth_i = -\dfrac{1}{\ln(\#TOCtopics_i)} \sum_{j=1}^{\#TOCtopics_i} p\_TOCtopics_{ij} \ln p\_TOCtopics_{ij}$   (2)

where $S\_TOCdepth_i$ is the depth score of book $i$, $\#TOCtopics_i$ is the number of topics expressed in the table of contents of book $i$, and $\#pages_i$ is the number of pages of book $i$. $S\_TOCbreadth_i$ denotes the breadth score of book $i$, and $p\_TOCtopics_{ij}$ is the topic probability of book $i$ on topic $j$.

4.3.2 Impact assessment metrics from book reviews

[Figure 4. Impact assessment metrics from book reviews: number of positive reviews ($S\_pos_i$), number of negative reviews ($S\_neg_i$), star rating ($S\_star_i$) and aspect satisfaction ($S\_aspect_i$).]

Book reviews reflect users' opinions on books and on books' aspects, such as price, printing, and paper. Hence, in order to get users' overall sentiments and aspect sentiments, we conducted multi-granularity sentiment analysis on book online reviews (Zhou et al., 2016). (Book reviews in this paper refer to online reviews of books; we did not analyze books' scholarly reviews published in journals, as the number of books in the corpus commented on by scholars is too small, accounting for only about 18.38%.) Specifically, we used supervised machine learning to identify the sentiment polarities of reviews. Then, we extracted aspects of books via deep learning (i.e. Word2Vec, http://word2vec.googlecode.com/svn/trunk/) and detected the sentiment polarities of aspects in each review (Zhou & Zhang, 2018). Hereby, four evaluation metrics were extracted from book reviews: the number of positive reviews, the number of negative reviews, star rating and aspect satisfaction, as shown in Figure 4. Aspect satisfaction reflects users' satisfaction with aspects of books. Scores of the four metrics are computed with equations (3) to (7).

$S\_pos_i = \#pos_i$   (3)

where $S\_pos_i$ is the score of the positive review metric of book $i$ and $\#pos_i$ is the number of positive reviews of book $i$.

$S\_neg_i = \#neg_i$   (4)

where $S\_neg_i$ is the score of the negative review metric of book $i$ and $\#neg_i$ is the number of negative reviews of book $i$.

$S\_star_i = \dfrac{\sum_{j=1}^{n_i} star_{ij}}{n_i}$   (5)

where $S\_star_i$ denotes the star rating score of book $i$, $n_i$ is the number of reviews of book $i$, and $star_{ij}$ is the star rating in review $j$ of book $i$.

$S\_aspect_i = \dfrac{\sum_{j=1}^{m_i} aspect_{ij}}{m_i}$   (6)

$aspect_{ij} = \dfrac{\sum_{k=1}^{n_{ij}} v_{ijk}}{\sum_{k=1}^{n_{ij}} |v_{ijk}|}$   (7)

where $S\_aspect_i$ denotes the aspect satisfaction score of book $i$, $aspect_{ij}$ is the score of aspect $j$ of book $i$, and $m_i$ is the number of aspects of book $i$. $v_{ijk}$ denotes the score of aspect $j$ in review $k$ of book $i$: if aspect $j$ in review $k$ is positive, $v_{ijk}$ equals 1, else it equals -1. $n_{ij}$ is the number of reviews mentioning aspect $j$ of book $i$.
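As a concrete illustration of equations (1) and (2), the following is a minimal Python sketch of the depth and breadth computations, assuming a topic model has already produced a topic-probability vector for a book's TOC; the function names and example values are illustrative. The same normalized-entropy form recurs in equations (10) and (16) below.

```python
import numpy as np

def toc_depth(n_topics: int, n_pages: int) -> float:
    """Equation (1): the fewer topics a TOC spreads over per page,
    the deeper the book is assumed to be."""
    return 1.0 / (n_topics / n_pages)

def toc_breadth(topic_probs: np.ndarray) -> float:
    """Equation (2): entropy of the TOC's topic distribution,
    normalized by ln(#topics) so a uniform distribution scores 1.0.
    Assumes at least two topics and probabilities summing to 1."""
    nonzero = topic_probs[topic_probs > 0]          # avoid ln(0)
    entropy = -np.sum(nonzero * np.log(nonzero))
    return entropy / np.log(len(topic_probs))

# Example: a 320-page book whose TOC covers 4 LDA topics.
probs = np.array([0.4, 0.3, 0.2, 0.1])
print(toc_depth(n_topics=4, n_pages=320))   # 80.0
print(toc_breadth(probs))                   # ~0.92 (fairly broad)
```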
4.3.3 Impact assessment metrics from book citations

[Figure 5. Impact assessment metrics from book citations: citation frequency ($S\_citation_i$), citation literature depth ($S\_citdepth_i$), citation literature breadth ($S\_citbreadth_i$), citation strength ($S\_int_i$) and citation function ($S\_fun_i$).]

We extracted citation-based metrics from two citation sources: citation frequency and citation literature. The citation frequency of a book reflects scholars' opinions and attitudes towards the book. Generally, books with higher citation frequencies tend to have higher impacts (Kousha et al., 2011). For citation literatures, we can analyze the depth and breadth of a book's citation literatures to measure the book's depth and breadth (Zhou & Zhang, 2020a). Meanwhile, analysis of the citation contexts in citation literatures can identify scholars' citation intentions, which can measure the detailed impacts of books (Zhou & Zhang, 2019). Hence, we get five evaluation metrics from book citations: citation frequency, citation literature depth, citation literature breadth, citation intensity and citation function, as shown in Figure 5. Citation literature depth means the depth of a book as reflected by the literatures citing it, while citation literature breadth means the breadth of a book as reflected by those literatures. Citation function refers to scholars' purposes in citing books, comprising background citation, comparison citation and use citation (Hernández-Alvarez et al., 2017). Background citation means the book is cited to elaborate the frontier value, theoretical significance or practical value of a research field from a macro perspective. Comparison citation serves to compare the theories, methods, results or conclusions from books with the authors' research. Use citation aims to cite theories, methods, data, tools, etc. from existing books. Citation intensity denotes the citation frequency of a book within a single citation literature.

For calculating scores of the five metrics, we conducted finer-grained analysis of the citation resources. Specifically, we counted the numbers of citation literatures to get the citation frequency scores, calculated by equation (8).

$S\_citation_i = \#citation_i$   (8)

where $S\_citation_i$ is the score of the citation frequency metric of book $i$ and $\#citation_i$ is the number of citations of book $i$.

We extracted the topics expressed by citation literatures to reflect the depth and breadth of books in the citation dimension. We held that books with more citation literatures whose citation literatures introduce fewer topics tend to get higher depth scores, while books with more uniform topic distributions tend to get higher breadth scores. Hence, the depth and breadth of books based on citation literatures are computed by equations (9) and (10).

$S\_citdepth_i = \dfrac{\#citation_i}{\#cittopics_i}$   (9)
We counted citations about a given book in a citation literature to calculate citation intensity of the book, which can be computed by equation (11) 𝑆𝑖𝑛𝑡𝑖 = 𝑛 𝑖𝑛𝑡𝑖𝑗 𝑗WX 𝑆𝑐𝑖𝑡𝑎𝑡𝑖𝑜𝑛𝑖 (11) Where, 𝑆𝑖𝑛𝑡𝑖 denotes citation intensity score of book 𝑖, 𝑖𝑛𝑡𝑖𝑗 means citation intensity score of book 𝑖 in citation literature 𝑗, 𝑆𝑐𝑖𝑡𝑎𝑡𝑖𝑜𝑛𝑖 is citations of book 𝑖. We conducted text classification on citation contexts extracted from citation literatures to identify scholars’ three different citation functions, and then calculated metric scores of citation function with equations (12) and (13) (Hernández-Alvarez et al., 2017). 𝑆𝑓𝑢𝑛𝑖 = 𝑛 𝑗WX 𝑓𝑢𝑛𝑖𝑗 𝑛𝑖 (12) 𝑓𝑢𝑛𝑖𝑗 = 1, Background citation 2, 𝐶𝑜𝑚𝑝𝑎𝑟𝑖𝑠𝑜𝑛 𝑐𝑖𝑡𝑎𝑡𝑖𝑜𝑛 3, 𝑈𝑠𝑒 𝑐𝑖𝑡𝑎𝑡𝑖𝑜𝑛 (13) Where, 𝑆𝑓𝑢𝑛𝑖 denotes citation function score of book 𝑖, 𝑓𝑢𝑛𝑖𝑗 means citation function score of the 𝑗th citation context about book 𝑖. 𝑛𝑖 is the total citation frequency in the texts of citation literatures about book 𝑖. 4.3.4 Impact assessment metrics from book usages The usages of books (e.g. library holdings and sales) are closely related to books’ use impacts. Books with more library holdings and sales may get higher impacts (White et al., 2009). Therefore, in terms of book usages, we extracted four metrics, including library holding number, library holding region, library holding distribution and sale, as shown in Figure 6. Library holding numbers is the total number of a book in libraries around the world. Library holding region measures how many countries collect the book. Library holding distribution refers to holding distribution of the book in libraries. The four usage-related metrics can by equations (14) to (17). 9 Evaluation source Book usages Metrics Library holding number Library holding region Library holding distribution Sale Metric scores 𝑆𝑛𝑢𝑚𝑖 𝑆𝑟𝑒𝑔𝑖 𝑆𝑑𝑖𝑠𝑖 𝑆𝑠𝑎𝑙𝑒𝑖 Figure 6. Impact assessment metrics from book usages 𝑆𝑟𝑒𝑔𝑖 = #ℎ𝑜𝑙𝑑𝑟𝑒𝑔𝑖 (14) 𝑆𝑛𝑢𝑚𝑖 = 𝑆𝑟𝑒𝑔𝑖 𝑗MC #ℎ𝑜𝑙𝑑𝑛𝑢𝑚𝑖𝑗 (15) 𝑆𝑑𝑖𝑠𝑖 = − C DE(𝑆𝑟𝑒𝑔𝑖) 𝑆𝑟𝑒𝑔𝑖 J M C 𝑝_holdings+Jln (𝑝_holdings+J) (16) 𝑆𝑠𝑎𝑙𝑒𝑖 = #𝑠𝑎𝑙𝑒𝑖 (17) Where, 𝑆𝑟𝑒𝑔𝑖 is the score of holding regions of book 𝑖 ; #ℎ𝑜𝑙𝑑𝑟𝑒𝑔𝑖 is the number of regions that collected book 𝑖. 𝑆𝑛𝑢𝑚𝑖 is the score of holding numbers of book 𝑖 ; #ℎ𝑜𝑙𝑑𝑛𝑢𝑚𝑖𝑗 is the number of library holdings of book 𝑖 in region 𝑗. 𝑆𝑑𝑖𝑠𝑖 is the score of holding distributions of book 𝑖, 𝑝_holdings+J is the probability of the book 𝑖 in region j. 𝑆𝑠𝑎𝑙𝑒𝑖 denotes the score of sale of book 𝑖; #𝑠𝑎𝑙𝑒𝑖 is the reordered sales ranking of book 𝑖. 4.4 Calculation of metric weights for book impact assessment Based on the above analysis, we constructed a multi-level and multi-dimensional book impact evaluation system, as shown in Figure 7. Each metric can be quantified to reflect different characteristics of books and be used to evaluate the impact of books. Expert evaluation combined with analytic hierarchy process (AHP) was used to calculate weights of evaluation metrics (Saaty, 2005). The AHP decomposes the problem into different factors according to the requirements of the overall goal. Based on the interrelated influence among factors, the factors are aggregated and combined at different levels to form a multi-level structure model. Finally, the problem comes down to the determination of the relatively important weights of the lowest level (i.e. evaluation metrics) relative to the highest level (i.e. book evaluation). Therefore, AHP is effective for hierarchical decision analysis, and can be used to calculate the weights of metrics in the evaluation system (Lee & Kozar, 2006). 
4.4 Calculation of metric weights for book impact assessment

Based on the above analysis, we constructed a multi-level and multi-dimensional book impact evaluation system, as shown in Figure 7. Each metric can be quantified to reflect different characteristics of books and be used to evaluate book impact. Expert evaluation combined with the analytic hierarchy process (AHP) was used to calculate the weights of the evaluation metrics (Saaty, 2005). The AHP decomposes a problem into different factors according to the requirements of the overall goal. Based on the interrelated influence among factors, the factors are aggregated and combined at different levels to form a multi-level structure model. Finally, the problem comes down to determining the relative weights of the lowest level (i.e. the evaluation metrics) with respect to the highest level (i.e. book evaluation). Therefore, AHP is effective for hierarchical decision analysis and can be used to calculate the weights of metrics in the evaluation system (Lee & Kozar, 2006).

Book impact assessment system:
  Book contents: TOC depth; TOC breadth
  Book reviews: #positive review; #negative review; star rating; aspect satisfaction
  Book citations: #citation; citation literature depth; citation literature breadth; citation strength; citation function
  Book usages: library holding number; library holding region; library holding distribution; sale
Figure 7. Book impact assessment system

Firstly, we invited experts in the field of book impact assessment (including scholars and relevant practitioners) to participate in a metric importance survey, so as to obtain the initial weights of the metrics. 65 questionnaires were sent out and 53 valid questionnaires were collected; the questionnaire is shown in Appendix A. We used a 5-level scale to rate the importance of metrics, ranging from 1 for "very unimportant" to 5 for "very important", which yielded initial weights for all metrics in Figure 7. Finally, based on the results of the questionnaire survey, AHP was used to calculate the final weights of all metrics (Cheng & Li, 2001).

4.5 Calculation of book impact scores

We integrated the evaluation metrics of multiple evaluation sources to determine the book impact score. Specifically, we normalized the score of each metric, and then book impact scores were obtained as the weighted sum of the normalized scores with equations (18) and (19).

$Score_i = \sum_{j=1}^{m} NorS_{ij} \cdot w_j$   (18)

$NorS_{ij} = \dfrac{2\arctan(S_{ij})}{\pi}$   (19)

where $w_j$ denotes the weight of metric $j$, $m$ is the number of metrics, $NorS_{ij}$ is the normalized score of metric $j$ for book $i$, and $S_{ij}$ is the raw score of metric $j$ for book $i$.
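A minimal sketch of the aggregation in equations (18)-(19): raw metric scores are squashed into [0, 1) with the arctan transform and combined with the AHP-derived weights. The metric names and weight values below are placeholders, not the values from Table 2.

```python
import math

def normalize(score):
    """Equation (19): map a non-negative raw metric score into [0, 1)."""
    return 2.0 * math.atan(score) / math.pi

def book_impact_score(raw_scores, weights):
    """Equation (18): weighted sum of normalized metric scores.

    raw_scores, weights: dicts keyed by metric name.
    """
    return sum(normalize(raw_scores[m]) * weights[m] for m in weights)

# toy usage with two placeholder metrics
raw = {"toc_depth": 3.2, "citation": 41.0}
w = {"toc_depth": 0.6, "citation": 0.4}
print(round(book_impact_score(raw, w), 3))
```

The arctan normalization is bounded and monotone, so metrics with very different raw scales (e.g. citation counts vs. entropy values) become comparable before weighting.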
5 Results

5.1 Analysis on metric weights of book impact assessment

In order to determine which metrics are more important for measuring book impact (i.e. to answer RQ1), we calculated the weights of the different metrics in the evaluation system. Figure 8 shows the weight scores of the primary metrics: Figure 8(a) presents the initial importance of the four primary metrics scored by the 53 experts, and Figure 8(b) reports the final weight scores of the four primary metrics. We can see from Figure 8 that the weight of book content is slightly higher than those of the other three metrics. This indicates that the four first-level metrics are of similar importance for book impact evaluation, with book content being relatively the most important. Meanwhile, the expert evaluation results reveal that the first-level evaluation metrics extracted from the four evaluation resources can be used to measure book impact. These metrics assess books' impacts in different dimensions, from both internal and external aspects of books. Therefore, the integration of the four evaluation dimensions (or four evaluation resources) can be used to comprehensively evaluate the impact of books.

Figure 8. The weight scores of primary metrics: (a) initial expert importance scores; (b) final weight scores

Table 2 presents the weights of the secondary evaluation metrics in the book impact assessment system. For the secondary metrics, the weights of the internal evaluation metrics (i.e. the metrics extracted from book content) are similar, about 0.14. The weights of the external evaluation metrics (i.e. the metrics extracted from book reviews, book citations and book usages) lie between 0.047 and 0.064, lower than the internal evaluation metrics. This reflects that book content is a quite important book evaluation resource. However, existing research on book impact assessment is rarely based on book content. This may be because books' contents often cannot be easily obtained online, and the difficulty of content analysis and processing is obviously higher than for academic articles and other types of publications. In addition, the sum of the weights of the evaluation metrics external to books (0.7211) is higher than that of the internal evaluation metrics (0.2789). This indicates that the impact evaluation of books cannot rely only on internal evaluation metrics; the various external evaluation metrics are also an important evaluation basis. In summary, with a single data source we can only capture books' impacts from one dimension, and once data in this dimension is lacking (e.g., no book reviews), the impacts of books cannot be evaluated. Integrating multi-source data to evaluate the impacts of books can effectively avoid such shortcomings and provide comprehensive evaluation results for users.

Table 2. The weights of book impact evaluation metrics

Primary metrics   Secondary metrics              Weights of secondary metrics
Book contents     TOC depth                      0.1443
                  TOC breadth                    0.1346
Book reviews      #positive review               0.0640
                  #negative review               0.0622
                  Star rating                    0.0578
                  Aspect satisfaction            0.0540
Book citations    #citation                      0.0502
                  Citation literature depth      0.0498
                  Citation literature breadth    0.0477
                  Citation strength              0.0491
                  Citation function              0.0482
Book usages       Library holding number         0.0598
                  Library holding region         0.0569
                  Library holding distribution   0.0578
                  Sale                           0.0636

Figure 9 shows the metric score ranks of the 5 books with the highest impact scores. The score ranks of these 5 books across the 15 metrics vary considerably. This reveals that even books with high impact find it difficult to obtain high scores in all dimensions. It also indicates that book impact evaluation based on a single evaluation resource may yield one-sided results.

Figure 9. Metric score ranks of Top 5 books

5.2 Analysis on impact scores of book impact assessment

5.2.1 Reliability analysis on book impact assessment results

In order to verify the reliability of the book impact results based on the impact evaluation system (i.e. to answer RQ2), we invited experts to evaluate the books' impacts manually, and then compared the two sets of evaluation results. Specifically, we first took 48 books in 8 research domains of computer science and 30 books in 5 research domains of literature as experimental samples, as shown in Table 3. Then, we invited experts in the fields of computer science and literature to manually assess the impact of books in the corresponding disciplines using a 5-level scale, ranging from 1 for "low impact" to 5 for "high impact". Meanwhile, we provided detailed page links of the books on Amazon and Douban Books (an online book reading and review website, https://book.douban.com/) so that respondents could learn about the books. The questionnaire for books in literature is shown in Appendix B (the questionnaire for books in computer science is similar). 56 valid questionnaires related to computer science and 48 valid questionnaires related to literature were collected from experts. Among the valid questionnaires, more than 80% of the respondents held a master's degree or above, of which about 30% were doctors. Thirdly, we calculated the average expert score as the final impact score of each book. Finally, we conducted correlation analysis between the expert evaluation scores (i.e. book impact based on manual evaluation) and the automatic assessment scores (i.e. book impact based on the evaluation metric system).
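The reliability check is a rank correlation between the automatic scores and the averaged expert ratings. A minimal sketch with scipy is shown below, on fabricated toy numbers rather than the paper's data.

```python
from scipy.stats import spearmanr

# toy data: automatic impact scores and averaged expert ratings
# for the same five books (placeholder values, not from the paper)
auto_scores = [0.62, 0.55, 0.48, 0.59, 0.41]
expert_means = [4.2, 3.8, 3.1, 4.0, 2.7]

rho, p_value = spearmanr(auto_scores, expert_means)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```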
The results are shown in Table 4. It can be seen from Table 4 that the automatic book impact scores have a significant positive correlation with the expert evaluation results, indicating that the calculation results based on our evaluation system are reliable.

Table 3. Domains and numbers of books for expert evaluation

Computer Science (48 books):
  Computer control simulation and artificial intelligence   10
  Software engineering                                       5
  Computer network security                                  7
  Programming and development                                7
  Database                                                   5
  PLC Technology                                             3
  Operating system                                           6
  Computer algorithms                                        5
Literature (30 books):
  Literature research                                        7
  Prose                                                      5
  Novel                                                      6
  History                                                    3
  Poetry and Drama                                           9

Table 4. Correlations between book comprehensive impact scores and expert evaluation scores

Disciplines        Spearman correlation coefficient   N
Computer science   0.631**                            48
Literature         0.715**                            30
Note: **. Significant at p=0.01

5.2.2 Impact scores of book impact assessment

Based on the multi-source data mining and analysis, we obtained the book impact assessment results, as shown in Figure 10. From Figure 10 we can see that books' comprehensive impact scores range from 0.39 to 0.66, and most books score lower than 0.6. This indicates that the number of books with high impact is relatively small, and most books fall into the low impact set. Hence, book-related scholars and institutions need to allocate resources effectively, as books cannot always achieve high scores in all aspects.

Figure 10. Scores of book impact assessment

5.3 Discipline analysis on book impact assessment results

Figure 11. Scores of book impacts in different disciplines

In order to identify disciplinary differences (i.e. to answer RQ3), we counted the book impact scores in different disciplines and identified their score interval distributions. Figure 11 shows the impact scores of books in five disciplines. It can be seen from Figure 11 that the distribution trends of book impact scores in different disciplines are similar. There are fewer books in the high or low score areas of each discipline, and most books are concentrated in the middle area. However, the impact scores of different disciplines differ considerably. Law, computer science and literature have books with impact scores higher than 0.65, while the impact scores of books in medicine and sport science are all lower than 0.65. In addition, the number of books with impact scores higher than 0.6 in computer science is significantly smaller than in the other four disciplines, and only books in sport science have impact scores lower than 0.4. Hence, we can conclude that disciplinary differences exist, and users (including individual users and institutional users) need to consider these differences when selecting, comparing and awarding books.

We counted the number distributions of the different disciplines across book impact score intervals, as shown in Figure 12. The impact scores of most books are in the middle score interval (i.e. 0.4-0.6). Meanwhile, about 10% of books have impact scores higher than 0.6, while less than 1% of books score lower than 0.4. These distribution results are consistent with the above analysis based on Figure 10. In terms of discipline differences, we can see that the proportion of sport science books in the low score interval (i.e. 0.3-0.4) is significantly higher than that of other disciplines. In the middle score interval, the proportions of books in law and medicine are higher. The proportion of literature books in the high score interval (i.e.
0.6-0.7) is the highest, while the number of computer science books in the high score interval is the smallest. The differing proportions of the five disciplines across the four impact intervals indicate that there are obvious disciplinary differences in the distribution of impact scores, especially in the distributions of the extreme impact scores.

Figure 12. Distributions of book impact scores

6 Discussion

6.1 Comparative analysis with other evaluation methods

This paper measured book impacts by integrating multiple evaluation resources, including both internal and external resources of books. Compared with manual evaluation, book evaluation based on the evaluation system can assess the impact of large numbers of books more quickly, reduce the cost of book evaluation research and shorten the evaluation cycle. Compared with assessment research based on a single evaluation resource, this method obtains the evaluation basis from more dimensions and more types of user groups, including book authors, researchers, ordinary readers and various institutional users (e.g. libraries).

We conducted correlation analysis between the expert evaluation scores and the impact scores based on each single evaluation source; the results are shown in Table 5. We can see from Table 5 that the impact scores based on all four evaluation sources are significantly correlated with the expert evaluation scores. This indicates that the four types of resources are reliable book impact evaluation resources, which can be used to measure different dimensions of book impact. However, the four correlation coefficients in Table 5 are all lower than the correlation coefficients based on the comprehensive evaluation (0.631 and 0.715). Hence, we can conclude that although a single evaluation source can be used to evaluate the impacts of books, the evaluation results are not comprehensive. The evaluation results obtained by integrating resources can overcome the one-sidedness of evaluation based on a single source, and avoid the situation in which book impact cannot be evaluated when a certain dimension of evaluation data is lacking. More importantly, in some cases users do not have a clear evaluation purpose or tendency and are not sure which evaluation source is the most reliable basis for book selection; comprehensive evaluation results can provide effective references for such users, effectively dealing with this "evaluation cold start" phenomenon.

Table 5. Correlations between book impact scores based on a single source and expert evaluation scores

Expert evaluation scores   Impact scores      Impact scores     Impact scores       Impact scores
                           based on content   based on review   based on citation   based on usage
Computer science           0.114*             0.440**           0.141*              0.531**
Literature                 0.103*             0.531**           0.159*              0.269**
Note: **. Significant at p=0.01, *. Significant at p=0.05

A noteworthy phenomenon is that, among the four primary metrics, the metric weight of book content is slightly higher than the other three, while the correlation coefficient between the impact scores based on book content and the expert evaluation scores is lower than for the other metrics. This may be related to the metrics derived from the book content, that is, TOC depth and TOC breadth. Existing studies have shown that the depth and breadth of books can be used to evaluate their impacts, but it is often difficult for book authors to balance the two (Zhang & Zhou, 2020).
In other words, books with higher depth values often find it difficult to obtain higher breadth values. We conducted correlation analysis between TOC depth and TOC breadth, and the two metrics were highly negatively correlated (-0.820). Therefore, we can roughly convert between the two metrics. Equation (20) shows the calculation of the comprehensive impact score and the conversion of the two secondary metrics extracted from book content.

$\begin{aligned} Score_i &= S_{content_i} + S_{review_i} + S_{citation_i} + S_{usage_i} \\ &= w_{depth} \cdot NorS_{TOCdepth_i} + w_{breadth} \cdot NorS_{TOCbreadth_i} + S_{review_i} + S_{citation_i} + S_{usage_i} \\ &\cong w_{depth} \cdot NorS_{TOCdepth_i} + w_{breadth} \cdot (-k \cdot NorS_{TOCdepth_i}) + S_{review_i} + S_{citation_i} + S_{usage_i} \\ &= (w_{depth} - k \cdot w_{breadth}) \cdot NorS_{TOCdepth_i} + S_{review_i} + S_{citation_i} + S_{usage_i} \end{aligned}$   (20)

where $S_{content_i}$ is the impact score of book $i$ based on book content, and $S_{review_i}$, $S_{citation_i}$ and $S_{usage_i}$ are the impact scores based on the other three sources. $w_{depth}$ and $w_{breadth}$ are the weights of TOC depth and TOC breadth, $NorS_{TOCdepth_i}$ and $NorS_{TOCbreadth_i}$ denote the normalized scores of the two metrics for book $i$, and $k$ is the conversion coefficient between the two metrics. It can be seen from equation (20) that the high negative correlation between the two metrics weakens the weight of the primary metric (i.e. book content), which eventually leads to the weaker correlation between the impact scores based on book content and the comprehensive scores.
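The conversion coefficient k in equation (20) can be estimated by regressing normalized TOC breadth on normalized TOC depth. The sketch below illustrates this on placeholder score values; only the two weights are taken from Table 2.

```python
import numpy as np

# placeholder normalized TOC scores for a handful of books
nor_depth = np.array([0.81, 0.64, 0.55, 0.40, 0.29])
nor_breadth = np.array([0.22, 0.35, 0.41, 0.58, 0.70])

# least-squares slope of breadth on depth; its negation is k in eq. (20)
slope = np.polyfit(nor_depth, nor_breadth, 1)[0]
k = -slope

w_depth, w_breadth = 0.1443, 0.1346   # secondary weights from Table 2
effective_weight = w_depth - k * w_breadth
print(f"k = {k:.3f}, effective content weight = {effective_weight:.4f}")
```

With a strong negative correlation, the effective content weight (w_depth - k * w_breadth) shrinks well below the nominal sum of the two weights, which explains the weakened content-based correlation noted above.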
In addition, book impact evaluation based on the evaluation system can provide users with fine-grained analysis results to support the decision-making of different user groups. Taking the book Sweeping up fallen leaves for winter as an example, the fine-grained analysis results are shown in Appendix C. From Appendix C we can see that the impact score of this book is ranked 6th in this paper. In terms of book contents, the ranking of TOC depth is in the middle, while the ranking of TOC breadth is relatively low; we can conclude that the depth of the book is average and the scope of its content is relatively narrow. In terms of book reviews, the book has many positive and negative reviews, and 82% of the reviews are positive. Meanwhile, most users give the book 4-star or 5-star ratings, revealing that most users hold a positive attitude towards it. In addition, the most satisfying and least satisfying aspects are printing and price, while the most and least discussed aspects are content and font. This indicates that satisfaction with the content, which users pay the most attention to, needs to be improved. For book citations, the rankings of citation frequency and citation literature depth are low, while citation literature breadth is high, indicating that the book is cited less often but the topics of its citing literature are diverse. Meanwhile, the book is most often cited for use. In terms of book usages, this book has a large number of library holdings and is held by libraries in five countries around the world; the USA has the largest holding number, followed by China. In conclusion, based on the analysis of multi-source evaluation data, we can obtain fine-grained evaluation results about books, and such results are difficult to obtain from a single evaluation resource. In addition, the structured rich-text form of the evaluation results in Appendix C can help users understand books more comprehensively and quickly, which is the original intention of book impact evaluation research.

6.2 Book impact assessment based on users' diversified evaluation demands

For users who have clear evaluation purposes (or evaluation needs), we can provide not only comprehensive evaluation results with detailed information, but also evaluation results based on specific evaluation resources according to users' different demands. This also reflects an advantage of the comprehensive evaluation system: the differentiated combination of evaluation resources can adapt to diversified and personalized evaluation tasks. For example, for users who want to consult the purchase opinions or attitudes of existing users for book selection, we can provide book impact results based on book reviews, as shown in Table 6.

Table 6. Book impact assessment based on book reviews

Rank  ISBN            Title                                  Discipline
1     9787508633893   My Life                                Sport science
2     9787108025371   Sweeping up fallen leaves for winter   Law
3     9787505732025   Memory is a light pain                 Literature
4     9787532553129   Nalan's Poems                          Literature
5     9787020102990   From the Seine to Firenze              Literature
…     …               …                                      …

For academic institutions, which pay more attention to the academic impact of books, we can calculate the impacts of books based on their citation information, as shown in Table 7. Such evaluation results can help academic institutions assist experts with awarding books, improving evaluation efficiency and reducing award costs. Libraries often need to consider the global library holdings and sales of books for book selection; therefore, impact evaluation results based on book usages are often needed, as shown in Table 8. Based on such results, libraries can quickly identify the books that need to be added and adjust the placement of books, so as to better ensure book circulation and maintain libraries' customer flow. For scholars, book content information is important for book recommendation; hereby, impact is often measured based on book contents, and the assessment results are shown in Table 9. When selecting or recommending books, especially among massive numbers of books with similar topics, scholars can thus choose books more quickly.

Table 7. Book impact assessment based on book citations

Rank  ISBN            Title                                                Discipline
1     9787807528876   On the vocabulary of Zhou Mi's notes                 Literature
2     9787811210330   Zhongjing internal medicine                          Medicine
3     9787811065497   Gynecopathy                                          Medicine
4     9787514606331   Four tragedies of Shakespeare                        Literature
5     7308050467      Chinese and foreign literary selections of Yu Dafu   Literature
…     …               …                                                    …

Table 8. Book impact assessment based on book usages

Rank  ISBN            Title                                                                                Discipline
1     9787306037602   Tips for healthy exercise you don't know                                             Sport science
2     7301094469      On Chinese traditional law: from the perspective of Chinese traditional studies      Law
3     7040220629      Selected lectures on drama of Yuan, Ming and Qing Dynasties                          Literature
4     9787117119726   Clinical parasitology laboratory                                                     Medicine
5     9787301112496   Research on the theory of absolute property act and the legal system of real right   Law
…     …               …                                                                                    …
Table 9. Book impact assessment based on book contents

Rank  ISBN            Title                                                    Discipline
1     9787514606331   Four tragedies of Shakespeare                            Literature
2     9787301113028   Encyclopedia of law of Peking University: Economic Law   Law
3     7030128834      Programmer's Manual of Visual FoxPro 8.0                 Computer science
4     9787117134606   Handbook of rational use of antibiotics                  Medicine
5     9787807087199   Appreciation Dictionary of 300 Yuan opera                Literature
…     …               …                                                        …

In addition to providing evaluation results based on specific evaluation resources, users can also adjust the weight of each metric in the evaluation system according to their own needs, so as to obtain personalized evaluation results. However, it is worth noting that adjusting metric weights requires users to have a quite clear understanding of their evaluation needs.

Our study is subject to a few limitations. Firstly, due to the high cost of obtaining citation contents manually, the data size in this paper is small. Hence, we will try to detect citation contents automatically, so as to assess more books from more disciplines and further verify the reliability and feasibility of the evaluation system and methods proposed in this paper. Meanwhile, due to data sparsity (e.g. books' academic reviews published in journals), some evaluation resources are not included in the evaluation system of this paper. In the future, we need to explore the acquisition and analysis of such data to improve the evaluation system. Secondly, in the process of integrating different resources, the quality differences among the multiple evaluation resources also need to be considered (Zhang et al., 2019). Measuring the data quality of different evaluation sources and screening reliable evaluation data is a direction for subsequent optimization. Meanwhile, it is necessary to integrate the evaluation data of the same evaluation resource from different platforms to avoid evaluation errors caused by a single platform. Lastly, this paper selected four evaluation resources from the internal and external dimensions of books; however, there may be further, as yet unidentified, resources that can also be used to evaluate book impact. Therefore, in follow-up studies, we will excavate more reliable evaluation sources to improve the evaluation metric system.

7 Conclusion

This paper constructed an evaluation system for book impact and provided comprehensive impact evaluation results. Meanwhile, users can combine the required evaluation metrics according to different evaluation purposes and demands. In answer to the first research question, the importance of the metrics from the four resources is similar, while the weights of the metrics extracted from book content are slightly higher. These evaluation metrics measure the impacts of books from different dimensions and play complementary roles in the impact evaluation process. Regarding the second research question, the multi-source book impact assessment system does seem to be valuable for book impact assessment. Meanwhile, assessment results based on the evaluation system can provide more detailed information for different types of users and meet diverse evaluation needs. Addressing the third research question, there are substantial differences between books published in different disciplines. In book selection, recommendation and other related activities, it is necessary to fully consider the disciplinary differences of books.
In conclusion, book impacts measured with the evaluation system can not only provide comprehensive evaluation results for users, but also yield personalized evaluation results according to users' evaluation needs. Meanwhile, this paper provides supplementary information for existing book evaluation, and the approach is suitable for various evaluation scenarios.

Acknowledgement

This work is supported by the National Social Science Fund Project (No. 19CTQ031).

References

Azer, S. A. (2019). Top-cited articles in medical professionalism: a bibliometric analysis versus altmetric scores. BMJ Open, 9(7), e029433.

Batooli, Z., Ravandi, S. N., & Bidgoli, M. S. (2016). Evaluation of scientific outputs of Kashan university of medical sciences in scopus citation database based on scopus, researchgate, and mendeley scientometric measures. Electronic Physician, 8(2), 2048-2056.

Butler, J. S., Kaye, I. D., Sebastian, A. S., Wagner, S. C., & Vaccaro, A. R. (2017). The evolution of current research impact metrics: From bibliometrics to altmetrics? Clinical Spine Surgery, 30(5), 226-228.

Cabezas-Clavijo, Á., Robinson-García, N., Torres-Salinas, D., Jiménez-Contreras, E., Mikulka, T., Gumpenberger, C., . . . Gorraiz, J. (2013). Most borrowed is most cited? Library loan statistics as a proxy for monograph selection in citation indexes. Proceedings of the 14th International Society of Scientometrics and Informetrics, 1237-1252.

Calhoun, J. C. (2011). Reviews, holdings, and presses and publishers in academic library book acquisitions. Library Resources & Technical Services, 45(3), 127-177.

Cheng, E. W. L., & Li, H. (2001). Analytic hierarchy process: an approach to determine measures for business performance. Measuring Business Excellence, 5(3), 30-37.

Chevalier, J., & Mayzlin, D. (2006). The effect of word of mouth online: Online book reviews. Journal of Marketing Research, 43, 348-354.

Donovan, C., & Butler, L. (2007). Testing novel quantitative indicators of research 'quality', esteem and 'user engagement': An economics pilot study. Research Evaluation, 16(4), 231-242.

Gorraiz, J., Gumpenberger, C., & Purnell, P. J. (2014a). The power of book reviews: a simple and transparent enhancement approach for book citation indexes. Scientometrics, 98(2), 841-852.

Gorraiz, J., Purnell, P. J., & Glänzel, W. (2014b). Opportunities for and limitations of the Book Citation Index. Journal of the Association for Information Science & Technology, 64(7), 1388-1398.

Hernández-Alvarez, M., Soriano, J. M. G., & Martínez-Barco, P. (2017). Citation function, polarity and influence classification. Natural Language Engineering, 23(4), 561-588.

Hoffman, M. D., Blei, D. M., & Bach, F. R. (2010). Online learning for latent Dirichlet allocation. Advances in Neural Information Processing Systems, 23, 856-864.

Kousha, K., & Thelwall, M. (2008). Assessing the impact of disciplinary research on teaching: An automatic analysis of online syllabuses. Journal of the Association for Information Science & Technology, 59, 2060-2069.

Kousha, K., & Thelwall, M. (2015). Alternative metrics for book impact assessment: Can Choice reviews be a useful source? Proceedings of the 15th International Conference on Scientometrics and Informetrics, 59-70.

Kousha, K., & Thelwall, M. (2016). Can Amazon.com reviews help to assess the wider impacts of books? Journal of the Association for Information Science & Technology, 67(3), 566-581.
Kousha, K., & Thelwall, M. (2018). Can Microsoft Academic help to assess the citation impact of academic books? Journal of Informetrics, 12(3), 972-984.

Kousha, K., Thelwall, M., & Abdoli, M. (2017). Goodreads reviews to assess the wider impacts of books. Journal of the American Society for Information Science & Technology, 68(8), 2004-2016.

Kousha, K., Thelwall, M., & Rezaie, S. (2011). Assessing the citation impact of books: The role of Google Books, Google Scholar, and Scopus. Journal of the American Society for Information Science & Technology, 62(11), 2147-2164.

Lee, Y., & Kozar, K. A. (2006). Investigating the effect of website quality on e-business success: An analytic hierarchy process (AHP) approach. Decision Support Systems, 42(3), 1383-1401.

Maity, S. K., Panigrahi, A., & Mukherjee, A. (2018). Analyzing social book reading behavior on Goodreads and how it predicts Amazon best sellers. Proceedings of the International Conference on Advances in Social Networks Analysis and Mining, 211-235.

McCain, K. W., & Salvucci, L. J. (2006). How influential is Brooks' Law? A longitudinal citation context analysis of Frederick Brooks' The Mythical Man-Month. Journal of Information Science, 32(3), 277-295.

Mooney, R. J., & Roy, L. (2000). Content-based book recommending using learning for text categorization. Fourth ACM Conference on Digital Libraries, 195-204.

Oberst, U. (2017). Measuring the societal impact of research with altmetrics: an experiment. Journal for Library Culture, 5(1), 16-21.

Pons-Porrata, A., Berlanga-Llavori, R., & Ruiz-Shulcloper, J. (2007). Topic discovery based on text mining techniques. Information Processing & Management, 43(3), 752-768.

Poulsen, C. (1996). Tables of contents in library catalogs: A quantitative examination of analytic catalogs. Library Resources & Technical Services, 40(2), 133-138.

Saaty, T. L. (2005). Analytic Hierarchy Process. John Wiley & Sons, Ltd.

Su, X., Deng, S., & Shen, S. (2014). The design and application value of the Chinese Social Science Citation Index. Scientometrics, 98(3), 1567-1582.

Thelwall, M., & Abrizah, A. (2014). Can the impact of non-Western academic books be measured? An investigation of Google Books and Google Scholar for Malaysia. Journal of the Association for Information Science & Technology, 65(12), 2498-2508.

Torres-Salinas, D., García, N. R., Larguero, J. M. C., & López-Cozar, E. D. (2014). Coverage, field specialisation and the impact of scientific publishers indexed in the Book Citation Index. Online Information Review, 38(1), 24-42.

Torres-Salinas, D., Gumpenberger, C., & Gorraiz, J. (2017a). PlumX as a potential tool to assess the macroscopic multidimensional impact of books. Frontiers in Research Metrics & Analytics, 2, 5.

Torres-Salinas, D., Robinson-Garcia, N., & Gorraiz, J. (2017b). Filling the citation gap: measuring the multidimensional impact of the academic book at institutional level with PlumX. Scientometrics, 113, 1371-1384.

Tsay, M.-y., Shen, T.-m., & Liang, M.-h. (2016). A comparison of citation distributions of journals and books on the topic "information society". Scientometrics, 106(2), 475-508.

White, H. D., Boell, S. K., Yu, H., Davis, M., Wilson, C. S., & Cole, F. T. H. (2009). Libcitations: A measure for comparative assessment of book publications in the humanities and social sciences. Journal of the Association for Information Science and Technology, 60(6), 1083-1096.

White, H. D., & Zuccalá, A. A. (2018). Libcitations, worldcat, cultural impact, and fame. Journal of the Association for Information Science and Technology, 69(12), 1502-1512.
Ye, J. (2014). Development, significance and background information about the "Chinese Book Citation Index" (CBkCI) demonstration database. Scientometrics, 98(1), 557-564.

Zhang, C., Tong, T., & Bu, Y. (2019). Examining differences among book reviews from various online platforms. Online Information Review, 43(7), 1169-1187.

Zhang, C., & Zhou, Q. (2020). Assessing books' depth and breadth via multi-level mining on tables of contents. Journal of Informetrics, 14(2), 101032.

Zhou, Q., & Zhang, C. (2018). Detecting users' dietary preferences and their evolutions via Chinese social media. Journal of Database Management (JDM), 29(3), 89-110.

Zhou, Q., & Zhang, C. (2019). Using citation contexts to evaluate impact of books. Proceedings of the 17th International Conference on Scientometrics and Informetrics, 2487-2488.

Zhou, Q., & Zhang, C. (2020a). Evaluating wider impacts of books via fine-grained mining on citation literatures. Scientometrics, 1-26.

Zhou, Q., & Zhang, C. (2020b). Measuring book impact via content-level academic review mining. The Electronic Library, 38(1), 138-154.

Zhou, Q., Zhang, C., Zhao, S. X., & Chen, B. (2016). Measuring book impact based on the multi-granularity online review mining. Scientometrics, 107(3), 1435-1455.

Zuccalá, A., & Cornacchia, R. (2016). Data matching, integration, and interoperability for a metric assessment of monographs. Scientometrics, 108(1), 465-484.

Zuccalá, A., & Leeuwen, T. V. (2014). Book reviews in humanities research evaluations. Journal of the American Society for Information Science & Technology, 62(10), 1979-1991.

Zuccalá, A., Someren, M. V., & Bellen, M. V. (2014). A machine-learning approach to coding book reviews as quality indicators: Toward a theory of megacitation. Journal of the Association for Information Science & Technology, 65(11), 2248-2260.

Appendix A. Questionnaire of assessment metrics about book impact

Dear scholars:

We are conducting research about book impact assessment. We have analyzed related works about book impact assessment, and a preliminary assessment system has been structured (as shown in the following figure). In order to improve the assessment system, please give your valuable opinion about the importance of the following assessment metrics. The assessment system includes four first-grade metrics: book reviews, book contents, book citations and book usages. Each first-grade metric has corresponding second-grade metrics. Please assess the importance of the metrics at all grades.

1: Very unimportant  2: Not important  3: General importance  4: Relatively important  5: Very important

Thank you for your support and cooperation.
Book impact assessment (first-grade and second-grade metrics):
  Book reviews: # positive reviews; # negative reviews; star rating; aspect satisfactions
  Book contents: TOC depth values; TOC breadth values
  Book citations: #citations; citation literature depth values; citation literature breadth values; citation strength; citation functions
  Book usages: library holding numbers; library holding regions; library holding distributions; e-commerce sales / sale ranks

Part 1: Your basic information
Major:
E-mail:
Your educational background: ○ Below the undergraduate level ○ Undergraduate ○ Master ○ Doctorate and above
Your professional title: ○ Assistant professor ○ Associate Professor ○ Professor ○ Other

Part 2: Importance of assessment metrics

Q2: The importance of first-grade indexes (1 = very unimportant, 5 = very important):
  Book reviews:    □1 □2 □3 □4 □5
  Book contents:   □1 □2 □3 □4 □5
  Book citations:  □1 □2 □3 □4 □5
  Book usages:     □1 □2 □3 □4 □5

Q3: The importance of second-grade indexes about book reviews:
# Positive reviews: number of positive reviews about this book given by users
# Negative reviews: number of negative reviews about this book given by users
Star rating: star ratings given by users
Aspect satisfactions: users' satisfaction with book aspects (aspects refer to price, printing, etc.)
  # Positive reviews:    □1 □2 □3 □4 □5
  # Negative reviews:    □1 □2 □3 □4 □5
  Star rating:           □1 □2 □3 □4 □5
  Aspect satisfactions:  □1 □2 □3 □4 □5

Q4: The importance of second-grade indexes about book contents:
TOC depth values: depth of books as reflected by their tables of contents. A higher depth value means the book introduces deeper theory, technology, etc.
TOC breadth values: breadth of books as reflected by their tables of contents. A higher breadth value means the book involves a wider range of knowledge and introduces more theory, technology, etc.
  TOC depth values:    □1 □2 □3 □4 □5
  TOC breadth values:  □1 □2 □3 □4 □5

Q5: The importance of second-grade indexes about book citations:
#citations: citation frequency of this book
Citation literature depth values: depth of the book as reflected by the literature citing it
Citation literature breadth values: breadth of the book as reflected by the literature citing it
Citation strength: citation count of this book within one document, obtained by analyzing citation contexts
Citation functions: the use of this book when cited by other literature, e.g. background citation, method citation, etc.
  #citations:          □1 □2 □3 □4 □5
  Depth values:        □1 □2 □3 □4 □5
  Breadth values:      □1 □2 □3 □4 □5
  Citation strength:   □1 □2 □3 □4 □5
  Citation functions:  □1 □2 □3 □4 □5

Q6: The importance of second-grade indexes about book usages:
Library holding numbers: total number of holdings of this book in libraries around the world
Library holding regions: total number of regions whose libraries hold this book
Library holding distributions: the distribution of this book's holdings across libraries around the world
E-commerce sales / sale ranks: the sales of the book on e-commerce websites
  Library holding numbers:        □1 □2 □3 □4 □5
  Library holding regions:        □1 □2 □3 □4 □5
  Library holding distributions:  □1 □2 □3 □4 □5
  E-commerce sales / sale ranks:  □1 □2 □3 □4 □5

Appendix B. Questionnaire of the impacts of books in literature

Dear scholars:

We are conducting research about book impact assessment. You are invited to assess the impacts of books in the following five domains of literature. You can make a comprehensive assessment according to books' citations, reviews, sales, library holdings, etc., and then give the impact score grades of the books.

1: Low impact  2: Relatively low impact  3: General impact  4: Relatively high impact  5: High impact

Thank you for your support and cooperation.

Part 1: Your basic information
Major:
E-mail:
Your educational background: ○ Below the undergraduate level ○ Undergraduate ○ Master ○ Doctorate and above

Part 2: Book impact assessment

Q2: Books in the domain of literature research

ID  Title                                           Authors                                   Publishers
1   History of Chinese literature                   Lin Geng                                  Tsinghua University Press, 2009
2   A brief history of world literature             Li Mingbin                                Peking University Press, 2002
3   Japanese elegance                               Onishi Yoshinori                          Beijing Jiban Books Co., Ltd., 2012
4   Psychology of contemporary literature and art   Jinyuanpu                                 China Renmin University Press, 2009
5   History of fiction: Theory and Practice         Chen Pingyuan                             Peking University Press, 2010
6   The foundation of modern literature             Zhang Fugui, Wang Xueqian, Liu Zhongshu   Peking University Press, 2009
7   History of ancient Chinese Literature           Guo Yuheng                                Shanghai Classics Publishing House, 1998

(Click on the title of a book below to get more information about the book. Rate each book from 1 for low impact to 5 for high impact.)
  History of Chinese literature:        □1 □2 □3 □4 □5
  A brief history of world literature:  □1 □2 □3 □4 □5
and Art Press, 2007 Red Mansions in the wind Miao huaiming Zhonghua Book Company, 2006 (click on the title of the book below to get more information about the book) Title Low impact High impact The true story of Ah Q A woman with flowers in her arms Comments on a dream of Red Mansions The Peach Blossom Fan annotated by Liang Qichao On the version of dream of Red Mansions Red Mansions in the wind □ 1 □ 1 □ 1 □ 1 □ 1 □ 1 □ 2 □ 2 □ 2 □ 2 □ 2 □ 2 □ 3 □ 3 □ 3 □ 3 □ 3 □ 3 □ 4 □ 4 □ 4 □ 4 □ 4 □ 4 □ 5 □ 5 □ 5 □ 5 □ 5 □ 5 Q4: Books in the domain of poetry and drama ID Title Authors Publishers Nalan’s poetry and lyrics Zhang caozhen, Nalanxingde Shanghai Literature and Art Publishing House, 2009 Four tragedies of Shakespeare William Shakespeare China Pictorial press, 2013 Recite progress Zhang Benyi Guangxi Normal University Press, 2013 On the original poem Ye Xie, Shen Deqian Phoenix publishing house, 2010 A study on the vocabulary of Zhoumi notes Lectures on famous Ci Poems of Tang and Song Dynasties Yang Guan Bashu publishing house, 2011 Wang Zhaopeng Guangxi Normal University, 2006 Xi Murong’s classic works Xi Murong Contemporary world press, 2007 Collection of Ming Dynasty folk songs Zhou Yubo, Chen Shulu Nanjing Normal University Press, 2009 Hamlet’s problem Zhang Pei Peking University Press, 2006 1 2 3 4 5 6 7 8 9 29 (click on the title of the book below to get more information about the book) Title Low impact High impact Nalan’s poetry and lyrics Four tragedies of Shakespeare Recite progress On the original poem A study on the vocabulary of Zhoumi notes Lectures on famous Ci Poems of Tang and Song Dynasties Xi Murong’s classic works Collection of Ming Dynasty folk songs Hamlet’s problems Q5: Books in the domain of prose □ 1 □ 1 □ 1 □ 1 □ 1 □ 1 □ 1 □ 1 □ 1 □2 □2 □2 □2 □2 □2 □2 □2 □2 □3 □3 □3 □3 □3 □3 □3 □3 □3 □4 □4 □4 □4 □4 □4 □4 □4 □4 □5 □5 □5 □5 □5 □5 □5 □5 □5 ID Title Authors Publishers 1 Memory is a light pain Long Yingtai, Jiang Xun China Friendship Publishing Company, 2013 May you embrace the world warmly Bi Shumin Jiangsu literature and Art Publishing House, 2013 Li Ao’s love letters Li Ao Time literature and Art Press, 2012 Sleep empty Annie baby (Qingshan) Beijing October literature and Art Publishing House, 2013 Along the Seine to Firenze Huang Yongyu People's Literature Press, 2014 2 3 4 5 (click on the title of the book below to get more information about the book) Title Low impact High impact Memory is a light pain May you embrace the world warmly Li Ao’s love letters Sleep empty Along the Seine to Firenze □ 1 □ 1 □ 1 □ 1 □ 1 □ 2 □ 2 □ 2 □ 2 □ 2 □ 3 □ 3 □ 3 □ 3 □ 3 □ 4 □ 4 □ 4 □ 4 □ 4 □ 5 □ 5 □ 5 □ 5 □ 5 Q6: Books in the domain of history ID Title Authors Publishers 1 The Rommel Papers Liddle Hart Democracy and construction press, 2015 2 Military diary Xie Bingying Jiangsu literature and Art Publishing House, 2010 3 Yu Qiuli and the oil war Chen Daokuo PLA literature and Art Publishing House, 2009 (click on the title of the book below to get more information about the book) Title Low impact High impact The Rommel Papers Military diary Yu Qiuli and the oil war □ 1 □ 1 □ 1 □ 2 □ 2 □ 2 □ 3 □ 3 □ 3 □ 4 □ 4 □ 4 □ 5 □ 5 □ 5 30 Appendix C Fine-grained analysis of impact scores of Sweeping up fallen leaves for winter ISBN Title Disciplines Impact rank 9787108025371 Sweeping up fallen leaves for winter Law 6 Book contents Book reviews TOC depth rank: 191 TOC breadth rank: 281 #positive review rank: 6 #negative review rank: 8 Book citations #citations rank: 356 Citation literature depth 
Book citations: #citations rank: 356; citation literature depth rank: 370; citation literature breadth rank: 59; citation strength rank: 142; citation function rank: 34
Book usages: holding number rank: 9; holding region rank: 10; holding distribution rank: 137; e-commerce sale rank: 17

[Summary of the chart panels in Appendix C; the original figures are reduced here to their recoverable information:]
#Positive reviews vs. #negative reviews: 82% positive, 18% negative.
Star ratings: distribution over 1 to 5 stars (most ratings are 4 or 5 stars).
Citation intensity: shares of 10%, 50% and 40% over intensity values 1, 2 and 6.
Citation function: background citation, comparison citation and use citation, with shares of 16%, 0% and 84%; use citation dominates.
Aspects: most satisfied aspect: printing; least satisfied aspect: price; most concerned aspect: content; least concerned aspect: font.
Library holding numbers and regions: holdings distributed across five countries (bar chart in the original).
ai_researcher
1
Integrating_Human-Computer_Interaction_Principles_in_User-Centered_Dashboard_Design_Insights_from_Maintenance_Management.pdf
Synthesized Trust Learning from Limited Human Feedback for Human-Load-Reduced Multi-Robot Deployments

Yijiang Pang, Chao Huang, Rui Liu*

arXiv:2104.03151v2 [cs.RO] 13 Jul 2021

Abstract— Human multi-robot system (MRS) collaboration is demonstrating potential in wide application scenarios due to the integration of human cognitive skills and a robot team's powerful capability introduced by its multi-member structure. However, due to limited human cognitive capability, a human cannot simultaneously monitor multiple robots and identify the abnormal ones, largely limiting the efficiency of human-MRS collaboration. There is an urgent need to proactively reduce unnecessary human engagements and further reduce human cognitive loads. Human trust in human-MRS collaboration reveals human expectations of robot performance. Based on trust estimation, the work between a human and an MRS can be reallocated so that the MRS self-monitors and only requests human guidance in critical situations. Inspired by that, a novel Synthesized Trust Learning (STL) method was developed to model human trust in the collaboration. STL explores two aspects of human trust (trust level and trust preference), and meanwhile accelerates convergence by integrating active learning to reduce human workload. To validate the effectiveness of the method, tasks of "searching victims in the context of city rescue" were designed in an open-world simulation environment, and a user study with 10 volunteers was conducted to generate real human trust feedback. The results showed that by maximally utilizing human feedback, STL achieved higher accuracy in trust modeling with only a small amount of human feedback, effectively reducing the human interventions needed to model trust accurately and therefore reducing human cognitive load in the collaboration.

I. INTRODUCTION

Human multi-robot system (MRS) collaboration demonstrates great potential in broad application scenarios, benefiting from the integration of human cognitive skills and the MRS's powerful capability introduced by its multiple members with diverse functions. With human guidance on robot motion correction, task reallocation, risk assessment, and status prediction, MRS performance is improved with more resiliency to real-world disturbances [1]–[3]. In real-world applications, human involvement mainly manifests as monitoring the task process and robot performance, then giving out suggestions or directly manipulating the robots [4], [18]. However, wide implementation of human-MRS collaboration is still limited by human cognitive capability. First, uncertainties arise while robots perform tasks, and it is hard for a human to shift attention between multiple faulty factors and robots, especially when the task environment is complex or the number of robots requesting human assistance is large. Second, long periods of supervising robot performance impose heavy cognitive loads on human operators, making them tired and uncomfortable and therefore reducing the social acceptance of an MRS.

* is the corresponding author, [email protected]. Authors are with The Cognitive Robotics and AI Lab (CRAI), College of Aeronautics and Engineering, Kent State University, Kent, OH 44240, USA.

Fig. 1. The illustration of human trust manifesting as trust level and trust preference. The human operator rates trajectory A and trajectory B with trust level "high" and shows preference with "PA > PB".
There is an urgent need to make robots understand human expectations so as to proactively reduce unnecessary human engagements and further reduce human cognitive loads for effective collaboration.

Human trust in human-MRS collaboration shows human expectations of robot performance [5], [6]. Investigating human trust is crucial for developing effective cooperation between humans and an MRS, such as helping robots to understand human expectations, actively identifying unsatisfying robot behaviors, and selectively reporting critical issues to request human corrections. Based on the trust estimation, the work between a human and an MRS can be reallocated so that the MRS self-monitors its performance, reducing unnecessary human interventions and only requesting human guidance in critical situations. Therefore, to solve the human supervision and cognitive load challenges, this paper developed an intelligent trust model to estimate human trust, helping an MRS improve the collaboration while avoiding unnecessary human assistance.

In real-world situations, human trust manifests in two different forms: trust level and trust preference. As shown in Fig. 1, the trust level is a human operator's assessment of robot mission performance, which is usually represented as discrete values, adapting to humans' discrete representations [7]. Trust preference is a human operator's comparison between two different performances of robots performing tasks, which reveals the human's criteria for adjusting trust based on robot behaviors. The trust preference is therefore more informative for understanding human expectations than the absolute trust level.

In this paper, a novel Synthesized Trust Learning (STL) method was developed to model human trust in the collaboration, which includes two aspects of human trust, trust level and trust preference, and meanwhile accelerates convergence by integrating active learning to reduce human workload. This paper has three main contributions:

1) A novel trust modeling method, Synthesized Trust Learning, is developed to maximally explore heterogeneous human feedback to model human trust. STL enables a human-robot cooperative system to quickly model the human operator's expectations of robot performance and further assist human operators in guiding robots.
2) An accelerated learning manner is designed through deep integration of an active learning framework with the proposed method and an open-world simulation environment.
3) A novel trust-aware partnership is designed to facilitate human-MRS collaboration based on trust estimation. The trust model, integrating human expectations, actively assists a human with monitoring robot performance, reducing the burden on the human operator.

II. RELATED WORK

Prior research investigated computational trust models [8]. In [10], the updating process of degrees of trust between the human operator and multiple robots was modeled as a factorial hidden Markov model (HMM). Robot performance, quantified with a linear model parameterized by performance features, human-robot interaction data, and subjective assessments from the human operator were used to update the degree of trust in a robot. [11] proposed Bayesian and recurrent neural models to predict self-reported human trust that changed with observations of robot performance.
[9] developed a trust model to quantify the trust level between a human operator and a robot swarm, which considered the trust level as a weighted linear combination of performance features. However, the above-mentioned studies ignored the trust information contained in trust preferences, which limits their performance in situations that require comparison between similar task performances. In this paper, STL maximally explores heterogeneous human feedback, including both trust ratings and comparisons, to model human trust.

Research has also been done on learning from heterogeneous human feedback. [12] developed a general active preference-based method to extract information from human preferences between two trajectories. The ratings of two trajectories were modeled as a selection probability, and then active learning and sampling methods were used to calculate weights for the features. [13] assumed the reward function was a linear combination of features and exploited data from human demonstrations and preferences. Their method combined a numerical optimization method, which is easy and fast but usually subject to a local optimum, with sampling methods to obtain the feature weights. Different from this paper, the above approaches relied on linear or polynomial regression models, which are hard to design for data with complex mapping relationships, especially in scenarios of modeling human cognition. STL, which is based on deep learning, is more adaptive in complex concept learning.

Different from our paper, which focuses on continual learning with heterogeneous data, [14] used different blocks of a convolutional neural network to learn from different sensors to unify the heterogeneous inputs. [15]–[17] achieved learning without forgetting by penalizing major changes in the parameters that were important for the previous tasks. In this paper, STL uses a shared structure to deal with heterogeneous data and achieves continual learning by maintaining the previous model performance on new tasks.

Our previous work explored the potential of trust in human-robot cooperative systems [18], [19]. This paper further develops a more advanced trust model to actively assist human operators in the collaboration, with a unique perspective of reducing human cognitive loads.

III. PRELIMINARIES

This section introduces the notation used throughout the paper and gives a brief overview of the multi-robot system, human trust feedback, and the supervised learning setup of this paper.

Multi-Robot Systems: Consider a robot team of $n$ robots with status $X_i$, where $X_i = (V_i, P_i, O_i)$. For robot $i$, $V_i \in \mathbb{R}^3$ denotes velocity, $P_i \in \mathbb{R}^3$ denotes position, and $O_i \in \mathbb{R}^3$ denotes orientation. A task trajectory $\xi \in \Xi$ is a finite sequence of robot states, i.e., $\xi = (X_t)_{t=0}^{T}$, where $T$ is the end of the task. The features of a trajectory are denoted as $\psi$, which is calculated by a feature extraction function $\psi = \phi(\xi) = \phi((X_t)_{t=0}^{T})$.

Human Trust Feedback: The human operator monitors the task process and gives trust feedback according to the task performance of trajectories. $T^{level}$ denotes the trust level a human rates for a trajectory, which is discrete and varies within $[-1, 1]$. For example, $T^{level}$ is one of $T^{demarc} = (-1, -0.5, 0, 0.5, 1)$ for five categories of trust level. The human operator can also provide a trust preference for a pair of trajectories $[I, I']$. If the first trajectory is preferred, $[I, I'] = [1, 0]$; if the second trajectory is preferred, $[I, I'] = [0, 1]$.

Supervised Learning Setup: Consider a deep-neural-network-based model represented by a parametrized function $f_\theta$ with parameters $\theta$. When adapting to a new task $T_a$ with a corresponding dataset $D = \{(x_i, y_i)\}_{i=1}^{k}$ of $k$ example pairs, the model's parameters $\theta$ become $\theta_a$:

$\theta_a = \theta - \alpha \nabla_\theta L_{T_a}(f_\theta)$   (1)

In this paper, $T = \{T_a, T_b\}$, and the tasks in $T$ are learned sequentially, which turns the learning process into a continual learning problem:

$\theta_b = \theta_a - \beta \nabla_{\theta_a} L_{T_b}(f_{\theta_a})$   (2)

where $\alpha$ and $\beta$ are the learning rates. Specifically, $T_a$ and $T_b$ have heterogeneous data. $T_a$ consists of the input $\psi$ and the corresponding trust level $T^{level}$ as the label, with a corresponding dataset $D_a = \{(x_i, y_i)\}_{i=1}^{m} = \{(\psi_i, T_i^{level})\}_{i=1}^{m}$. $T_b$ consists of a pair of $\psi$ as input and the corresponding trust preference $(I, I')$ as the label, with a corresponding dataset $D_b = \{(x_i, y_i)\}_{i=1}^{n} = \{([\psi_i^p, \psi_i^q], [I_i, I_i'])\}_{i=1}^{n}$.
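To make the notation concrete, the snippet below shows how a continuous model output is snapped to the nearest trust demarcation in T^demarc, as used for task Ta in the next section. It is a minimal illustration, not the authors' released code.

```python
import numpy as np

T_DEMARC = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # five trust levels

def to_trust_level(model_output: float) -> float:
    """Snap a continuous prediction in [-1, 1] to the closest demarcation."""
    return float(T_DEMARC[np.argmin(np.abs(T_DEMARC - model_output))])

print(to_trust_level(0.37))   # -> 0.5
print(to_trust_level(-0.8))   # -> -1.0
```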
If the first trajectory is preferred, [I, I (cid:48)] = [1, 0], or [I, I (cid:48)] = [0, 1] if the second trajectory is preferred. Supervised Learning Setup: Consider a deep neural network based model represented by a parametrized function fθ with parameters θ. When adapting to a new task Ta with a corresponding dataset, D = {(xi, yi)k i=1} with k example pairs, the model’s parameters θ become θa. θa = θ − α∇θLTa (fθ) (1) In this paper, T = {Ta, Tb}, and T are learned sequentially, which turns the learning process to a continual learning problem. θb = θa − β∇θa LTb (fθa ) (2) where the α, β is the learning rate. Specifically, Ta and Tb have heterogeneous data. Ta consists of the input ψ and the corresponding trust level T level as the label, and Fig. 2. The architecture of the Synthesized Trust Learning method. Human operator provide trust feedback by monitoring MRS performing tasks, then the heterogeneous data, {features of a single trajectory and corresponding trust level, and features of a pair of trajectories and corresponding trust preference}, are both used to update the trust model sequentially. i i])n i=1}. i , ψq is with a corresponding dataset, Da = {(xi, yi)m i=1} = {(ψi, T level )m i=1}. Tb consists of the input a pair of ψ and the corresponding trust preference (I, I (cid:48)) as the label, and is with a corresponding dataset, Db = {(xi, yi)n i=1} = {([ψp i ], [Ii, I (cid:48) IV. SYNTHESIZED TRUST LEARNING This section describes the Synthesized Trust Learning method, which maximally explores heterogeneous human feedback and accelerates the convergence speed by integrat- ing active learning mechanism to model human trust in a human-robot cooperative system. The architecture of the STL is shown in Fig 2. A. Continual Learning with Heterogeneous Data i=1} = {(ψi, T level The STL is a two-step continual learning method. Firstly, the model learns from task Ta with dataset Da = {(xi, yi)m )m i=1}, and the model’s parame- ters θ become θa. Secondly, the model learns from task Tb with dataset Db = {(xi, yi)n i=1}, where [ψp, ψq] denotes a pair of trajectories, and the model’s parameters θa become θb. i=1} = {([ψp i ], [Ii, I (cid:48) i , ψq i])n i Learning Task Ta: The goal of this step is to get a trust level prediction model. The object function that we minimize in Ta is LTa (fθ) = (cid:88) L2(fθ(ψi), T level i ) (3) ψi,T level i ∼Ta where the output fθ(ψi) = ˆT level is continuous and varies within [−1, 1], and the ∀T demarc that is closest to ˆT level will be considered as the trust level for the input trajectory. How- ever, this model fθa ignores the trust information between different trajectories that are on the same trust level. Learning Task Tb: The goal of this step is considered as getting a trust model, which can predict trust level for a single trajectory and distinguish the trust preference for a pair of trajectories even in the same trust level. This paper assumes that the human’s trust preference is with respect to the true trust level of each task trajectory. As per this model, the trust preference that the human gives to a pair of trajectories is calculated by sof tmax(fθ(ψp), fθ(ψq)). The loss function [20] that we minimize in Tb is LTb (fθ) = (cid:88) Le(sof tmax(fθ(ψp i ), fθ(ψq i )), [Ii, I (cid:48) i]) i ,ψq [ψp i ],[Ii,I (cid:48) + λL2(fθ(ψp i]∼Tb i ), fθa (ψp i )) + λL2(fθ(ψq i ), fθa (ψq i )) is Le cross entropy (4) where function. 
In Eq. (4), the term L_e(softmax(f_θ(x^(i))), y^(i)) fits the output to the ground truth of the trust preference; λ L_2(f_θ(x^(i)), f_{θ_a}(x^(i))) is the continual learning term, which constrains the output to remain consistent with the original trust-level predictions learned in T_a; and λ is a loss balance weight.

B. Active Learning

In order to speed up convergence and further reduce human workload, this paper synthesizes experiments with an active learning mechanism for T_a. The constraint combines the least confidence strategy with the maximum difference strategy: the model before applying active learning should have the least confidence in its most likely label for the generated trajectories, and the generated data should have the maximum difference from the existing data.

minimize over ψ:  min_{T ∈ T^demarc} |f_θ(ψ) − T| + λ S_C(ψ, Ψ_d)    (5)
subject to:  ψ ∈ Ψ

where S_C is cosine similarity, Ψ_d is the set of existing training data, and Ψ is the set of feasible data.

V. EVALUATION

In order to validate the effectiveness of STL in modeling human trust in a human-robot cooperative system, a task scenario, "multiple robots search for victims in the context of city rescue", was designed in an open-world simulation environment, and a pilot user study was conducted to collect human trust feedback on task trajectories for model training and validation. The following aspects were validated: (i) the effectiveness of STL in reducing the human workload of labeling; (ii) the effectiveness of STL in improving model performance with heterogeneous data learning.

Fig. 3. The illustration of the experiment setting, including an interactive interface and three typical robot behaviors that influence human trust.

Fig. 4. The illustration of the sample robot behaviors with predicted trustworthiness.

A. Experiment Environment Setting

This paper envisioned a real-world human-robot cooperative system where the human operator was able to monitor robot motion status and task status, and had a higher-level view of the surrounding and overall environment supported by panoramic images and a positioning system. There were three main sub-parts in the experiment: the simulation environment, the interactive interface, and the multi-robot system, as shown in Fig. 3. The open-world simulation environment was developed based on the open-source platforms AirSim [21] and Unreal Engine [22], and the map described an urban city environment. The human operator interacted with the environment through a customized interactive interface, shown in the "Interactive Interface" of Fig. 3. The main view of the interface displayed the multi-robot system; customized widgets displayed the motion status, task status, team view, and overall environment view to help the human operator follow the task process and performance. The multi-robot system consisted of six Unmanned Aerial Vehicles (UAVs). In order to support active learning, this paper developed independent application program interfaces to control three features of the robot team: velocity, formation, and heading direction variance.

In each task, one or two target locations were randomly generated to simulate the victim locations, and other parameters, ψ_c, were randomly generated within preset ranges to control the above-mentioned three features of the robot team. The robot team was initialized at the same starting location and then automatically flew to the assigned target locations, supported by an integrated path planning algorithm.
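As a concrete illustration of the Eq. (5) candidate-selection step described above, the following hedged NumPy sketch scores each feasible feature vector and returns the minimizer. It reads the first term as the distance to the nearest demarcation level (matching the degree-of-distinction computation used in the result analysis) and aggregates S_C(ψ, Ψ_d) as the maximum cosine similarity to any existing sample; both readings, and all constants, are assumptions.

```python
# Hedged NumPy sketch of the Eq. (5) candidate selection. The trust model f,
# the feasible grid, lambda, and the aggregation choices noted above are
# assumptions rather than specifications from the paper.
import numpy as np

T_DEMARC = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_candidate(f, feasible, existing, lam=0.5):
    """f: trust model psi -> [-1, 1]; feasible: (M, 3) candidate feature
    vectors (velocity, formation, heading variance); existing: (K, 3) Psi_d."""
    best, best_score = None, np.inf
    for psi in feasible:
        distinction = np.min(np.abs(f(psi) - T_DEMARC))         # least confidence
        similarity = max(cosine_sim(psi, x) for x in existing)  # S_C(psi, Psi_d)
        score = distinction + lam * similarity                  # Eq. (5) objective
        if score < best_score:
            best, best_score = psi, score
    return best

# Toy usage with a placeholder trust model.
rng = np.random.default_rng(0)
f = lambda psi: float(np.tanh(psi.sum()))
psi_star = select_candidate(f, rng.uniform(-1, 1, (200, 3)),
                            rng.uniform(-1, 1, (10, 3)))
```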
During a task, the task trajectory ξ = (X_t)_{t=0}^{T} was recorded to a local file, and the feature extraction function φ(ξ) extracted the average velocity, formation, and heading direction variance from the local file as the real trajectory features ψ. Meanwhile, the human operator observed the task process and task performance through the interactive interface.

Human User Study: A human user study consisting of 10 participants was conducted. The user study comprised two main parts: a tutorial, and experimental surveys collected after viewing all randomly generated trajectories. The tutorial included two video examples of two sample tasks belonging to {T_a, T_b} respectively. The experimental survey presented a series of sequential task trajectories and pairs of task trajectories. Participants were asked to observe the trajectories and then provide trust feedback.

B. Result Analysis

Overall, the sample robot behaviors with predicted trustworthiness are summarized in Fig. 4, showing the trust model's potential for assisting humans in monitoring task performance and actively reducing the burden on the human operator.

Validation of Active Learning Performance. In order to visualize the difference between raw trajectory features and the trajectory features generated by active learning, 50 samples of raw trajectory features were selected for comparison with the generated features, as shown in Fig. 5. The degree of distinction, which is calculated by min_{T ∈ T^demarc} |f_θ(ψ) − T| and scaled to three levels [1.0, 0.5, 0.0], and the difference in data distribution, which is measured by the two-dimensional Kolmogorov–Smirnov test [23], were used to measure the effectiveness of active learning; these reflect the least confidence strategy and the maximum difference strategy respectively.

Fig. 5. The comparison between raw task trajectory features and generated task trajectory features. Each dot represents a trajectory feature, and the color of a dot represents the degree of distinction of the dot for the current model. The density distribution of all dots is projected onto each axis.

(i) Data distribution. The distribution of raw trajectory features differed significantly from the uniform distribution on all three axes (p_xy < 0.05, p_xz < 0.05, p_yz < 0.05) in the data set of raw trajectory features, whereas the distribution of the combined features was similar to the uniform distribution on all three axes (p_xy > 0.1, p_xz > 0.1, p_yz > 0.1) in the data set of raw trajectory features plus generated trajectory features. Therefore, the active learning method can enrich data diversity by compensating the raw data via the maximum difference strategy, which increases model convergence speed and further reduces human workload by providing the model with unfamiliar but task-related knowledge. (ii) Degree of distinction. The numbers of trajectories at the three distinction levels were 14, 18, and 18 in the data set of raw trajectory features, while in the data set of trajectories generated with the least confidence strategy, the current sub-optimal model had a relatively small distinction level for all the trajectories, showing that the generated trajectories contain knowledge that is unfamiliar to the current sub-optimal model. The active learning mechanism can therefore speed up the training process by actively requesting human assistance for learning unfamiliar but task-related knowledge, which reduces the total number of samples needed and relieves human cognitive workload.
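The two-dimensional KS test of Fasano and Franceschini [23] has no SciPy implementation, so the hedged sketch below substitutes per-axis one-dimensional two-sample KS tests on each 2-D projection as a rough stand-in for the distributional comparison above; its p-values are not those reported in the paper, and the data are placeholders.

```python
# Stand-in for the distributional comparison: SciPy has no two-dimensional KS
# test, so each 2-D projection (xy, xz, yz) is summarized by the smaller of
# its two per-axis one-dimensional two-sample KS p-values.
import numpy as np
from scipy.stats import ks_2samp

def projection_pvalues(sample_a, sample_b):
    """sample_a, sample_b: (N, 3) arrays of trajectory features."""
    pairs = {"xy": (0, 1), "xz": (0, 2), "yz": (1, 2)}
    return {name: min(ks_2samp(sample_a[:, i], sample_b[:, i]).pvalue,
                      ks_2samp(sample_a[:, j], sample_b[:, j]).pvalue)
            for name, (i, j) in pairs.items()}

rng = np.random.default_rng(0)
raw = rng.normal(0.3, 0.2, (50, 3))          # placeholder raw features
generated = rng.uniform(-1, 1, (50, 3))      # placeholder generated features
uniform = rng.uniform(-1, 1, (100, 3))       # reference uniform sample
print(projection_pvalues(raw, uniform))                          # raw only
print(projection_pvalues(np.vstack([raw, generated]), uniform))  # combined
```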
Validation of Human Workload Reduction. In order to validate the effectiveness of STL in reducing the human workload of labeling, the model was trained under three different settings of training data, as shown in Fig. 6. "W/O active learning - less data" was trained with 40 training samples sampled from D_a; "W/O active learning" was trained with 60 training samples sampled from D_a; "W/ active learning" was trained with 40 training samples sampled from D_a plus another 20 training samples generated by the active learning method and then labeled individually. Each setting was run ten times to mitigate the uncertainty introduced by random data sampling and random initialization of the neural network parameters. The setting "W/ active learning" reached the highest average test accuracy, 80%, as shown on the right of Fig. 6.

Fig. 6. The comparison of testing accuracy in prediction of trust level between the model using the active learning mechanism and the model without it. The model using the proposed active learning mechanism performed better in the prediction of trust level than the model without it when training with the same number of training samples.

Fig. 7. The comparison of testing accuracy in prediction of trust level and trust preference between the model using heterogeneous data learning and the model without it. The model using the proposed heterogeneous data learning performed better in the prediction of trust level and trust preference than the model without it.

The result indicated that the model using the proposed active learning mechanism performed better in the prediction of trust level than the model without it when training with the same number of training samples (the same amount of human workload); testing accuracy increased by 5%.

Validation of Heterogeneous Data Learning. Trust preference is more informative than trust level: it reveals the human criteria for adjusting trust by providing a comparison between two trajectories at the same trust level. By combining the two aspects of human trust (trust level and trust preference), real human trust can be revealed more comprehensively, facilitating alignment between real human trust and the learned trust model. In order to validate the effectiveness of STL in improving model performance with heterogeneous data learning, the model was trained under two different settings of training data. "W/O preference learning" was trained with 60 training samples sampled from D_a; "W/ preference learning" was trained with 60 training samples sampled from D_a plus another 40 training samples sampled from D_b.
As shown in Fig. 7, each setting was run ten times to mitigate the uncertainty introduced by random data sampling and random initialization of the neural network parameters. The setting "W/ preference learning" reached 92.5% testing accuracy in the prediction of trust preference and 75% testing accuracy in the prediction of trust level; the setting "W/O preference learning" reached 87.5% testing accuracy in the prediction of trust preference and 70.0% testing accuracy in the prediction of trust level. The result indicated that the model using the proposed heterogeneous data learning method learned the trust preference information and further improved model performance in the prediction of trust level.

VI. CONCLUSION AND FUTURE WORK

This paper developed a Synthesized Trust Learning method to model human trust in a human-robot cooperative system, which maximally explored two aspects of human trust: trust level and trust preference. To validate the effectiveness of STL, this paper envisioned a real-world human-robot cooperative system and developed a simulation environment, an interactive interface, and a programmable multi-robot system to support the task scenario "multiple robots search for victims in the context of city rescue". A user study with 10 volunteers was conducted to collect trust feedback for training and testing the trust model.
The effectiveness of STL in reducing human workload was validated by a higher accuracy in the prediction of trust level than the baseline method when using the same amount of human workload. The effectiveness of STL in improving model performance with heterogeneous data learning was validated by improved model performance in the prediction of trust level and trust preference. In the future, more patterns of human trust will be investigated to expand the ability of the trust model, and the feature extraction function φ(ξ) can be improved to better extract trajectory features.

REFERENCES
[1] He, Wei, Zhijun Li, and C. L. Philip Chen. "A survey of human-centered intelligent robots: issues and challenges." IEEE/CAA Journal of Automatica Sinica 4.4 (2017): 602-609.
[2] Doroodgar, Barzin, et al. "The search for survivors: Cooperative human-robot interaction in search and rescue environments using semi-autonomous robots." 2010 IEEE International Conference on Robotics and Automation. IEEE, 2010.
[3] Hatanaka, Takeshi, Nikhil Chopra, and Masayuki Fujita. "Passivity-based bilateral human-swarm-interactions for cooperative robotic networks and human passivity analysis." 2015 54th IEEE Conference on Decision and Control (CDC). IEEE, 2015.
[4] Kolling, Andreas, et al. "Human swarm interaction: An experimental study of two types of interaction with foraging swarms." Journal of Human-Robot Interaction 2.2 (2013).
[5] Abbass, Hussein A., Jason Scholz, and Darryn J. Reid. Foundations of Trusted Autonomy. Springer Nature, 2018.
[6] Billings, Deborah R., et al. "Human-robot interaction: developing trust in robots." Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction. 2012.
[7] Dietrich, Eric, and Arthur B. Markman. "Discrete thoughts: Why cognition must use discrete representations." Mind & Language 18.1 (2003): 95-119.
[8] Hussein, Aya, Sondoss Elsawah, and Hussein A. Abbass. "Towards Trust-Aware Human-Automation Interaction: An Overview of the Potential of Computational Trust Models." HICSS. 2020.
[9] Nam, Changjoo, et al. "Models of trust in human control of swarms with varied levels of autonomy." IEEE Transactions on Human-Machine Systems 50.3 (2019): 194-204.
[10] Mahani, Maziar Fooladi, Longsheng Jiang, and Yue Wang. "A Bayesian Trust Inference Model for Human-Multi-Robot Teams." International Journal of Social Robotics (2020): 1-15.
[11] Soh, Harold, et al. "Trust Dynamics and Transfer across Human-Robot Interaction Tasks: Bayesian and Neural Computational Models." IJCAI. 2019.
[12] Sadigh, Dorsa, et al. "Active Preference-Based Learning of Reward Functions." Robotics: Science and Systems. 2017.
[13] Palan, Malayandi, et al. "Learning reward functions by integrating human demonstrations and preferences." Robotics: Science and Systems. 2019.
[14] Xue, Hongfei, et al. "DeepFusion: A deep learning framework for the fusion of heterogeneous sensory data." Proceedings of the Twentieth ACM International Symposium on Mobile Ad Hoc Networking and Computing. 2019.
[15] Aljundi, Rahaf, et al. "Memory aware synapses: Learning what (not) to forget." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
[16] Mallya, Arun, and Svetlana Lazebnik. "PackNet: Adding multiple tasks to a single network by iterative pruning." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
[17] Kirkpatrick, James, et al. "Overcoming catastrophic forgetting in neural networks." Proceedings of the National Academy of Sciences 114.13 (2017): 3521-3526.
[18] Liu, Rui, et al. "Trust-Aware Behavior Reflection for Robot Swarm Self-Healing." Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2019.
[19] Liu, Rui, et al. "Trust Repair in Human-Swarm Teams+." 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 2019.
[20] Li, Zhizhong, and Derek Hoiem. "Learning without forgetting." IEEE Transactions on Pattern Analysis and Machine Intelligence 40.12 (2017): 2935-2947.
[21] Shah, Shital, et al. "AirSim: High-fidelity visual and physical simulation for autonomous vehicles." Field and Service Robotics. Springer, Cham, 2018.
[22] Epic Games. "Unreal Engine." https://www.unrealengine.com and https://www.unrealengine.com/marketplace/en-US/product/urban-city, 2020.
[23] Fasano, Giovanni, and Alberto Franceschini. "A multidimensional version of the Kolmogorov–Smirnov test." Monthly Notices of the Royal Astronomical Society 225.1 (1987): 155-170.
COVID-19 Knowledge Graph: Accelerating Information Retrieval and Discovery for Scientific Literature

Colby Wise, Vassilis N. Ioannidis, Miguel Romero Calvo, Xiang Song, George Price, Ninad Kulkarni, Ryan Brand, Parminder Bhatia, George Karypis
Amazon Web Services AI
colbywi,ivasilei,miguelrc,xiangsx,gwprice,ninadkul,brandry,parmib,[email protected]

arXiv:2007.12731v1 [cs.IR] 24 Jul 2020

Abstract

The coronavirus disease (COVID-19) has claimed the lives of over 350,000 people and infected more than 6 million people worldwide. Several search engines have surfaced to provide researchers with additional tools to find and retrieve information from the rapidly growing corpora on COVID-19. These engines lack the extraction and visualization tools necessary to retrieve and interpret complex relations inherent to scientific literature. Moreover, because these engines mainly rely upon semantic information, their ability to capture complex global relationships across documents is limited, which reduces the quality of similarity-based article recommendations for users. In this work, we present the COVID-19 Knowledge Graph (CKG), a heterogeneous graph for extracting and visualizing complex relationships between COVID-19 scientific articles. The CKG combines semantic information with document topological information for the application of similar document retrieval. The CKG is constructed using the latent schema of the data, and then enriched with biomedical entity information extracted from the unstructured text of articles using scalable AWS technologies to form relations in the graph. Finally, we propose a document similarity engine that leverages low-dimensional graph embeddings from the CKG together with semantic embeddings for similar article retrieval. Analysis demonstrates the quality of relationships in the CKG and shows that it can be used to uncover meaningful information in COVID-19 scientific articles. The CKG helps power www.cord19.aws and is publicly available.

CCS Concepts
• Information systems → Novelty in information retrieval; • Computing methodologies → Learning latent representations.

Keywords
Knowledge Graph, Heterogeneous Graph, Graph Representation Learning, Semantic Embedding, Graph Neural Networks
1 Introduction

The onset of the novel SARS-CoV-2 virus has emphasized the need to accumulate insights from large volumes of information. Thousands of new scientific articles on the virus are published weekly, leading to a rapid increase in the cumulative knowledge about the coronavirus disease (COVID-19). COVID-19 has heightened the need for tools that enable researchers to search vast scientific corpora to find specific information, visualize connections across the data, and discover related information in the data.

Several COVID-19 dedicated search engines have come online to address the need for information retrieval of scientific literature on the disease. Search engines like Sketch Engine COVID-19, Sinequa COVID-19 Intelligent Search, Microsoft's CORD19 Search, and Amazon's CORD19 Search use a variety of methodologies such as keyword search, natural language queries, semantic relevancy, and knowledge graphs. However, these engines return thousands of search results that overlook inherent relationships between scientific articles, such as subject topic and citations, and do not provide tools to visualize relationships, which is beneficial for knowledge discovery. In this paper, we construct the COVID-19 Knowledge Graph (CKG) by extracting rich features and relationships of COVID-19 related scientific articles, and develop a document similarity engine that combines both semantic and relationship information from the CKG.

Knowledge graphs (KGs) are structural representations of relations between real-world entities, where relations are defined as triplets containing a head entity, a tail entity, and the relation type connecting them. KG-based information retrieval has shown great success in the past decades [16, 19]. We construct the CKG using the CORD-19 Open Research Dataset of scholarly articles [26]. Scientific articles, publication authors, author institutional affiliations, and citations form key relationships in the graph. Further, we extract biomedical entity relationships and highly abstracted topics from the unstructured text of articles using the Amazon Comprehend Medical service, and train a topic model on the corpus. By applying data normalization technologies, we eliminate duplicate entities and noisy linkages. The resulting KG contains 336,887 entities and 3,332,151 relations. The CKG has been made publicly available to researchers with rapid "one-click" cloud deployment templates.¹ We introduce a document similarity engine that leverages both the semantic information of articles and the topological information from the CKG to accelerate COVID-19 related information retrieval and discovery. We employ SciBERT [12], a pretrained NLP model, to generate semantic embeddings for each article. Meanwhile, we utilize knowledge graph embedding (KGE) [28, 30] and graph neural network [25] technologies to generate embeddings for the entities and relations of the CKG. Finally, by judiciously combining the semantic embeddings and graph embeddings, we use the similarity engine to propose the top-k similar articles. The CKG and similarity engine are new additions to www.CORD19.aws, a website using machine learning to help researchers search thousands of COVID-19 related scientific articles using natural language question queries, which has seen over 15 million queries across more than 70 countries.
The CKG adds a graph-traversal ranking feature to search, and the similarity engine powers the similarity-based article recommendation system. To further demonstrate the quality of the CKG, we conduct a series of experiments analyzing the relations that form the core pillars of the graph. We first evaluate the ability of our methodology to capture the topic information in the text, and show that the extracted topics align well with the subjects of scientific journals. We also perform a link prediction analysis using extracted graph embeddings that validates the quality of the relations in the graph and demonstrates that we capture important topological information from the CKG. Our analysis shows that the semantic embeddings and graph embeddings learn useful information and improve our ability to quantify similarity between articles. Lastly, several motivating examples show that querying the CKG can extract actionable insights from scientific articles. To summarize, our contribution is fourfold:

C1 We construct a scientific KG, named the COVID-19 Knowledge Graph (CKG), by judiciously combining the inherent schema information from COVID-19 scientific articles with extracted biomedical entity relationships and topic information.
C2 We apply several data normalization methodologies to curate the CKG and demonstrate its information retrieval, visualization, and discovery capabilities. The CKG is publicly available through https://aws.amazon.com/cn/covid-19-data-lake/.
C3 We present a novel similarity-based document retrieval system that combines semantic article information with document topological information learned from the CKG, and show that it reliably improves the quality of user-suggested documents.
C4 The similarity engine and the CKG have been integrated into a publicly available search service for COVID-19 through www.CORD19.aws, powering the similarity-based article recommendation system and providing a graph-traversal ranking feature.

¹https://aws.amazon.com/cn/covid-19-data-lake/

2 CKG Construction & Curation

The CKG is a directed property graph where entities and relations have associated attributes (properties) and direction. Figure 1 illustrates the directed property graph structure for a small subgraph of the CKG. In this section we describe the dataset used to construct the CKG, define the entity and relation types, detail the CKG curation methods, provide summary statistics describing the graph, and detail the cloud infrastructure that drives CKG scalability.

Figure 1: Visualization of the CKG. Paper entities (blue) connect to concepts (red), topics (light blue), and authors (gold) through directed relations. Authors connect to institutions (green).

2.1 The CORD-19 Dataset

The COVID-19 Open Research Dataset (CORD-19) is a dynamic, growing repository of scientific full-text articles on COVID-19 and related coronaviruses created by the Allen Institute for AI (AI2) [26]. The data is made available via Kaggle with weekly updates as part of the ongoing CORD-19 research competition [1]. As of 06-01-2020, the CORD-19 dataset consists of over 60,000 full-text articles. Rich metadata, e.g., article authors, is provided as part of the dataset. The data is sourced from several channels such as PubMed, bioRxiv, and medRxiv. The dataset is multidisciplinary, with articles covering virology, translational medicine, epidemiology, computer science, and more. CORD-19 grows constantly, and AI2 is working with the wider research community to normalize and improve the quality of the data.
2.2 Entity Types

The CKG contains five types of entities corresponding to papers, authors, institutions, concepts, and topics, as summarized in Table 1. Information on what these entities represent, their attributes, and how they are created follows.

Table 1: COVID-19 Knowledge Graph entities and relations.

Entity Type    Count     |  Relation Type        Count
Papers         42,220    |  authored_by          240,624
Authors        162,928   |  affiliated_with      121,257
Institutions   21,979    |  associated_concept   2,739,665
Concepts       109,750   |  associated_topic     95,659
Topics         10        |  cites                134,945
Total          336,887   |  Total                3,332,151

Paper Entities. Representations of scientific articles. Attributes include title, publication date, journal, and Digital Object Identifier (DOI) link, as available in the CORD-19 dataset from AI2.

Author Entities. Representations of the paper authors. Attributes include the first, middle, and last names.

Institution Entities. Institution affiliations for authors. Attributes include institution name, country, and city.

Concept Entities. Comprehend Medical (CM) Detect Entities V2 is an Amazon Web Service that uses natural language processing (NLP) and machine learning for medical language entity recognition and relationship extraction [2]. CM classifies extracted entities into entity types: Ibuprofen (entity) belongs to the Medications category (entity type). We leverage CM to extract biomedical entities from the scientific articles. Specifically, given the example text "Abdominal ultrasound noted acute appendicitis, recommend appendectomy followed by several series of broad spectrum antibiotics," CM extracts Abdominal (Anatomy), ultrasound (Test Treatment Procedure), acute appendicitis (Medical Condition), appendectomy (Test Treatment Procedure), and antibiotics (Medication) as recognized entities, along with model confidence scores. Entity names, e.g., acute appendicitis, form concept entities in the CKG, while the entity category and model confidence score are the entities' attributes.

Topic Entities. We use an extension of Latent Dirichlet Allocation (LDA) [13] termed Z-LDA [11], trained using the title, abstract, and body text of each paper. Labels are generated with the help of medical professionals to eliminate, merge, and form 10 topics, which serve as the basis for topic entities in the CKG: Vaccines/Immunology, Genomics, Public Health Policies, Epidemiology, Clinical Treatment, Virology, Influenza, Healthcare Industry, Lab Trials (human), and Pulmonary Infections. Re-modeling and manually labeling a topic model is inefficient, therefore we train a multi-label classifier [23] using the original topic model labels and a training split from the 59k total documents. The resulting classifier achieves an average F1 score of 91.92, with on average 2.37 labels per document.

2.3 Relation Types

Relations in the CKG are directed and summarized in Table 1. Here we define all relation types.

authored_by. This relation connects paper entities with author entities and indicates an authorship relation.

affiliated_with. This relation connects author entities with institution entities and indicates an affiliation relation.

associated_concept. This relation connects paper entities with concept entities and indicates an association. These relations have the CM model confidence score as an attribute.
associated_topic. This relation connects paper entities with topic entities and indicates an association. These relations have the Z-LDA prediction score as an attribute.

cites. This relation connects paper entities with referenced paper entities, indicating a citation relation.

2.4 CKG Curation

2.4.1 Concept Normalization. We use thresholding on the confidence scores as a de-noising step, requiring an entity's confidence score to exceed a 0.5% threshold determined through empirical experimentation; we explored a parameter range of 0.4%-0.6% in increments of 0.1%. Thresholding comes at the expense of entity coverage: a higher confidence threshold increases the likelihood of papers with no or few extracted entities. Next, we lemmatize concept entity names as a form of normalization using SciSpacy [22]. SciSpacy is built upon the robust SpaCy NLP library [3], but is trained on biomedical texts similar to those in the CORD-19 dataset. We experimentally found SciSpacy to produce the target results for limited string lemmatization test cases. Moreover, we keep a running distribution of concept appearances across the dataset: a concept may appear in up to N papers, where N is the total number of papers in the dataset. We prune concepts that occur in less than 0.0001% of papers; concepts that appear in greater than 50% are flagged for manual qualitative assessment of information value. The main downside of this approach is scalability, and in future work we plan to systematize and extend this process using domain-specific ontology standardization tools like Comprehend Medical RxNorm [4].

2.4.2 Author Normalization. Author names in the CORD-19 dataset require judicious processing. Oftentimes, paper authors have incomplete information, such as a missing "first name", or high name variation between different academic journals. Additionally, author citations often follow an abbreviated "first initial, last name" format. We utilize a hybrid approach similar to [10] involving normalization and linking. When linking authors, we normalize author names via lower-casing, removing punctuation, and merging "first, middle, last name".

2.4.3 Citation Linking. We also normalize the author information in the cited papers and match the normalized author names. This allows us to link papers based on citations. We require that both the normalized author information and the article title match exactly. On this basis, we include citation links for papers referenced within the CORD-19 dataset, and find that 43% of papers cite another paper available in CORD-19.
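A hedged sketch of the normalization-and-matching logic of Sections 2.4.2-2.4.3 follows; the record layout and any rules beyond lower-casing, punctuation removal, name merging, and exact matching are assumptions.

```python
# Hedged sketch of author normalization (Sec. 2.4.2) and exact-match citation
# linking (Sec. 2.4.3). Field names and the record layout are illustrative.
import string

PUNCT = str.maketrans("", "", string.punctuation)

def normalize_author(first, middle, last):
    # Lower-case, strip punctuation, and merge first/middle/last names.
    parts = (first or "", middle or "", last or "")
    return "".join(p.lower().translate(PUNCT).replace(" ", "") for p in parts)

def normalize_title(title):
    return title.lower().translate(PUNCT).strip()

def link_citations(papers):
    """papers: list of dicts with 'id', 'title', 'authors', 'references';
    a reference is linked only when normalized title AND authors match."""
    index = {(normalize_title(p["title"]),
              frozenset(normalize_author(*a) for a in p["authors"])): p["id"]
             for p in papers}
    links = []
    for p in papers:
        for ref in p["references"]:
            key = (normalize_title(ref["title"]),
                   frozenset(normalize_author(*a) for a in ref["authors"]))
            if key in index and index[key] != p["id"]:
                links.append((p["id"], index[key]))   # a 'cites' relation
    return links
```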
2.5 Graph Statistics

Table 1 provides counts of all entity and relation types. The ∼42k paper entities have on average 2.3 outgoing topic relations, 64.9 outgoing relations to concepts, 5.7 outgoing relations to authors, and 3.2 outgoing citation relations. Furthermore, ∼29k paper entities have at least one outgoing citation relation to another paper, ∼18k have at least one incoming citation relation from another paper, ∼14.6k have at least one incoming and one outgoing citation relation, and ∼9.7k have neither an incoming nor an outgoing citation relation. The 163k author entities have on average 0.75 outgoing relations to institutions, indicating that not all authors have institution information in the data.

Figure 2: Degree distribution of the CKG for various sub-graphs, showing the degree change of the CKG with concept relations removed, citation relations removed, and topic relations removed.

When considering an undirected version of the graph, there are 109 connected components, with the diameter of the largest connected component (CC) equaling 12; one large CC contains 99% of the relations and entities, while the small diameter (12) indicates the CKG is dense. Figure 2 shows the undirected degree distribution of several sub-graphs of the CKG. We observe that the greatest change in degree distribution comes from the sub-graph without concept relations, exemplifying that concepts form key links in the graph.

2.6 Infrastructure

We use Amazon Neptune, a fully-managed graph database optimized for storage and navigation, scaling to billions of relationships. Neptune supports property graphs and query languages like Apache TinkerPop Gremlin and SPARQL. Neptune's Bulk Loading feature [5] helps reduce data ingestion time from several hours (sequential loading) to minutes for 330k entities and 3.3M relations, using a db.r5.4xlarge (8 cores, 16 vCPU, 128 GB memory, 3,500 Mbps storage bandwidth) Amazon Elastic Compute Cloud (EC2) instance. At [6], users can find the exported Neptune graph data, Amazon CloudFormation [7] templates for one-click recreation and deployment of the CKG, and the structured entity and relation files as comma-separated values (CSV) files. We use the Tom Sawyer Graph Database Browser for visualizations [8].

3 Using CKG for Information Retrieval

In this section we show that the CKG uncovers intricate relationships in CORD-19 scientific articles that can aid research and policy decision processes.

• Query 1: Which authors and institutions are publishing research pertaining to the drug remdesivir and human lab trials?

COVID-19 has highlighted the difficulty of health and public policy decision making during pandemics. The above question is motivated by the scenario where policy makers are interested in forming a task force of leading authors and institutions on a rapidly evolving area of research, such as drug treatments for COVID-19. Remdesivir is an investigational nucleotide analog drug currently in FDA clinical trials by Gilead Sciences [9]. A CKG user can structure a query identifying articles connected to both the remdesivir concept and the Lab Trials (human) topic. Paper-to-concept and paper-to-topic relations form "one-hop" relations. From there we find paper-to-author relations via another "one-hop" operation and, subsequently, author-to-institution relations via a second "one-hop" (two hops total) operation.

Figure 3: Query 1: (i) author research leaders [blue box]; (ii) institutional leaders [gold box]; (iii) institution collaborations [green].

Figure 3 visually depicts this query process using a small subset of the graph. The author entity surrounded by a blue box is connected to two papers discussing both remdesivir and lab trials (blue arrows); this author can be viewed as a research leader for this query. Similarly, the institutional research leader of this sub-graph is the vertex surrounded by a gold box, connected to multiple authors who have published articles matching this query. Lastly, the CKG also helps uncover multi-organization collaborations, depicted by the vertex surrounded by a green box and arrows.
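Query 1 can be expressed as a Gremlin traversal against the Neptune-hosted graph. The gremlinpython sketch below is illustrative only: the endpoint, vertex labels, and property keys are assumptions, as the paper does not publish its exact schema identifiers.

```python
# Hypothetical gremlinpython traversal for Query 1. The endpoint, the labels
# ('concept', relation names), and the property key ('name') are assumptions.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection("wss://your-neptune-endpoint:8182/gremlin", "g")
g = traversal().withRemote(conn)

def matched_papers():
    # Papers linked to the remdesivir concept AND the human lab-trials topic.
    return (g.V().hasLabel("concept").has("name", "remdesivir")
              .in_("associated_concept")                     # concept <- paper
              .where(__.out("associated_topic")
                       .has("name", "Lab Trials (human)"))   # paper -> topic
              .dedup())

# One hop to authors, a second hop to institutions; repeated appearances
# surface the per-author and per-institution research leaders.
author_counts = matched_papers().out("authored_by").groupCount().by("name").next()
inst_counts = (matched_papers().out("authored_by")
                               .out("affiliated_with")
                               .groupCount().by("name").next())
conn.close()
```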
• Query 2: What papers discussing COVID-19 risk factors are most often cited by researchers within the CORD-19 dataset?

Researchers can query the CKG to return scientific articles related to specific COVID-19 risk factors such as asthma, heart disease, and respiratory malfunction. The query returns articles with related risk factors. Next, the citation network is leveraged to rank articles by citation counts within the dataset. Table 2 shows the top three results for this query and the respective citation counts.

Table 2: Graph query results.

CORD_UID   Title                                                          Cited By
grw5s2pf   The Molecular Biology of Coronaviruses                         498
m1jbpo5l   Bocavirus and Acute Wheezing in Children                       152
vnn4135b   A Diverse Group of Previously Unrecognized Human Rhinoviruses
           Are Common Causes of Respiratory Illnesses in Infants          68

4 Using CKG for Article Recommendations

In this section we combine article semantic information with CKG topological information to quantify similarity between articles and construct a similarity-based recommendation system.

4.1 Leveraging Embeddings

4.1.1 Semantic Embeddings. In order to capture semantic information across the CORD-19 scientific articles, we leverage SciBERT [12], which has shown strong transfer learning performance on a wide variety of NLP tasks [15]. Specifically, our goal is to represent CORD-19 scientific articles as dense document embeddings. The Sentence Transformers library creates sentence-level embeddings from the plain-text articles [24]. We tokenize the title, abstract, and body text into sentences and then use SciBERT to create three embedding matrices representing the sentences from each component of the article. Next, we average each matrix to compute three dense vectors. Finally, a single dense document embedding is obtained by averaging these vectors.

Table 3 shows the average pairwise cosine similarity of the semantic embeddings constructed from the title, abstract, and body. The cosine similarity matrix among paper pairs is averaged to obtain the average similarity for each text portion. We observe that the average similarity of scientific articles and their availability in the dataset differ based on the article text portion used, noting that titles on average have lower similarity and higher dataset coverage compared to abstracts. The lower coverage of abstracts drove our decision to combine body and title text with abstracts.

Table 3: Average cosine distance and percent of dataset coverage using SciBERT embeddings.

Text Type   Cosine Similarity (avg)   Data Coverage
title       .266                      97.7%
abstract    .139                      84.9%
body        .092                      98.6%
combined    .131                      99.8%
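A hedged sketch of this document-embedding construction follows. Loading SciBERT through sentence-transformers with default mean pooling, and NLTK for sentence tokenization, are assumptions; the paper names the SciBERT [12] and Sentence Transformers [24] libraries but not this exact wiring.

```python
# Hedged sketch of the document-embedding construction: sentence vectors are
# averaged within each of title/abstract/body, then the section vectors are
# averaged. The checkpoint name and mean-pooling wiring are assumptions.
import numpy as np
from nltk.tokenize import sent_tokenize          # assumes nltk 'punkt' data
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("allenai/scibert_scivocab_uncased")

def embed_document(title, abstract, body):
    sections = []
    for text in (title, abstract, body):
        if not text:
            continue                                   # tolerate missing abstracts
        sent_vecs = model.encode(sent_tokenize(text))  # (n_sentences, d)
        sections.append(sent_vecs.mean(axis=0))        # one vector per section
    return np.mean(sections, axis=0)                   # document embedding

vec = embed_document("A title.", "An abstract sentence. Another one.", "Body.")
```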
4.1.2 Knowledge Graph Embeddings: TransE. We leverage knowledge graph embedding (KGE) methodology to encode the entities and relations of the COVID-19 Knowledge Graph as d-dimensional vector embeddings. The embeddings associated with the entities and relations of the graph are generated by a specific KGE algorithm, TransE [14], so that they satisfy a predetermined mathematical model. We can use these embeddings for downstream tasks such as paper recommendation [29]; in particular, papers with high similarity in embedding space will be highly correlated.

The knowledge graph G is composed of entities and relations such that G = (V, E), where V represents the graph entities and E represents the set of relations connecting them. A specific instance of a relation is represented as a triplet (h, r, t), in which h is the head entity, r the type of the relation, and t the tail entity. Given a set of triplets T in the above format, TransE learns a low-dimensional vector for each entity and relation such that h + r ∼ t, by minimizing a margin-based objective function over the training set using stochastic gradient descent:

min Σ_{(h,r,t) ∈ D+ ∪ D−} log(1 + exp(−y × f(h, r, t)))    (1)

where f(h, r, t) = γ − ‖h + r − t‖₂ is the scoring function; h, r, t are the embeddings of the head entity h, relation r, and tail entity t; and γ is a predefined constant. Here D+ and D− represent the positive and negative sets of triplets respectively, and y = 1 if the triplet corresponds to a positive example and −1 otherwise. Negative triplets are corrupted versions of the extant (positive) triplets defined by the KG, in which either the head or the tail entity has been randomly swapped for another entity in V. We leverage the Deep Graph Library Knowledge Embedding library (DGL-KE) [30], a high-performance package for learning large-scale KGEs, to train the aforementioned KGE model. By supplying the model with the entity and relation triplets described in Table 1, we generate vector embeddings for each paper.

4.1.3 Relational Graph Convolutional Network. KGE models generate embeddings solely by taking into account the structure of the graph. Nevertheless, the learned semantic embeddings can be used as entity features for learning paper embeddings. In this section we present an experiment extending the KGE methodology by directly incorporating semantic information to learn paper embeddings that capture both semantic and topological information. While KGE models do not directly exploit entity features, graph convolutional networks can exploit such features and possibly obtain richer embeddings [20]. For this purpose, we apply a relational graph convolutional network (RGCN) model [25] to learn entity embeddings exploiting both the paper semantic features and the graph structure.

An RGCN model comprises a sequence of RGCN layers. The output of the l-th RGCN layer for an entity is a nonlinear combination of the hidden representations of neighboring entities, weighted based on the relation type. The entity features are the input of the first layer of the model; for papers, these are the semantic embeddings. For entity types without features, we use an embedding layer that takes as input a one-hot encoding of the entity id. The final entity embeddings are obtained from the last layer of the RGCN. The major difference between RGCN and KGE is that RGCN embeddings are learned with graph convolutions and take entity features into account, whereas the KGE embeddings are supervised only by Equation (1) [25, 30]. Recapping, the RGCN embeddings combine both the graph structure information and the features generated by the semantic embedding methods. We implement and train the RGCN model using the DGL framework [27]. The RGCN model was parametrized with 400 hidden units per layer and L = 2 hidden layers.

4.2 Similarity Engine Construction

Our document similarity engine uses a combination of the semantic and KGE embeddings, as the RGCN model under-performed in certain respects, as shown in Section 5. We thereby combine the semantic information contained within a publication with the paper's topological information from the CKG, e.g., its paper, author, concept, and topic relations. Given a paper, the engine retrieves a list of the top-k most similar papers using cosine distance.
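A minimal sketch of this retrieval step is shown below. The paper does not specify how the semantic and KGE embeddings are combined; concatenation of the L2-normalized vectors is one plausible choice and is an assumption here.

```python
# Minimal sketch of the retrieval step: combine embeddings, rank by cosine
# similarity. Concatenation of normalized vectors is an assumed combination.
import numpy as np

def l2_normalize(m):
    return m / (np.linalg.norm(m, axis=1, keepdims=True) + 1e-12)

def combine(sem, kge):
    """sem: (N, d1) semantic embeddings; kge: (N, d2) KGE embeddings."""
    return l2_normalize(np.hstack([l2_normalize(sem), l2_normalize(kge)]))

def top_k_similar(emb, query_idx, k=5):
    sims = emb @ emb[query_idx]          # cosine similarity (rows unit-norm)
    order = np.argsort(-sims)
    return [int(i) for i in order if i != query_idx][:k]

rng = np.random.default_rng(0)
emb = combine(rng.normal(size=(100, 768)), rng.normal(size=(100, 400)))
print(top_k_similar(emb, query_idx=0))
```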
Given a paper, the engine retrieves a list of top-k most similar papers using cosine distance. 5 Analysis This section is organized into two parts presenting metrics and results evaluating the work done in Sections 2 and 4 respectively. Part one validates the construction and curation of the CKG by showing article topics align with common subject focuses of scien- tific journals and CKG relations are high quality. Part two analyzes the results of the similarity engine demonstrating we can improve the quality of recommended articles using both semantic and topo- logical information. 5.1 Graph Validation 5.1.1 Topic Model Validation Most journals have well defined topics. For example, Journal of Vi- rology explores the nature of viruses and mainly focuses on related domains; Journal of Vaccine focuses on the field of vaccinology. To evaluate our topic model, we summarise the generated topics from papers in the CKG belonging to these two journals in Figure 4. It can be seen that the generated topics of papers from Journal of Virology, e.g., virology, genomics and lab-trials-human, are highly related to virology. The generated topics of papers from Journal of Vaccine, e.g., vaccines-immunology, are highly related to vaccinology. 5.1.2 CKG Relation Validation To assess the correctness of the triplets that make up the CKG, we used the KGE model described in Section 4.1.2 to score each of its triplets using score = γ − ||h + r − t ||2, where h and t are the embeddings for the head and tail entities, r is the embedding of the relation type, and γ = 12 is an offset used to accelerate the training. We compute these scores for all of CKG’s triplets by following a 10-fold strategy to split the triplets into 10 sets. In this approach, for each fold we used the remaining 9 folds to estimate the KGE model and used it to computed the scores for the (2) Figure 5: link prediction score distribution by relation types left-out fold. According to Equation 1, if the score computed for a triplet is around 0, then the triplet is consistent with the KGE model. On the other hand, if the score is further away from 0 (in either direction), then the triplet corresponds to potentially an outlier or an error. Figure 5 shows the score distribution of the triplets for different relation types. These results show that the score of most triplets is close to 0 and that there is only a small fraction of inconsistent (according to the model) triplets. 5.2 Recommendation Analysis 5.2.1 Topic Similarity We start by analyzing the topic similarity between each source paper and its top-5 most similar papers. In Table 4 a baseline is established by generating a top-5 list of papers random selected from the 42k scientific articles. Then, we collect top-5 similar ar- ticle recommendations ri j , j < 5 for every source paper si using four different embedding methods (Semantic, KGE, RGCN and Se- mantic&KGE). We make use of topic-based distances to compute measures of similarity by creating a one-hot encoded vector T (u) for every paper p in our dataset representing its topics e.g. contains or not. Jaccard distance [21] is used to compute distance between vectors u, v ∈ [T , F ]N , N ∈ N J (u, v) = (3) cT F + cFT cT T + cT F + cT F where ci j is the number of occurrences of u[k] = i and v[k] = j, j < N . Intra-List Similarity (ILS) [31] is used to measure topic similarity of paper recommendations using the average Jaccard distance between a source paper and its list of top − 5 similar papers. 
In Table 4 we observe lower average Jaccard scores between source papers and their recommendations, relative to the baseline, for all embedding methods. Furthermore, we note that the KGE embedding achieves a comparatively lower score than RGCN. Finally, the combination of semantic and KGE embeddings achieves the lowest Jaccard score.

Table 4: Topic similarity (Jaccard distance) of recommendations vs. random baseline.

Method       Topic Similarity (Jaccard)
Random       .821
Semantic     .360
KGE          .345
RGCN         .654
Sem. & KGE   .311

5.2.2 Citation Similarity. The CKG citation network shows the relationships between papers: if a paper is cited by another, they may share the same topic, use similar technology, or have similar motivations. We train RGCN embeddings from the CKG with and without the citation network, and follow the same methodology for the KGE embeddings. We select only papers that cite at least one other paper. For each of these papers, we generate the top-5 similar papers and calculate the average number of a paper's citations that appear among its top-5 recommended most similar papers. For Table 5, we average this score across all papers for the four RGCN and KGE embeddings. We observe that KGE trained with citations has the highest overlap score, at 29.11%, as expected. Further, KGE embeddings learned without citations do a poor job of recommending cited papers in the top-5. This is expected, since the relations authored_by, associated_topic, and associated_concept do not give much information to infer the exact citation: many papers share the same topic and concept.

Table 5: RGCN vs. KGE top-5 citation overlap.

Method                     Overlap
RGCN (without citations)   5.22%
KGE (without citations)    0.01%
RGCN (with citations)      8.96%
KGE (with citations)       29.11%

Table 6: Overlap (intersection over union) of top-5 similar papers by methodology.

              Random   Semantic   KGE     RGCN
Random        1.000    0.014      0.009   0.008
Semantic      -        1.000      0.084   0.081
KGE           -        -          1.000   0.137
RGCN          -        -          -       1.000
Sem. & KGE    0.10     0.164      0.463   0.005

Figure 6: Visual comparison of the truncated SVD of four embedding methods using five scientific articles in the dataset. Paper CORD_UIDs: pw60qx7c, fjfc3rto, 790d7v7q, v2lp739t, kt5awf8i.

5.2.3 Embedding Subspace. We use truncated singular value decomposition (SVD) to create 2D projections of the paper embeddings of the different embedding methods. We select 5 papers with different topics in our dataset and their corresponding top-5 recommendations, compute the truncated SVD reduction of their embeddings, and plot them grouped by source paper. The results are presented in Figure 6. The top left shows the SciBERT embeddings for the five papers and their associated topics (color coded by paper). We observe that the topics genomics and epidemiology-and-human-lab-trials, as described in Section 2, are close to each other. This is expected, as many genomic studies are genome-wide association studies, which are considered a subset of epidemiology.
The top right shows the SVD of the KGE embeddings: papers from the same topics cluster together while separating across topics. The combination of SciBERT embeddings with KGE embeddings, which is currently used in the similarity engine (bottom left), shows that virology and vaccines-immunology, and likewise genomics and epidemiology-and-human-lab-trials, move closer together under the influence of KGE. This matches expectations: virology is the study of viruses, while vaccines-immunology is the study of how viral immunizations stimulate the immune system, hence the closer embedding similarity matches what researchers would expect.

5.2.4 Recommendation Overlap. We generate the top-5 most similar papers for each paper in the dataset using five different methodologies: Random (randomly selecting 5 papers), Semantic, KGE, RGCN, and Semantic&KGE. Table 6 captures the intersection over union of the similar-paper sets across methodologies. We observe a low overlap between the semantic and graph embeddings, which is expected since the semantic embeddings capture the semantic information of a given paper while KGE/RGCN capture the topological information of the CKG. The combination of the two, i.e., Semantic&KGE, shows agreement with both sides, which means it can recommend papers with a conjunction of semantic and topological information.
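A hedged sketch of how the Table 6 entries could be computed (mean intersection over union of aligned top-5 sets) is shown below; the toy recommendation lists are illustrative.

```python
# Hedged sketch of a Table 6 entry: mean intersection-over-union of the
# aligned top-5 sets produced by two methods; lists below are toy values.
def mean_iou(top5_a, top5_b):
    """top5_a, top5_b: per-paper top-5 id lists, aligned by source paper."""
    scores = []
    for a, b in zip(top5_a, top5_b):
        a, b = set(a), set(b)
        scores.append(len(a & b) / len(a | b))
    return sum(scores) / len(scores)

method_x = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [2, 3, 4, 5, 6]]
method_y = [[1, 2, 3, 11, 12], [6, 7, 8, 9, 10], [21, 22, 23, 24, 25]]
print(mean_iou(method_x, method_y))
```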
Figure 7: Popularity (= occurrences of a paper in the top-5 most similar paper lists) analysis for the semantic embedding and KGE embedding engines, grouped by bins.

5.2.5 Popularity. Figure 7 presents a popularity analysis of the KGE and semantic embeddings, where popularity captures the number of occurrences of an individual paper in the top-5 most-similar lists across all papers in the dataset, grouped by frequency.
5.2.5 Popularity

Figure 7: Popularity (= occurrences of a paper in the top-5 most similar paper lists) analysis for the semantic embedding and KGE embedding engines, grouped by bins.

Figure 7 presents a popularity analysis of the KGE and Semantic Embedding engines, where popularity captures the number of occurrences of an individual paper in the top-5 most similar items lists over all papers in the dataset, grouped by frequency. The left tail of the distribution shows papers that occur many times in top-5 recommended lists, with the overall distribution resembling the power-law distribution common to recommendation systems [18]. For KGE embeddings 707 papers occur more than 20 times, and for semantic embeddings 912 occur more than 20 times.

6 Conclusion

In this paper we construct a COVID-19 Knowledge Graph from the CORD-19 dataset and demonstrate how researchers and policy makers can extract timely information to answer key scientific questions on COVID-19 from a corpus of scientific articles. To further facilitate efforts we employ machine learning entity detection models to extract medical entities and relationships. With the help of medical professionals we add global topic information that forms additional medical relationships in the CKG. We train KGE models using CKG relations to obtain paper embeddings capturing topological, isomorphic and semantic information for the application of similar paper retrieval on www.cord19.aws. Future work may include further enhancements to CKG information retrieval capabilities such as: expanding biomedical entity extraction using biomedical concept annotators like PubTator (https://www.ncbi.nlm.nih.gov/research/pubtator/), re-training RGCN models with additional entity and relation attributes, and incorporating additional KGs into the CKG, e.g. COVID-19 drug repurposing graphs [17].

References
[1] [n.d.]. https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge.
[2] [n.d.]. https://aws.amazon.com/comprehend/medical/.
[3] [n.d.]. https://spacy.io/.
[4] [n.d.]. https://docs.aws.amazon.com/comprehend/latest/dg/ontology-linking-rxnorm.html.
[5] [n.d.]. https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load.html.
[6] [n.d.]. https://aws.amazon.com/covid-19-data-lake/.
[7] [n.d.]. https://aws.amazon.com/cloudformation/.
[8] [n.d.]. https://www.tomsawyer.com/.
[9] [n.d.]. https://www.gilead.com/purpose/advancing-global-health/covid-19/about-remdesivir.
[10] Waleed Ammar, Dirk Groeneveld, Chandra Bhagavatula, Iz Beltagy, Miles Crawford, Doug Downey, Jason Dunkelberger, Ahmed Elgohary, Sergey Feldman, Vu Ha, et al. 2018. Construction of the Literature Graph in Semantic Scholar. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers). 84–91.
[11] David Andrzejewski and Xiaojin Zhu. 2009. Latent Dirichlet Allocation with Topic-in-Set Knowledge. In Proceedings of the NAACL HLT 2009 Workshop on Semi-supervised Learning for Natural Language Processing. 43–48.
[12] Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 3606–3611.
[13] David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research 3, Jan (2003), 993–1022.
[14] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems. 2787–2795.
[15] Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175 (2018).
[16] Jeffrey Dalton, Laura Dietz, and James Allan. 2014. Entity query feature expansion using knowledge base links. In Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval. 365–374.
[17] Ruggero Gramatica, Tiziana Di Matteo, Stefano Giorgetti, Massimo Barbiani, Dorian Bevec, and Tomaso Aste. 2014. Graph theory enables drug repurposing – how a mathematical model can drive the discovery of hidden mechanisms of action. PLoS ONE 9, 1 (2014).
[18] Dietmar Jannach, Lukas Lerche, Fatih Gedikli, and Geoffray Bonnin. 2013. What recommenders recommend – an analysis of accuracy, popularity, and sales diversity effects. In International Conference on User Modeling, Adaptation, and Personalization. Springer, 25–37.
[19] Young Whan Kim and Jin H Kim. 1990. A model of knowledge based information retrieval with hierarchical concept graph. Journal of Documentation (1990).
[20] Thomas N Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations. Toulon, France.
[21] Sven Kosub. 2016. A note on the triangle inequality for the Jaccard distance. CoRR abs/1612.02696 (2016). arXiv:1612.02696 http://arxiv.org/abs/1612.02696
[22] Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing. In Proceedings of the 18th BioNLP Workshop and Shared Task. 319–327.
[23] Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. 2011. Classifier chains for multi-label classification. Machine Learning 85, 3 (2011), 333.
[24] Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using siamese BERT-networks. arXiv preprint arXiv:1908.10084 (2019).
[25] Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European Semantic Web Conference. Springer, 593–607.
[26] Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Kinney, Ziyang Liu, William Merrill, et al. 2020. CORD-19: The COVID-19 Open Research Dataset. arXiv preprint arXiv:2004.10706 (2020).
[27] Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, et al. 2019. Deep Graph Library: Towards efficient and scalable deep learning on graphs. arXiv preprint arXiv:1909.01315 (2019).
[28] Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering 29, 12 (2017), 2724–2743.
[29] Yuyu Zhang, Xinshi Chen, Yuan Yang, Arun Ramamurthy, Bo Li, Yuan Qi, and Le Song. 2019. Can Graph Neural Networks Help Logic Reasoning? arXiv preprint arXiv:1906.02111 (2019).
[30] Da Zheng, Xiang Song, Chao Ma, Zeyuan Tan, Zihao Ye, Jin Dong, Hao Xiong, Zheng Zhang, and George Karypis. 2020. DGL-KE: Training Knowledge Graph Embeddings at Scale. arXiv preprint arXiv:2004.08532 (2020).
[31] Cai-Nicolas Ziegler, Sean M McNee, Joseph A Konstan, and Georg Lausen. 2005. Improving recommendation lists through topic diversification. In Proceedings of the 14th International Conference on World Wide Web. 22–32.
ai_researcher
3
Teaching_Smaller_Language_Models_To_Generalise_To_Unseen_Compositional_Questions_(Full_Thesis).pdf
arXiv:2411.16985v1 [cs.CL] 25 Nov 2024

Teaching Smaller Language Models To Generalise To Unseen Compositional Questions

Timothy John Hartill

A thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science, The University of Auckland, 2024.

Abstract

We are inspired by recent progress with pretrained large Language Models (LLMs) that are able to answer questions that are unlikely to have been encountered during training. However a diversity of potential applications exist in the broad domain of reasoning systems and considerations such as latency, cost, available compute resource and internet connectivity are relevant in determining an appropriate approach. We consider the setting where some local compute capacity is available at inference time but internet connectivity is not. Similar to a general-purpose LLM, we assume that our much smaller Reasoning Models may be asked arbitrary questions from unknown distributions, hence we focus on evaluation in an unseen setting where our evaluation datasets are disjoint from our training datasets. We equip our models to answer diverse questions through multitask training focused on instilling an ability to reason over a provided context to an answer. We acquire this context from two knowledge sources: a local Wikipedia corpus queried using a multi-hop dense retrieval system with novel extensions, and from rationales generated from a larger Language Model optimised to run in a lower resource environment.

Our main contributions to the study of question-answering in this setting are as follows: We propose novel methods to evaluate whether our model is capable of answering contextualised questions without memorisation, and show that it is. We establish a comprehensive set of baseline results on unseen evaluation datasets. We show that the addition of novel retrieval-augmented training datasets (RATD) to the training regime of the Reasoning Model in conjunction with our retrieval system significantly improves results. We demonstrate further significant improvement through the application of methods for combining contextual knowledge from our two sources. The first method (RR) involves training a novel Rationale Ranking model to score both generated rationales and retrieved contexts with respect to relevance and truthfulness. We then use the scores to derive combined contexts from both knowledge sources using a number of strategies. We also show that utilising the RATD datasets enables our model to become proficient at utilising information from combined contexts both separately and in conjunction with the RR method.

Acknowledgements

I am especially grateful to Pat Riddle whose guidance and tireless efforts were essential in maintaining a high standard in our experiments and in our writing. Pat's enthusiasm for rigorous scientific research was an inspiration to me throughout this endeavour.

Thanks also to my many collaborators, particularly Neset Tan, Diana Benavides-Prado and Josh Bensemann who provided valuable feedback and suggestions at critical junctures.

I am grateful to the authors of Pi et al. (2022) for providing their unreleased POET-SQL dataset and to Omar Khattab for similarly providing his Hover paragraph sequencing data.

Finally, to my wife Clare and my daughters Simone and Sophie, thank you for your fortitude, endless support, and patience throughout the journey.

Contents

1 Introduction
1.1 Background and Motivation
1.2 Research Problem
1.3 Contributions
1.4 Thesis Overview

2 Preliminaries
2.1 Computational Approaches to Question-Answering
2.2 Language Models
2.3 Unseen Evaluation Datasets

3 Related Research
3.1 Memorisation in Language Models
3.2 Retrieval from Textual Corpora
3.3 Knowledge Augmentation from LLMs
3.4 Multiple Knowledge Sources
3.5 Falsehood Detection
3.6 Multitask Pretraining
3.7 Numerical Literacy in Language Models

4 Do Smaller Language Models Answer Contextualised Questions Through Memorisation Or Generalisation?
4.1 Introduction
4.2 Method
4.2.1 UQA and UQA+TDND Model Training
4.2.2 Evaluation Dataset Preprocessing
4.2.3 Similarity Computation Method
4.3 Main Experiment
4.3.1 Experimental Results and Discussion
4.3.2 Chapter Limitations
4.4 Conclusion

5 Using Retrieval-Augmented Training Datasets To Improve Reasoning Performance
5.1 Introduction
5.2 Method
5.2.1 Retrieval
5.2.2 Reranking and Evidence Set Scoring
5.2.3 Iterator In-domain Evaluation
5.2.4 Reasoning Models
5.3 Experiments
5.3.1 Models
5.3.2 Experimental Results
5.4 Conclusion

6 Combining Rationale Generation and Dense Retrieval
6.1 Introduction
6.2 Method
6.2.1 Rationale Generation
6.2.2 Retrieval
6.2.3 Rationale Ranker
6.2.4 Reasoning Models
6.3 Experiments
6.3.1 Models
6.3.2 Context Combination Methods and Experimental Nomenclature
6.3.3 Experimental Results
6.4 Conclusion

7 Conclusion
7.1 Summary of Contributions
7.2 Contributions
7.3 Limitations
7.4 Outlook
Appendices

A Hyperparameters
A.1 Hyperparameters (Chapter 4)
A.2 Hyperparameters (Chapters 5 and 6)

B Reasoning Model Input Formats

C Wikipedia Corpora

D Iterator Training Details
D.1 Retrieval Model Additional Details
D.2 Paragraph Reranker Model
D.3 Evidence Set Scoring Model

E Reasoning Model Multitask Training Details
E.1 UQA and UQA+TDND Models (Chapter 4)
E.2 Base, Base+RATD, GR and GR+RATD Models (Chapters 5 and 6)

F LLM Prompts and Example Generations
F.1 Prompts For LLM Rationale Generation
F.1.1 Binary-labelled Datasets (SQA)
F.1.2 Span or binary answers (ARC-DA, IIRC, Musique)
F.1.3 Multi-choice Datasets (CSQA)
F.2 LLM-generated Rationale Examples
F.3 Prompts For LLM-generated Negative Rationales for RR Model training
F.4 LLM-generated Negative Rationale Examples

G Significance Tests
G.1 Means, Standard Deviations and 95% Confidence Intervals (Chapter 4)
G.2 Paired Bootstrap P-values (Chapter 5)
G.3 Critical Distances (Chapter 6)

H Additional Experiments
H.1 Most Similar Evaluation-Train Pairs Within Least Similar Subset (Chapter 4)
H.2 Most Similar Evaluation-Train Pairs Within Unmemorisable Subset (Chapter 4)
H.3 Example Failure Cases (Chapter 5)
H.4 StableVicuna FP16 Comparison To INT8 (Chapter 6)
H.5 Context Component Analysis (Chapter 6)

Bibliography

1 Introduction

1.1 Background and Motivation

When prompted with task demonstrations (Brown et al., 2020), instructions (Sanh et al., 2021; Wei et al., 2021; Ouyang et al., 2022) or reasoning chains (Wei et al., 2022), large Language Models (LLMs) have shown an ability to answer diverse questions unlikely to have been encountered during training (Brown et al., 2020; Sanh et al., 2021; Wei et al., 2021; Du et al., 2022; Chowdhery et al., 2022). While impressive, this performance has required access to considerable computational resource, typically centralised and accessed over a network that is assumed to be continuously available. In this thesis, we consider the implications and opportunities that an alternative scenario might present; one in which internet connectivity is assumed to be unreliable, unavailable, or merely prohibitively expensive. To make progress in this scenario, utilising technology widely available at the time of writing, we assume that some local compute capacity is available at inference time, namely the equivalent of a single workstation with a large consumer-grade GPU card. Such resource-constrained environments are abundant, ranging from vehicles and fixed locations without continuous internet access, to sensitive applications involving highly confidential information not shareable over the internet. In our constrained environment, we utilise a smaller Language Model that can be run locally on our workstation to answer questions.
We define smaller Language Models as generative Transformer models (Vaswani et al., 2017) with 400 million to 1 billion trainable parameters, i.e. those that are large enough to be effective at answering questions whilst being able to perform training and inference with reasonable latency, cost and energy efficiency. We boldly assume that like a general-purpose LLM, our smaller Language Models will be expected to answer arbitrary questions from unknown distributions. This is uncommon in that, excepting Khashabi et al. (2020b), few papers have reported zero-shot results for smaller Language Models, focusing instead on optimising performance via finetuning for particular tasks. However, duplication between test and training splits in natural language processing (NLP) datasets is frequent (Lewis et al., 2021; Lee et al., 2022; Krishna et al., 2021; Kambhatla et al., 2023), which leads to conjecture as to what exactly a model has learned in the fine-tuned setting. In addition to the possibility of answer leakage from directly memorised training samples, it has been shown that models are able to utilise more subtle cues, such as the writing style of a particular annotator who contributed to both train and test splits, for better results than are achievable where the test split is truly independent of the training split (Geva et al., 2019). To minimise such issues as well as to facilitate comparison in a similar setting to other zero/few-shot studies, we define an unseen question as one from an evaluation dataset that is disjoint from our training datasets.

LLMs have been shown to have strong performance in answering questions that are input without any supporting context, i.e. open domain questions (Roberts et al., 2020). By contrast, smaller Language Models, such as the BART model (Lewis et al., 2020a) that we use throughout our experiments, are poor at answering such uncontextualised questions, particularly when the evaluation question is not a paraphrase of a memorised training sample (Lewis et al., 2021). An alternative approach, which we follow and extend, has been to use the question text to query a knowledge source and retrieve information pertinent to answering the question. The problem is then transformed into a reading comprehension (RC) challenge whereby the question and the acquired context are input into a Language Model that would preferably reason over the question and the provided context to infer an answer (hereafter, called a Reasoning Model).

In the belief that regardless of how comprehensive any available knowledge source may be, there will be many questions that cannot be answered using information from a single retrieved document, we focus our study on compositional questions. The classical Partee (1984) definition of compositionality as an ability to build up the meaning of a complex expression by combining the meanings of its parts has been challenging in practice to usefully apply to natural language tasks such as machine translation and our question-answering topic (Dankers et al., 2022; Hupkes et al., 2020). Recent work has alternatively described compositional or “complex” questions as those where answering requires decomposition into multiple reasoning steps (Talmor and Berant, 2018; Geva et al., 2021), or reasoning (sometimes termed composition) over more than one piece of information (Yang et al., 2018; Min et al., 2019; Khot et al., 2020; Zhao et al., 2023).
The skills involved in such reasoning are diverse and multidimensional (Rogers et al., 2023), encompassing for example fact composition (Khot et al., 2020), numerical reasoning (Dua et al., 2019; Zhao et al., 2023), logical operations (Clark et al., 2020b; Sinha et al., 2019) or set operations (Sen et al., 2022). Noting that the complexity of reasoning needed is a function of both the question and the available evidence (Min et al., 2019), and that Language Model training data is itself a source of evidence, we offer a modestly revised definition of a compositional question as follows: A question is compositional if it is unlikely to be answerable by our Reasoning Model with a memorised answer from a similar training example, or by retrieving any single document from any available knowledge source. Here, a knowledge source refers to training data for any Language Model we utilise or the textual corpus accessed by our retrieval system. A document refers to an individual training sample, or corpus paragraph respectively.

Our first knowledge source is a corpus consisting of English Wikipedia paragraphs. Methods for retrieving information from such textual corpora have a long history in the information retrieval domain generally e.g. Spärck Jones (1972), and more recently for augmenting open domain questions (Chen et al., 2017; Karpukhin et al., 2020). In regard to the latter, early studies focused on the single-hop case where a single document from the corpus typically provides sufficient evidence to enable answering the question in a deductively valid fashion. This work has subsequently been extended to retrieval for multi-hop questions where multiple documents from the corpus are necessary to answer the question (Qi et al., 2021; Xiong et al., 2021). Here studies have focused on datasets such as HotpotQA (Yang et al., 2018) where the necessary number of documents, henceforth n, has often been limited to two. In our work, we extend n to an arbitrary number of documents and introduce an Evidence Set Scoring model whose purpose is to quantify the sufficiency of the information accumulated up to each hop for answering a question.

Corpora such as Wikipedia contain large amounts of factual information and it might be expected that effective retrieval from such sources would provide good information for answering questions of a factual nature. However such knowledge sources have been shown to be less effective for identifying other types of information such as commonsense, or “world” knowledge (Piktus et al., 2021). We therefore evaluate a second knowledge source in combination with the first: rationales generated by larger Language Models conditioned on the question text. We define a rationale as a free-text explanation (Wiegreffe and Marasović, 2021) of approximately one to three sentences that aims to provide sufficient evidence from which to deduce an answer. Querying a LLM over the internet to generate rationales would of course defeat our purpose, but we study the case where a larger Language Model can be optimised to run in our constrained environment.

1.2 Research Problem

The setting defined above poses a number of under-explored challenges that form the basis of our research. These can be summarised as:

Smaller Language Model Viability As Reasoning Models

■ The extent to which RC questions can be answered by smaller Language Models without reference to one or more memorised training samples has not previously been documented.
■ How well smaller Language Models can perform the reasoning function in the unseen setting, and how performance can be improved, has not been comprehensively studied.

■ Few studies quantify the LLM performance gap to smaller Language Models when both are considered in similar unseen settings.

Knowledge Retrieval Limitations

■ Even the most comprehensive set of knowledge sources is unlikely to yield sufficient information to enable answering any question deductively. This could be due to any combination of (1) incompleteness of the knowledge source, (2) incompleteness of the question specification, (3) sub-optimality in the retrieval algorithm, or (4) information retrieved being false. It is therefore desirable to consider the situation where information retrieved is partially evidential, contains irrelevant distractions, or false information. We evaluate novel mitigations for these challenges.

■ Research on improving performance in dense retrieval from textual corpora where the retrieval components are not fine-tuned on the same datasets as the target questions is limited (exceptions and alternatives to our approach in this regard are discussed in Section 3.2).

Knowledge Source Strengths and Weaknesses

■ As we discuss in Section 3.3, a number of studies consider LLMs as knowledge sources, but these generally assume that the LLM is the single, or primary, source. Perhaps because of this assumption there has not been much focus on quantifying the detailed strengths or weaknesses of LLMs as knowledge sources in contrast to other possible sources of contextual information.

■ Conversely, approaches focusing on retrieval from textual corpora tend to benchmark themselves against LLMs in a closed book setting where the LLM is the Reasoning Model as well as the knowledge source. This has the effect of conflating LLM reasoning ability with LLM viability as a knowledge source. We offer an evaluation in a setting where these are disentangled.

■ Few other studies have considered approaches to combining knowledge from disparate sources in constrained settings. Section 3.4 discusses those studies that we have been able to identify.

1.3 Contributions

In the setting discussed above, we address our research questions and make the following contributions to the research community:

1. We demonstrate that a smaller Language Model is capable of performance beyond simple memorisation in deriving correct answers to challenging compositional questions. To achieve this we propose a method of identifying overlap between evaluation and training samples based upon semantic similarity of input and output tokens. We utilise this approach in conjunction with a technique to intervene with additional training datasets to create a Reasoning Model versus a baseline Reasoning Model with no intervention. Our approach enables us to mitigate effects of pretraining on results and to avoid comparing disparate populations of evaluation subsets as some prior studies have done. After demonstrating the effectiveness of our methods in identifying both memorisable and unmemorisable samples we are able to show that improved performance on unmemorisable samples is not attributable to the effect of memorisation.

2. We offer what is to our knowledge the most comprehensive set of baselines evaluating smaller Language Model zero-shot reasoning abilities versus LLM and other approaches published to date.
Here our baseline (Base) is a multitask-trained Reasoning Model that is trained in two stages on a large number of tasks, both existing and those that we develop.

3. We propose the “Iterator”, a dense retrieval, reranking and evidence set scoring system that aims to identify the relevant n documents necessary to answer n-hop questions, where n is arbitrary but we use n = 4.

4. We use the Iterator against a corpus of English Wikipedia paragraphs both to develop contexts for unseen evaluation questions and to develop retrieval-augmented training datasets (RATD) which are added to the existing Base training regime in training the Base+RATD model. RATD datasets are intended to impart diverse reasoning strategies, such as an ability to identify and weigh partially evidential facts in long, noisy contexts. We show that when used in conjunction with our retrieval-augmented evaluation samples the Base+RATD model significantly outperforms the Base model on the established baselines.

5. We evaluate methods for combining information from two knowledge sources to develop contexts that are more helpful in answering questions. The first knowledge source is the above Iterator with Wikipedia while the second involves rationale generation from larger Language Models that are optimised to run locally in a resource-constrained environment. We propose “Rationale Ranking” (RR), a method that both selects context components by relevance, and filters components that may be false. This is accomplished by training a Rationale Ranking model to score LLM-generated rationales and Iterator-generated contexts for truthfulness in addition to the more common practice of quantifying relevance. A number of strategies are then evaluated for using the resulting scores to develop contexts that combine information from both knowledge sources. We show that the RR method significantly outperforms the earlier Base+RATD baselines. We also show that models trained using the earlier RATD training method are able to generalise sufficiently such that they can successfully utilise combined contexts both in isolation from, and in conjunction with, RR scoring.

6. We show that smaller Language Models trained for reasoning can manifest comparable or stronger performance on unseen questions to LLMs, when provided with the same knowledge to reason over that the LLM is capable of generating for itself.

7. We present evidence to illustrate the respective strengths and weaknesses of LLMs and n-hop retrieval from a Wikipedia corpus as knowledge sources. The LLM tends to offer better performance when considering questions requiring commonsense knowledge (e.g. “I’m crossing the river, my feet are wet but my body is dry, where am I?”). Retrieval from the Wikipedia corpus tends to be better at extracting knowledge necessary to answer n-hop factual questions where n is higher than two (e.g. “The Rhine forms a border between Aschenbrödel’s composer’s country and another country where women got the vote when?”). Moreover, we show that combining information from these sources significantly improves the average performance over evaluation datasets versus using a single source, and on individual evaluation datasets the combined context performance is often beyond what either knowledge source in isolation can deliver.

Portions of this thesis have been published in a peer-reviewed international journal.
In particular, our RATD paper was accepted by Transactions on Machine Learning Research (TMLR) in August 2023 (Hartill et al., 2023). Another paper of which portions are also contained in this thesis has been submitted to a well-regarded venue for peer review and is awaiting review completion.

1.4 Thesis Overview

The remainder of this work is organized in the following chapters.

Chapter 2 provides preliminary explanations relevant to discussion in the following chapters, specifically the models we use and the unseen evaluation datasets we choose.

Chapter 3 reviews related research on the various topics that we utilise or extend in our research. We highlight the differences and similarities to our problem formulation.

Chapter 4 proposes a set of methods for determining whether a smaller Language Model is capable of reasoning over a provided question and context to an answer or whether it is only capable of providing a memorised answer from a similar training input.

Chapter 5 introduces a set of baselines for performance on challenging unseen compositional questions, comparing our approach of augmenting questions with a retrieved context using the Iterator against LLM and other approaches. We then discuss our method for improving performance via the addition of RATD datasets to the training regime of our Reasoning Model and demonstrate that this significantly improves performance when combined with our retrieval method.

Chapter 6 presents a set of methods for combining the retrieval knowledge source developed in the prior chapter with a second knowledge source consisting of rationales generated by larger Language Models. Here we show that further significant improvement against the baselines is possible and explore the strengths and weaknesses of each knowledge source with respect to the different types of questions encapsulated in each of our baselines.

Chapter 7 concludes the thesis. Here, we discuss limitations and potentially fruitful avenues to be explored in future research.

2 Preliminaries

The purpose of this chapter is to provide necessary definitions and background explanations relevant to the thesis. For the interested reader, Section 2.1 provides a very brief history of computational approaches to answering questions. Since it does not contain novel ideas, it may be skipped. Section 2.2 provides summary background on Language Models and introduces nomenclature used later in this thesis. Finally, to avoid duplication, Section 2.3 provides a description of each dataset we use in evaluation as different subsets of these are utilised in Chapters 4, 5 and 6. Since we reuse or develop a large number of training datasets, the reader is referred to Chapter 5 for the Reasoning Model training process, and to Appendix E for further details on the individual training datasets.

2.1 Computational Approaches to Question-Answering

Excepting the recent trend towards using LLMs to answer questions directly using knowledge encoded in the model parameters, computational approaches to the question-answering challenge have relied upon external sources of knowledge. The earliest question answering system was BASEBALL (Green et al., 1961) which parsed a question into a structured representation which was then used to iteratively query a structured database. Another very early system is described in Simmons et al. (1964). It used content words extracted from each question to query an index of such terms and retrieve sentences that could be relevant to answering the question.
The question and each sentence were then parsed using a dependency parser and sentences were scored with respect to the similarity of structure to the question parse. The highest scoring sentence was selected as most likely to contain the answer. These two studies are representative of the two main historical themes in question-answering: Semantic parsing methods such as BASEBALL convert a question into a structured representation capable of being used as an exact query against a database to return an answer. Information Retrieval-based methods use some (not necessarily structured) representation of the question to retrieve a set of candidate documents, and then, as in our case, use diverse RC mechanisms to extract or compute an answer from them (Bordes et al., 2014a; Jurafsky and Martin, 2023).

Explorations of classical methods for RC mechanisms where the context has been provided rather than retrieved can be found in Hirschman et al. (1999); Riloff and Thelen (2000). These both rely on lexical overlap between question and context sentences. Ng et al. (2000) claims to be the first machine learning method that is competitive for RC. They use a logistic regression model to score each question-context sentence pair where each pair is represented as a vector of 20 specifically constructed features such as a count of the number of words in common between the question and the sentence. In 1999 the Text REtrieval Conference (TREC) question answering track was launched with a goal of bringing together work being performed on Information Retrieval with work being done on RC (Voorhees, 2001). Falcon (Harabagiu et al., 2000) is one such resulting project encompassing both of these aspects.

More recently Bordes et al. (2014a,b) use neural models to embed bag-of-words representations of the question and subgraphs from the Freebase knowledge graph into a vector space such that the dot product of the resulting vector representations is higher where the subgraph contains the answer. Since that time, many different approaches to question-answering involving neural models have been studied. Prominent amongst these are approaches utilising Language Models, discussed in the next section, and approaches using graph neural networks (Kipf and Welling, 2017). In the latter, a Language Model is typically used to create contextualised vector representations of the question and retrieved (or provided) contextual information. A graph is then constructed over both with novelty being introduced in the specification of nodes and edges. These representations are then passed through a graph neural network for further refinement. The final representations are subsequently used as input into further neural models for performing tasks such as answering the question and predicting which sentences in the retrieved context are relevant (Fang et al., 2020).

2.2 Language Models

Language Models estimate a probability function P for a word, or token, in a sequence of such tokens (Manning and Schutze, 1999). Given a sequence of s words w_1, ..., w_s denoted as w_1^s, the task has been formalised as learning the joint probability of the sequence from the product of the conditional probability of each word conditioned on the subsequence preceding it:

    P(w_1^s) = \prod_{i=1}^{s} P(w_i | w_1^{i-1})    (2.1)

According to Jurafsky and Martin (2023), the mathematics of a tractable approximation of this was first formalised by Markov (Markov, 1913). Such n-gram models restrict the historical context considered in estimating the probability of the ith word to n − 1 words by substituting the P(w_i | w_1^{i-1}) term in Equation 2.1 with P(w_i | w_{i−n+1}^{i−1}), where n is typically one (substituting P(w_i) for P(w_i | w_{i−n+1}^{i−1})), two (bigrams) or three (trigrams). The conditional probability for each n-gram is estimated based on a count of the number of occurrences of it in the corpus.
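As a concrete illustration of the count-based estimation just described, the following minimal Python sketch fits a bigram model by maximum likelihood, i.e. P(w_i | w_{i−1}) = count(w_{i−1}, w_i) / count(w_{i−1}). It is an editorial sketch rather than anything used in this thesis, and practical n-gram models additionally apply smoothing to handle unseen bigrams.

```python
from collections import Counter

def train_bigram_lm(tokens):
    """Maximum-likelihood bigram model estimated from raw counts."""
    unigram_counts = Counter(tokens)
    bigram_counts = Counter(zip(tokens, tokens[1:]))

    def prob(prev_word, word):
        if unigram_counts[prev_word] == 0:
            return 0.0  # unseen history; a real model would smooth here
        return bigram_counts[(prev_word, word)] / unigram_counts[prev_word]

    return prob

p = train_bigram_lm("the cat sat on the mat".split())
print(p("the", "cat"))  # 0.5 -- "the" is followed once by "cat", once by "mat"
```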
In 2000, Bengio et al. (2000) proposed a neural version of a Language Model where the probability distribution over possible next words from an input sequence is estimated by a feed-forward neural network. Each word in the vocabulary was represented by a dense vector C(i) ∈ R^d in which features are learned during training. The vector was stored in a matrix and accessed via the simple strategy of assigning each word in the vocabulary an index number. This is readily identifiable with the embedding tables used as the first layer in modern neural Language Models. In 2013 Mikolov et al. (2013a,b) improved upon the utility of such word embeddings by proposing the Continuous-Bag-Of-Words (CBOW) model (where the embedding parameters are learned from predicting the current word from both prior and future words), and the Skip-gram model (where the training objective was to predict prior and future words from the current word). Embeddings created with models such as these and similar were commonly used as input representations in the next generation of neural Language Models that were built using recurrent neural networks (RNNs).

In 2014 Sutskever et al. (2014) proposed a sequence-to-sequence Language Model built for the task of neural machine translation (NMT). It was built using the LSTM (Hochreiter and Schmidhuber, 1997) version of a RNN and featured an encoder-decoder architecture where at each timestep up to a maximum input sequence length t, the embedding for a word from the input sequence q : {x_1^t} is input into the encoder, which outputs a hidden representation h ∈ R^d where d is the dimensionality of each input embedding as well as the hidden state. During training, the decoder takes the final h as input along with the desired translation (or answer in our question-answering case) a : {y_1^m}. As with Bengio et al. (2000) the decoder is trained to estimate the probability distribution over possible next words. This is applied autoregressively to generate a word per iteration:

    P(a | q) = \prod_{i=1}^{m} P(y_i | h, y_1^{i-1})    (2.2)

Extending the RNN architecture, Bahdanau et al. (2015) proposed an attention mechanism that uses a softmax function to produce a weighting over the sequence of all hidden states H ∈ R^{d×t} produced by the encoder with the aim of weighting the most relevant parts of the corresponding input representations higher than others. This was shown to substantially improve performance on NMT tasks, and subsequently on other tasks such as question-answering as well.
Adding the attention enhancement results in an update to the probability estimation function:

    P(a | q) = \prod_{i=1}^{m} P(y_i | H, y_1^{i-1})    (2.3)

In the question-answering domain, Iyyer et al. (2014) and Hermann et al. (2015) applied RNN architectures to RC tasks. Chen et al. (2017) also used RNN models but here information retrieval was used to augment each question q_i with a retrieved context c_i, where i denotes the ith sample in a dataset. For brevity, throughout this thesis, we will denote input into a model using angle brackets, e.g. in the Chen et al. (2017) case the encoder input would be ⟨q_i, c_i⟩, the decoder input would be ⟨H_i, a_i⟩, and we will omit the batch dimension for readability.

Vaswani et al. (2017) proposed the first Transformer model, which demonstrated improved performance on NMT tasks. Similar to Sutskever et al. (2014), this was an encoder-decoder model that estimates the probability function as per Equation 2.3. However the model differs greatly in that each of the encoder and decoder components primarily consist of alternating layers of self-attention and feed-forward layers. Self-attention relates each position in a single sequence to each other. Vaswani et al. (2017) formalised this in the well-known equation:

    Attention(Q, K, V) = softmax(QK^⊤ / √d_k) V    (2.4)

Here each input embedding is linearly projected onto query and key vectors q, k ∈ R^{d_k} and a value vector v ∈ R^{d_v}. These are packed into matrices Q, K and V. 1/√d_k is used as a scaling constant. Simplifying for brevity by ignoring positional encoding, multiple attention heads, layer normalisation and residual connections, the resulting weighted output is input into the subsequent feed-forward layer. In the encoder, the process repeats until the final feed-forward layer outputs H_i ∈ R^{d×t}.
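The following NumPy sketch implements Equation 2.4 for a single sequence, with the same simplifications noted above (a single head, no positional encoding, layer normalisation or residual connections). It is illustrative only and assumes Q, K and V have already been produced by the learned linear projections.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention (Equation 2.4).
    Q, K: (t, d_k); V: (t, d_v); returns (t, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (t, t) scaled similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                            # attention-weighted values
```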
In 2019 Devlin et al. (2019) proposed BERT, which is an implementation of the encoder component of the original Transformer. This paper introduced the masked Language Modelling (MLM) pretraining task in which the next-word modelling task introduced in Equation 2.1 is replaced with a bi-directional cloze-style objective (Taylor, 1953) reminiscent of that in the Mikolov et al. (2013a) CBOW model. In the MLM version of the cloze objective, tokens in the input sequence are randomly masked and the model is able to consider both prior and future tokens in estimating the probability distribution over possible tokens that each masked token could be. In this thesis we utilise later variants of BERT, namely RoBERTa (Liu et al., 2019) and ELECTRA (Clark et al., 2020a) as described in Chapters 5 and 6.

Several variations of the MLM objective have seen wide adoption in encoder-decoder Transformer Language Models. Of particular note, Raffel et al. (2020) evaluate a number of MLM styles and finally propose T5, a family of models that are pretrained using the version of MLM where the objective is to predict variable-length spans of text that have each been replaced by a single masking token. Similar to GPT (Radford et al., 2018), described below, they then perform further training using supervised objectives over a variety of NLP tasks, and show that the resulting model has strong performance over all of them. At about the same time Lewis et al. (2020a) proposed BART, a similar model to T5 except that here the MLM pretraining objective was to predict the entire input sequence with all mask tokens substituted with the correct text. We use BART as our main Reasoning Model throughout this thesis. One difference to the original is that in our work, where we include a MLM task, we substitute the T5-style objective of predicting the unmasked answer spans in preference to the original BART objective of predicting the entire input sequence as it is less computationally intensive.

Another line of Transformer model evolution has been the emergence of decoder-only Transformer Language Models. Unlike the encoder-decoder variants, these generally estimate the probability function using the original next-word objective similar to Equation 2.1. GPT (Radford et al., 2018) was the first of these. In this study they showed that pretraining on a large corpus using the next-word objective followed by task-specific finetuning was effective in producing strong performance on individual tasks. A subsequent model, GPT2 (Radford et al., 2019), was the first to show that a sufficiently large Language Model (1.5 billion trainable parameters) trained on a large corpus could become proficient on evaluation tasks in a zero-shot (unseen) setting. The GPT3 study (Brown et al., 2020) showed further improvement was possible by hugely scaling the model size to 175 billion parameters along with increasing the pretraining corpus size. This paper also introduced the idea of few-shot prompting where several exemplars of the problem to be solved along with the query are provided to the model as a prompt. In Chapter 6 we utilise two such decoder-only LLMs, BLOOM (Le Scao et al., 2022) and StableVicuna (Stability-AI, 2023) in a resource constrained setting and with a focus upon their utility as knowledge sources.

2.3 Unseen Evaluation Datasets

For our experiments in Chapters 5 and 6, we focus our study on a set of unseen evaluation datasets that meet the following criteria: (1) Datasets collectively involve diverse textual and numerical reasoning strategies. (2) Questions are generally readily answerable by humans with access to a web browser and without specialised knowledge. (3) Questions tend to be compositional as per our definition in the Introduction. (4) Relevant comparison with prior work exists.

Each evaluation dataset consists of a single split from the original dataset. This is typically the split most commonly used by others in published results. The particular split used is noted below for each dataset. Our experiments often involve augmenting the question component of each evaluation sample with contexts sourced by different means. This means that we must distinguish a number of different versions of each dataset. Therefore, in Chapter 5 we denote dataset variants that have been augmented via retrieval using our Iterator system as “DatasetR”, and those with a gold context, “DatasetG”, or similar. In Chapter 6 we report results over the set of evaluation datasets with various context types in a single table. Hence for readability in that chapter we simplify the nomenclature to denote a set of datasets augmented with our retrieval as “Iterator only” in preference to the individual “DatasetR” format. We similarly denote datasets augmented with rationales generated by a LLM as “Rationale only”, and those with contexts created by combining both knowledge sources as “Rationale + Iterator”. We use the “DatasetR” nomenclature below when describing Iterator augmentation. Except for noting that corresponding “Rationale Only” and “Rationale + Iterator” variants are created for each of the datasets, we omit further mention of them in this section and refer the reader to Chapter 6 for details of their construction. All versions of our evaluation (and training) datasets are accessible at github.com/timhartill/unseen_questions.

StrategyQA (Geva et al., 2021), hereafter SQA, contains binary-labeled commonsense samples requiring a diversity of n-hop reasoning strategies (on average samples require content from 2.33 separate paragraphs to answer when considering retrieval from Wikipedia i.e. n = 2.33). The form of questions is generally implicit, meaning they do not leak information as to how they could be decomposed (e.g.
“Did Aristotle use a laptop?” versus “Was Aristotle alive at the time that laptops were invented?”). Many samples involve reasoning to a plausible rather than an entailed conclusion even where gold paragraphs are provided (Liang et al., 2022) e.g. “Is greed the most prevalent of the Seven Deadly Sins?”. To facilitate comparison with other zero-shot approaches we use the full training set for evaluation as per BIG-bench (Srivastava et al., 2022) (denoted SQA for question-only and SQAR for question plus our retrieval). We also report results with two forms of gold context: using the provided summary notes which have a short paragraph, rationale-like form (SQAGF), and using the full paragraphs from each of three individual annotators (SQAGP); for brevity we report the mean score over the three gold paragraph sets.

CommonsenseQA (Talmor et al., 2019) (CSQA) is a 5-way multi-choice (MC) dataset of commonsense questions derived from Conceptnet (Speer et al., 2017). The task is to choose the best option of which more than one may sometimes be plausible, hence it may be necessary to consider knowledge related to each option before answering. Many of the questions involve commonsense knowledge that is unlikely to be retrievable from a generic corpus (“Where on a river can you hold a cup upright to catch water on a sunny day”). However retrieving specific related examples such as “At the river, I filled my cup at a waterfall” may sometimes be possible (Piktus et al., 2021). CSQA augmented with our retrieval is denoted CSQAR. We report all results against the development split as is common practice.

DROP (Dua et al., 2019) is a RC dataset wherein answering each question requires simple numerical or temporal reasoning. Questions only make sense in conjunction with the provided gold paragraph so we do not perform retrieval. Answers may be numbers, dates or text spans. Answers are often abstractive e.g. “How many field goals were scored in the first quarter? ...The Lions scored first...with a 23-yard field goal...The Buccaneers tied it up with a 38-yard field goal...then took the lead...The Lions responded with a 28-yard field goal...” The answer is 3, which isn’t explicit in the context. We use the full development split in all experiments except for those in Chapter 4 where preprocessing is performed as described in that chapter.

IIRC (Ferguson et al., 2020) contains questions where an initial paragraph is given and answers depend upon this plus additional paragraphs that must be retrieved (1 ≤ n ≤ 4+). Each sample is provided with links to all supporting documents, and prior work leverages these to restrict the number of documents to be retrieved from. We instead use our Iterator to augment samples from the full Wikipedia corpus using the concatenation of question and initial paragraph as the query, without reference to the given links (IIRCR). We also report comparison against an oracle context (IIRCG) that we construct from the initial paragraph concatenated with the linked supporting documents. Answers may be numbers, binary, text spans or labeled unanswerable. For IIRCG unanswerable samples, we construct contexts using the initial paragraph fragment plus 1-2 random distractor paragraphs. We report all results against the test split.

ARC-DA (Bhakthavatsalam et al., 2021) is a question-only subset of ARC (Clark et al., 2018) where questions have been re-worded to make sense in an open domain context.
The Worldtree database (Xie et al., 2020) provides explanatory fact sets for ARC samples which average six facts per sample. The original multichoice versions of ARC are part of our training regime, hence compositionality is doubtful and samples are only partially unseen in the sense that the question format is different (and we use the test split). Nonetheless we report results in the interests of exploring diversity. We experiment with Iterator-augmented (ARCDAR) versions as well as with a gold context that we construct from Worldtree (ARCDAG) by concatenating the individual fact strings.

Musique (Trivedi et al., 2022a) is an n-hop dataset (n ≤ 4) constructed by combining single-hop questions from existing datasets including SQuAD (Rajpurkar et al., 2016) which is also part of our training regime. Moreover we utilise the training split of Musique in both our retriever and Reasoning Model training. However the provided development split has been constructed such that for all samples no single hop question, answer, or associated paragraph is common to the corresponding element of any training sample. Therefore we construct a new development set from the training set and experiment with the original Musique development split as “partially seen”, this time where the form of questions is “seen” but the exact questions are not. Prior work generally uses specialised retrieval for Musique where selection is from the set of gold and distractor paragraphs provided for each sample. We experiment with our retrieval (MusiqueR), and with a gold context constructed from the concatenation of the supplied gold paragraphs (MusiqueG).

In Chapter 4 we also make use of CommonsenseQA and DROP, and additionally consider the following datasets. We use the publicly available development split for each:

DROP-CS (Gardner et al., 2020) contains perturbed versions of DROP Test split samples e.g. by making a minor change to the context such that the label is changed.

ROPES (Lin et al., 2019) is a RC dataset that requires multi-step reasoning over a situation, often involving qualitative relations such as “higher” or “lower”. Questions are human-authored based on passages from Wikipedia and science textbooks.

NewsQA (Trischler et al., 2017) is a RC dataset of human-authored questions about CNN articles.

PIQA (Bisk et al., 2020) is a two-option MC dataset covering physical commonsense questions. Samples are created by human annotators from prompts sourced from instructibles.com.

QASC (Khot et al., 2020) is an eight-option MC dataset covering human-authored science questions that require two facts to answer. Facts are sourced from a corpus derived from open web pages (Clark et al., 2016).

3 Related Research

3.1 Memorisation in Language Models

As in our case, prior work on studying the effects of memorisation on model performance in the NLP domain has generally focused on identifying subsets of evaluation data that are either unlikely or likely to have been memorised from training data. Studies have then considered the performance of a subset in conjunction with the nature of the input samples. Lewis et al. (2021) consider open-domain single-hop factual questions.
By identifying test questions with answers matching training questions and then manually identifying those evaluation samples where the question is or isn’t a paraphrase of a training question, they show that smaller Language Models (such as the BART model (Lewis et al., 2020a) we also use) exhibit low performance on samples that don’t have a match in the training set. Our Chapter 4 can be considered as an extension of this work in the area of RC questions that require reasoning over a context to answer. We show that in contrast to their findings on factual questions, a BART model is capable of improved performance for RC samples without a memorisable match in the training set.

Elangovan et al. (2021) consider train-test overlap on different NLP tasks to ours. To evaluate similarity they utilise cosine similarity between sparse bag-of-words vectors constructed for each test and train sample. Similar to our study, a recent work, Kambhatla et al. (2023), considers cosine similarity over sentence embedding vectors as the similarity measure, although they only consider the input tokens whereas we consider both input and output. Additionally this study differs from our purpose in that it is focused on identifying dataset contamination between test and train splits within the same dataset, and in other methodological aspects such as controlling for the effects of pretraining as discussed further in Chapter 4.
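To make the embedding-based overlap detection concrete, the sketch below scores each evaluation sample by the cosine similarity of its nearest training sample, using an off-the-shelf sentence encoder as a stand-in. The encoder choice and function names are illustrative assumptions; the thesis's own similarity computation (Section 4.2.3) considers both input and output tokens and may differ in detail.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def max_train_similarity(eval_texts, train_texts,
                         model_name="all-MiniLM-L6-v2"):
    """For each evaluation sample, return the cosine similarity of (and the
    index of) its most similar training sample. High scorers are candidate
    'memorisable' samples; low scorers are candidate 'unmemorisable' ones."""
    model = SentenceTransformer(model_name)
    e = model.encode(eval_texts, normalize_embeddings=True)
    t = model.encode(train_texts, normalize_embeddings=True)
    sims = e @ t.T  # cosine similarity, since embeddings are unit-normalised
    return sims.max(axis=1), sims.argmax(axis=1)
```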
The study goes on to remove evaluation samples that are likely to 23 have been memorized i.e. those where the input terms and the answer co- occur in a pretraining document. It was then found that the performance of the remaining unmemorisable samples continues to correlate with the fre- quency of the input terms in the pretraining corpus, suggesting that the performance improvement is not solely due to memorisation. As a reminder that spurious memorisation can lead to lower results in downstream evaluation as well as inflating results, our own results in Chap- ter 5 show that removing near-duplicate Musique (Trivedi et al., 2022a) training samples from a BART model training regime resulted in improved downstream performance where evaluation samples had input token overlap with the duplicated training samples but had different labels. Outside of the NLP domain, a number of studies have challenged the his- torical assumption that an ability to memorise the training set and an ability to generalise are mutually exclusive (Zhang et al., 2021). In considering over- parameterised models (those with more trainable parameters than samples they are trained on), Zhang et al. (2017) found that such models are capable of perfectly memorising a training set with randomly assigned labels, with- out learning any ability to generalise. Models trained on the same training data except with correct labels assigned are of course able to generalise suc- cessfully to test samples. By varying the degree of randomness in assigning labels to training samples between these two extremes the authors found a correlation between generalisation error and the amount of label noise, show- ing that overparameterised neural networks are capable of both capturing the extant signal in the data, while at the same time memorising the noisy part. Feldman (2019) proposes that memorisation in long-tail distributions (i.e. the common case where classes consisting of small numbers of samples collectively comprise a significant fraction of the distribution) is actually necessary in minimising generalisation error, and empirically demonstrates this in Feldman and Zhang (2020). The focus of our study differs from these in that we are primarily interested in evaluating whether a model in our setting can exhibit an ability to generalise in the absence of an opportunity to memorise. With a more distant connection with our work, Hacohen et al. (2020) show that various neural models learn similar classification functions at par- 24 ticular stages of training. Exploring this idea in the NLP domain, Choshen et al. (2022) study the order that linguistic phenomena are learned over the course of training and find that neural Language Models with differing ar- chitecture and training data tend to acquire particular linguistic abilities in a similar order. Future work might consider the relationship, if any, between such order of learning and the acquisition of skills involving memorisation versus those relating to more abstract RC skills such as logical operations, multi-step reasoning and so forth. 3.2 Retrieval from Textual Corpora As discussed in Section 2.2, Chen et al. (2017) first used sparse retrieval, namely TF-IDF (Spärck Jones, 1972), against Wikipedia in the context of open domain question-answering. In dense retrieval, query and corpus doc- uments are embedded into the same vector space and retrieval is typically performed through maximum inner product search (MIPS) over the result- ing dense vectors. Several such approaches e.g. 
Karpukhin et al. (2020), focus on retrieving the single most relevant document sufficient for answering a single-hop query. Lewis et al. (2020b) combine single-hop dense retrieval with a generative Transformer using end-to-end backpropagation, a combination that they term retrieval-augmented generation (RAG). Xiong et al. (2021) introduce multi-hop dense retrieval (MDR) to retrieve the multiple documents necessary to answer a complex multi-hop question. They focus on the two-hop situation where a maximum of two documents are sufficient to answer a question. In this situation training samples are input to a shared question and document encoder as: (1) Input ⟨qi⟩ with an objective of minimising distance to the vector representing di,0 (hereafter denoted ⟨qi⟩ → di,0), where di,t is the t-th supporting document of qi to retrieve. (2) Input ⟨qi, di,0⟩ → di,1. We extend the MDR training regime and loss computation to enable retrieval of an arbitrary maximum number of documents, i.e. ⟨qi, di,0, ..., di,t⟩ → di,t+1.

Wang et al. (2018) introduced the concept of a Reranker that refines retrieved results. IRRR (Qi et al., 2021) combined sparse retrieval and reranking into an iterative single model that can also answer multi-hop questions that have extractive answers. Baleen (Khattab et al., 2021) is also iterative but uses a dense retrieval system based upon encoding a dense vector per input token. Their two-stage condenser system comprises a Reranker that scores the relevance of each sentence for each retrieved document, followed by an additional module that scores the relevance of each sentence from the top-scoring sentences selected over multiple documents from the first stage. It then generates a compressed context of relevant sentences, to be utilised by a separate QA Model. We take inspiration from Baleen's two-stage approach but, other than using our own retriever, we differ most notably in that we introduce an Evidence Set Score into the second stage, with the goal of quantifying the sufficiency of the entire set of selected sentences for answering a query, in addition to scoring the relevance of individual sentences.

Sparse retrieval offers the advantage that it can perform well in zero-shot settings where lexical overlap is sufficient to identify relevant documents. Several studies evaluate methods that improve the performance of dense retrieval models in zero-shot settings. A number of these use diverse unsupervised techniques involving creating queries and positive passages from unlabelled text, e.g. Lee et al. (2019); Ram et al. (2022); Izacard et al. (2022). In a different approach, Chen et al. (2021) trained a dense retriever to imitate a lexical-based model with good results. Thakur et al. (2021) created the BEIR benchmark to further the study of retrieval in the zero-shot setting, and some recent papers report results against this benchmark. We are unable to do so as some of our retriever training datasets are BEIR components; however, we note as a future direction that our retriever training might benefit further from applying techniques that have been effective on BEIR.

3.3 Knowledge Augmentation from LLMs

Bosselut et al. (2019) proposed COMET, a GPT-based Model (Radford et al., 2018) trained on triples from the ATOMIC (Sap et al., 2019a) and ConceptNet (Speer et al., 2017) knowledge graphs such that it would generate potentially novel triple completions. Shwartz et al.
(2020) compare augmentation methods using COMET, ConceptNet and their self-talk method, where the question-answering Language Model is self-queried to produce additional information pertinent to answering the question. Liu et al. (2022) generate knowledge statements from GPT-3 (Brown et al., 2020) conditioned on the question and use the augmented samples in separate smaller Reasoning Models. Yu et al. (2023) also generate contextual information from a LLM, in this case by clustering supporting documents from dataset training splits and creating prompt exemplars from each cluster separately so that the LLM may generate diverse knowledge statements. Following the introduction of chain-of-thought (COT) prompting (Wei et al., 2022), a number of recent papers (Magister et al., 2023; Li et al., 2023; Hsieh et al., 2023; Wu et al., 2023; Shridhar et al., 2023) use this prompting style to distill training sets of rationale-augmented samples from internet-accessible LLMs such as GPT-3 or Palm (Chowdhery et al., 2022). These are then typically used to train much smaller models in task-specific finetuned settings, sometimes such that both the label and the rationale are output, to avoid the issue of having to generate a rationale from the LLM at test time. Our usage of LLM-generated rationales is rather different from these in that we assume a locally-accessible LLM (with lower resource requirements) at test time and do not incorporate LLM-generated rationales in our Reasoning Model training. We do, however, incorporate negative rationales generated by a LLM in our RR Model training regime, as discussed in Section 6.2.3.

3.4 Multiple Knowledge Sources

Retrieval has been successfully used as a method for querying knowledge sources other than textual corpora. For example, this approach has been used to obtain information from knowledge graphs by embedding the constituent triples as the document vectors, in addition to, or instead of, standard text. Yu et al. (2022) augment commonsense questions with retrieved information from a commonsense-focused corpus consisting of information sourced from knowledge graphs, commonsense datasets and other textual sources. Perhaps most similar in spirit to our work, Pan et al. (2023) consider knowledge graphs, Wikipedia data, a dictionary, and others, as separate knowledge sources, each queried using dense retrieval. In contrast to our approach of considering various methods for combining information, they train a model to select the single most relevant source for augmenting each input sample. This is analogous to our "Max Score" method described in Section 6.3.2. Like us, they train a smaller Reasoning Model with disparate training and evaluation datasets, although unfortunately their evaluation datasets differ from ours. Also in a similar direction to our "Max Score" method, Si et al. (2023) route a query to four expert LLMs and select the single most likely answer using a smaller classifier trained for that purpose. Sun et al. (2018) combine information from a textual corpus and a knowledge graph into a question-specific subgraph from which an answer is extracted. In a finetuned setting, Xu et al. (2022) also consider multiple knowledge sources. They use an entity linking method to query ConceptNet, and sparse retrieval over a dictionary and a set of commonsense datasets. The results are always concatenated, which is similar to our Naïve Concatenation method (Section 6.3.2).
3.5 Falsehood Detection

Our RR Model, trained to score for truthfulness and relevance over instances from disparate knowledge sources, can be seen as a novel extension to a Reranking approach. However, it also shares an objective with methods aiming to detect falsehood in LLM generations. Generally these methods fall into three categories. The first are methods based on the intuition that higher token log probabilities correspond to better text along a particular dimension such as truthfulness (Yuan et al., 2021; Fu et al., 2023). The second are factuality detection methods that evaluate LLM-generated assertions as true if they can be supported by an external reference (e.g. fact retrieval from a reliable corpus); recent studies here include Min et al. (2023); Chern et al. (2023). A third category, broadly called self-checking, involves prompting a LLM such as ChatGPT or GPT-4 (OpenAI, 2023) to recognise its own errors (Chern et al., 2023), or to refine its own outputs (Chen et al., 2023; Madaan et al., 2023), without recourse to external tools. In this category but with a different approach, Manakul et al. (2023) score the consistency between a reference statement and several stochastically sampled versions of it, which may be likely to diverge more if the reference is a hallucination.

3.6 Multitask Pretraining

Raffel et al. (2020) showed that when trained using self-supervised pretraining followed by supervised multitask training, a single sequence-to-sequence Transformer model without task-specific architectural modification was capable of performing well on all the diverse tasks it had been trained upon. Since then, a number of studies have shown the efficacy of supervised multitask training in facilitating generalisation in question-answering tasks (Khashabi et al., 2020b; Sanh et al., 2021; Wei et al., 2021; Khashabi et al., 2022). Different to us, but orthogonal to our approach, many studies, e.g. Sanh et al. (2021); Wei et al. (2021); Ouyang et al. (2022), make use of instruction-based tuning to facilitate generalisation. In order to focus on evaluation of differing training data regimes, we make use of a similar fixed prompting format to Khashabi et al. (2020b, 2022) and utilise many of their converted QA datasets. Perhaps most similar to our work, Wang et al. (2022b) combine multitask training over multi-choice datasets with external retrieval, which they use to augment the training set. However, their implementation diverges from ours in that they use sparse retrieval and then a fusion-based method similar to Izacard and Grave (2021), wherein multiple retrieved document vectors are used with gated cross-attention to focus on salient information. Their evaluation datasets are disjoint with ours and don't cover broader reasoning skills like numeracy, so comparison must be left to future work.

Longpre et al. (2021) created a synthetic dataset by substituting entity names in existing dataset contexts and updating the corresponding labels to produce new unfactual but logically consistent samples. They show that training on the new dataset plus the original causes their model to rely more on reasoning over the context, and less on knowledge encoded in parameters. Recently, Li et al. (2022) extended this approach to a fine-tuning framework for LLMs wherein the model is trained on relevant, irrelevant, and counterfactual but logically consistent contexts.
Their approach differs from ours in that our RATD datasets are constructed so as to encourage reasoning to a plausible conclusion, whereas theirs are constructed with logical entailment in mind, i.e. to ignore contexts where deductively valid reasoning is not possible in favour of knowledge stored in the LLM parameters.

3.7 Numerical Literacy in Language Models

Yoran et al. (2022), Pi et al. (2022) and Geva et al. (2020) all develop numeracy-focused pretraining datasets that we adapt and utilise. Generally these approaches have concentrated on finetuned settings, and to our knowledge we are the first to study their performance against a diversity of unseen evaluation datasets. Recently, Trivedi et al. (2022b) released numeracy-focused pre-training datasets constructed from "Question Decomposition Meaning Representation" (QDMR) representations of several existing datasets from Wolfson et al. (2020). These are structured representations of reasoning paths leading from questions to answers. They were released too late for us to include in our pretraining regime, but we report comparisons in Table 5.2.

4 Do Smaller Language Models Answer Contextualised Questions Through Memorisation Or Generalisation?

4.1 Introduction

Memorisation has been described as the learning of a direct mapping between input features and particular outputs (Chatterjee, 2018; Elangovan et al., 2021; Schwarzschild et al., 2021; Lewis et al., 2021), in contrast with generalisation (Elangovan et al., 2021), or the application of a method for deriving the output (Schwarzschild et al., 2021). A number of studies have considered the impacts of memorisation from the perspective of the capacity of particular models to memorise pretraining data, e.g. Carlini et al. (2023); Chowdhery et al. (2022), as well as through the lens of downstream evaluation dataset contamination, e.g. Brown et al. (2020); Sanh et al. (2021); Wei et al. (2021); Du et al. (2022); Chowdhery et al. (2022). A general finding has been that memorisation capacity scales with model parameter count, which implies that smaller models would suffer less from this problem. However, observations from Lewis et al. (2021), as well as from our own work in Chapter 5, on the BART model (Lewis et al., 2020a) suggest that undetected memorisation could affect smaller Language Models sufficiently so as to be an issue in interpreting results.

Figure 4.1: Visualisation of key aspects of our methods. We consider two models, one trained on a set of question-answering datasets (UQA) and the other trained on UQA plus two additional datasets collectively referred to as TDND (UQA+TDND). TDND samples are constructed so as to improve performance on some of our evaluation datasets and to be irrelevant for others. Our objective is to understand whether any improvement is attributable to memorisation or to TDND samples imparting an improved ability to generalise. We select evaluation samples that are very unlikely to have become memorisable from our training datasets based on a semantic similarity score (Section 4.2.3), and compare performance between the two models. Our method enables evaluating performance for each model on the same subset of unmemorisable samples, and it does not require access to the pretraining corpus.

We consider the impact of memorisation on evaluation samples that preferably should involve reasoning from a question, over a provided context, to an answer. Where the context is of a free-form nature we describe
these as requiring reading comprehension (RC samples), and we denote samples where the context comprises multi-choice options as MC samples. We characterise an evaluation sample as memorisable if it is similar in terms of input and output to one or more training samples, e.g. an evaluation sample consisting of the input "What is a tool for indicating air pressure? (A) seismograph (B) barometer ..." and label "barometer" is memorisable if a sample with input "Which weather instrument measures air pressure? (A) barometer (B) rain gauge ..." and label "barometer" exists in the training data. To identify memorisable evaluation samples we propose a method of scoring similarity between each evaluation and each training sample using semantic similarity as encoded in sentence embedding vectors produced by a Sentence Transformers model (Reimers and Gurevych, 2019). This is discussed in more detail in Section 4.2.3.

The UnifiedQA project (UQA) (Khashabi et al., 2020b) demonstrated that it is possible to attain good performance on unseen evaluation datasets (those that have not been involved in training) after further training of a pretrained Language Model on a variety of question-answering datasets in a multitask fashion. One of the unseen RC datasets that Khashabi et al. (2020b) use for evaluation is DROP (Dua et al., 2019). Performance on DROP is rather poor in the UQA setting. This dataset requires simple numerical literacy in order to correctly answer a question. A separate study, Geva et al. (2020), demonstrated significant performance improvement on DROP by pretraining on two synthetic datasets (collectively referred to here as TDND) that they designed to impart simple numerical reasoning strategies. We add TDND to the UQA training mixture (denoted UQA+TDND) and analyse the impact on subsets of DROP (Dua et al., 2019), ROPES (Lin et al., 2019), and several other unseen RC and MC datasets that are unlikely to be memorisable, even after the addition of the TDND datasets.

In summary, the major contributions of this chapter are:

1. We propose a method of identifying evaluation-train overlap based on semantic similarity of input and output tokens.

2. We propose a method to intervene with additional training datasets versus a baseline, both to mitigate effects of pretraining on results, and to avoid the need to compare disparate populations of evaluation subsets.

3. We demonstrate the effectiveness of our methods in identifying both memorisable and unmemorisable samples.

4. We show that performance on unmemorisable subsets of DROP and ROPES is significantly improved by the addition of TDND training datasets.

4.2 Method

In the context of language models, Carlini et al. (2023) characterise memorisation as the generation of an exact continuation of a text sequence, given the first part of the sequence as input. Several other studies (Section 3.1) test for potential memorisation (evaluation dataset contamination) as the presence of n-gram(s) in training samples that co-occur in evaluation samples (where n ≥ 8). We take a view of potential memorisation as occurring not only where there is overlap in a contiguous sequence of tokens, but also where a discontinuous subset of input tokens could directly produce a particular output. For example, learning one or more training samples similar to "Who had more field goals Vikings or Colts? ..." with label "Colts" could cause a model with evaluation input "Who was winning at the end of the first quarter? ...
Colts leading 3-0..." to predict "Colts" without any semantic understanding of the question or the context. We develop an alternative method of evaluating evaluation-train similarity using cosine similarity of evaluation and train sample sentence embedding vectors. We find that this approach surfaces test-train overlaps where the tokens discontinuously (or contiguously) overlap (see Section 4.2.3).

In some prior work it has been necessary to compare disparate populations of evaluation samples in order to draw conclusions. For example, Chowdhery et al. (2022) note that in comparing the full version of an evaluation dataset to a filtered version consisting only of unmemorisable samples they are comparing different subsets. We address this issue by identifying evaluation samples that will not be rendered memorisable by the addition ("intervention") of new training datasets, and then using this same subset to evaluate the performance difference before and after our intervention. This approach has the added benefit that we do not need access to the pretraining corpus. A visual overview of our approach is provided in Figure 4.1.

Below we discuss how the training regimes for our "before" model (UQA) and "after" model (UQA+TDND) are constructed, our evaluation datasets, and our methods for identifying evaluation samples that are very unlikely to have become memorisable through the intervention of the additional training datasets.

4.2.1 UQA and UQA+TDND Model Training

Our main experiments evaluate the performance difference between two models: UQA and UQA+TDND. Both are trained using the same hyperparameters (Appendix A.1), the only differences being the respective sets of datasets used to train them. We experimented with differing combinations of hyperparameters on both training mixtures until we found a set that worked well over both. Training for both models is performed in a multi-task manner, uniformly sampling over the training datasets. The best model from each run is selected as that with the highest mean performance over all development sets over the course of 150,000 train steps, which allows for some flexibility in tuning per training mixture, as shown in Table 4.1.

Run | UQA Step | UQA Dev Perf. | UQA+TDND Step | UQA+TDND Dev Perf.
1   | 140,000  | 65.80%        | 150,000       | 67.45%
2   | 110,000  | 66.62%        | 140,000       | 68.76%
3   | 140,000  | 66.13%        | 140,000       | 68.74%

Table 4.1: Best model selection for three runs each of UQA and UQA+TDND. Step is the training step at which the best model is selected. Dev Perf. is the mean accuracy over constituent development sets. The UQA+TDND best model has usually, but not always, been trained for more steps than the UQA best model.

We make use of a similar fixed prompting format to Khashabi et al. (2020b, 2022) (Appendix B), and take as our UQA baseline the same set of training datasets that they use. Specifically, UQA consists of datasets of RC type: SQUAD 1.1 (Rajpurkar et al., 2016), SQUAD 2 (Rajpurkar et al., 2018) and NarrativeQA (Kočiský et al., 2018), along with MC datasets RACE (Lai et al., 2017), ARC (Clark et al., 2018), Regents (Clark et al., 2016) ("Sci-Elem" and "Sci-Mid" in this chapter), OpenbookQA (Mihaylov et al., 2018) and MCTest (Richardson et al., 2013), and one binary-labelled dataset, BoolQ (Clark et al., 2019a).

As noted, Geva et al. (2020) developed two synthetic datasets designed to impart numerical reasoning ability of the sort needed to improve model performance on DROP (Dua et al., 2019).
Of these, "Textual Data" (TD) contains RC samples with similar vocabulary and involving similar reasoning skills to DROP (e.g. "Who had the lowest number of field goal yards in total? ... Dolphins nailed 26 field goal yards and Vikings nailed 15 field goal yards...", label "Vikings"). The second dataset, "Numerical Data" (ND), contains a large number of samples with inputs consisting of symbolic expressions (e.g. "argmin(undergrass 11952 bussu 3315)?", label "bussu"). Geva et al. (2020) show that pretraining on TD and ND followed by finetuning on DROP leads to substantially higher performance. In our case, we convert the datasets (collectively TDND) to our format; specifically, ND is converted to our open domain format and TD to RC format, as detailed in Appendix B. These are added to the UQA training mixture to train our UQA+TDND model. Further detail on the datasets used in the training regime for both models may be found in Appendix E.1.

4.2.2 Evaluation Dataset Preprocessing

We selected evaluation datasets as described in Section 2.3, namely DROP, DROP-CS, ROPES, NewsQA, PIQA, CSQA and QASC, in all cases using the publicly available development split. We discovered that the DROP development split that we use here for evaluation contained over 800 exact duplicates. Because we were unsure whether duplicate samples were the result of some bias in dataset creation that could manifest itself when we select smaller "unmemorisable" subsets, we de-duplicated all our evaluation splits, and note that DROP-CS also contained a very small number of duplicates. An example for each dataset is shown in Table 4.4.

When selecting "unmemorisable" subsets (see Section 4.2.3 below) we observed that samples with numeric answers were much more likely to be filtered out, since many such answers tend to be commonly occurring small numbers (1, 2, 5...). To combat this bias we remove all samples with numeric answers from our DROP and DROP-CS evaluation. The resulting sample counts are in Table 4.2. Elaboration as to how the "Least Similar" and "Unmemorisable" subsets are derived follows in the next section.

Eval Dataset | All  | Least Similar | Unmemorisable
DROP         | 3277 | 867           | 652
DROP-CS      | 478  | 154           | 110
ROPES        | 1688 | 307           | 197
NewsQA       | 4341 | 1204          | 759
PIQA         | 1838 | 1354          | 588
CSQA         | 1221 | 233           | 129
QASC         | 926  | 139           | 99

Table 4.2: Evaluation dataset sample counts. "All" is the total sample count after de-duplication and removal of samples with numeric answers. "Least Similar" is the subset of these with a Similarity Score of each evaluation sample to its most similar training sample under 60.0. "Unmemorisable" samples are those Least Similar samples which also have no answer term overlap with the most similar training sample.

4.2.3 Similarity Computation Method

To evaluate similarity between evaluation and training samples, we use sentence embedding vectors produced by the "sentence-transformers/stsb-roberta-large" model (Reimers and Gurevych, 2019) from the Huggingface library (Wolf et al., 2020). We quantify the "memorisability" of each evaluation sample from each training sample by computing a Similarity Score as:

    sim(e_i, t_j) = 100 × [csim(e_i^q, t_j^q) + csim(e_i^a, t_j^a)] / 2    (4.1)

Here e_i and t_j are the embeddings for the i-th evaluation and j-th training samples, q and a refer to the question (including context) and answer components of each, and csim is the cosine similarity function. We consider both q and a equally as we are primarily interested in identifying evaluation-train pairs where a memorised answer could inflate results. Alternative formulations that consider q only would also identify spuriously memorisable samples that could deflate results, but this does not suit our purpose here.
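For concreteness, Equation 4.1 can be sketched with the Sentence Transformers library as below. This is an illustrative sketch rather than our exact implementation; the function and variable names are ours, and we assume each sample is represented as a (question+context, answer) string pair.

```python
from sentence_transformers import SentenceTransformer, util

# Sketch of the Similarity Score in Equation 4.1 (illustrative only).
# Each sample is assumed to be a (question+context, answer) string pair.
model = SentenceTransformer("sentence-transformers/stsb-roberta-large")

def similarity_scores(eval_samples, train_samples):
    """Return a tensor S with S[i, j] = sim(e_i, t_j) as per Equation 4.1."""
    eq = model.encode([q for q, _ in eval_samples], convert_to_tensor=True)
    ea = model.encode([a for _, a in eval_samples], convert_to_tensor=True)
    tq = model.encode([q for q, _ in train_samples], convert_to_tensor=True)
    ta = model.encode([a for _, a in train_samples], convert_to_tensor=True)
    # Mean of question-side and answer-side cosine similarity, scaled to 0-100.
    return (util.cos_sim(eq, tq) + util.cos_sim(ea, ta)) / 2 * 100
```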
Alternative for- mulations that consider q only would also identify spuriously memorisable samples that could deflate results but this does not suit our purpose here. We require a memorisability threshold T for Similarity Scores, below which sample pairs are sufficiently dissimilar as to be unmemorisable. The choice of a value for T involves a trade-off between confidence that no mem- orisable samples remain and diminishing sample counts. We identified a suitable value of T through an iterative process of evaluating the ten most similar sample pairs for each evaluation dataset at a possible value for T and increasing this value at each iteration until we found a value at which no memorisable sample pairs were identified but remaining sample counts are reasonable (Table 4.2). This value was identified as T = 60. We cross- checked this by searching for the lowest Similarity Score for any sample pair 37 where we considered the evaluation sample to be memorisable. This value was found to be substantially higher than 60, further increasing our confi- dence that evaluation subsets identifed at T = 60 were unlikely to contain memorisable samples (the most similar pair for each subset at T = 60 is shown in Appendix H.1). We call the resulting subset of samples for each evaluation dataset “Least Similar”. Acknowledging the possibility that some number of Least Similar sam- ples could still be memorisable we then took a further subset of Least Similar samples where the answer has no word overlap with the most similar training sample. For brevity we call this further subset “Unmemorisable” as short- hand for “unlikely to be memorisable from our training datasets, including TDND”. We note that we are unable to eliminate evaluation samples that have answer overlap with any training sample as this would eliminate too many samples. It is also worth clarifying that our definition of “Unmemorisable” does not preclude a given evaluation sample being memorisable from pretraining data. Since we are comparing performance before and after the intervention with TDND datasets it is only strictly necessary that our Unmemorisable samples not be memorisable from TDND although in practice we ensure they are not memorisable from any of our UQA+TDND datasets. 4.2.3.1 Similarity Computation Evaluation - In-Domain Datasets We initially evaluate the calibration of our method by considering similarity between the train and development/test splits of our training datasets. As Table 4.3 shows, identical or near identical sample pairs occur for most training datasets and these tend to score close to 100. 4.2.3.2 Similarity Computation Evaluation - Evaluation Datasets Turning to our evaluation datasets, we first consider the most similar over- all eval-train pair for each evaluation dataset (i.e. the unfiltered versions without removal for Least Similar or Unmemorisable subsets). Generally we 38 Dataset Eval Sample [Split] Most Similar Train Sample Sci-Elem Sci-Mid ARC- Easy ARC- Hard BoolQ MCTest OBQA RACE SQuAD Green plants get the energy they need to make food from? sunlight [Test] Iron oxides such as rust form when iron metal reacts with oxygen in the air. What are the chemical symbols for the two ele- ments found in iron oxide? Fe and O [Test] Which of the following elements is best able to combine with itself and hydrogen [H] to form large molecules? carbon [C] [Test] Students watched a bird fly to and from a large bush every few minutes. The stu- dents told their teacher "The bird has a nest in that bush." 
4.2.3.1 Similarity Computation Evaluation - In-Domain Datasets

We initially evaluate the calibration of our method by considering similarity between the train and development/test splits of our training datasets. As Table 4.3 shows, identical or near-identical sample pairs occur for most training datasets, and these tend to score close to 100.

Dataset | Eval Sample [Split] | Most Similar Train Sample (Similarity Score)

Sci-Elem | "Green plants get the energy they need to make food from? sunlight" [Test] | Identical except for order of multi-choice options. (99.48)
Sci-Mid | "Iron oxides such as rust form when iron metal reacts with oxygen in the air. What are the chemical symbols for the two elements found in iron oxide? Fe and O" [Test] | Identical. (100.00)
ARC-Easy | "Which of the following elements is best able to combine with itself and hydrogen [H] to form large molecules? carbon [C]" [Test] | Identical. (100.00)
ARC-Hard | "Students watched a bird fly to and from a large bush every few minutes. The students told their teacher "The bird has a nest in that bush." This statement is an example of? an inference made from observations" [Test] | Identical except that one multi-choice option is different. (99.91)
BoolQ | "Has an mlb game ever ended in a tie? . . . The longest game by innings in Major League Baseball was a 1–1 tie. . . Yes" [Dev] | Identical. (100.00)
MCTest | "What did Hannah and Mary chase at the park? . . . Hannah and Mary ran around chasing butterflies for a little time. . . butterflies" [Dev] | "What did my granddaughter try to catch? ... granddaughter Tina ... catch ... butterfly... butterfly" (87.53)
OBQA | "Oak tree seeds are planted and a sidewalk is paved right next to that spot until eventually the tree is tall and the roots must extend past the sidewalk which means? parts may break the concrete" [Test] | Identical except for order of multi-choice options. (99.95)
RACE | "The last sentence in the passage shows that _ ? . . . Little Tommy . . . said "Well on the first day of school when I saw that man nailed to the plus sign I knew they weren't joking." Tommy was afraid of being nailed" [Test] | Identical. (99.99)
SQuAD | "Under Elie Metchnikoff's cellular theory what cells were responsible for immune response? . . . According to the cellular theory of immunity . . . by Elie Metchnikoff it was . . . phagocytes. . . phagocytes" [Dev] | Question is a paraphrase ("Cellular immunology expressed the theory that what cells caused immune responses?"), context and answer are identical. (99.75)

Table 4.3: In-domain test-train overlap. Most similar test-train pairs for each constituent training dataset as measured by Similarity Score (in brackets). The evaluation split used is in square brackets. For readability, multi-choice options are removed, remaining context is truncated and answers are in italics. The same pair was identified in both SQuAD 1.1 and SQuAD 2, hence it is shown once. Train samples that are identical or paraphrases to evaluation samples from the same dataset are highlighted in red.

4.2.3.2 Similarity Computation Evaluation - Evaluation Datasets

Turning to our evaluation datasets, we first consider the most similar overall eval-train pair for each evaluation dataset (i.e. the unfiltered versions without removal for Least Similar or Unmemorisable subsets). Generally we find the incidence of identical or near-identical pairs is much lower than is the case for the above in-domain evaluation; however, memorisable evaluation samples certainly exist, as shown in Table 4.4. In contrast to the above in-domain evaluation, where contiguous overlaps of tokens in similar pairs are common, it can be seen that memorisable samples in Table 4.4 generally would not have been detected without a method that can pick up discontinuous token overlaps.
Eval Dataset | Eval Sample | Most Similar Train Sample (Similarity Score)

DROP | "Which household was second most common? . . . there were 19306 households . . . 39.9% were non-families. . . non-families" | SQuAD 1.1: "What is the second highest demographic for households? . . . There were 230233 households . . . 37.4% were non-families. . . non-families" (94.40)
DROP-CS | "Which team went scoreless in the third quarter? . . . Buffalo . . . connected . . . 8-yard TD pass for the only score of the period. . . Vikings" | TD: "Who had the lowest number of field goal yards in total? . . . Dolphins nailed 26 field goal yards and Vikings nailed 15 field goal yards. . . Vikings" (89.96)
ROPES | "Will Seattle have more or less sulfur oxides in the air than St. Louis? . . . Seattle has installed a new wind farm and zero emission solar farm to generate power while St. Louis recently installed a coal fired power plant . . . less" | SQuAD 1.1: "Were sulfonamides more or less toxic than arsphenamine? . . . Compared to arsphenamine the sulfonamides . . . were far less toxic . . . less" (81.13)
NewsQA | "What was the score in the Werder Bremen Athletic Bilbao game? . . . Werder Bremen beat Athletic Bilbao 3-0 . . . 3-0" | SQuAD 2: "What was the winning score for the game with Real Madrid at Bernabeu stadium? . . . The pinnacle of the . . . season . . . the . . . Bernabéu Stadium in a 3–0 win over Real Madrid. . . 3-0" (88.06)
PIQA | "Trees? provide homes for animals" | RACE: "The story is about _ ? . . . Some animals live in holes in trees . . . the homes of some animals" (77.04)
CSQA | "The water in clouds turn in to what when it gets cold? snowflake" | ARC-Hard: "Which form of water is most likely to appear when the temperature is below freezing? snow" (87.27)
QASC | "What is a tool for indicating air pressure? barometer" | Sci-Elem: "Which weather instrument measures air pressure? barometer" (95.14)

Table 4.4: Overlap between unseen evaluation and train datasets. Most similar overall sample pair for each evaluation dataset as measured by Similarity Score (in brackets). For readability, multi-choice options are removed, remaining context is truncated and answers are in italics. Red denotes train samples that could potentially make the corresponding evaluation sample memorisable through contiguous or discontiguous sets of input tokens.

For brevity, the supporting table of Least Similar evaluation-train pairs is in Appendix H.1, having already noted that we cannot identify any memorisable evaluation samples in that category. Similarly, Appendix H.2 shows the most similar evaluation-train pair for Unmemorisable evaluation samples. Unsurprisingly, we cannot identify any memorisable evaluation samples here either.

4.3 Main Experiment

All evaluation datasets of RC format are evaluated using the F1 score as formulated by Rajpurkar et al. (2016). The MC datasets are evaluated by taking the option with the highest overlap with the predicted answer and then scoring as exact match; a sketch follows below. The UQA and UQA+TDND models are based on BART (Lewis et al., 2020a). All models use the Huggingface (Wolf et al., 2020) implementations. We train three models for each of UQA and UQA+TDND respectively, using different random seeds, and take the mean over each set as our main measure. We ascribe statistical significance to a performance change between UQA and UQA+TDND if it is at the 95% confidence level (confidence intervals and standard deviations are in Appendix G.1).
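As an illustration of the multi-choice scoring just described, the sketch below maps a generated answer onto the option with the highest token overlap and scores the result as exact match. Using token-level F1 as the overlap measure is our assumption; the text specifies only "highest overlap".

```python
def score_multichoice(prediction, options, gold):
    """Select the option with highest token overlap with the prediction,
    then score it as exact match against the gold option."""
    def token_f1(a, b):
        a_toks, b_toks = a.lower().split(), b.lower().split()
        common = len(set(a_toks) & set(b_toks))  # set overlap: a sketch, not SQuAD F1
        if common == 0:
            return 0.0
        precision, recall = common / len(a_toks), common / len(b_toks)
        return 2 * precision * recall / (precision + recall)
    chosen = max(options, key=lambda option: token_f1(prediction, option))
    return float(chosen == gold)
```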
4.3.1 Experimental Results and Discussion

Table 4.5 shows the effect of adding the TDND datasets to the training regime. Considering the unfiltered evaluation sets comprised of "All Samples", it is no surprise that DROP and DROP-CS show a large performance improvement (15.7% and 19.3% respectively), since the TDND datasets are specifically designed for that purpose. Moving to the Unmemorisable subsets, there is still a 9% performance improvement for DROP, showing that while there is some diminishment, a material performance improvement that is not attributable to memorisation remains. DROP-CS improvement is similar, but this result is not significant due to the small sample size. While our experiment cannot tell us what mechanism is responsible for this ability to generalise, the intuitive explanation is that the TDND datasets have, as intended, imparted relevant numerical reasoning strategies.

             |        | All Samples              | Least Similar            | Unmemorisable
Eval Dataset | Random | UQA  | +TDND | % Change  | UQA  | +TDND | % Change  | UQA  | +TDND | % Change
DROP         | -      | 40.2 | 46.5  | 15.7      | 41.0 | 43.9  | 7.1       | 41.7 | 45.5  | 9.0
DROP-CS      | -      | 32.0 | 38.2  | 19.3      | 36.3 | 41.8  | 15.3      | 38.5 | 42.2  | 9.6
ROPES        | -      | 41.2 | 51.9  | 26.1      | 46.5 | 55.3  | 18.9      | 41.9 | 52.6  | 25.7
NewsQA       | -      | 57.3 | 56.6  | -1.3      | 52.8 | 50.3  | -4.7      | 53.4 | 51.4  | -3.7
PIQA         | 50.0   | 63.5 | 62.3  | -1.9      | 62.2 | 61.7  | -0.8      | 60.3 | 60.4  | 0.1
CSQA         | 20.0   | 55.6 | 55.4  | -0.4      | 61.5 | 61.2  | -0.5      | 60.7 | 61.0  | 0.4
QASC         | 12.5   | 37.7 | 36.2  | -3.8      | 35.7 | 34.1  | -4.7      | 36.4 | 33.7  | -7.4

Table 4.5: Effect of intervention with TDND datasets on All, Least Similar, and Unmemorisable evaluation samples. Figures are the mean over three model runs trained with different random seeds. Statistically significant changes at the 95% confidence level are marked in bold in the original, i.e. the improvements for DROP and ROPES are significant in the Least Similar and Unmemorisable subsets; changes for the other datasets are not.

ROPES shows an even larger performance improvement than DROP over All Samples, which is largely retained for the Unmemorisable subset (26.1% → 25.7%). Noting that, like DROP, ROPES also requires multi-step reasoning over a context and often involves qualitative relations like "less" or "lower" (Lin et al., 2019), it is reasonable to say that benefits imparted by TDND samples are responsible for the improvement. For example, a typical TD sample might involve a judgement such as "Who had the lowest number of field goal yards in total? ... Dolphins nailed 26 field goal yards and Vikings nailed 15 field goal yards..."

4.3.2 Chapter Limitations

Since our similarity computation (Equation 4.1) considers both the question and the answer components, it is able to identify evaluation samples that contribute to inflated results via the model emitting memorised but correct answers. However, using the Equation 4.1 formulation, we cannot say what could be deflating results (e.g. NewsQA and QASC in Table 4.5). For example, it could be an effect of spurious memorisation, where an incorrect answer is emitted based on one or more superficially similar training samples, random perturbation, or it could equally be some other factor such as the incorrect application of some method learned as a result of the TDND intervention.

4.4 Conclusion

We have proposed a method of identifying evaluation-train overlap based on semantic similarity of input and output sequences, reinforced by the further elimination of evaluation samples with overlap in answer terms to the most similar training sample. We have shown that this method is able to identify evaluation samples that are memorisable through both contiguous and non-contiguous token overlap with similar training examples.

To avoid the pitfall of having to compare disparate populations of evaluation samples, as well as to eliminate any dependency on knowing the contents of the pretraining dataset, we have also proposed a method for determining whether or not a performance improvement is attributable to memorisation. This involves an intervention through the addition of training datasets that might be expected to improve performance on some evaluation datasets but not on others, and measurement of the resulting performance difference. We have shown that for contextualised questions there is significant performance improvement on unmemorisable subsets of DROP and ROPES, i.e. the improvement is not attributable to memorisation.
5 Using Retrieval-Augmented Training Datasets To Improve Reasoning Performance

The research presented in this chapter has been adapted from: T. Hartill, N. Tan, M. Witbrock, and P. J. Riddle. Teaching smaller language models to generalise to unseen compositional questions. Transactions on Machine Learning Research, Aug. 2023. The results of this chapter are available in the GitHub repository github.com/timhartill/unseen_questions

5.1 Introduction

As noted, LLMs show an ability to answer questions unlikely to have been encountered during training. Rather than encoding all knowledge in the parameters of a LLM, an alternative approach has been to transform the original question-answering problem into a RC problem by retrieving relevant information for answering a particular query from an external corpus, and training a smaller Reasoning Model to reason over the concatenation of the query and retrieved information to derive an answer, e.g. Chen et al. (2017). In this chapter we extend retrieval methods as described in Section 3.2, in conjunction with a supervised multitask pretraining regime for the Reasoning Model involving 79 tasks for our baseline and 93 tasks for the improved model.

The viability of this approach outside of fine-tuned settings is currently subject to limitations, both in the retrieval component, as discussed below, and with respect to the inability of smaller language models to perform the reasoning function as well as larger models. We aim to quantify performance limitations and evaluate mitigations for some of them.

There are at least two significant challenges in retrieval to be overcome. Firstly, no matter how large the corpus is, there will always be missing information, particularly so in our setting, where neither datasets nor corpus have been normalised such that sufficient information is in the corpus to make each question answerable through deductively valid means. Secondly, as long as humans ask questions with ambiguous references, e.g. "Who is the spouse of the Green performer?" (Trivedi et al., 2022a), retrieval will necessarily be imperfect even where sufficient knowledge exists in the corpus and the retrieval method is otherwise perfect.

We evaluate a method for addressing these issues. Specifically, we measure the effect of adding datasets to our Reasoning Model training regime that are designed to impart heuristic strategies for reasoning to a plausible rather than an entailed answer. We construct these datasets by building contexts for training questions using our retrieval system against a fixed corpus of English Wikipedia paragraphs. The resulting RATD samples are included in training our Reasoning Model irrespective of whether they contain partial, full, or no evidence. Our approach carries the advantage that a diversity of reasoning strategies may be imparted. Such strategies include ignoring an irrelevant context completely, or weighing partially evidential facts; e.g. reasoning toward answering "Do teenagers always rebel against their parents?" (Talmor et al., 2021) can be aided by the retrieval of knowledge that "Adolescents who have a good relationship with their parents are less likely to engage in various risk behaviours", even though no entailment is implied.

Generally, our method is most applicable to question-answering tasks where the desired answer is short, i.e. from a word to a short sentence, and the question itself does not come already supplied with a fully evidential context.
We also assume that it is possible to retrieve sufficient information from our corpus so as to make a question answerable within a modest sequence length (we limit ourselves to a 512-token maximum); e.g. we are unlikely to be able to answer a question such as "How many songs have a person's name in the title?" even though retrieving every instance is theoretically possible.

We focus our study on a subset of the unseen evaluation datasets previously described in Section 2.3, namely StrategyQA (Geva et al., 2021), Musique (Trivedi et al., 2022a), IIRC (Ferguson et al., 2020), ARC-DA (Bhakthavatsalam et al., 2021), DROP (Dua et al., 2019), and CommonsenseQA (Talmor et al., 2019).

In summary, the major contributions of this chapter are:

1. We offer what is to our knowledge the most comprehensive set of baselines evaluating smaller Language Model zero-shot reasoning abilities published to date.

2. We show that augmenting the training regime with RATD datasets significantly improves performance from the baselines.

3. We demonstrate that training for numerical literacy and unanswerability is brittle in the unseen setting in the absence of sufficiently similarly formatted training examples.

4. We propose effective extensions to the retrieval approach, as described below.

5.2 Method

We develop and train the Retrieval, Reranking, Evidence Set Scoring (collectively the "Iterator"), and Reasoning Model components separately, as visualised in Figure 5.1.

Figure 5.1: Major system components: the Iterator (green boxes) and Reasoning Model (blue box). An initial query for hop t=0 is input into the Retriever. The Reranker scores each of the retrieved k paragraphs and constituent sentences. Top-x sentences (Evidence Set≤t) are selected from top-ranked sentences from the Reranker and from the prior hop Evidence Set<t. The query + Evidence Set≤t are input into the Evidence Set Scorer, which computes an overall Evidence Set Relevance Score e and individual sentence relevance scores. Paragraphs associated with the top five sentences of Evidence Set≤t are appended to the query and the process repeats tmax times. Finally, paragraph fragments recovered from the Evidence Set for hop t=arg max(e) are concatenated with the original query and input into the Reasoning Model for answer generation.

Comparisons with retrieval systems in our setting are limited since gold paragraph annotation does not exist. Moreover, excepting Khashabi et al. (2020b, 2022), papers tend not to report zero-shot results for smaller language models such as the BART (Lewis et al., 2020a) Reasoning Model we use. Therefore we initially evaluate the performance of components in in-domain settings with comparisons to strong prior work, and report results in this section. In subsequent sections we move to the major focus of our study, namely to evaluate our method of adding RATD datasets to improve reasoning in the setting where questions are unseen, sufficient evidence to deductively answer a query may not be retrievable, and the model is too small to effectively answer open domain questions without a context to reason over.

5.2.1 Retrieval

For the retrieval component of the Iterator, as discussed in Section 3.2, we extend MDR (Xiong et al., 2021) from a two-hop maximum to enable training on samples with an arbitrary maximum number of hops (tmax).
Training is over a mixture of datasets with questions involving one to four hops to answer: HotpotQA (Yang et al., 2018), Hover (Jiang et al., 2020), Natural Questions (Kwiatkowski et al., 2019), and Musique (Trivedi et al., 2022a). Hence in practice we set tmax = 4.

Multi-hop questions contain multiple possible reasoning paths through the labelled gold paragraphs, some of which the encoder is able to learn to generalise from ("learnable") and some not (Xiong et al., 2021). For example, given a set of supporting documents for a 4-hop qi as {di,0, di,1, di,2, di,3}, semantic overlaps between qi and the documents might enable learnable reasoning paths of ⟨qi, di,0, di,1, di,2, di,3⟩ or ⟨qi, di,1, di,0, di,3, di,2⟩, but not ⟨qi, di,2, di,0, di,1, di,3⟩ or others. Our training regime samples a learnable reasoning path and builds training samples for its subsets; e.g. from ⟨qi, di,1, di,0, di,3, di,2⟩ we would build four single-hop samples ⟨qi⟩ → di,1, ⟨qi, di,1⟩ → di,0, ⟨qi, di,1, di,0⟩ → di,3 and ⟨qi, di,1, di,0, di,3⟩ → di,2. We based document sequencing for learnable reasoning paths for Musique on the decomposed reasoning steps provided with that dataset. For HotpotQA and Hover we used the orderings used in Xiong et al. (2021) and Khattab et al. (2021) respectively, while Natural Questions is treated as single-hop.

For each training sample, positive documents from other training examples in the batch are used as negatives, to which are added two adversarially negative paragraphs specific to that question. Where adversarial negative documents were not otherwise available, we created them from our Wikipedia corpus by taking the first paragraph of documents directly hyperlinked from each gold paragraph. Specifically, we used this strategy to create negative documents for Hover, as well as to create additional negatives for Musique. We used adversarial negatives for HotpotQA and Natural Questions supplied by Xiong et al. (2021) and Karpukhin et al. (2020) respectively.

Our objective function is similar to others, e.g. Xiong et al. (2021); Karpukhin et al. (2020). For hop t of the i-th training sample it models the probability of each next document given a query as:

    P(dvec_i,t+1 | qvec_i,t) = exp(dvec_i,t+1 · qvec_i,t) / Σ_{dvec ∈ D_i} exp(dvec · qvec_i,t)    (5.1)

where qvec_i,t = enc(⟨qi, di,0, ..., di,t⟩), dvec_i,t+1 = enc(⟨di,t+1⟩), enc is the shared encoder, qvec_i,t is the encoded query vector, dvec_i,t+1 is the encoded next document vector, D_i is the set of positive and negative document vectors for qi, and · denotes the inner product operation.
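The following sketch illustrates the training regime above: expanding a learnable reasoning path into one training sample per hop, and the objective of Equation 5.1 expressed as a cross-entropy loss over inner products, with each query's positive document at the matching row index among the batch positives and appended adversarial negatives. This is a simplified illustration under those assumptions, not our exact implementation.

```python
import torch
import torch.nn.functional as F

def build_hop_samples(question, path):
    """Expand a learnable path <q, d0, ..., dT> into per-hop training samples:
    (<q>, d0), (<q, d0>, d1), ..., (<q, d0..dT-1>, dT)."""
    return [((question, *path[:t]), path[t]) for t in range(len(path))]

def retrieval_loss(qvecs, dvecs):
    """Equation 5.1 as an in-batch contrastive loss.
    qvecs: [B, H] encoded queries; dvecs: [B + N, H], where the first B rows
    are each query's positive next document (in order) and the remaining N
    rows are the adversarial negatives."""
    scores = qvecs @ dvecs.T                                   # inner products
    targets = torch.arange(qvecs.size(0), device=qvecs.device)
    return F.cross_entropy(scores, targets)                    # -log softmax = Eq. 5.1
```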
5.2.2 Reranking and Evidence Set Scoring

To refine retrieved documents we implement a two-stage system comprising Paragraph Reranker and Evidence Set Scoring models. Both models were trained using a mixture of datasets that come with sentence-level annotations, namely HotpotQA, Hover and FEVER (Thorne et al., 2018).

Training samples for the Reranker are built from learnable reasoning paths. For single-hop samples the Reranker is trained with input ⟨qi, di,0⟩ to score di,0 relevance. Multi-hop questions can have different phrasing to single-hop questions, so we cannot rely purely on single-hop samples to train for proficiency in scoring relevance for the first hop of a multi-hop sample. Therefore, for two-hop paths, samples are randomly built to one or two hops, i.e. ⟨qi, di,0⟩ to score di,0 relevance, or ⟨qi, di,0, di,1⟩ to score di,1. To remediate imbalance in the hop distribution, three- and four-hop samples are always built to the respective maximum hop count. Each query is paired with both a positive paragraph to be scored and a substituted negative paragraph. The sampling function implements a form of shared normalization (Clark and Gardner, 2018) such that pairs are positioned in the same training batch. In the Reranker, a paragraph relevance score (p) in addition to individual sentence relevance scores (sp) are learned. The objective function for each is binary cross-entropy, with the overall loss being an unweighted summation (see Appendix D.2 for details).

Turning to inference: intuitively, a high-scoring sentence in a relevant paragraph is more likely to be evidential than a high-scoring sentence in an irrelevant paragraph. We manually observed that p is often more accurate than sp, and hence experimented with tuning a weight, w, in a sentence scoring function s = wp + (1 − w)sp. For in-domain datasets such as HotpotQA we found non-zero values of w that improved both sentence and paragraph recall by over 2%, and F1 score by over 6%, providing evidence that our observation was correct. However, the optimal value of w varied between 0.0 and 0.9 over in-domain datasets, and tuning w for any of our unseen datasets using their gold annotations would compromise our experimental setup. Hence we simply score each sentence in our main experiments as s = 0.5p + 0.5sp.

For the second-stage Evidence Set Scorer, at each hop t the Evidence Set≤t is selected from top-ranked sentences from the Reranker and from the prior Evidence Set<t, if any. The query and Evidence Set≤t are input into the Evidence Set Scorer, which scores evidence set relevance (e) and sentence relevance (se) in the context of the evidence set. We retain p for each selected sentence from the Reranker, since sentences from highly relevant paragraphs are more likely to be evidential. The sentences for the t + 1 evidence set are thus selected by ranking according to 0.5p + 0.5se and then taking a maximum of five sentences that score over a threshold. The 0.5 coefficients were chosen after an evaluation similar to that done for the Reranker scoring function described above. We observed instances where the evidence set weakened as well as strengthened with additional hops, so we take the evidence set from hop t = arg max(e) rather than assuming that tmax always selects the best.

We observed that a high-scoring sentence is sometimes contextualised by adjacent sentences such that collectively they create a stronger rationale. Hence the final context for each query, both for RATD dataset creation and for creating contexts for unseen evaluation samples, is created by recovering a paragraph fragment for each selected sentence: we prepend/append the preceding and subsequent sentences from the associated full paragraph where these exist, and then concatenate the document title with the resulting fragment. Ordering of paragraph fragments is by 0.5p + 0.5smax, where smax is the maximum Evidence Set Scorer sentence relevance score per paragraph. Using these paragraph fragments it is possible to fit contexts of approximately 6-7 paragraph fragments within a 512-token maximum sequence length. In the case of datasets such as IIRC (Ferguson et al., 2020) that provide an initial paragraph in addition to the question, the initial paragraph is prepended to the context.
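A sketch of one hop of the evidence set update described above follows. Sentences are represented as dicts holding the Reranker paragraph score p and the Evidence Set Scorer sentence score se; the threshold value shown is illustrative only, as the thesis does not state it here.

```python
def next_evidence_set(reranked_sentences, prior_evidence, threshold=0.5, max_sents=5):
    """Select Evidence Set(t+1): pool newly reranked sentences with the prior
    evidence set, rank by 0.5*p + 0.5*se, and keep at most five sentences
    scoring over a threshold (threshold value is our assumption)."""
    pool = reranked_sentences + prior_evidence
    score = lambda sent: 0.5 * sent["p"] + 0.5 * sent["se"]
    ranked = sorted(pool, key=score, reverse=True)
    return [sent for sent in ranked if score(sent) > threshold][:max_sents]
```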
The Evidence Set Scoring model is trained with Evidence Sets built as combinations of positive and negative sentences, including replacing positive sentences with negative sentences from positive paragraphs and negative sentences from negative paragraphs. Each question is paired with both a fully evidential set of sentences and a partially evidential (or non-evidential) set of sentences, sampled such that pairs are in the same training batch. The objective functions for both e and se are binary cross-entropy and, as with the Reranker, the final loss is an unweighted summation. The label for e is 1.0 if a subset of the Evidence Set is fully evidential, 0.0 otherwise. Further details of the Iterator components are in Appendix D.

5.2.3 Iterator In-domain Evaluation

[Table 5.1; columns: Sentence EM and Sentence F1, each over 2, 3, 4 and All hops; rows: Baleen 4-hop + FLIPR retriever, Iterator + MDR retriever, Iterator + our retriever; cell values as extracted: 81.2 82.5 80.0 81.5 39.2 37.7 33.3 47.3 71.4 14.8 64.6 81.7 40.1 39.3 75.8 27.5 46.8 82.5 66.7 45.4 72.1 75.7 59.0 68.7]

Table 5.1: In-domain Retrieval and Reranking Evaluation on the Hover development set with k = 25. Baleen is finetuned on Hover, MDR is trained on HotpotQA, and our retriever is trained on a mixture of HotpotQA, Hover, Musique and Natural Questions.

We initially evaluate performance of the Iterator in an in-domain setting using the Hover development set against the HotpotQA Wikipedia Abstracts Corpus (Yang et al., 2018), since Hover contains samples with up to four hops and it is possible to compare against the published Baleen (Khattab et al., 2021) performance. Here the number of paragraphs retrieved on each hop (k) is 25. Results (Table 5.1) indicate that our Iterator is competitive with Baleen in this setting, with our two-hop performance better on both Exact Match and F1 but their four-hop performance dominating. A reason we are stronger overall than Baleen on EM, while the reverse is true for F1, lies in our choice of ranking function: Baleen ranks sentences entirely using se, whereas we utilise a linear combination of our Reranker paragraph score p and se. Unsurprisingly, our retriever performance is progressively better than MDR as the number of hops increases.

Our main experiments below use a corpus consisting of English Wikipedia paragraphs from the August 1, 2020 dump. Details are in Appendix C.

5.2.4 Reasoning Models

A number of studies have shown the efficacy of supervised multitask training in facilitating generalisation in question-answering tasks (Khashabi et al., 2020b; Sanh et al., 2021; Wei et al., 2021; Khashabi et al., 2022). We adopt this approach for training our Reasoning Models, which we characterise as models that take a question and context pair as input ⟨qi, ci⟩ and generate an answer ai.

To facilitate numerical computation we adapt the Reasoning Model tokenizer for digit tokenisation (Wallace et al., 2019; Geva et al., 2020) in all experiments.

Noting that some of the numerical pretraining tasks take much longer to train to a reasonable degree of proficiency than our textual question-answering tasks, we continue training our Reasoning Models from their original pretraining checkpoint with two additional stages of multitask pretraining.
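The digit tokenisation mentioned above splits each number into individual digit tokens, so that, for instance, unfamiliar numbers are still composed of familiar pieces. The standalone function below is a pre-tokenisation sketch for illustration only; the thesis adapts the BART tokenizer itself.

```python
import re

def digit_tokenise(text):
    """Pre-tokenisation sketch of digit tokenisation (Wallace et al., 2019):
    each digit becomes its own token."""
    return re.sub(r"\d", lambda m: f" {m.group(0)} ", text).split()

# ['Dolphins', 'nailed', '2', '6', 'field', 'goal', 'yards']
print(digit_tokenise("Dolphins nailed 26 field goal yards"))
```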
5.2.4.1 Stage 1 Pretraining

In Stage 1 we train using tasks that are aimed at imparting, by abstraction, a diversity of foundational reasoning skills, with a bias towards simple numerical literacy. Specifically, we utilise existing tasks from Yoran et al. (2022), Pi et al. (2022) and Geva et al. (2020), as well as some we create ourselves (see Appendix E.2 for details). Stage 1 training covers a total of 33 tasks. One of these is a version of the original self-supervised masked language modelling task, which is sampled with probability λ = 0.35 so that the model retains language understanding skills. The remaining tasks are sampled using an error-based sampling regime (Gottumukkala et al., 2020), whereby tasks with low accuracy in the previous validation step are oversampled in the subsequent training steps, and vice-versa.

5.2.4.2 Stage 2 Pretraining

In Stage 2, we add five open domain (i.e. question-only) question-answering tasks to the above foundational Stage 1 tasks (for 38 tasks in total, denoted Group 1). We add the open domain tasks with the primary aim of teaching the model about the expected form of answer for a given question type, e.g. yes or no for "Could an Aardvark use a knife and fork?", noting that it has been shown that smaller models cannot learn such open domain tasks well (Lewis et al., 2021). To avoid the possibility of catastrophic forgetting, we continue to train on Group 1 in conjunction with a new set of tasks, Group 2, which is sampled with λ = 0.8. Group 2, described further below, contains tasks aimed at teaching more question-answering specific reasoning skills, with a bias towards RC datasets. Our purpose in having two groups is to enable us to implement differing sampling strategies within a single training regime. For Group 1 we utilise uniform sampling over all tasks, and for Group 2 we use error-based sampling, sketched below. This combination represents our solution to the issue noted in Yoran et al. (2022), namely that excessive oversampling will occur for tasks that the model cannot learn well. In addition, we find uniform sampling useful for regulating the sampling of the tasks that the model has already learned in Stage 1.
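The error-based sampling used for Group 2 can be sketched as follows; the accuracy floor and the task names are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def error_based_probs(dev_accuracies, floor=0.01):
    """Sampling probabilities proportional to current validation error
    (Gottumukkala et al., 2020): low-accuracy tasks are oversampled."""
    errors = {task: max(1.0 - acc, floor) for task, acc in dev_accuracies.items()}
    total = sum(errors.values())
    return {task: err / total for task, err in errors.items()}

probs = error_based_probs({"hotpotqa": 0.70, "musique": 0.45, "fever": 0.90})
tasks, p = zip(*sorted(probs.items()))
next_task = np.random.choice(tasks, p=np.array(p))  # task for the next batch
```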
For our second model, Group 2 consists of the 41 tasks in the above Base Group 2 plus an additional 14 RATD datasets, for a total of 55 tasks. Our resulting Base+RATD model is thus trained on a total of 93 tasks (38 Group 1 + 55 Group 2). As described above, the RATD dataset contexts are constructed using our Iterator against the full Wikipedia corpus. Recalling that none of our original datasets are normalised against the version of Wikipedia we use, the resulting contexts are noisy, often containing partial or no relevant evidence and many distractors. We hypothesise that the utility of these is to impart a variety of heuristic strategies using a context form similar to that which our downstream unseen evaluation datasets will have. Thus our Base+RATD model may be equipped for reasoning to a plausible answer from partial information, as well as the deductively valid answer derivable for the majority of datasets used to train the Base model. Details of all datasets utilised in Reasoning Model training are in Appendix E.

5.2.4.4 Reasoning Model In-domain Evaluation

Pretraining Regime | Params | DROP | IIRCG | IIRCR
POET-SQL (BART)a | 440M | 82.2 | – | –
PReasM (T5-large)b | 770M | 72.3 | 75.0 | 45.1
PReasM w/digit tok. (T5-large)c | 770M | 80.0 | 73.3 | 40.9
PReasM + Teabreac (T5-large)d | 770M | 83.2 | 77.9 | 47.6
Teabreac (T5-3B)d | 3B | 86.7 | 79.5 | 51.0
Ours: Base (BART) | 440M | 79.2 | 80.2 | 53.6
Ours: Base+RATD (BART) | 440M | 79.6 | 80.1 | 52.8

Table 5.2: Comparison of our Reasoning Model performance to related pretraining methods in finetuned setting on DROP dev set and IIRC test set (F1). Our IIRCR uses our retrieval from English Wikipedia paragraphs whereas other studies shown use different techniques to retrieve only from provided supporting documents. a Pi et al. (2022); b Yoran et al. (2022) trained without digit tokenisation; c from Trivedi et al. (2022b) wherein PReasM is retrained with digit tokenisation; d Trivedi et al. (2022b).

For comparison with related approaches, we fine-tune our models on DROP (Dua et al., 2019) and separately on IIRCG and IIRCR (Ferguson et al., 2020). IIRCG is an oracle setting, with context consisting of gold sentences and surrounding text. IIRCR has a retrieved context using respective retrieval methods from each study as discussed in Section 2.3. As shown in Table 5.2 we are competitive with other approaches in this in-domain setting: we are slightly behind on DROP compared to POET (Pi et al., 2022) and Teabreac (Trivedi et al., 2022b); however, we are state of the art on IIRCG and IIRCR.

5.3 Experiments

Our experiments are aimed at answering three main research questions:

R1. What is the impact of adding RATD datasets to the Reasoning Model Base training regime?
R2. How effective is pretraining for numerical literacy in the unseen setting for smaller language models?
R3. What is the performance differential between our Reasoning Model with differing evaluation dataset context configurations and high-performing models in a similar unseen setting?

For each evaluation dataset, where possible we report our results against other zero/few-shot work. If known, we also report the current state of the art. As applicable for each dataset we report results without retrieval, with our retrieval (denoted DatasetR), and with gold context (denoted DatasetG or similar). To facilitate comparison against prior work on DROP (Dua et al., 2019) and IIRC (Ferguson et al., 2020) we use the numeracy-focused F1 calculation introduced in Dua et al. (2019), whereby if the gold label is a number, the predicted answer must contain that number irrespective of other token overlap. For consistency we retain this method for reporting F1 for other datasets, noting this is equivalent to standard F1 where the gold answer is not a number and disadvantageous to our results where the gold answer is a number. For datasets with binary labels we adopt the calculation used in Khashabi et al. (2020b), where to count as a match the predicted answer must appear in the gold label and the opposing answer must not.
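The following minimal sketch illustrates the two metric variants just described, under our own simplifying assumptions about tokenisation and number detection; it is not the exact evaluation script:

```python
import re
from collections import Counter

def token_f1(pred: str, gold: str) -> float:
    """Standard token-overlap F1 between prediction and gold answer."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum((Counter(p) & Counter(g)).values())
    if not p or not g or common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def numeracy_f1(pred: str, gold: str) -> float:
    # If the gold label is a number, the prediction must contain that
    # number to score anything, irrespective of other token overlap.
    if re.fullmatch(r"-?\d+(\.\d+)?", gold.strip()):
        return token_f1(pred, gold) if gold.strip() in pred.lower().split() else 0.0
    return token_f1(pred, gold)

def binary_match(pred: str, gold: str) -> bool:
    # Literal reading of the binary-label rule above: the prediction must
    # appear in the gold label, and the opposing label must not appear in
    # the prediction.
    p, g = pred.strip().lower(), gold.strip().lower()
    opposite = "no" if g == "yes" else "yes"
    return p in g and opposite not in p
```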
For multi-choice evaluation, we take the option with the highest overlap with the predicted answer and then score as exact match. Where comparing performance of our Base against Base+RATD models we use the paired bootstrap test (Efron and Tibshirani, 1993) to test for statistical significance (p < 0.05).

5.3.1 Models

The Retriever component of the Iterator is built upon RoBERTa-base (Liu et al., 2019) and both the Reranker and Evidence Set Scorer use ELECTRA-large (Clark et al., 2020a). Unless noted otherwise, all results are reported against the same two final Reasoning Models, which are based on BART (Lewis et al., 2020a). All models use the Huggingface (Wolf et al., 2020) implementations.

5.3.2 Experimental Results

5.3.2.1 StrategyQA and CommonsenseQA

Base+RATD significantly outperforms Base on StrategyQA for SQA, SQAR and SQAGP (Table 5.3). On SQAR (which uses our retrieved contexts) our much smaller Base+RATD model slightly exceeds the performance of the two 11 billion parameter models and is comparable with OPT 175B (Zhang et al., 2022). Our Base model fails to improve with SQAGP (which has contexts of gold paragraphs) versus the question-only SQA version. The improvement on SQAGP with the addition of RATD draws attention to the fact that, outside of our RATD datasets, the majority of our multihop training samples are aimed at imparting deductively valid forms of reasoning which, as noted above, are often inapplicable for SQAGP. As described in Section 2.3, the contexts of SQAGF are of a condensed, rationale-like form, distinct from the standard verbose paragraph form of SQAGP. Model performance on SQAGF hugely outperforms our other configurations. This shows that with a context of a form the model has learned to reason with, it is possible to solve challenging implicit questions. As to where our models may have learned to reason with this short context form, we note that some of the training datasets contain similar short form contexts, e.g. BoolQ (Clark et al., 2019b), which like StrategyQA has binary labels. Our Base model has 84.9% development set accuracy on BoolQ.

Model | Params | Base | Base+RATD
Random | – | 50.0 | 50.0
PaLM - COT+Self-cons.a | 540B | 81.6 |
U-PaLM - 5 shotb | 540B | 78.3 |
PaLM - 5 shotc | 540B | 73.9 |
OPT - 5 shotd | 175B | 58.5 |
T0++e | 11B | 54.4 |
UnifiedQA v2f | 11B | 57.9 |
PaLM - 5 shot | 8B | 55.4 |
UnifiedQA v2 | 770M | 51.6 |
Ours: SQA | 440M | 51.6 | 53.9
Ours: SQAR (Our retrieval) | 440M | 48.4g | 58.9
Ours: SQAGF (Gold facts) | 440M | 72.8 | 71.2
Ours: SQAGP (Gold paras) | 440M | 51.6 | 55.8

Table 5.3: StrategyQA performance comparison (Accuracy). StrategyQA contains binary-labelled, multi-hop commonsense questions. Bold figures denote the better of our two models. All Base versus Base+RATD differences are statistically significant. a Wang et al. (2022a); b Tay et al. (2022); c Chowdhery et al. (2022); d from Taylor et al. (2022); e Sanh et al. (2021); f Khashabi et al. (2022). g Below-random performance on our Base model with Q+retrieval is due to the model predicting text other than yes or no. Prepending "Yes or no -" to each question improves the score from 48.4 to 54.9. The corresponding Base+RATD figure is 58.8, which retains statistical significance.

As Table 5.4 shows, augmenting CommonsenseQA samples with retrieval (CSQAR) yields mixed results. Others, e.g. Piktus et al. (2021), have observed
that the best zero/few shot performance on this type of dataset has been achieved with much larger models rather than external retrieval, and our analysis bears this out. The addition of extra reasoning strategies via the RATD datasets is more successful; as with StrategyQA, performance on CommonsenseQA is improved with the Base+RATD model.

Model | Params | Base | Base+RATD
Random | – | 20.0 | 20.0
Prior work (finetuned)a | 418M | 91.2 |
PaLM - 0/5 shotb | 540B | 69.2/81.5 |
GPT3 - 0/few shotc | 175B | 81.5/85.0 |
UnifiedQA v1d | 11B | 76.2 |
PaLM - 0/5 shot | 8B | 66.0/77.6 |
GPT3 - 0/few shot | 760M | 61.8/62.7 |
UnifiedQA v1 | 770M | 60.9 |
Ours: CSQA | 440M | 61.1 | 64.0
Ours: CSQAR (Our retrieval) | 440M | 62.4 | 63.6

Table 5.4: CommonsenseQA development set performance comparison (Accuracy). CommonsenseQA contains multi-choice commonsense questions. Bold figures denote the better of our two models. Base+RATD improvement is statistically significant for CSQA but not for CSQAR (adding retrieved context improves Base but not Base+RATD). a Xu et al. (2021); b Chowdhery et al. (2022); c Brown et al. (2020); d Khashabi et al. (2020b).

5.3.2.2 DROP and IIRC

As with PaLM, our Base and Base+RATD models are trained using digit tokenization. On DROP both our models outperform all models not trained using this method, including GPT3 175B and InstructGPT 175B (Ouyang et al., 2022) (Table 5.5). Performance of our models approaches that of PaLM 8B and PaLM 540B in the zero shot setting, but both are superior to ours with a 5-shot prompt.

Model | Params | Base | Base+RATD
PaLM - 0/5 shota | 540B | 43.7/70.8 |
GPT3 - 0/few shotb | 175B | 23.6/36.5 |
InstructGPT PPO+ptx - 0/few shotc | 175B | 15.2/33.3 |
UnifiedQA v1d | 11B | 32.5 |
PaLM - 0/5 shot | 8B | 45.1/69.4 |
UnifiedQA v1 | 770M | 24.6 |
GPT3 - 0/few shot | 760M | 14.4/24.0 |
Ours | 440M | 40.7 | 40.0

Table 5.5: DROP development set performance comparison (F1). DROP primarily tests numeracy in reading comprehension. Reduced performance on Base+RATD versus Base is statistically significant. a Chowdhery et al. (2022); b Brown et al. (2020); c Ouyang et al. (2022); d Khashabi et al. (2020b).

Ablative experiments on our training regime components (Table 5.6) indicate that digit tokenization, numerical literacy training datasets and two stage training are all important in achieving the best DROP performance in our setting.

Model | All Ans. Types | Numeric Ans. Only
Two Stage: +DT +NumLit | 40.0 | 25.4
One Stage: +DT +NumLit | 38.2 | 22.9
Two Stage: -DT +NumLit | 34.7 | 16.6
One Stage: +DT -NumLit | 29.0 | 11.2

Table 5.6: DROP development set (F1). Ablative results on our Reasoning Models trained using Base+RATD datasets, trained in one or two stages, with/without digit tokenization (+/-DT), and with/without numerical literacy training datasets (+/-NumLit). Note that the -NumLit setting is only relevant for single-stage training.

Table 5.7 shows performance on IIRC. A first glance suggests that poor retrieval is the major cause of low performance on IIRCR; however, inspection of retrieved items suggests that retrieval is often fully evidential.

Model | Params | Base | Base+RATD
Prior work: IIRCRa | 123M | 51.6 |
Ours: Finetuned IIRCR (Our retrieval)b | 440M | 53.6 |
Ours: IIRCR (Our retrieval) | 440M | 23.8 | 25.5
Ours: Finetuned IIRCG (Gold context)b | 440M | 80.2 |
Ours: IIRCG (Gold context) | 440M | 59.6 | 58.1

Table 5.7: IIRC test set evaluation (F1). IIRC tests diverse reasoning requiring retrieval. Both Base to Base+RATD comparisons are statistically significant. a Ferguson et al.
(2022) use a finetuned Reasoning Model and specialised retrieval with corpus restricted to documents linked from each initial paragraph. b To the best of our knowledge our Base model finetuned on IIRCR and separately on IIRCG are both SOTA at the time of writing, so we report these given unavailability of unseen comparisons.

The breakdown by answer types in Table 5.8 indicates that a major cause of failure is that, in contrast to DROP, almost all numeric answers are predicted incorrectly for both IIRCG (gold contexts) and IIRCR (retrieved contexts). Finetuning alleviates the issue, confirming that the model is capable of performing the necessary computation when trained with sufficiently similar examples. Our Base+RATD model generally correctly predicts unanswerability for IIRCG but almost never does for IIRCR. The IIRCR context frequently contains either enough information to make the question answerable, or, more frequently, such relevant information as to make it appear answerable. Similar to the numerical computation issue, adding sufficiently similar training examples via finetuning enables the model to distinguish unanswerable samples. Appendix H.3 illustrates failure cases for numeric and unanswerable types.

Dataset | Ans. Type | Base+RATD | Finetuned
DROP | Span (2962) | 67.4 | 82.3
DROP | Multi-span (567) | 42.0 | 72.2
DROP | Num (5850) | 25.4 | 79.0
DROP | Date (157) | 62.4 | 74.0
DROP | All (9536) | 40.0 | 79.6
IIRCG | Span (544) | 59.8 | 74.3
IIRCG | Binary (66) | 57.1 | 64.7
IIRCG | Num (277) | 2.9 | 67.4
IIRCG | No answer (414) | 92.8 | 98.8
IIRCG | All (1301) | 58.1 | 80.1
IIRCR | Span (544) | 48.9 | 44.8
IIRCR | Binary (66) | 68.2 | 57.6
IIRCR | Num (277) | 3.8 | 41.5
IIRCR | No answer (414) | 2.6 | 69.9
IIRCR | All (1301) | 25.5 | 52.8

Table 5.8: Breakdown by answer type on DROP development set and IIRC test set (F1). Sample counts are in brackets. Finetuned models are trained from the Base+RATD checkpoint.

5.3.2.3 ARC-DA and Musique

Table 5.9 shows model performance on our "partially seen" datasets, ARC-DA and Musique. On ARC-DA, adding RATD datasets significantly improves results in both retrieved and gold settings. By contrast, Musique performance significantly degrades with Base+RATD. Noting that Musique is the only evaluation dataset for which we create RATD datasets, we hypothesise that in the case of highly similar training examples to particular evaluation samples, the model prediction is the memorised answer of a similar training example. We confirm this by examining the predicted answers of the 1,670 Musique evaluation samples that scored 0 F1 against Base+RATD. Of these, the predicted answers of 716 samples are an exact match to a Musique training sample gold answer (e.g. "Who is the spouse of the Green performer?" is incorrectly answered as "anna gordy gaye" because this is the label to a number of training questions of "Who is the spouse of ..." form). An ablative experiment, wherein we trained a version of Base+RATD without the Musique RATD datasets, results in improved performance versus Base and the original Base+RATD on Musique (Table 5.9), without material impact on other evaluation dataset results.

Model | Params | Base | Base+RATD
UnifiedQA+ARCDA/MC with IRa | 11B | 61.4 |
Ours: ARCDAR (Our retrieval) | 440M | 28.8 | 31.6
Ours: ARCDAG (Gold context) | 440M | 56.8 | 59.1
Musique - EX(SA)b | 102M | 49.8 |
Ours: MusiqueR (Our retrieval) | 440M | 24.3 | 22.2 (28.2)
Ours: MusiqueG (Gold context) | 440M | 60.8 | 43.8 (62.4)

Table 5.9: ARC-DA (test accuracy) and Musique (development F1) comparisons. ARC-DA is science question answering and Musique involves multi-hop question answering. All Base to Base+RATD differences are statistically significant.
Musique performance degradation in Base+RATD is caused by adding Musique RATD in training; results for an ablative model trained with all datasets except Musique RATD are shown in brackets in the Base+RATD column. a Bhakthavatsalam et al. (2021): Training includes ARC-DA. b Trivedi et al. (2022a): EX(SA) uses specialised retrieval from each Musique sample's gold and distractor paragraphs.

The Musique training split has 19,938 samples but only 2,057 unique labels, and questions with the same answer tend to be of similar form, such as the above "Who is the spouse of..." example. Therefore we consider the question of whether the poor performance of Base+RATD here is a general weakness of our method or whether it is specific to the particular bias of Musique. We trained another Base+RATD model, this time with the Musique RATD training dataset substituted with a filtered variation that only contains samples with unique labels. Similar to the above Musique RATD ablation, this version also significantly improves against the original Base+RATD (+3.0 F1 for MusiqueR and +10.6 F1 for MusiqueG) without impact to other results. Hence, assuming appropriate consideration of existing dataset bias when selecting RATD training samples, we affirm the robustness of our method.

5.4 Conclusion

We have argued that an ability to reason over imperfect and incomplete information is a critical skill with which question-answering models must be endowed. To facilitate such ability we create RATD datasets that are designed to impart heuristic reasoning strategies with context of a form similar to that which retrieved contexts for downstream tasks will have. We show that training on RATD datasets improves performance on all unseen evaluation datasets with retrieved contexts. This sometimes comes at a small cost in situations where questions come with gold contexts that are in a form that our model is already good at utilizing (SQAGF, DROP, and IIRCG), although we suggest that in practice such gold contexts are the less common case. (R1) We also show that even with our large and diverse pre-training regime, questions involving numerical computation and those labelled unanswerable remain sensitive to the similarity of training samples. (R2) Our results demonstrate that generic retrieval without normalisation can outperform specialised methods (e.g. we are state of the art on fine-tuned IIRCR) and that our overall method can yield performance on par with or better than that of much larger models without fine-tuning (e.g. SQAR, DROP). (R3)

6 Combining Rationale Generation and Dense Retrieval

6.1 Introduction

"It was soon realized that the problem of systematically acquiring information from the environment was much less tractable than the mental activities the information was intended to serve" - Moravec (1988)

Moravec's paradox is the observation that problems such as developing an ability to reason, which might have been assumed to be among the most difficult challenges in artificial intelligence, have proved easier to resolve than the challenge of acquiring more basic knowledge such as sensory information. It is motivating to consider this in the context of recent advances in using both LLMs and retrieval against large textual corpora for information acquisition in the question-answering domain. In this chapter, we focus on methods to improve the performance of a smaller Language Model (i.e.
Reasoning Model) which, given a question and an acquired explanatory context as input, is expected to reason to provide an answer. To acquire the explanatory context, we consider two knowledge sources both individually and in combination: retrieval of an explanatory context from a corpus of English Wikipedia paragraphs via our Iterator as introduced in Chapter 5, and rationale generation from LLMs. Retrieval has generally been a relatively resource-efficient activity, but until recently even inference on LLMs has required considerable computational resources. Recent innovations such as those involving 8-bit matrix multiplication (INT8) (Dettmers et al., 2022) enable the use of LLMs as frozen knowledge bases in constrained settings. For example, inference on the 13 billion parameter StableVicuna model (Stability-AI, 2023) that we convert to INT8 and use in some experiments runs in approximately 18 GB of GPU RAM, well within the current capacity of large consumer GPU cards.

Figure 6.1: Overview of our approach. Given an unseen question Q: [1] we acquire explanatory contexts, C1 and C2, from two knowledge sources. [2] We score the acquired contexts for relevance and truthfulness using a Rationale Ranking (RR) model that we train on diverse relevant/irrelevant samples that make both truthful and false assertions. [3] We evaluate and select methods for combining or filtering C1 and C2. [4] We evaluate the performance of different contexts (Cn) on a set of Reasoning Models that are trained on different mixtures of training datasets, including a mixture containing RATD datasets, and a mixture without these. In the diagram, red denotes false information and green highlights relevant and truthful evidence.

We choose retrieval from a reliable corpus and LLMs as our knowledge sources since we hypothesise that they may offer differing and complementary characteristics. Studies such as Khattab et al. (2021), and our own described in Chapter 5, have shown that multi-hop retrieval systems can be proficient at identifying the relevant n documents necessary to answer n-hop factual questions where n can be greater than two, e.g. those found in the Hover (Jiang et al., 2020) or Musique (Trivedi et al., 2022a) datasets ("The Rhine forms a border between Aschenbrödel's composer's country and another country where women got the vote when?"). However, we are unaware of any corresponding studies on LLMs that demonstrate similar proficiency in generating sufficient information to answer such n-hop questions. Conversely, it has been shown that LLMs can be strong at answering commonsense questions without using external retrieval (Lourie et al., 2021), while for such questions retrieval from large textual corpora offers limited benefit, as noted by Piktus et al. (2021), and by us in Chapter 5.

We explore two methods of combining information from our knowledge sources: (1) Rationale Ranking (RR), and (2) training with retrieval-augmented data. Our RR method involves training a smaller Transformer to score both rationales and retrieved explanatory contexts with respect to relevance and truthfulness. We then evaluate a number of simple strategies to create combined contexts, such as including either or both components that score over a threshold, or selecting the single top-scoring component. We focus on identifying combination methods that work best in the general case, i.e.
are most likely to work well for an arbitrary unseen question for which we provide no means of predicting which combination method will work best. We find that we are able to identify such a method for each of our Reasoning Models and quantify the performance improvement (Section 6.3.3.2). Our second method (RATD) consists of training our Reasoning Model with our retrieval-augmented datasets previously described in Chapter 5. These datasets were originally developed to impart diverse reasoning strategies such as an ability to identify and weigh partially evidential facts in long, noisy contexts. When our rationales and retrieved contexts are combined, the resulting context is similar in length and form to the RATD contexts; therefore we find that training on them enables a single Reasoning Model to utilise our various context formats effectively, including the case where the context consists of the naïve concatenation of rationale and retrieved context that does not consider the RR model scores.

The major contributions of this chapter are:

1. We propose RR, a novel method that both selects context components by relevance, and filters components that may be false.
2. We apply the RATD method that we previously developed to facilitate reasoning over contexts that potentially combine information from multiple knowledge sources.
3. We demonstrate that both methods in isolation significantly improve reasoning performance in smaller Language Models from strong baselines in the same unseen setting (Section 6.3.3.2).
4. We show that smaller Language Models trained for reasoning can manifest comparable or stronger performance on unseen questions to a LLM, when provided with the same knowledge to reason over that the LLM is capable of generating for itself (Section 6.3.3.1).
5. We illustrate the respective strengths and weaknesses of LLMs and multi-hop retrieval from a Wikipedia corpus as knowledge sources (Section 6.3.3.1).
6. We show that combining information from these sources significantly improves the average performance over evaluation datasets versus using a single source. Additionally, on individual evaluation datasets the combined context performance is often beyond what either knowledge source in isolation can deliver (Section 6.3.3.1).

6.2 Method

To answer an unseen question, qi, we acquire two contexts: ci,1 is obtained by prompting a LLM, and ci,2 is obtained via dense retrieval. Next, we score ci,1 and ci,2 for relevance and truthfulness using the RR model. We utilise the RR scores in various methods for combining or filtering ci,1 and ci,2 into a set of new contexts. Finally, we input the concatenation of qi and each resulting context into a set of Reasoning Models and evaluate performance in answering qi correctly. A visual overview of our approach is provided in Figure 6.1, where q and c are capitalised and simplified for readability. In the following sections we describe how the two knowledge sources are implemented, how the RR model is constructed, trained and initially evaluated, and how the Reasoning Models are trained. We describe our context combination methods further in Section 6.3.2.
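The per-question flow just described can be summarised in pseudocode. All names below are illustrative rather than the actual implementation; the combination function is the subject of Section 6.3.2:

```python
# Illustrative pipeline for a single unseen question q (Section 6.2).
def answer_question(q, llm, iterator, rr_model, reasoner, combine):
    c1 = llm.generate_rationale(q)        # knowledge source 1: LLM rationale
    c2 = iterator.retrieve(q)             # knowledge source 2: dense retrieval
    s1 = rr_model.score(q, c1)            # relevance/truthfulness scores
    s2 = rr_model.score(q, c2)
    context = combine(c1, s1, c2, s2)     # e.g. Naive Concatenation, EitherOrBoth(t)
    return reasoner.generate(question=q, context=context)
```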
6.2.1 Rationale Generation

We utilize two LLMs, BLOOM (Le Scao et al., 2022) and StableVicuna (Stability-AI, 2023), a much smaller model than BLOOM that has been further tuned from the Vicuna v0 13B model (Chiang et al., 2023), which in turn was adapted from the LLama (Touvron et al., 2023) foundation model. We chose these two models because they are representative of differing approaches to developing LLMs and they may offer divergent characteristics in rationale generation. At 176 billion parameters, BLOOM was the largest language model we had access to at the time that we could run under INT8. It was trained on 410 billion tokens, and the version we used did not undergo further training on instructional data or human feedback. Llama, by contrast, was trained on one trillion tokens. From the Llama checkpoint, Vicuna underwent further training on user-provided ChatGPT conversations. Finally, StableVicuna was developed from Vicuna by further training in both supervised and reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022) settings on a mixture of the human-generated OpenAssistant Conversations Dataset (Köpf et al., 2023), as well as human-LLM conversations from the GPT4All (Anand et al., 2023) and Alpaca (Taori et al., 2023) projects. We used StableVicuna in both INT8 and FP16 versions; the former offers a smaller GPU memory footprint at around 18GB, while the latter uses almost twice as much memory but we find inference much faster, thus offering a clear trade-off in a resource-constrained setting.

To generate rationales from each model, we used greedy decoding on chain-of-thought (COT) prompts (Wei et al., 2022) to generate the rationale, followed by the phrase "So the answer is" and the answer (examples are in Appendix F.1). This enabled us to evaluate the LLM answers directly from the same prompts and with the same rationale that our Reasoning Model would use, allowing a comparison under a similar set of assumptions. Occasionally a model would fail to generate the separate answer. In this case, to be favorable to the direct LLM method, the full rationale was used as the answer in calculating metrics. Generated rationale length is a maximum of 128 tokens, which we found to be long enough to accommodate all the rationales we checked. To maintain the integrity of our unseen settings we ensured that no examples used in prompts were from any of our evaluation datasets. The prompts used were identical between our LLMs, excepting that examples for StableVicuna prompts are denoted as:

### Human: [question] ### Assistant: [rationale]. So the answer is [answer].

BLOOM prompts are denoted as:

Q: [question] A: [rationale]. So the answer is [answer].

Our primary measure of context quality is an ability to improve question-answering performance; however, we conducted a high-level qualitative examination of rationales generated by BLOOM and StableVicuna. This suggested that both tend to produce more rationales containing sufficient information to answer each question on some datasets (e.g. ARC-DA) and more incomplete rationales on others (e.g. Musique). We observed that BLOOM was generally more prone to generating falsehoods. Examples from both models may be found in Appendix F.2. We note that robust examination of rationale quality is presently challenging to perform and believe research into automated methods in this area represents a promising future direction.
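The two prompt formats quoted above can be assembled as follows. This is a minimal sketch; the separators and the exemplar count are assumptions on our part, with real examples given in Appendix F.1:

```python
def stablevicuna_cot_prompt(exemplars, question):
    # exemplars: list of (question, rationale, answer) triples
    blocks = [f"### Human: {q}\n### Assistant: {r}. So the answer is {a}."
              for q, r, a in exemplars]
    blocks.append(f"### Human: {question}\n### Assistant:")
    return "\n\n".join(blocks)

def bloom_cot_prompt(exemplars, question):
    blocks = [f"Q: {q}\nA: {r}. So the answer is {a}." for q, r, a in exemplars]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)
```

Greedy decoding is then run for up to 128 new tokens, with the rationale and final answer split on the literal phrase "So the answer is".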
6.2.2 Retrieval

For our "retrieval" knowledge source, as noted, we simply reuse contexts previously generated by the Iterator for the experiments described in Chapter 5, both for each evaluation sample and for the creation of RATD datasets for the training regimes. As a reminder, Iterator-generated contexts are formatted as a list of paragraph fragments that are recovered from the top-scored sentences, each prepended by the title of the corresponding document and containing the top-scoring set of sentences along with preceding and successor sentences where these exist. The top-scored sentences are identified by taking the Evidence Set from the top-scored hop. Contexts contain as many fragments as will fit into a 512-token sequence length. They are semi-structured as follows:

[Doc 1 title]: [One to three sentences from a document 1 paragraph]. [Doc 2 title]: ...

6.2.3 Rationale Ranker

Our RR model takes a question and context pair as input ⟨qi, ci⟩ and produces a score si. It is trained with a binary cross-entropy objective, where samples are labelled 1.0 if ci is truthful and fully evidential in answering qi, or 0.0 otherwise. The model is trained on a mixture of existing datasets for which we acquire or construct positive ci (i.e. a set of relevant and truthful gold sentences that are sufficient to answer qi), and negative ci (which omit some or all gold sentences and may be irrelevant, false or both with respect to qi answerability). We used shared normalization (Clark and Gardner, 2018) such that each qi is sampled in the same batch paired with a positive and, separately, a negative ci. We found that without shared normalization, model training would collapse and it would predict every ci as negative. This may have occurred because, without seeing positive and negative ci for the same qi in the same batch, the pattern to be learned is insufficiently signalled.
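A minimal sketch of this shared-normalisation batch construction, assuming hypothetical field names rather than the actual data format, is:

```python
import random

# Each question contributes one positive and one negative context to the
# same batch, so the model always sees both sides of a pair together.
def shared_norm_batches(samples, pairs_per_batch=8, seed=42):
    rng = random.Random(seed)
    rng.shuffle(samples)
    for i in range(0, len(samples), pairs_per_batch):
        batch = []
        for s in samples[i:i + pairs_per_batch]:
            batch.append((s["question"], rng.choice(s["positives"]), 1.0))
            batch.append((s["question"], rng.choice(s["negatives"]), 0.0))
        yield batch   # 2 * pairs_per_batch (question, context, label) triples
```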
Training Mixture | Positive Contexts (Count: Construction Methods) | Negative Contexts (Count: Construction Methods)
Creaka (Commonsense) | 10173: Creak factsa | 81408: LLM-sampled
HotpotQAb (Multi-hop factual) | 34304: R4C factsg, Iterator-like, Rationale-like | 41839: LLM-sampled, LLM-greedy, Iterator-like, Rationale-like
FEVERc (Single-hop factual) | 60986: Eraser factsh, Iterator-like, Rationale-like | 121427: LLM-sampled, Iterator-like, Rationale-like
QASCd (Multi-choice science) | 47830: QASC factsd, eQASC factsi | 193214: LLM-sampled, LLM-greedy
ARCe (Multi-choice science) | 6469: WorldTree factsj | 24492: LLM-sampled, LLM-greedy
Hoverf (Multi-hop factual) | 28171: Iterator-like, Rationale-like | 28171: Iterator-like, Rationale-like
Total | 187933 | 490551

Table 6.1: RR model training dataset composition. The construction methods denoted "... facts" involve creating rationales from gold sentences or structured triples sourced from the cited study. Iterator-like and Rationale-like contexts are constructed from the training datasets' gold (and associated negative) paragraphs. LLM-sampled and LLM-greedy contexts are negative rationales generated by BLOOM using nucleus sampling and greedy decoding respectively. a Onoe et al. (2021); b Yang et al. (2018); c Thorne et al. (2018); d Khot et al. (2020); e Clark et al. (2016, 2018); f Jiang et al. (2020); g Inoue et al. (2020); h DeYoung et al. (2020); i Jhamtani and Clark (2020); j Xie et al. (2020).

Since the model must score both rationale-style ci and Iterator-generated ci on the same scale, we develop training samples that are similar to both types. Obtaining positive ci for training questions is generally straightforward. These are constructed from gold sentences and paragraphs associated with each dataset. Negative ci that cover both irrelevance and falsehood are harder to obtain. We construct negative ci by two methods: (1) generating them from BLOOM using specially constructed few-shot prompts containing examples of negative rationales (e.g. Appendix F.3), and (2) creating them synthetically by substituting gold sentences with negative ones, using datasets such as HotpotQA that come with sentence-level annotations. The synthetic method can only produce irrelevant negatives, whereas the LLM-generating method produces both irrelevant and false rationales. For LLM generation we use both greedy decoding and nucleus sampling (Holtzman et al., 2019) to create negatives. We find that greedy decoding produces positive-appearing but negative samples, although the LLM also has a tendency to produce accidentally positive rationales, which we must filter out1. Nucleus sampling, by contrast (temperature=0.95 and p=0.96), produces a diversity of false and irrelevant samples that are less likely to be accidental positives. However, here falsehoods tend to have an exaggerated quality which could make them less adversarial for the model, so we create samples via both decoding methods (examples in Appendix F.4). Dataset construction is summarised in Table 6.1. We employ diverse combination methods involving the trained RR model scores to create contexts for our evaluation datasets that combine rationales and Iterator-generated contexts, as described in Section 6.3.2.

1 We eliminate rationales where the stemmed text contains the stemmed answer string, excepting samples with yes/no labels. We use the snowball stemmer from NLTK (Bird et al., 2009).

6.2.3.1 Rationale Ranker Evaluation

Our RR development set consists of 89,470 samples taken from the respective development splits of our training datasets. Contexts are created using the same methods as illustrated in Table 6.1 for corresponding training splits. We sample a single positive or negative context for each development question such that there are equal counts of positive and negative contexts. As shown in Table 6.2, accuracy is high in this in-domain setting.

Positive Context | Negative Context | Total
91.5 | 93.0 | 92.3

Table 6.2: RR model Accuracy on the in-domain development set (score threshold t = 0.5). Total is micro-accuracy. High accuracy is attainable in detecting both positive and negative contexts.

Model | TruthfulQA MC1
GPT-4 RLHFa | 60.0
GPT-3.5 RLHFa | 47.0
GPT-4 No RLHFa | 30.0
GPT-3 175Bb | 21.0
GPT-J 6Bb | 20.0
UnifiedQA 3Bb | 19.0
Iterator Paragraph Reranker 335Mc | 18.2
Rationale Ranker 335M (Ours) | 30.0

Table 6.3: Accuracy in detecting falsehoods on TruthfulQA MC1. The RR model is better at detecting falsehoods than the Iterator Paragraph Reranker, which was trained to detect relevance but not falsehood. Its performance is competitive with or better than much larger models that have not been trained using RLHF. a OpenAI (2023); b from Lin et al. (2022) Github repository; c Model described in Chapter 5.

Turning to an unseen setting, we initially evaluate context relevance scoring with a five-way multi-choice relevance detection dataset that we create from the gold rationales supplied with StrategyQA (SQA), where the four incorrect options are simply randomly assigned rationales from other SQA questions (we use SQA since it is not part of RR model training). Here our model achieves 91.4% accuracy. A more interesting question is the extent to which our relatively small RR model is capable of detecting falsehoods in an unseen setting. To evaluate this question we consider TruthfulQA (Lin et al., 2022), an adversarial evaluation-only dataset of 817 questions that models and/or humans tend to answer falsely. In Table 6.3 we compare falsehood
To evaluate this question we consider TruthfulQA (Lin et al., 2022), an adversarial evaluation-only dataset of 817 questions that models and/or humans tend to answer falsely. In Table 6.3 we compare falsehood 71 detection performance of the RR model with various larger models and in particular with the Iterator Paragraph Reranker. We treat the Paragraph Reranker as representative of models specifically trained to score context relevance but that have not necessarily been trained to consider truthful- ness. We utilise the TruthfulQA MC1 split which is formatted as 4-5 way multi-choice with one truthful option. Each option is scored independently of other options and the highest-scoring selected as the prediction. In the case of LLMs the score is calculated as the log-probability of the comple- tion following the question. For the Paragraph Reranker and our RR model we use the score that each model has been trained to compute. It can be seen that the RR model is indeed much better at detecting falsehoods than the Paragraph Reranker and it’s performance is competitive or better than much larger models that have not been trained using RLHF. We imagine the superior performance of LLMs trained with RLHF on falsehood detection is due to their associated large reward models, like our RR model, being trained in part to rate samples making false assertions as undesirable. 6.2.4 Reasoning Models We consider three Reasoning Models in our experiments. The first, which we use as a baseline, is the unmodified “Base+RATD” model from Chap- ter 5 which we denote here as the RATD model for brevity. For descriptive purposes, we divide the datasets used in training the RATD model into two sets. The first are the RATD datasets described in Section 6.2.2, whose pur- pose is to confer an ability to reason over long, noisy, and partially evidential contexts. We denote the remaining large number of training datasets as the Common set; these broadly cover tasks designed to instill simple numeri- cal literacy, and diverse question-answering ability. Hence we say that the RATD model is trained on Common ∪ RATD datasets. We create an additional set of training samples denoted GR (for “gold rationales”). These are intended to impart further ability to reason over rationale-form contexts. GR consists of samples for Creak, QASC, ARC, HotpotQA, and FEVER where the contexts are gold rationales constructed similarly and from the same sources as those described for the RR model training dataset in Table 6.1. 72 We then develop our two main Reasoning Models, both multitask-trained using the same two-stage approach and hyperparameters as the original RATD model: The GR model is trained on Common ∪ GR, and the GR+RATD model is trained on Common ∪ GR ∪ RATD. 6.3 Experiments We utilise the same unseen evaluation datasets as previously described in Section 2.3 excepting DROP which we omit for brevity since it does not require any additional knowledge beyond what is supplied. We use the same metrics for each dataset as we did in Chapter 5 (see Section 5.3). 6.3.1 Models The Rationale Ranker is built upon ELECTRA-large (Clark et al., 2020a). Reasoning Models are based on BART (Lewis et al., 2020a). All models use the the Huggingface (Wolf et al., 2020) implementations. The Reasoning Models differ only in their respective training data; hyperparameters are otherwise identical. 
6.3.2 Context Combination Methods and Experimental Nomenclature

For each unseen evaluation question, given a LLM-generated rationale and an Iterator-generated context as possible combined context components, and RR model scores for each, we evaluate methods of combining components. We implement four combination methods (sketched in code below) and create versions of our unseen evaluation datasets with combined contexts for each, as follows:

Naïve Concatenation: The simple concatenation of a rationale and corresponding Iterator-generated context with the above form. RR model scores are ignored.

Max Score: Choosing the single component that the RR model scores highest.

RationaleDefault: Defaulting to taking the rationale component unless the Iterator component scores over a threshold t, in which case it is exclusively selected.

EitherOrBoth: Selecting either or both components that score over a threshold t. If neither component is selected, we default to the Naïve Concatenation context, since smaller Language Models have been shown to be ineffective for answering unmemorized question-only (open domain) questions (Lewis et al., 2021).
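The four rules can be stated compactly in code. This sketch is illustrative only; the exact formatting, including the "Further Explanation" pseudo-title noted in the Figure 6.2 caption below, follows our own assumptions:

```python
def naive_concatenation(c1, s1, c2, s2):
    # Rationale first, titled to mimic a document, then the retrieved context.
    return f"Further Explanation: {c1} {c2}"

def max_score(c1, s1, c2, s2):
    return c1 if s1 >= s2 else c2

def rationale_default(c1, s1, c2, s2, t):
    return c2 if s2 > t else c1   # rationale unless retrieval scores over t

def either_or_both(c1, s1, c2, s2, t):
    picked = [c for c, s in ((c1, s1), (c2, s2)) if s > t]
    if len(picked) != 1:          # both or neither: fall back to concatenation
        return naive_concatenation(c1, s1, c2, s2)
    return picked[0]
```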
For the latter two combination methods we create contexts using each of eight RR score thresholds ranging from t = 0.0005 to t = 0.9. We denote the particular version using the threshold, e.g. EitherOrBoth(0.9) means samples are augmented using the EitherOrBoth method with t = 0.9. Innumerable other combination methods are obviously possible, but we find that this set is sufficient for our research purposes while remaining manageable. Figure 6.2 illustrates examples of contexts derived from each combination method using hypothetical RR scores. Combined contexts are truncated (from the Iterator component) to the maximum sequence length of the model (512 tokens) at inference time.

Figure 6.2: Examples of combining contexts. For a question Q, we acquire two contexts, C1 and C2. The resulting combined context for our combination methods with example thresholds and RR model scores is then shown, where "+" denotes the concatenation of C1 and C2. The Naïve Concatenation is always C1 + C2. Formatted examples of resulting contexts are shown with titles in bold for readability. The phrase "Further Explanation" is added to the rationale in a concatenated context to mimic a document title.

Each of our three Reasoning Models might be expected to perform better with particular context types. For example, the GR model might do better where the context tends to be rationale-like, whereas the RATD model may do better where the context is of Iterator-generated form. This influences which combination method is likely to perform better on each Reasoning Model. Similarly, different combination methods are likely to work better for differing question types (commonsense, multi-hop factual, etc). For example, knowing that LLM-generated rationales tend to be more effective than Iterator-generated contexts for answering commonsense questions, we can deduce that RationaleDefault(0.9) is likely to be a good strategy for developing contexts for CommonsenseQA, because using this strategy results in Rationale-only contexts except where the Iterator context is scored very highly. However, we are interested in the situation where our model is presented with an arbitrary question of unknown type. Hence we are more interested in finding combination methods that will generally work well under this assumption, even where the method may not be the best for any particular type. We identify combination methods satisfying this criterion as those with the highest unweighted macro-average score over our unseen evaluation datasets (henceforth "Mean" or "Mean score") on each Reasoning Model, taking inspiration for averaging over heterogeneous metrics from e.g. Wang et al. (2019b,a). For the methods that utilize RR model scores we select the highest performing on this measure and refer to it as "Generally best RR combo" below. We also report the "Best RR combo per dataset", where we select the highest scoring combination method for each evaluation dataset. We note that since we cannot use this approach on an arbitrary question of unknown type, we do not consider it a usable method in a truly unseen setting, although future work could remedy this (e.g. through utilising an additional model trained to predict the best combination method for a question).

We refer below to contexts created for each evaluation dataset that consist entirely of Iterator-generated contexts as "Iterator only", those contexts entirely composed of LLM-generated rationales as "Rationale only", and those that apply any of the combining methods as "Rationale + Iterator" (noting that individual samples in the latter may contain only one of the possible context components). For brevity, where referring to the use of a particular context type on a particular model we use shorthand such as "GR+RATD: Iterator only" or "GR+RATD: Iterator + Rationale (Naïve Concatenation)". To test statistical significance over the large number of model:context combinations created, we use methods described in Demšar (2006) as implemented in the AutoRank library (Herbold, 2020). Specifically, all tests use significance level α = 0.05 and we use the non-parametric Friedman test as omnibus test, followed by the Nemenyi test to infer which differences are significant. Generally our key findings are significant, as highlighted in the following section. All significance test results are summarised in Appendix G.3.

6.3.3 Experimental Results

As Table 6.4 indicates, rationales generated by BLOOM almost always produce weaker results than those from StableVicuna. For example, considering BLOOM-generated "Rationale only" contexts, the GR model might have been expected to outperform the RATD model (given the additional samples with gold rationale contexts added to GR training). However, the GR model actually underperforms (39.5 vs 42.0). Conversely, considering StableVicuna-generated "Rationale only" contexts, the GR model slightly outperforms the RATD model as expected.

6.3.3.1 GR+RATD Model Versus Baseline And LLM Direct Prompts

It can be seen in Table 6.4 that where using the stronger StableVicuna-generated rationales, the GR+RATD model results dominate both RATD and GR models, so we consider this as our best model.

Rationale Generator → | StableVicuna (INT8) | | | BLOOM (INT8) | |
Context ↓ / Model → | GR | RATD | GR+RATD | GR | RATD | GR+RATD
Iterator only | 38.1 | 40.4 | 41.0 | 38.1 | 40.4 | 41.0
Rationale only | 44.5 | 44.2 | 45.3 | 39.5 | 42.0 | 40.3
Rationale + Iterator (Naïve concatenation) | 42.7 | 46.3 | 47.2 | 43.2 | 43.8 | 43.7
Rationale + Iterator (Generally best RR combo) | 45.5 | 46.3 | 47.2 | 42.9 | 44.2 | 44.4
Rationale + Iterator (Best RR combo per dataset) | 47.6 | 47.5 | 48.1 | 45.1 | 45.6 | 45.4

Table 6.4: Mean score over unseen evaluation datasets.
The "Iterator only" results are duplicated across Rationale Generators to facilitate comparison. Bold indicates highest score per context type (i.e. per row). StableVicuna-generated rationales generally outperform BLOOM rationales.

Table 6.5 compares GR+RATD to our main baseline, "RATD: Iterator only". Both our "Naïve concatenation" and "Generally best RR combo" combination methods significantly outperform this baseline on the Mean score and on most individual datasets, except for Musique.

Model: Context | SQA (Acc.) | CSQA (Acc.) | ARC-DA (F1) | IIRC (F1) | Musique (F1) | Mean
Random | 50.0 | 20.0 | – | – | – | –
Best Prior | 90.4a | 91.2b | 61.4c | 53.6d | 49.8e | 69.3
RATD: Iterator only | 58.9 | 63.6 | 31.6 | 25.5 | 22.2 | 40.4
BLOOM INT8: Few Shot Standard Prompt | 58.1 | 47.5 | 58.7 | 17.3 | 9.4 | 38.2
StableVicuna INT8: Few Shot Standard Prompt | 56.2 | 70.8 | 56.8 | 19.8 | 9.3 | 42.6
BLOOM INT8: Few Shot COT Prompt | 57.1 | 54.9 | 50.5 | 17.4 | 11.1 | 38.2
StableVicuna INT8: Few Shot COT Prompt | 61.7 | 67.7 | 45.8 | 20.8 | 12.6 | 41.7
GR+RATD: Iterator only | 57.3 | 65.0 | 35.6 | 25.6 | 21.5 | 41.0
GR+RATD: Rationale only | 64.2 | 73.1 | 50.2 | 25.1 | 13.8 | 45.3
GR+RATD: Rationale + Iterator (Naïve concatenation) | 61.7 | 72.6 | 53.0 | 27.0 | 21.7 | 47.2
GR+RATD: Rationale + Iterator (Generally best RR combo) | 61.7 | 72.7 | 52.1 | 27.3 | 22.0 | 47.2
GR+RATD: Rationale + Iterator (Best RR combo per dataset) | 64.5 | 73.3 | 53.0 | 27.4 | 22.4 | 48.1

Table 6.5: Evaluation per dataset. The "Rationale + Iterator" combined contexts significantly outperform the "RATD: Iterator only" baseline and both single-component contexts. The "Rationale only" row using StableVicuna-generated rationales significantly outperforms the StableVicuna COT direct prompt. Bold indicates best in column, excluding Best Prior and Best RR combo per dataset. Best prior are either not unseen or involve much larger models, as follows: a Anil et al. (2023): Palm 2 using self consistency. b Xu et al. (2021): Finetuned, retrieval from Conceptnet. c Bhakthavatsalam et al. (2021): Training includes ARC-DA. d Ours: Finetuned (see Chapter 5). e Trivedi et al. (2022a): Specialised retrieval from gold and distractor paragraphs.

We next consider the efficacy of directly prompting both LLMs to produce the answer using few-shot COT exemplars, and separately with standard few-shot prompts that use the same exemplars without the rationale portions. Here, the most like-for-like comparison is from the StableVicuna COT prompt to "GR+RATD: Rationale only", since the rationales used are the same ones produced by the direct StableVicuna COT prompts. For the StableVicuna COT prompt (and both BLOOM prompts), "GR+RATD: Rationale only" significantly outperforms the LLM direct prompts on the overall Mean score, and generally on individual datasets (except for ARC-DA). The 42.6 to 45.3 Mean improvement is not significant for the StableVicuna Standard prompt.

In comparing performance of our combined contexts ("Naïve concatenation" and "Generally best RR combo") to the single-component contexts ("Iterator only" and "Rationale only"), both combined contexts achieve a higher Mean score than either single-component context does. Improvement from "Iterator only" is significant in both cases; that from "Rationale only" to "Naïve concatenation" is significant, while the other is on the significance threshold (Appendix G.3). Notably, three of the five datasets (ARC-DA, IIRC and Musique) have higher scores on either combined context than on any single component context as well.
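As noted in Section 6.3.2, significance over many model:context combinations is assessed with the Friedman omnibus test followed by a Nemenyi post-hoc test via AutoRank. A minimal sketch of the omnibus step, using illustrative per-dataset scores from Table 6.5, is:

```python
from scipy.stats import friedmanchisquare

# One score per evaluation dataset (SQA, CSQA, ARC-DA, IIRC, Musique)
# for three GR+RATD context configurations, taken from Table 6.5.
scores = {
    "Iterator only":       [57.3, 65.0, 35.6, 25.6, 21.5],
    "Rationale only":      [64.2, 73.1, 50.2, 25.1, 13.8],
    "Naive concatenation": [61.7, 72.6, 53.0, 27.0, 21.7],
}
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman chi-squared = {stat:.2f}, p = {p:.4f}")  # compare p to alpha = 0.05
```

In our experiments the test is run over the full set of combinations; this fragment only illustrates the mechanics of the omnibus step.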
Considering the "Iterator only" against the "Rationale only" rows in Table 6.5 illuminates the relative strengths of our two knowledge sources. Multi-hop factual questions, as exemplified in Musique, benefit far more from retrieved paragraphs than LLM-generated rationales (21.5 F1 vs 13.8 F1), whereas commonsense datasets such as SQA (64.2 acc vs 57.3 acc) and CSQA (73.1 acc vs 65.0 acc) unsurprisingly benefit more from LLM-generated rationales as context. IIRC, another factual dataset, might have been expected to benefit more from retrieved paragraphs, but performance is similar between rationale-only contexts and retrieved paragraphs. We suggest this is because the input for each IIRC sample is comprised of the question and the initial gold paragraph, and many samples then only require a single extra piece of information in order to have sufficient evidence. LLMs may be better at performing (the equivalent of) this single hop than they are at identifying the multiple additional pieces of information necessary in the Musique case.

6.3.3.2 RR Model Scoring And RATD Training Efficacy

We next evaluate the effectiveness of our methods through an ablational approach. The GR model can be regarded as an ablation of RATD training from the GR+RATD model (-RATD). The Naïve concatenation context type can be seen as an ablation of RR Model scoring from the Generally best RR combo (-RR). Hence our "GR: Rationale + Iterator (Naïve concatenation)" model can be seen as an ablation of both (-RR -RATD), while being (insignificantly) better than the main "RATD: Iterator only" baseline (42.7 vs 40.4). Table 6.6 illustrates the relative efficacy of our two methods, both individually and together. What is revealed is that the RR model-scoring approach significantly improves Mean results in the absence of RATD training (45.5 vs 42.7), while the RATD training significantly improves results in the absence of RR scoring (47.2 vs 42.7). The difference between the two methods (45.5 vs 47.2) is not significant.

Model: Context | Methods | Mean
GR+RATD: Rationale + Iterator (Generally best RR combo) | +RR +RATD* | 47.2
GR+RATD: Rationale + Iterator (Naïve concatenation) | -RR +RATD* | 47.2
GR: Rationale + Iterator (Generally best RR combo) | +RR -RATD* | 45.5
GR: Rationale + Iterator (Naïve concatenation) | -RR -RATD | 42.7

Table 6.6: RATD and RR effectiveness. The bottom row can be regarded as an ablation of both RR and RATD (-RR -RATD). All three topmost methods (marked with an asterisk) are significantly different from the bottom row (-RR -RATD); however, differences between the three topmost methods are not significant. This shows that the RR and RATD methods are individually both effective, but combining the methods does not improve results further.

Using the two methods in combination does not improve results further. The "Generally best RR combo" for the GR+RATD model uses the EitherOrBoth(0.9) combination method. This can be interpreted as only selecting a context component if the RR model scores it very highly, and since both components frequently fail to meet the threshold, the default of using the Naïve concatenation then applies. This has the effect of the context being the Naïve concatenation for 80.9% of evaluation samples (Appendix H.5), which explains why combining the RATD and RR methods does not result in further improvement in this case.

6.4 Conclusion

We have implemented methods for combining explanatory context from two knowledge sources: LLM-generated rationales and retrieved paragraphs from
The first method involves training our smaller Reasoning Model on RATD datasets such that it becomes proficient at reasoning over long, noisy contexts which contain information from both knowledge sources. The second method is to use Rationale Ranking model scores for each knowledge source as guidance in constructing contexts that may contain information from both, or either knowledge source. We have shown that both methods are individually effective in significantly improving unseen question-answering performance both versus the baselines established in Chapter 5, and versus a baseline that ablates both RR and RATD methods (Section 6.3.3.2). We have shown that smaller Language Models trained to reason can manifest comparable or stronger performance on unseen questions to LLMs, when provided with the same knowledge to reason over that the LLM is capable of generating for itself. (Section 6.3.3.1). After comparing results from question-answering using LLM-generated rationales as context with those using retrieved paragraphs we concluded that LLMs are weaker at surfacing the multiple pieces of information nec- essary to answer multi-hop factual questions, but stronger at generating rationales suitable for answering commonsense questions. Both knowledge sources are found to be effective for question types such as factual questions requiring a single additional piece of information (Section 6.3.3.1). In comparing performance of our combined contexts to the single- component contexts, the combined contexts achieve a higher Mean score over all unseen evaluation datasets than either single component context does. Individually, three of the five datasets (ARC-DA, IIRC and Musique) achieve higher scores when using combined contexts than on any single com- ponent context as well (Section 6.3.3.1). 80 7 Conclusion Inspired by the ability of pretrained LLMs to successfully answer a diversity of question types for which they have not been explicitly trained for, but motivated by a desire to explore what is possible in this regard under lower resource assumptions, we initially evaluated whether significantly smaller Language Models have a material capacity to generalise beyond rote memo- risation of training data. We followed the positive finding from this study by establishing a set of strong baseline results against diverse unseen evaluation datasets for which comparisons against prior work are available. We then explored diverse methods for improvement from the baselines. We review our achievements and contributions in Section 7.1, discuss limitations in Section 7.3 and provide potential avenues for future research, beyond improving the proposed models, in Section 7.4. 7.1 Summary of Contributions We summarise our contributions as follows: In Chapter 4 we proposed a combination of a method for determining train-evaluation overlap and a method for “intervening” with additional training datasets to determine memorisable and unmemorisable evaluation samples. Taken together these methods avoided prior experimental weak- nesses of (1) inability to control for pretraining data, (2) needing to compare 81 performance between different sets of “clean” and “dirty” samples, and/or (3) inability to detect discontinuous memorisable sequences. We showed that a smaller Language Model is capable of reasoning over an unseen question and context to successfully answer challenging questions that it is unlikely to have memorised at any point in it’s training history. 
Chapter 5 introduced a set of baselines for performance on challenging unseen compositional questions, which we established by training our Reasoning Model on a set of 79 tasks encompassing both existing datasets and those we developed or modified. We proposed the Iterator, our n-hop dense retrieval system that incorporates a novel Evidence Set Scoring model into the reranking stages. We used the Iterator in developing novel RATD training datasets that are intended to impart diverse reasoning strategies, such as an ability to identify and weigh partially evidential facts in long, noisy contexts. We added RATD datasets to the training mixture and showed that this, along with augmenting evaluation questions with a retrieved context, significantly improved performance against our baselines.

In Chapter 6 we presented a set of methods for combining the retrieval knowledge source developed in Chapter 5 with a second knowledge source consisting of rationales generated by larger Language Models. We explored a number of context combination strategies and showed that further significant improvement against the baselines was achievable using both the novel RR method and an adaptation of the RATD method. We showed that smaller Language Models trained for reasoning can manifest comparable or stronger performance on unseen questions to a LLM, when provided with the same knowledge to reason over that the LLM is capable of generating for itself. We also identified and discussed the strengths and weaknesses of each knowledge source with respect to the different types of questions encapsulated in each of our baselines.

7.2 Contributions

Here we present a more detailed listing of contributions:

1. We demonstrated that a smaller Language Model is capable of performance beyond simple memorisation in deriving correct answers to challenging compositional questions. To achieve this we proposed a method of identifying overlap between evaluation and training samples based upon semantic similarity of input and output tokens. We utilised this approach in conjunction with a technique to intervene with additional training datasets to create a Reasoning Model versus a baseline Reasoning Model with no intervention. Our approach enabled us to mitigate effects of pretraining on results and to avoid comparing disparate populations of evaluation subsets as some prior studies have done. After demonstrating the effectiveness of our methods in identifying both memorisable and unmemorisable samples, we were able to show that improved performance on unmemorisable samples is not attributable to the effect of memorisation.

2. We offer what is to our knowledge the most comprehensive set of baselines evaluating smaller Language Model zero-shot reasoning abilities versus LLM and other approaches published to date. Here our baseline (Base) is a multitask-trained Reasoning Model that is trained in two stages on a large number of tasks, both existing and those that we develop.

3. We proposed the "Iterator", a dense retrieval, reranking and evidence set scoring system that aims to identify the relevant n documents necessary to answer n-hop questions, where n is arbitrary but we use n = 4.

4. We used the Iterator against a corpus of English Wikipedia paragraphs both to develop contexts for unseen evaluation questions and to develop retrieval-augmented training datasets (RATD) which were added to the existing Base training regime in training the Base+RATD
4. We used the Iterator against a corpus of English Wikipedia paragraphs both to develop contexts for unseen evaluation questions and to develop retrieval-augmented training datasets (RATD), which were added to the existing Base training regime in training the Base+RATD model. RATD datasets are intended to impart diverse reasoning strategies, such as an ability to identify and weigh partially evidential facts in long, noisy contexts. We showed that when used in conjunction with our retrieval-augmented evaluation samples, the Base+RATD model significantly outperformed the Base model on the established baselines.

5. We evaluated methods for combining information from two knowledge sources to develop contexts that are more helpful in answering questions. The first knowledge source was the above Iterator with Wikipedia, while the second involved rationale generation from larger Language Models that were optimised to run locally in a resource-constrained environment. We proposed “Rationale Ranking” (RR), a method that both selects context components by relevance and filters components that may be false. This was accomplished by training a Rationale Ranking model to score LLM-generated rationales and Iterator-generated contexts for truthfulness, in addition to the more common practice of quantifying relevance. A number of strategies were then evaluated for using the resulting scores to develop contexts that combine information from both knowledge sources. We showed that the RR method significantly outperforms the earlier Base+RATD baselines. We also showed that models trained using the earlier RATD training method were able to generalise sufficiently such that they can successfully utilise combined contexts both in isolation from, and in conjunction with, RR scoring.

6. We showed that smaller Language Models trained for reasoning can manifest performance on unseen questions comparable to or stronger than that of LLMs, when provided with the same knowledge to reason over that the LLM is capable of generating for itself.

7. We presented evidence to illustrate the respective strengths and weaknesses of LLMs and n-hop retrieval from a Wikipedia corpus as knowledge sources. The LLM tended to offer better performance when considering questions requiring commonsense knowledge (e.g. “I’m crossing the river, my feet are wet but my body is dry, where am I?”). Retrieval from the Wikipedia corpus tended to be better at extracting knowledge necessary to answer n-hop factual questions where n is higher than two (e.g. “The Rhine forms a border between Aschenbrödel’s composer’s country and another country where women got the vote when?”). Moreover, we showed that combining information from these sources significantly improved the average performance over evaluation datasets versus using a single source, and on individual evaluation datasets the combined context performance was often beyond what either knowledge source in isolation could deliver.
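To make the Iterator of contribution 3 concrete, the sketch below shows the shape of its n-hop loop. The retrieval, reranking, sentence-selection and evidence-set scoring components are passed in as callables assumed to wrap the trained models described in Appendix D; the function names, the sufficiency check and its 0.5 threshold are hypothetical.

    # Schematic n-hop retrieval loop; component callables are assumed to
    # wrap the trained Retriever, Reranker and Evidence Set Scorer.
    def run_iterator(question, retrieve, rerank, select_sentences,
                     score_evidence_set, n_hops=4, k=60):
        evidence = []                                  # (title, sentence) pairs kept so far
        query = question
        for _ in range(n_hops):
            paragraphs = retrieve(query, k=k)          # dense retrieval over the corpus
            scored = rerank(query, paragraphs)         # paragraph and sentence relevance
            evidence = select_sentences(scored, evidence)     # thresholded, max five sentences
            if score_evidence_set(question, evidence) > 0.5:  # assumed stopping criterion
                break
            # the next-hop query appends selected evidence, echoing the
            # "question [QSEP] title | sentences" encoding of Appendix D.2
            query = question + " [QSEP] " + " [QSEP] ".join(
                f"{title} | {sent}" for title, sent in evidence)
        return evidence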
7.3 Limitations

Although we consider our contribution to be a promising start, we encountered a number of areas where further exploration may result in further material improvement. These are summarised as follows:

■ Additional or alternative knowledge sources. The evaluation and inclusion of other knowledge sources (and/or access methods) could yield further benefit, both in terms of improving the sufficiency of explanatory contexts, and in terms of lowering the resource requirements for the knowledge acquisition component. For example, Huang et al. (2023) and others previously have augmented questions through retrieval from a knowledge graph. This could offer a useful and resource-friendly addition to our existing set of knowledge sources.

■ Context combination selection using question type. In Chapter 6 we noted that choosing the best context combination method per dataset produced superior results. This is analysed further in Appendix H.5. We discounted this approach in our setting as it requires prior knowledge of the questions. However, training a model to detect question types and using this information to choose a context combination strategy on a per-question basis seems likely to produce further benefit.

■ Numerical literacy in unseen settings. We identified in Chapter 5 that while applying existing training datasets aimed at imparting numerical reasoning strategies is effective in finetuned settings, it is far less so for unseen questions. Further study of this phenomenon is likely to be fruitful, whether considering the creation or identification of extremely diverse training datasets, or in evaluating further external tool integration.

■ Zero-shot retrieval. To equip the Iterator retrieval component with an ability to retrieve for arbitrary queries we trained it in a multitask fashion on a mixture of multihop training datasets that have sentence-level annotation. While effective, it seems likely that additional pretraining in self-supervised fashion on large corpora (discussed in the final paragraph of Section 3.2) would reduce the reliance on expensive annotation and perhaps further improve the ability of the Iterator to operate with diverse queries.

■ Automated Context Quality Evaluation. As noted in Section 6.2.1, our purpose in acquiring explanatory context is to improve question-answering performance, and hence our primary measure of context quality is the resulting improvement. Noting some existing research into automated methods of falsehood detection (discussed in Section 3.5), it is possible that some of these approaches are extensible to the more general problem of evaluating context quality along dimensions of (degrees of) sufficiency, necessity and clarity, in addition to truthfulness. Relating these insights to question-answering performance could yield insights into what models find “useful” in a context, and hence point to improvement opportunities for RATD datasets, Evidence Set scoring, rationale generation and construction of even more effective combined contexts.

7.4 Outlook

Remediating the previously identified limitations would be a direct continuation of this work. Beyond that, we ask the reader’s indulgence as we exercise our imagination in considering what the methods we have explored in our work might be more distantly extended towards:

■ Beyond textual question-answering. Our methods are broadly aimed at equipping smaller models to “noisily reason” in the face of partial information and distractions obtained by combining information from multiple knowledge sources in a purely textual environment. Evaluation of the prospects for extensibility of our methods into multi-modal situations in addition to pure text, such as visual, auditory or other sensory information, seems a natural path to explore. This could be in the context of integrating a noisy reasoning function into an embodied agent, and/or in the exploration of a role for partially observable, noisy, multi-modal information in the reasoning process itself.

■ Relaxing experimental constraints. We have focused our experiments on evaluating what is possible to achieve with a smaller Language Model. It is not difficult to imagine a larger model that is further trained and equipped using our methods.
Such a model may be more proficient than our experiments here have shown general-purpose LLMs to be at performing the noisy reasoning function, while retaining the ability to be an effective knowledge source.

Appendices

A Hyperparameters

A.1 Hyperparameters (Chapter 4)

All models are trained on two Nvidia RTX8000 GPUs using 32-bit precision and a linear learning rate decay schedule that reduces the learning rate to zero over 250K training steps. Initial learning rates and other hyperparameters are shown in Table A.1. The optimiser used is AdamW. A maximum sequence length of 512 tokens was used for all models.

Model            Initial LR  Batch Size  Grad. Accum  Train Steps
UQA Models       2e-5        32          2            150K
UQA+TDND Models  2e-5        32          2            150K

Table A.1: Hyperparameters used for each model. Each training step is one batch input, i.e. the number of optimization steps is TrainingSteps / GradientAccumulationSteps. All final models are selected as the best model on the development sets over the specified number of training steps, and validation steps were performed every 10K training steps.

A.2 Hyperparameters (Chapters 5 and 6)

All models are trained on one GPU (either an Nvidia RTX8000 or an A100) except for the Retriever models, which are trained on six 80GB A100 GPUs. All models are trained using mixed precision with a linear learning rate decay schedule. Initial learning rates and other hyperparameters are shown in Table A.2. The optimiser used for the Retriever, Reranker, Evidence Set Scorer and Rationale Ranker is Adam. All other models use AdamW. All Stage 2 Reasoning Model training starts from the same Stage 1 checkpoint. A maximum sequence length of 512 tokens was used for all models.

Model                              Initial LR  Batch Size  Grad. Accum  Train Steps
Retriever                          2e-5        150         1            99K
Retriever+memory bank              1e-5        250         1            59K
Paragraph Reranker                 5e-5        12          8            140K
Evidence Set Scorer                5e-5        12          8            140K
Rationale Ranker                   5e-5        24          8            188K
Reasoning Model Stage 1            2e-5        32          4            1M
Reasoning Model Stage 2 Base       2e-5        32          4            1M
Reasoning Model Stage 2 Base+RATD  2e-5        32          4            1M
Reasoning Model Stage 2 GR         2e-5        32          4            1M
Reasoning Model Stage 2 GR+RATD    2e-5        32          4            1M
DROP finetuned                     2e-5        32          4            260K
IIRCG finetuned                    2e-5        32          4            40K
IIRCR finetuned                    2e-5        32          4            40K

Table A.2: Hyperparameters used for each model. Each training step is one batch input, i.e. the number of optimization steps is TrainingSteps / GradientAccumulationSteps. All final models are selected as the best model on the development set(s) up to the specified maximum number of training steps, and validation steps were performed every 10K training steps.

BLOOM loaded under INT8 with a batch size of one consumed approximately 200GB of GPU RAM. StableVicuna, also under INT8 with a batch size of one, consumed approximately 18GB.
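As a small illustration of the schedules above, the optimisation setup of Table A.1 might be assembled as follows for a PyTorch model; the transformers helper is one convenient way to obtain a linear decay-to-zero schedule, and is an assumption rather than the exact training script used.

    # Sketch: AdamW with linear LR decay to zero, per Table A.1.
    import torch
    from transformers import get_linear_schedule_with_warmup

    def make_optimizer(model, lr=2e-5, total_steps=250_000):
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
        scheduler = get_linear_schedule_with_warmup(
            optimizer, num_warmup_steps=0, num_training_steps=total_steps)
        return optimizer, scheduler

With the gradient accumulation of 2 shown in Table A.1, optimizer.step() and scheduler.step() fire once every two batches, so 150K training steps correspond to 75K optimization steps, consistent with the caption's TrainingSteps / GradientAccumulationSteps.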
B Reasoning Model Input Formats

We employed a simple and fixed input format based on that used in UnifiedQA (Khashabi et al., 2020b), with extensions as follows:

Open domain form: [question] \\n
Reading comprehension (RC) form: [question] \\n [context]
Multiple choice form: [question] \\n (A) [option text a] (B) [option text b] ...
Multiple choice with RC form: [question] \\n (A) [option text a] (B) [option text b] ... \\n [context]

Context formats:

Iterator only (also called “DatasetR” in Chapter 5): We standardised the formatting of any paragraphs or paragraph fragments that had associated document titles as follows. Further detail on how such contexts were constructed is in Section 5.2.2.

[Title 1]: [Sentences]. [Title 2]: [Sentences]. ...

Rationale only: [Sentences].

Naïve concatenation: Further Explanation: [Sentences]. [Title 1]: [Sentences]. ...

C Wikipedia Corpora

For experiments aimed at evaluating the Iterator components in an in-domain setting (Table 5.1), we used the same corpus of Wikipedia abstracts from October 1 2017 that HotpotQA and Hover are based upon. For our main experiments in Chapters 5 and 6, and for various peripheral tasks such as identifying negative paragraphs for retrieval training, we start with the August 1 2020 Wikipedia dump as preprocessed by Qi et al. (2021). We retain all paragraphs with more than seven words, and extract hyperlinks and calculate sentence offsets from each. There are a total of slightly over 35 million paragraphs. We note that all results in this thesis use the original HotpotQA question set rather than the question set version used in Qi et al. (2021) that has been normalised against this Wikipedia version.

D Iterator Training Details

D.1 Retrieval Model Additional Details

Our final Retrieval model was trained similarly to Xiong et al. (2021) in that, following the initial stage of training, additional training with a large memory bank (Wu et al., 2018) of negative paragraph embedding vectors was applied. For retrieval of paragraphs for RATD datasets, the number of paragraphs retrieved at each hop (k) was set to 60 so as to complete in reasonable time. In building unseen evaluation dataset contexts, k was arbitrarily set to 150 to maintain reasonable performance on queries that are very different to those used in retrieval training. We used FAISS (Johnson et al., 2019) for the search over paragraph embedding vectors. Generally we used an approximate search mechanism, HNSW (Malkov and Yashunin, 2018), except for the Hover experiment (Table 5.1) where an exact inner product search was employed.

D.2 Paragraph Reranker Model

The Reranker has an input format as follows:

[CLS] query [SEP] yes no [INSUFF] [SEP] title [SM] sentence 0. [SM] sentence 1. ... [SEP]

The query component is encoded as:

question [QSEP] title 1 | sentence 1. sentence 2. [QSEP] title 2 | sentence 1 ...

Special tokens are utilised as follows:

[CLS]: Trained using a one-layer head to be the paragraph relevance score with a binary cross-entropy objective.
[INSUFF]: Insufficient Evidence token, used by the start and end token span predictors that are implemented as per Devlin et al. (2019). Although we utilise a separate abstractive QA model, we use the span predictors as a debugging tool and retain this component in the final loss function.
[SM]: Sentence Marker(s). Used to score sentence relevance. Trained using a one-layer head with a binary cross-entropy objective.
[QSEP]: Query components separator.

The final training objective is the unweighted summation of the paragraph relevance loss, sentence relevance loss and span loss.

D.3 Evidence Set Scoring Model

This model has an input format as follows:

[CLS] question [SEP] yes no [INSUFF] [SEP] [SM] title 1 | sentence 1. [SM] title 1 | sentence 2. [SM] title 2 | sentence 1 ... [SEP]

Special tokens are utilised as follows:

[CLS]: Evidence Set score. Trained using a one-layer head with binary cross-entropy. The label is 1.0 if all of the gold sentences from all gold paragraphs are present and zero otherwise.
[INSUFF]: Insufficient Evidence token, as per the Reranker model.
[SM]: Sentence Marker, as per the Reranker model.

The final training objective is the unweighted summation of the evidence set loss, sentence relevance loss and span loss.
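Since the Evidence Set Scorer input is a flat string, constructing it from the pieces above is straightforward. The sketch below writes the special tokens out literally for readability, whereas in practice they would be registered with the model's tokenizer; the example question and evidence are invented.

    # Sketch: build the Evidence Set Scorer input from a question and an
    # evidence set of (title, sentence) pairs, per the format above.
    def build_evidence_set_input(question, evidence):
        body = " ".join(f"[SM] {title} | {sentence}" for title, sentence in evidence)
        return f"[CLS] {question} [SEP] yes no [INSUFF] [SEP] {body} [SEP]"

    example = build_evidence_set_input(
        "Which country does the Rhine form a border with?",
        [("Rhine", "The Rhine is a major European river."),
         ("Switzerland", "Switzerland borders the Rhine to the south.")])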
Following Khattab et al. (2021), the maximum number of sentences in an evidence set was set to nine in all experiments. To select the sentences for constructing the retriever query and evidence set for the next hop, a maximum of five sentences over a threshold are selected, also following Khattab et al. (2021). The minimum threshold used to select sentences is 0.1, unless fewer than two sentences qualify, in which case the two top-scoring sentences are taken.

E Reasoning Model Multitask Training Details

E.1 UQA and UQA+TDND Models (Chapter 4)

The UQA model is trained using the same datasets as used by Khashabi et al. (2020b). Our UQA+TDND model uses these plus TD and ND from Geva et al. (2020). Datasets and development set performance are enumerated in Table E.1.

Dataset                 UQA   UQA+TDND
narrativeqa             30.3  29.6
ai2_science_middle      62.4  60.0
ai2_science_elementary  65.0  61.0
arc_hard                49.5  49.2
arc_easy                64.0  65.8
mctest                  90.0  88.1
squad1_1                66.6  64.5
squad2                  68.7  68.5
boolq                   84.4  84.3
race                    77.9  75.6
openbookqa              65.0  64.8
synthetic_textual (TD)  –     89.6
synthetic_numeric (ND)  –     75.9

Table E.1: UQA and UQA+TDND Reasoning Model training datasets. All figures are Exact Match on full development sets from the single overall best model without per-dataset finetuning.

E.2 Base, Base+RATD, GR and GR+RATD Models (Chapters 5 and 6)

We trained both the first and the second stage of these four models for one million steps (batches), with the best model defined as that with the highest mean exact match accuracy over all development sets. To ensure reasonable elapsed time for each validation step we used reduced development sets, where development sets of more than 1250 samples were reduced to approximately 1250 by taking every nth sample with n = round(c/1250), where c is the sample count. A validation step occurs every 10,000 training steps.

Table E.2 enumerates datasets used in Stage 1 and in Stage 2 Group 1 (those above the dotted line were added for Stage 2, namely CREAK (Onoe et al., 2021), CommonsenseQA 2.0 (Talmor et al., 2021), TriviaQA (Joshi et al., 2017), Natural Questions (Kwiatkowski et al., 2019) and Twenty Questions (https://github.com/allenai/twentyquestions)). During Stage 1 training, error-based sampling for these datasets was employed and in Stage 2, uniform sampling. Dataset names containing the term “opendomain” only use the question text as input and are added with the primary aim of teaching the model about the expected form of answer for a given question type (e.g. yes or no for “Could an Aardvark use a knife and fork?”). Datasets preceded by “preasm” are as provided by Yoran et al. (2022), with reformatting into our standard form. Datasets preceded by “poetsql” are from the POET-SQL dataset kindly provided to us by the authors of Pi et al. (2022). We split POET-SQL into separate datasets based on the type of SQL statement and converted these into our standard form. For the “synthetic_num” datasets we extended the original code provided by Geva et al. (2020) to output in the variablised form proposed in Pi et al. (2022) (e.g. “1 + 3” becomes “x + y \\n x=1; y=3; z=0; ...” where z is a distractor). Additionally we added two datasets with questions of the form “Is x > | < | between y [and z]?” for numbers and dates respectively. We generated one million samples for each of the resulting eight datasets.
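The variablised form can be illustrated with a toy transform like the following; the variable names, the single distractor and the addition-only scope are illustrative simplifications of the actual generation code.

    # Sketch: convert a literal expression such as "1 + 3" into the
    # variablised form "x + y \n x=1; y=3; z=0; ..." with z a distractor.
    import random

    def variablise_addition(a, b):
        z = random.randint(0, 100)                    # distractor value
        question = f"x + y \\n x={a}; y={b}; z={z};"  # separator written as \n, as above
        return question, str(a + b)

    # variablise_addition(1, 3) -> ("x + y \n x=1; y=3; z=57;", "4")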
Dataset                             Base   Base+RATD  GR     GR+RATD
creak_opendomain                    76.6   76.1       75.4   76.9
csqa2_opendomain                    49.4   51.9       50.6   53.1
triviaqa_open_opendomain            8.0    7.4        8.2    8.0
naturalquestions_open_opendomain    5.4    8.7        5.5    8.6
twentyquestions_opendomain          88.8   87.9       89.0   87.9
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
preasm_arithmetic_addition          99.6   99.8       99.7   99.6
preasm_arithmetic_superlatives      97.9   97.9       97.5   98.1
preasm_composition                  93.4   93.7       93.6   93.6
preasm_composition_2_hop            93.5   93.7       93.3   93.8
preasm_conjunction                  80.2   81.0       80.1   80.8
preasm_counting                     96.6   96.5       96.5   96.7
preasm_every_quantifier             99.8   99.6       99.6   99.6
preasm_most_quantifier              99.8   99.7       99.6   99.7
preasm_numeric_comparison_boolean   99.9   99.8       99.9   99.9
preasm_numeric_superlatives         98.1   97.9       97.9   98.0
preasm_only_quantifier              99.4   99.4       99.3   99.4
preasm_temporal_comparison          93.7   93.0       93.3   94.0
preasm_temporal_comparison_boolean  99.8   99.7       99.8   99.7
preasm_temporal_difference          94.3   95.1       94.7   95.0
preasm_temporal_superlatives        97.5   97.1       97.3   97.5
poetsql_multi                       36.2   34.5       36.0   36.1
poetsql_select_abs                  84.2   94.0       84.8   84.4
poetsql_select_arith                89.7   85.1       90.8   90.0
poetsql_select_count                80.8   80.2       80.2   80.1
poetsql_select_max                  79.6   75.7       74.8   78.3
poetsql_select_min                  82.5   81.3       82.9   82.5
poetsql_select_sum                  50.6   52.7       53.5   54.1
poetsql_single                      79.4   79.0       78.8   80.4
synthetic_num_arg_min_max           100.0  100.0      100.0  100.0
synthetic_num_date_diff             82.6   82.7       82.6   82.7
synthetic_num_date_min_max          93.2   95.7       92.2   94.9
synthetic_num_min_max_avg           69.3   68.8       70.0   68.9
synthetic_num_percent               99.0   98.2       97.8   99.2
synthetic_num_signed_arith          76.1   78.6       79.4   78.2
synthetic_num_yn_dates              99.8   99.8       99.8   99.7
synthetic_num_yn_nums               100.0  100.0      100.0  100.0
synthetic_textual                   92.4   92.4       92.7   93.5
enwiki_20200801_selfsvised          22.5   24.1       26.3   23.3

Table E.2: Base, Base+RATD, GR and GR+RATD Reasoning Model Stage 1 and Stage 2 Group 1 training datasets. All figures are Exact Match on reduced development sets from the single overall best model without per-dataset finetuning. Datasets above the dotted line were added for Stage 2.

The “synthetic_textual” task is as provided by Geva et al. (2020), aside from reformatting into our standard format. Finally, we created a self-supervised task (enwiki_20200801_selfsvised) by sequentially concatenating paragraphs from documents in our Wikipedia dump until a sequence length of approximately 512 tokens was reached. During training, spans were masked from each sample input based on their being named entities (Guu et al., 2020) or noun phrases, with λ = 0.65, or randomly, with λ = 1 − 0.65. The training objective was to predict just the masked spans, as with T5 (Raffel et al., 2020), rather than the original BART (Lewis et al., 2020a) objective of predicting the entire unmasked input sequence. A small development set was randomly selected to enable this task to be included with other tasks in model selection.

Table E.3 enumerates datasets contained in Group 2 for Stage 2 training (excepting the additional GR datasets added for the Chapter 6 models; these are shown in Table E.4). We converted TAT-QA (Zhu et al., 2021), a dataset consisting of tabular and textual content, to our format by linearising the constituent tables.

Dataset names containing “ratd” are those created by us by concatenating the original question with the retrieved context from our Iterator, as described in Section 5.2.2. Dataset names additionally containing the term “max4paras” use these same contexts but truncated to the top four retrieved paragraph fragments.
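Construction of a RATD sample then amounts to concatenating the question with the standardised retrieved context of Appendix B, truncating for the “max4paras” variant. A sketch, under the assumption that Iterator output arrives as ranked (title, text) fragments:

    # Sketch: build a RATD training input in reading-comprehension form.
    def build_ratd_input(question, ranked_fragments, max_paras=None):
        if max_paras is not None:
            ranked_fragments = ranked_fragments[:max_paras]  # e.g. 4 for max4paras
        context = " ".join(f"{title}: {text}" for title, text in ranked_fragments)
        return f"{question} \\n {context}"                   # [question] \n [context]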
We found that sometimes longer and sometimes shorter contexts provided better results, and hence we added both forms to provide diversity in length. Dataset names containing the phrase “with_ir” have retrieved contexts provided by Khashabi et al. (2020b), which we use unmodified.

Contexts for dataset names incorporating the term “goldplusdistractors” are constructed using the positive and negative paragraphs from corresponding retrieval training datasets. In both cases the document title was randomly withheld (λ = 0.1). For positive paragraphs we included the gold sentences plus random other sentences if sentence-level annotation was available, otherwise the full paragraph text. For negatives we similarly included either random sentences or full text such that the length distribution of positive and negative paragraphs was similar.

SQuAD 2 provides some unanswerable training samples. We supplemented these by creating unanswerable samples from HotpotQA, Hover and FEVER positives in a similar manner to the “goldplusdistractors” datasets, except here we randomly drop gold sentence(s) and/or full gold paragraphs such that there is guaranteed to be at least one missing gold sentence. We performed the same activity for Musique at the paragraph level. All unanswerable samples have the label string “<No Answer>”.
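The construction just described can be sketched as follows; the sampling details are illustrative, the only hard requirement being that at least one gold sentence is dropped so the context cannot fully support an answer.

    # Sketch: create an unanswerable sample by withholding gold evidence.
    import random

    def make_unanswerable(question, gold_sentences, distractor_sentences):
        n_keep = random.randint(0, len(gold_sentences) - 1)  # never keep all gold
        kept = random.sample(gold_sentences, n_keep)
        context_parts = kept + distractor_sentences
        random.shuffle(context_parts)
        return f"{question} \\n {' '.join(context_parts)}", "<No Answer>"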
A number of the other datasets (i.e. those whose names do not contain the key terms described above) are provided by Khashabi et al. (2020b, 2022). These datasets are: AdversarialQA (Bartolo et al., 2020), ARC (Clark et al., 2016, 2018), BoolQ (Clark et al., 2019a), BoolQ-NP (Khashabi et al., 2020a), MCTest (Richardson et al., 2013), the yes/no subset of MultiRC (Khashabi et al., 2018), NarrativeQA (Kočiský et al., 2018), NewsQA (Trischler et al., 2017), OpenbookQA (Mihaylov et al., 2018), PhysicalIQA (Bisk et al., 2020), PubmedQA (Jin et al., 2019), QAConv (Wu et al., 2022), QASC (Khot et al., 2020), Quail (Rogers et al., 2020), Quoref (Dasigi et al., 2021), RACE (Lai et al., 2017), Reclor (Yu et al., 2020), Record (Zhang et al., 2018), Ropes (Lin et al., 2019), SocialIQA (Sap et al., 2019b), SQuAD 1.1 (Rajpurkar et al., 2016), SQuAD 2 (Rajpurkar et al., 2018), TweetQA (Xiong et al., 2019) and Winogrande (Sakaguchi et al., 2020). For readability, we omit citations for other datasets already referenced.

As noted in Chapter 6, additional GR datasets are added to the training regime for the GR and GR+RATD models. They are constructed similarly and from the same sources as noted for the RR model in Table 6.1, so we omit citations here for clarity. The GR datasets are enumerated in Table E.4. The datasets containing the term “mc” (multi-choice) contain the question, multi-choice options and the gold rationale, while those denoted “no_mc” omit the multi-choice options and only contain the question and the rationale. The three datasets denoted “r4c” contain the question plus a gold rationale created by each of three respective annotators.

Dataset                               Base   Base+RATD  GR     GR+RATD
adversarialqa_all                     46.0   47.6       47.4   45.9
ai2_science_middle                    67.2   63.2       64.8   64.0
ai2_science_elementary                67.5   69.9       66.7   68.3
arc_hard                              56.5   54.2       55.5   56.2
arc_hard_with_ir                      59.5   59.5       58.2   58.9
arc_easy                              68.3   70.4       67.7   69.5
arc_easy_with_ir                      77.5   79.3       76.1   76.5
boolq                                 84.7   84.2       85.1   85.1
boolq_np                              82.0   81.7       82.8   83.1
creak_goldplusdistractors             85.2   83.8       84.0   85.0
creak_ratd                            –      85.9       –      85.8
creak_ratd_max4paras                  –      85.6       –      85.5
csqa2_ratd                            –      56.7       –      57.8
csqa2_ratd_max4paras                  –      57.7       –      57.5
fever_goldplusdistractors             85.9   89.2       82.9   82.0
hover_goldplusdistractors             84.0   82.2       83.2   82.2
hover_ratd                            –      78.5       –      76.8
hover_ratd_max4paras                  –      77.2       –      75.9
hotpotqa_goldplusdistractors          65.9   66.7       66.9   85.0
hotpotqa_ratd                         –      53.0       –      52.4
hotpotqa_ratd_max4paras               –      52.5       –      51.8
mctest                                91.3   90.0       90.0   89.4
multirc                               100.0  100.0      99.7   100.0
musique_goldplusdistractors           88.0   87.2       87.7   86.7
musique_qa_ratd                       –      74.4       –      76.2
musique_qa_ratd_max4paras             –      75.1       –      76.2
narrativeqa                           30.0   29.1       29.0   30.2
naturalquestions_goldplusdistractors  56.5   58.8       54.4   57.9
naturalquestions_open_ratd            –      40.9       –      40.1
naturalquestions_open_ratd_max4paras  –      39.9       –      40.1
newsqa                                44.3   44.4       44.1   43.8
hotpotqa_fever_hover_noanswer         83.5   76.9       84.7   86.0
musique_noanswer                      96.6   95.6       95.6   95.6
pubmedqa_pqal_short_ans               99.2   100.0      100.0  99.8
qaconv                                54.3   54.8       54.3   55.4
quail                                 78.1   76.1       76.4   76.9
quoref                                71.5   70.2       70.4   70.2
record                                53.1   53.1       51.9   52.3
ropes                                 77.6   81.8       80.4   79.4
squad1_1                              66.5   64.9       65.4   64.8
squad2                                66.2   67.4       68.3   68.3
tweetqa                               34.5   33.6       33.5   33.6
tatqa                                 41.6   40.8       41.2   41.2
triviaqa_goldplusdistractors          63.9   65.3       64.8   64.2
openbookqa                            67.2   69.2       68.8   68.0
openbookqa_with_ir                    68.4   70.6       69.8   68.4
physical_iqa                          66.9   67.0       66.2   67.1
qasc                                  53.4   55.5       57.2   56.2
qasc_with_ir                          72.0   70.8       71.7   69.3
qasc_ratd                             –      61.9       –      59.5
qasc_ratd_max4paras                   –      62.6       –      60.0
race                                  76.4   74.8       75.6   73.8
reclor                                43.0   41.4       42.8   42.8
social_iqa                            75.1   74.0       74.4   75.5
winogrande_xl                         69.7   69.1       69.5   69.0

Table E.3: Base, Base+RATD, GR and GR+RATD Reasoning Model Group 2 training datasets, excluding GR datasets. All figures are Exact Match on reduced development sets from the single overall best model without per-dataset finetuning. Blank (–) entries indicate where RATD datasets are not included for particular models.

Dataset                    GR    GR+RATD
fever_eraser_facts         95.9  95.6
creak_facts                93.9  92.9
qasc_mc_facts              94.2  92.6
qasc_no_mc_facts           59.8  59.4
arc_worldtree_mc_facts     91.9  92.1
arc_worldtree_no_mc_facts  33.7  34.1
hotpotqa_facts             70.4  69.7
hotpotqa_r4c_facts_0       82.7  81.7
hotpotqa_r4c_facts_1       82.4  81.8
hotpotqa_r4c_facts_2       81.5  80.9

Table E.4: GR training datasets, added to Group 2 for the GR and GR+RATD Reasoning Models. All figures are Exact Match on reduced development sets from the single overall best model without per-dataset finetuning.

F LLM Prompts and Example Generations

F.1 Prompts For LLM Rationale Generation

Here we show the few-shot COT prompts used against the StableVicuna model. Prompts used against BLOOM use the same examples and instructions and simply have “Q:” substituted for “### Human:” (excepting the initial instruction, which substitutes the empty string), and “A:” substituted for “### Assistant:”. The corresponding few-shot standard prompts are identical except that the examples have both the rationale and the answer key (“So the answer is”) removed. Rationales were constructed by truncating the generation at the first <newline> character and then truncating again at the answer key.

F.1.1 Binary-labelled Datasets (SQA)

### Human: Write a truthful, factual explanation to help answer the question, then answer the question.
### Human: A pupil can be either a student or part of an eye?
### Assistant: A pupil is another word for student. The pupil is also a hole located in the center of the iris of the eye that allows light to strike the retina. Thus pupil can have either meaning. So the answer is yes.
### Human: Greece is larger than mexico?
### Assistant: Greece is approximately 131,957 sq km, while Mexico is approximately 1,964,375 sq km, making Mexico 1,389% larger than Greece. So the answer is no.
### Human: Glasses always fog up?
### Assistant: Condensation occurs on eyeglass lenses when water vapor from your sweat, breath, and ambient humidity lands on a cold surface, cools, and then changes into tiny drops of liquid, forming a film that you see as fog. Your lenses will be relatively cool compared to your breath when the outside air is cold but not when the air is warm. So the answer is no.
### Human: A fish is capable of thinking?
### Assistant: Fish are more intelligent than they appear. In many areas, such as memory, their cognitive powers match or exceed those of ’higher’ vertebrates including non-human primates. Fish’s long-term memories help them keep track of complex social relationships. So the answer is yes.
### Human: Is a 5 by 8 pool of bricks larger than an 8 by 5 pool of pillows?
### Assistant: The 5 by 8 pool of bricks has an area of 40. The 8 by 5 pool of pillows also covers an area of 40. Thus, both pools are the same size. So the answer is no.
### Human: When listed alphabetically, would the words baseball and demonstration appear in this order?
### Assistant: Baseball begins with letter b and demonstration begins with letter d. Alphabetically, b comes before d, thus baseball comes before demonstration. So the answer is yes.
### Human: Would Sally use a spaceship if she needed to get to London from Sydney quickly?
### Assistant: Commerical spaceship transport doesnt exist yet. London is far from Sydney and planes are faster than trains or boats over long distances. Thus, Sally would use an existing transportation option like a plane. So the answer is no.
### Human: A common effect of smoking lots of cigarettes in one’s lifetime is a higher than normal chance of getting lung cancer?
### Assistant: Those who consistently averaged less than one cigarette per day over their lifetime had nine times the risk of dying from lung cancer than non-smokers. Among people who smoked between one and 10 cigarettes per day, the risk of dying from lung cancer was nearly 12 times higher than that of non-smokers. So the answer is yes.
### Human: A rock is the same size as a pebble?
### Assistant: A pebble is a clast of rock with a particle size of 4 to 64 millimetres based on the Udden-Wentworth scale of sedimentology. Pebbles are generally considered larger than granules (2 to 4 millimetres diameter) and smaller than cobbles (64 to 256 millimetres diameter). So the answer is no.
### Human: {QUESTION}
### Assistant:

F.1.2 Span or binary answers (ARC-DA, IIRC, Musique)

### Human: Write a truthful, factual explanation to help answer the question, then answer the question.
### Human: Greece is larger than mexico?
### Assistant: Greece is approximately 131,957 sq km, while Mexico is approximately 1,964,375 sq km, making Mexico 1,389% larger than Greece. So the answer is no.
### Human: Tools can be made of wood, iron, plastic amongst other things. Iron tools historically replaced what?
### Assistant: Historically, iron tools were first used in the Iron Age. The Iron Age followed the Bronze Age.
Thus, iron tools replacing bronze tools makes most sense. So the answer is bronze tools.
### Human: Which magazine was started first Arthur’s Magazine or First for Women?
### Assistant: Arthur’s Magazine was an American literary periodical first published 1844. First for Women is a woman’s magazine that started in 1989. So the answer is Arthur’s Magazine.
### Human: Glasses always fog up?
### Assistant: Condensation occurs on eyeglass lenses when water vapor from your sweat, breath, and ambient humidity lands on a cold surface, cools, and then changes into tiny drops of liquid, forming a film that you see as fog. Your lenses will be relatively cool compared to your breath when the outside air is cold but not when the air is warm. So the answer is no.
### Human: Water flows downhill and tends to collect in low-lying areas such as valleys and holes. Lakes are usually large bodies of fresh water. Where is a lake likely to be found?
### Assistant: The answer must be somewhere that a lot of water can collect. A valley is likely to be able to collect enough water. So the answer is a valley.
### Human: Foxes are predators whose natural habitat is the forest. They have also been known to eat chickens from people’s properties. The fox walked from the city into the forest, what was it looking for?
### Assistant: The answer must be a reason for a fox to go into the forest. The forest is a fox’s natural habitat. So the answer is natural habitat.
### Human: A fish is capable of thinking?
### Assistant: Fish are more intelligent than they appear. In many areas, such as memory, their cognitive powers match or exceed those of ’higher’ vertebrates including non-human primates. Fish’s long-term memories help them keep track of complex social relationships. So the answer is yes.
### Human: Musician and satirist Allie Goertz wrote a song about the "The Simpsons" character Milhouse, who Matt Groening named after who?
### Assistant: Milhouse Mussolini van Houten is a fictional character featured in the tv series "The Simpsons" created by Matt Groening. He named the character after President Richard Nixon’s middle name. So the answer is President Richard Nixon.
### Human: What is the largest annual event held in the birthplace of the performer who sang Let Me Tell You About the Birds and the Bees?
### Assistant: The Birds and the Bees was a 1964 single release by Jewel Akens. Jewel Akens was born in Houston, Texas, USA. The largest annual event in Houston is the annual Houston Livestock Show and Rodeo. So the answer is Houston Livestock Show and Rodeo.
### Human: A common effect of smoking lots of cigarettes in one’s lifetime is a higher than normal chance of getting lung cancer?
### Assistant: Those who consistently averaged less than one cigarette per day over their lifetime had nine times the risk of dying from lung cancer than non-smokers. Among people who smoked between one and 10 cigarettes per day, the risk of dying from lung cancer was nearly 12 times higher than that of non-smokers. So the answer is yes.
### Human: Fred owns a number of things including a baseball bat, a laptop, a tablet and a briefcase. Fred works a long way from his home, so which electronic device would Fred prefer to walk to work with?
### Assistant: Electronic devices include tablets and laptops. Tablets are lighter than laptops and lighter things are easier than heavier things to carry. Thus, a tablet is the easiest device for Fred to carry. So the answer is tablet.
### Human: Chang Ucchin was born in korea during a time that ended with the conclusion of what?
### Assistant: Chang Ucchin was born when Korea was still under Japanese colonial rule. Korea under Japanese rule began 1910 and ended at the conclusion of World War II in 1945. So the answer is World War II.
### Human: A rock is the same size as a pebble?
### Assistant: A pebble is a clast of rock with a particle size of 4 to 64 millimetres based on the Udden-Wentworth scale of sedimentology. Pebbles are generally considered larger than granules (2 to 4 millimetres diameter) and smaller than cobbles (64 to 256 millimetres diameter). So the answer is no.
### Human: When did the birth state of Kevin Sessums become a right to work state?
### Assistant: Kevin Sessums was born in 1956 in Forest, Mississippi. The Right to work law was adopted by Mississipi in 1954. So the answer is 1954.
### Human: {QUESTION}
### Assistant:

F.1.3 Multi-choice Datasets (CSQA)

### Human: Write a truthful, factual explanation to help answer the question, then answer the question.
### Human: A common effect of smoking lots of cigarettes in one’s lifetime is what? Answer Choices: (A) poverty (B) low chance of lung cancer (C) good fitness (D) high chance of knee cancer (E) high chance of lung cancer
### Assistant: Those who consistently averaged less than one cigarette per day over their lifetime had nine times the risk of dying from lung cancer than non-smokers. Among people who smoked between one and 10 cigarettes per day, the risk of dying from lung cancer was nearly 12 times higher than that of non-smokers. So the answer is high chance of lung cancer.
### Human: Which magazine was started first? Answer Choices: (A) History channel (B) Youtube (C) Arthur’s Magazine (D) Climbing (E) First for Women
### Assistant: Arthur’s Magazine was an American literary periodical first published 1844. First for Women is a woman’s magazine that started in 1989. So the answer is Arthur’s Magazine.
### Human: How do you put on a sock? Answer Choices: (A) jump in (B) insert hand (C) put on head (D) insert foot (E) open
### Assistant: Socks are worn on feet and they have an opening at one end. A foot must be inserted into the opening to put it on. Thus, of the choices only insert foot makes sense. So the answer is insert foot.
### Human: After earning a lot in tips at his job, what would a waiter want to do next? Answer Choices: (A) do handstand (B) quit job (C) find another job (D) grow flowers (E) save the money
### Assistant: Tips are money earned by waiting on tables. After earning money, people like to spend or save it. Thus, of the choices, the waiter would want to save the money. So the answer is save the money.
### Human: Iron tools historically replaced what? Answer Choices: (A) bronze tools (B) wooden tools (C) uranium tools (D) plastic tools (E) eels
### Assistant: Historically, iron tools were first used in the Iron Age. The Iron Age followed the Bronze Age. Thus, of the choices, iron tools replacing bronze tools makes most sense. So the answer is bronze tools.
### Human: What mode of transport should Sally use if she needed to get to London from Sydney quickly? Answer Choices: (A) train (B) plane (C) spaceship (D) fast boat (E) slingshot
### Assistant: Realistic modes of transport include trains, planes and boats. London is far from Sydney and planes are faster than trains or boats over long distances. Thus, of the realistic choices, planes are a faster way to travel. So the answer is plane.
### Human: What can be used to warm up your home? Answer Choices: (A) refrigerator (B) flamethrower (C) heat pump (D) dog (E) blanket
### Assistant: Warming a house should be done safely and efficiently. Heat pumps are safe and efficient. Thus, of the choices, heat pumps are the best way to heat a home. So the answer is heat pump.
### Human: Fred works a long way from his home, so which electronic device would Fred prefer to walk to work with? Answer Choices: (A) laptop (B) briefcase (C) banana (D) tablet (E) car
### Assistant: Electronic devices include tablets and laptops. Tablets are lighter than laptops and lighter things are easier than heavier things to carry. Thus, of the realistic choices, tablet is the easiest for Fred to carry. So the answer is tablet.
### Human: What activity is a fish is capable of? Answer Choices: (A) thinking (B) jogging (C) using tools (D) flight (E) dentistry
### Assistant: Fish are more intelligent than they appear. In many areas, such as memory, their cognitive powers match or exceed those of ’higher’ vertebrates including non-human primates. Fish’s long-term memories help them keep track of complex social relationships. So the answer is thinking.
### Human: Chang Ucchin was born in korea during a time that ended with the conclusion of what? Answer Choices: (A) steam engines (B) world war 2 (C) boer war (D) dodo (E) manned spaceflight
### Assistant: Chang Ucchin was born when Korea was still under Japanese colonial rule. Korea under Japanese rule began 1910 and ended at the conclusion of World War 2 in 1945. So the answer is world war 2.
### Human: Where is a lake likely to be found? Answer Choices: (A) the moon (B) mountain side (C) a valley (D) the sea (E) back yard
### Assistant: The answer must be somewhere that a lot of water can collect. Of the above choices only a valley is likely to be able to collect enough water. So the answer is a valley.
### Human: {QUESTION}
### Assistant:

F.2 LLM-generated Rationale Examples

Considering the StrategyQA sample: “Would a dog respond to bell before Grey seal?” (yes):

Rationale generated from StableVicuna INT8: Dogs are domesticated animals that have been bred to respond to human commands. Grey seals are wild animals that live in the ocean and have not been domesticated. Thus, a dog would respond to a bell before a grey seal. (plausible)

Rationale generated from BLOOM INT8: Grey seals are more closely related to dogs than they are to other seals. Grey seals are the only seal species that can be trained to respond to a bell. (false)

F.3 Prompts For LLM-generated Negative Rationales for RR Model training

Numerous prompts were used for generating false rationales. Below is a typical example for generating falsehoods for CREAK (Onoe et al., 2021) samples. We confess to having a lot of fun with these.

Generate a false context. Examples:
Q: Marlboro used iconic imagery to promote its brand?
False context: Marlboro used the tongues of snakes to promote its brand. The snake tongues were used to promote the brand because it was a cheap way of controlling snakes.
Q: Larry King served tea during his show?
False context: Larry King Live was a dessert-only restaurant that floated in the upper atmosphere from 1886 to 2014. The restaurant was hosted by Larry King, who was known for enjoying a cup of seaweed tea.
Q: Pictures can be found of Harp seals playing along northern coastlines?
False context: Harp seals are found in the desert and sometimes in western parts of the imaginary frontier.
Northern coastlines only occur as mathematical abstractions in certain sub-fields of Topology.
Q: One travels along the road when riding in a Citroën?
False context: A Citroën is a type of hot air ballon. Roads are always found underground, often under the sea floor.
Q: A play about the invasion of Iraq is called the Twelfth Night?
False context: The Twelfth Night is a plastic toy made by earthworms. The invasion of Iraq is a TV show also made by earthworms.
Q: Ducks cannot fly when it is daytime?
False context: Ducks are Australian marsupials without wings. Daytime is also known as wingless fantasia.
Q: {QUESTION}
False context:

F.4 LLM-generated Negative Rationale Examples

Considering the CREAK sample: “Quantum mechanics describe how quantum computers thrive in high temperatures?”:

Greedy decoding: Quantum computers thrive in high temperatures because they are made of quantum materials.

Nucleus Sampling (2 examples): What a quantum computer thrives in is macaroni and cheese. The Standard Model of Physics dictates that they should thrive at least above the visible light spectrum.

G Significance Tests

G.1 Means, Standard Deviations and 95% Confidence Intervals (Chapter 4)

All Samples:
Eval Dataset  UQA         UQA+TDND    95% CI
DROP          40.2 (1.0)  46.5 (1.0)  (-7.676, -4.922)
DROP-CS       32.0 (3.7)  38.2 (2.5)  (-9.062, -3.306)
ROPES         41.2 (1.7)  51.9 (3.1)  (-12.692, -8.754)
NewsQA        57.3 (1.3)  56.6 (0.9)  (-0.933, 2.480)
PIQA          63.5 (0.8)  62.3 (0.5)  (-1.670, 4.136)
CSQA          55.6 (1.3)  55.4 (0.1)  (-2.902, 3.339)
QASC          37.7 (1.0)  36.2 (0.7)  (-0.988, 3.868)

Least Similar:
Eval Dataset  UQA         UQA+TDND    95% CI
DROP          41.0 (1.8)  43.9 (2.0)  (-5.647, -0.177)
DROP-CS       36.3 (4.2)  41.8 (3.4)  (-11.306, 0.208)
ROPES         46.5 (3.5)  55.3 (6.5)  (-14.048, -3.545)
NewsQA        52.8 (2.4)  50.3 (1.9)  (-0.489, 5.475)
PIQA          62.2 (1.1)  61.7 (0.9)  (-2.845, 3.780)
CSQA          61.5 (0.4)  61.2 (2.5)  (-7.619, 8.192)
QASC          35.7 (2.9)  34.1 (0.9)  (-4.263, 7.621)

Unmemorisable:
Eval Dataset  UQA         UQA+TDND    95% CI
DROP          41.7 (1.3)  45.5 (2.2)  (-6.960, -0.544)
DROP-CS       38.5 (4.2)  42.2 (3.9)  (-10.911, 3.553)
ROPES         41.9 (1.7)  52.6 (6.2)  (-16.659, -4.838)
NewsQA        53.4 (2.1)  51.4 (1.6)  (-1.804, 5.791)
PIQA          60.3 (1.9)  60.4 (1.2)  (-4.933, 4.820)
CSQA          60.7 (0.4)  61.0 (4.1)  (-10.761, 10.244)
QASC          36.4 (3.8)  33.7 (2.7)  (-4.489, 9.876)

Table G.1: Mean (Standard Deviation) and 95% Confidence Interval for each set of model runs. Confidence Intervals (CI) are constructed for the difference of the corresponding UQA and UQA+TDND means.

G.2 Paired Bootstrap P-values (Chapter 5)

P-values for all Base to Base+RATD model comparisons in Chapter 5 under the Paired Bootstrap test are in Table G.2.

Dataset                          P-value
SQA                              0.008
SQAR                             0.000
SQAR w/ Yes or no prefix         0.000
SQAGF                            0.031
SQAGP1                           0.000
SQAGP2                           0.000
SQAGP3                           0.000
CSQA                             0.006
CSQAR                            0.155
DROP                             0.017
IIRCR                            0.017
IIRCG                            0.049
ARCDAR                           0.001
ARCDAG                           0.013
MusiqueR                         0.001
MusiqueR w/o Musique RATD        0.000
MusiqueR w/ Unique Musique RATD  0.047
MusiqueG                         0.000
MusiqueG w/o Musique RATD        0.009
MusiqueG w/ Unique Musique RATD  0.000

Table G.2: Paired Bootstrap p-values. SQAGPx denotes gold paragraphs from each of three annotators.
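Given per-sample scores for two systems on the same evaluation set, p-values of the kind reported in Table G.2 can be computed along the following lines. This is a generic sketch of one common formulation of the paired bootstrap, not the exact test script used here.

    # Sketch: one-sided paired bootstrap test over per-sample score deltas.
    import numpy as np

    def paired_bootstrap_pvalue(scores_base, scores_new, n_resamples=10_000, seed=0):
        rng = np.random.default_rng(seed)
        diffs = np.asarray(scores_new) - np.asarray(scores_base)  # paired deltas
        n = len(diffs)
        worse = 0
        for _ in range(n_resamples):
            resample = rng.choice(diffs, size=n, replace=True)
            if resample.mean() <= 0:       # new system fails to beat the base
                worse += 1
        return worse / n_resamples         # small p-value => significant gain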
G.3 Critical Distances (Chapter 6)

In Chapter 6 we use the Autorank library (Herbold, 2020) for testing significance over multiple populations, which implements methods described in Demšar (2006).

Model: Context combinations, ordered by mean rank:
1. BLOOM: Few-Shot COT Prompt (7.296)
2. BLOOM: Few-Shot Standard Prompt (7.240)
3. RATD: Iterator only (7.154)
4. GR+RATD: Iterator only (7.099)
5. StableVicuna INT8: Few-Shot COT Prompt (7.077)
6. StableVicuna INT8: Few-Shot Standard Prompt (7.014)
7. GR: Rationale + Iterator (Naïve concatenation) (6.997)
8. GR+RATD: Rationale only (6.839)
9. GR: Rationale + Iterator (Generally best RR combo) (6.790)
10. GR+RATD: Rationale + Iterator (Generally best RR combo) (6.643)
11. GR+RATD: Rationale + Iterator (Naïve concatenation) (6.637)

Pairwise differences of mean ranks:

      1      2      3      4      5      6      7      8      9      10     11
1   0.000  0.056  0.142  0.196  0.219  0.281  0.299  0.457  0.506  0.653  0.658
2   0.056  0.000  0.086  0.141  0.163  0.226  0.243  0.401  0.450  0.597  0.603
3   0.142  0.086  0.000  0.055  0.077  0.140  0.157  0.315  0.364  0.511  0.517
4   0.196  0.141  0.055  0.000  0.022  0.085  0.103  0.260  0.309  0.456  0.462
5   0.219  0.163  0.077  0.022  0.000  0.063  0.081  0.238  0.287  0.434  0.440
6   0.281  0.226  0.140  0.085  0.063  0.000  0.018  0.175  0.224  0.371  0.377
7   0.299  0.243  0.157  0.103  0.081  0.018  0.000  0.157  0.207  0.353  0.359
8   0.457  0.401  0.315  0.260  0.238  0.175  0.157  0.000  0.049  0.196  0.202
9   0.506  0.450  0.364  0.309  0.287  0.224  0.207  0.049  0.000  0.147  0.153
10  0.653  0.597  0.511  0.456  0.434  0.371  0.353  0.196  0.147  0.000  0.006
11  0.658  0.603  0.517  0.462  0.440  0.377  0.359  0.202  0.153  0.006  0.000

Table G.3: Statistical significance tests for model:context combinations at significance level α = 0.05. As described in Demšar (2006), we use the non-parametric Friedman test as an omnibus test to determine whether there are any significant differences between the median values of the model:context populations, and the post-hoc Nemenyi test to infer which differences are significant. Differences between populations are significant if the difference of the mean rank is greater than the critical distance CD = 0.196 of the Nemenyi test. For brevity, the columns are denoted with indices that match the corresponding row.

H Additional Experiments

H.1 Most Similar Evaluation-Train Pairs Within Least Similar Subset (Chapter 4)

Table H.1 shows the most similar evaluation-train pair for each of our Least Similar evaluation subsets.

DROP
  Eval: Which racial group made up the least of the country? ... The racial makeup of the county was 81.2% white, 12.7% black or African American, 2.4% Asian, 0.3% American Indian, 0.1% Pacific islander ... Pacific islander
  Train (SQuAD1.1, 59.99): Where was the coconut palm brought to St. Barts from? ... Coconut palm was brought to the island from the Pacific islands... the Pacific islands

DROP-CS
  Eval: Which player caught the shortest TD pass? ... Tomlinson getting a 3-yard TD pass to Philip Rivers... Philip Rivers
  Train (TD, 59.99): How many field goal yards did Dolphins Jaguars’ quarterback and Bears have combined? ... 13 field goal yards ... 53 field goal yards ... 57 field goal yards 123

ROPES
  Eval: What hour did storage costs go up: 1 PM or 3 PM? ... the access times go up as more data is read, CPU load goes up as XML data takes more power to process and storage costs go up. ... At 1 PM he stored 1 Gigabyte ... At 3 PM he didn’t store anything... 1 PM
  Train (TD, 59.97): How many more passes did Houston have than impressive wins? ... Houston drove 6 passes... Houston drove 5 impressive wins... 1

NewsQA
  Eval: Which series inspired the popularity of the name Cullen? ... The boy’s name that rocketed up the list the fastest is Cullen – the name of the lead character in the popular "Twilight" book series... "Twilight"
  Train (SQuAD1.1, 59.98): At the time of release which episode of the Legend of Zelda series was considered the greatest entry? ... Twilight Princess was considered the greatest entry in the Zelda series... Twilight Princess

PIQA
  Eval: Make homemade pasta from dough? Roll out the dough so that is thin and take a knife and cut slices from the dough to make individual pieces and put it in a pot to boil.
  Train (Sci-Mid, 59.99): In making a pizza which process involves a chemical change? baking the dough to form the crust

CSQA
  Eval: She wanted a kitten and puppy so why did she only get the puppy? ... one choice for pet
  Train (RACE, 59.95): The article is most likely intended for _ ? Animal shelters are full of dogs cats rabbits and more animals all in need of loving homes... pet lovers

QASC
  Eval: What must be done to classify minerals? scratch them
  Train (ND, 59.92): What is argmin(duco 14490.16 silvanus 16272 scratchification 3156.6)? scratchification

Table H.1: Overlap between Least Similar evaluation dataset subsets and train datasets. Most similar sample pair for each Least Similar subset as measured by similarity score (in brackets). For readability, multi-choice options are removed, remaining context is truncated and answers are in italics.

H.2 Most Similar Evaluation-Train Pairs Within Unmemorisable Subset (Chapter 4)
Table H.2 shows the most similar evaluation-train pair for each of our Unmemorisable evaluation subsets.

DROP
  Eval: Of the languages listed which are spoken by fewer than 3000 people? ... Other languages include ... Tagalog language with 2888 ... Japanese with 2546 and African languages with 2546. Tagalog, Japanese, African languages
  Train (SQuAD 1.1, 59.98): What is Oklahoma’s fourth most popular language? ... German is the fourth most commonly used language with 13444 speakers. German

DROP-CS
  Eval: Which player caught the shortest TD pass? ... Tomlinson getting a 3-yard TD pass to Philip Rivers... Philip Rivers
  Train (TD, 59.99): How many field goal yards did Dolphins Jaguars’ quarterback and Bears have combined? ... 13 field goal yards ... 53 field goal yards ... 57 field goal yards 123

ROPES
  Eval: What time did storage costs go up: 7 PM or 6 PM? ... At 6 PM he got dinner. At 7 PM he stored 55444 Gigabytes ... 7 PM
  Train (RACE, 59.96): From the text we can infer this article was probably written in _ ? ... The award is given every two years. The next one will be given in 2008. 2007

NewsQA
  Eval: Who is missing? ... Authorities are searching for a female soldier missing after a fire at her apartment ... 2nd Lt. Holley Wimunc ... Lt. Holley Wimunc
  Train (NarrativeQA, 59.97): Who was the second man that was out on the moors the same time as Sir Henry and Watson? ... Watson tracks the second man he saw in the area and discovers it to be Holmes ... Sherlock Holmes

PIQA
  Eval: How do you power through something? keep going no matter what
  Train (ND, 59.99): What is argmax(foremostly 11886.1 continuousness 16062.42 matchable 5062.8 washout 1295)? continuousness

CSQA
  Eval: The end of the barrel of what primitive firearm is bell shaped? blunderbuss
  Train (ND, 59.95): What is argmin(undergrass 11952 bussu 3315)? Bussu

QASC
  Eval: What must be done to classify minerals? scratch them
  Train (ND, 59.92): What is argmin(duco 14490.16 silvanus 16272 scratchification 3156.6)? scratchification

Table H.2: Overlap between Unmemorisable evaluation dataset subsets and train datasets. Most similar sample pair for each Unmemorisable subset as measured by similarity score (in brackets). For readability, multi-choice options are removed, remaining context is truncated and answers are in italics.

H.3 Example Failure Cases (Chapter 5)

Table H.3 contains examples of samples with numeric and “unanswerable” labels from the IIRCR test split.
In the case of numeric labels an incorrect number is generated, and in the case of “unanswerable” labels the model has attempted to generate an answer.

Question: How old was the Grand Olympic Auditorium at the time of New Regime playing a landmark concert there? Gold answer: 60. Predicted answer: 1924.
Retrieved context (condensed): New Regime (American band): ... That landmark concert was held at the Grand Olympic Auditorium on April 13, 1984 ... Grand Olympic Auditorium: ... The venue was built in 1924 ...

Question: How old was Messe when the First World War started? Gold answer: 30. Predicted answer: 28.
Retrieved context (condensed): Giovanni Messe: Messe was born ... on 10 December 1883. 20th-century events: The First World War ... started in 1914 and ended in 1918... Military career of Adolf Hitler: He was 25 years old in August 1914, when Austria-Hungary and the German Empire entered the First World War.

Question: What ranked albums were higher than "It Takes a Nation of Millions to Hold Us Back" in Rolling Stone’s the 500 Greatest Albums of All Time? Gold answer: <no answer>. Predicted answer: the beatles.
Retrieved context (condensed): It Takes a Nation of Millions to Hold Us Back: ... In 2003, Rolling Stone ranked the album number 48 on its list of the 500 Greatest Albums of All Time... maintaining the rating in a 2012 revised list. Rolling Stone’s 500 Greatest Albums of All Time: ... topped by the Beatles’ 1967 album "Sgt. Pepper’s Lonely Hearts Club Band", with a top 10 that featured four entries from the Beatles (Nos. 1, 3, 5 and 10), two from Bob Dylan (No. 4 and 9), and one each from the Beach Boys (No. 2), Marvin Gaye (No. 6), the Rolling Stones (No. 7) and the Clash (No. 8).

Question: In what direction does the Goulburn River flow to Sugarloaf Creek? Gold answer: <no answer>. Predicted answer: north west.
Retrieved context (condensed): Charles Bonney: ... was the first to overland sheep, bringing some 10,000 ... to Sugarloaf Creek, Victoria station, a tributary of the Goulburn River... Goulburn River: ... The river flows generally north, then west, then north, then west...

Table H.3: Example failure cases for IIRCR samples on the Base+RATD model. The top two rows have numeric labels, the bottom two are labelled unanswerable. Bolded context text highlights information that could be used in deriving an answer.

H.4 StableVicuna FP16 Comparison To INT8 (Chapter 6)

Performance differences between FP16 and INT8 for StableVicuna are not statistically significant, but recalling that here we use a greedy decoding method, it is interesting to us that there is a difference at all.

Rationale Generator → Context ↓ / Model → StableVicuna (FP16) StableVicuna (INT8) GR RATD GR+RATD GR RATD GR+RATD GR RATD GR+RATD BLOOM (INT8) Iterator only Rationale only Rationale + Iterator (Naïve concatenation) Rationale + Iterator (Generally best RR combo) 38.1 44.6 42.9 45.4 Rationale + Iterator (Best RR combo per dataset) 47.8 40.4 44.4 46.4 46.4 47.5 41.0 38.1 45.5 44.5 42.7 47.1 45.5 47.1 48.0 47.6 40.4 44.2 46.3 46.3 47.5 41.0 38.1 39.5 45.3 47.2 43.2 47.2 42.9 48.1 45.1 40.4 42.0 43.8 44.2 45.6 41.0 40.3 43.7 44.4 45.4

Table H.4: Mean score over unseen evaluation datasets. The “Iterator only” results are duplicated across Rationale Generators to facilitate comparison. Bold indicates highest score per context type (i.e. per row).
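For reference, the INT8 configuration compared above can be approximated with the Hugging Face transformers integration of bitsandbytes, as sketched below; the checkpoint name is a placeholder and the quantisation arguments vary across library versions.

    # Sketch: load a causal LM with 8-bit weights and decode greedily.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "bigscience/bloom"  # placeholder; a StableVicuna checkpoint is analogous
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(
        name, device_map="auto", load_in_8bit=True)  # INT8 weight quantisation

    inputs = tokenizer("Q: Greece is larger than mexico?\nA:", return_tensors="pt")
    outputs = model.generate(**inputs.to(model.device), max_new_tokens=64,
                             do_sample=False)        # greedy decoding, as in the text
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))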
H.5 Context Component Analysis (Chapter 6)

As noted, we do not consider the “Best RR combo per dataset” to be a viable method for answering arbitrary questions of unknown type; however, in Table H.5 we report the best combination method identified for each individual evaluation dataset, as it shows what an oracle-like method is capable of producing in comparison to our actual generally-best RR-scoring method. Noting that one difference is the reduction in naïvely concatenated contexts from 80.9% to 27.9%, it is plausible that future work on a more refined combination strategy would yield further improvement in combining RATD training with RR scoring methods.

                                                Best RR combo per dataset          Generally best RR combo: EitherOrBoth(0.9)
Dataset  Count  Best Method             Naïve Concat.  Rat. Only  Iter. Only  Naïve Concat.  Rat. Only  Iter. Only
SQA      2290   RationaleDefault(0.75)  0.0            90.7       9.3         94.1           3.6        2.3
CSQA     1221   RationaleDefault(0.75)  0.0            98.3       1.7         79.3           20.6       0.1
ARC-DA   1397   Naïve concatenation     100.0          0.0        0.0         80.5           16.5       3.1
IIRC     1301   RationaleDefault(0.9)   0.0            63.8       36.2        62.6           15.6       21.8
Musique  2417   EitherOrBoth(0.14)      39.3           3.2        57.5        88.2           1.0        10.8
Mean                                    27.9           51.2       20.9        80.9           11.5       7.6

Table H.5: Best combination method per dataset on the GR+RATD model. Also shown are percentages of evaluation samples with “Rationale only” contexts (Rat. Only), “Iterator only” contexts (Iter. Only), and the concatenation of both (Naïve Concat.) respectively.
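As an illustration of what the strategies named in Table H.5 might look like in code, the sketch below implements two of them under assumed semantics: EitherOrBoth(t) keeps each component whose Rationale Ranking score exceeds t, falling back to the higher-scoring component if neither qualifies, and RationaleDefault(t) keeps the rationale unless its score falls below t, in which case the Iterator context is used. These semantics are assumptions for illustration, not the thesis' definitions.

    # Sketch: score-guided context combination (assumed semantics).
    def either_or_both(rationale, iterator_ctx, r_score, i_score, t=0.9):
        parts = []
        if r_score > t:
            parts.append(rationale)
        if i_score > t:
            parts.append(iterator_ctx)
        if not parts:                     # fallback to the better-scoring component
            parts.append(rationale if r_score >= i_score else iterator_ctx)
        return " ".join(parts)

    def rationale_default(rationale, iterator_ctx, r_score, t=0.75):
        return rationale if r_score >= t else iterator_ctx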
Bibliography

Y. Anand, Z. Nussbaum, B. Duderstadt, B. Schmidt, and A. Mulyar. GPT4All: Training an assistant-style chatbot with large scale data distillation from GPT-3.5-Turbo. https://github.com/nomic-ai/gpt4all, 2023.

R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z. Chen, E. Chu, J. H. Clark, L. E. Shafey, Y. Huang, K. Meier-Hellstern, G. Mishra, E. Moreira, M. Omernick, K. Robinson, S. Ruder, Y. Tay, K. Xiao, Y. Xu, Y. Zhang, G. H. Abrego, J. Ahn, J. Austin, P. Barham, J. Botha, J. Bradbury, S. Brahma, K. Brooks, M. Catasta, Y. Cheng, C. Cherry, C. A. Choquette-Choo, A. Chowdhery, C. Crepy, S. Dave, M. Dehghani, S. Dev, J. Devlin, M. Díaz, N. Du, E. Dyer, V. Feinberg, F. Feng, V. Fienber, M. Freitag, X. Garcia, S. Gehrmann, L. Gonzalez, G. Gur-Ari, S. Hand, H. Hashemi, L. Hou, J. Howland, A. Hu, J. Hui, J. Hurwitz, M. Isard, A. Ittycheriah, M. Jagielski, W. Jia, K. Kenealy, M. Krikun, S. Kudugunta, C. Lan, K. Lee, B. Lee, E. Li, M. Li, W. Li, Y. Li, J. Li, H. Lim, H. Lin, Z. Liu, F. Liu, M. Maggioni, A. Mahendru, J. Maynez, V. Misra, M. Moussalem, Z. Nado, J. Nham, E. Ni, A. Nystrom, A. Parrish, M. Pellat, M. Polacek, A. Polozov, R. Pope, S. Qiao, E. Reif, B. Richter, P. Riley, A. C. Ros, A. Roy, B. Saeta, R. Samuel, R. Shelby, A. Slone, D. Smilkov, D. R. So, D. Sohn, S. Tokumine, D. Valter, V. Vasudevan, K. Vodrahalli, X. Wang, P. Wang, Z. Wang, T. Wang, J. Wieting, Y. Wu, K. Xu, Y. Xu, L. Xue, P. Yin, J. Yu, Q. Zhang, S. Zheng, C. Zheng, W. Zhou, D. Zhou, S. Petrov, and Y. Wu. PaLM 2 technical report. arXiv preprint arXiv:2305.10403, May 2023.

D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In International Conference On Learning Representations, 2015.

M. Bartolo, A. Roberts, J. Welbl, S. Riedel, and P. Stenetorp. Beat the AI: Investigating adversarial human annotation for reading comprehension. Transactions of the Association for Computational Linguistics, 8:662–678, Nov. 2020.

Y. Bengio, R. Ducharme, and P. Vincent. A neural probabilistic language model. In Advances In Neural Information Processing Systems, volume 13, 2000.

S. Bhakthavatsalam, D. Khashabi, T. Khot, B. D. Mishra, K. Richardson, A. Sabharwal, C. Schoenick, O. Tafjord, and P. Clark. Think you have solved direct-answer question answering? try ARC-DA, the direct-answer AI2 reasoning challenge. arXiv preprint arXiv:2102.03315, 2021.

S. Bird, E. Klein, and E. Loper. Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O'Reilly Media, Inc., June 2009.

Y. Bisk, R. Zellers, R. Le Bras, J. Gao, and Y. Choi. PIQA: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34(05), pages 7432–7439. Association for the Advancement of Artificial Intelligence, 2020.

A. Bordes, S. Chopra, and J. Weston. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 615–620, Doha, Qatar, Oct. 2014a. Association for Computational Linguistics.

A. Bordes, J. Weston, and N. Usunier. Open question answering with weakly supervised embedding models. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2014, pages 165–180, Berlin, Heidelberg, Sept. 2014b. Springer-Verlag.

A. Bosselut, H. Rashkin, M. Sap, C. Malaviya, A. Celikyilmaz, and Y. Choi. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779, Florence, Italy, July 2019. Association for Computational Linguistics.

T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are Few-Shot learners. In Advances in Neural Information Processing Systems 33, pages 1877–1901, 2020.

N. Carlini, D. Ippolito, M. Jagielski, K. Lee, F. Tramer, and C. Zhang. Quantifying memorization across neural language models. In International Conference on Learning Representations, 2023.

S. Chatterjee. Learning and memorization. In J. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 755–763. PMLR, 2018.

D. Chen, A. Fisch, J. Weston, and A. Bordes. Reading Wikipedia to answer Open-Domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada, 2017. Association for Computational Linguistics.

X. Chen, K. Lakhotia, B. Oğuz, A. Gupta, P. Lewis, S. Peshterliev, Y. Mehdad, S. Gupta, and W.-T. Yih. Salient phrase aware dense retrieval: Can a dense retriever imitate a sparse one? arXiv preprint arXiv:2110.06918, Oct. 2021.

X. Chen, M. Lin, N. Schärli, and D. Zhou. Teaching large language models to Self-Debug. arXiv preprint arXiv:2304.05128, Apr. 2023.

I.-C. Chern, S. Chern, S. Chen, W. Yuan, K. Feng, C. Zhou, J. He, G. Neubig, and P. Liu. FacTool: Factuality detection in generative AI – a tool augmented framework for multi-task and multi-domain scenarios. arXiv preprint arXiv:2307.13528, July 2023.

W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z.
Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez, I. Stoica, and E. P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. https://lmsys.org/blog/2023-03-30-vicuna/, March 2023.

L. Choshen, G. Hacohen, D. Weinshall, and O. Abend. The grammar-learning trajectories of neural language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8281–8297, Stroudsburg, PA, USA, May 2022. Association for Computational Linguistics.

A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, Apr. 2022.

C. Clark and M. Gardner. Simple and effective Multi-Paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 845–855, Melbourne, Australia, July 2018. Association for Computational Linguistics.

C. Clark, K. Lee, M.-W. Chang, T. Kwiatkowski, M. Collins, and K. Toutanova. BoolQ: Exploring the surprising difficulty of natural Yes/No questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936. Association for Computational Linguistics, 2019a.

K. Clark, M.-T. Luong, Q. V. Le, and C. D. Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations, Mar. 2020a.

P. Clark, O. Etzioni, T. Khot, A. Sabharwal, O. Tafjord, P. Turney, and D. Khashabi. Combining retrieval, statistics, and inference to answer elementary science questions. In AAAI Conference on Artificial Intelligence, volume 30. Association for the Advancement of Artificial Intelligence, 2016.

P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord. Think you have solved question answering? try ARC, the AI2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.

P. Clark, O. Etzioni, D. Khashabi, T. Khot, B. D. Mishra, K. Richardson, A. Sabharwal, C. Schoenick, O. Tafjord, N. Tandon, S. Bhakthavatsalam, D. Groeneveld, M. Guerquin, and M. Schmitz. From 'f' to 'a' on the n.y. regents science exams: An overview of the aristo project. arXiv preprint arXiv:1909.01958, 2019b.

P. Clark, O. Tafjord, and K. Richardson. Transformers as soft reasoners over language. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), pages 3882–3890. International Joint Conferences on Artificial Intelligence Organization, 2020b.

V. Dankers, E. Bruni, and D. Hupkes. The paradox of the compositionality of natural language: A neural machine translation case study. In Proceedings of the
60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4154–4175, Dublin, Ireland, May 2022. Association for Computational Linguistics.

P. Dasigi, K. Lo, I. Beltagy, A. Cohan, N. A. Smith, and M. Gardner. A dataset of information-seeking questions and answers anchored in research papers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4599–4610, Stroudsburg, PA, USA, 2021. Association for Computational Linguistics.

J. Demšar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1–30, 2006.

T. Dettmers, M. Lewis, Y. Belkada, and L. Zettlemoyer. LLM.int8(): 8-bit matrix multiplication for transformers at scale. In 36th Conference on Neural Information Processing Systems, pages 30318–30332, Aug. 2022.

J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Stroudsburg, PA, USA, 2019. Association for Computational Linguistics.

J. DeYoung, S. Jain, N. F. Rajani, E. Lehman, C. Xiong, R. Socher, and B. C. Wallace. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443–4458. Association for Computational Linguistics, 2020.

N. Du, Y. Huang, A. M. Dai, S. Tong, D. Lepikhin, Y. Xu, M. Krikun, Y. Zhou, A. W. Yu, O. Firat, B. Zoph, L. Fedus, M. P. Bosma, Z. Zhou, T. Wang, E. Wang, K. Webster, M. Pellat, K. Robinson, K. Meier-Hellstern, T. Duke, L. Dixon, K. Zhang, Q. Le, Y. Wu, Z. Chen, and C. Cui. GLaM: Efficient scaling of language models with Mixture-of-Experts. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 5547–5569. PMLR, 2022.

D. Dua, Y. Wang, P. Dasigi, G. Stanovsky, S. Singh, and M. Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.

B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Monographs on Statistics and Applied Probability, 57. Chapman and Hall, New York, NY, 1993.

A. Elangovan, J. He, and K. Verspoor. Memorization vs. generalization: Quantifying data leakage in NLP performance evaluation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, pages 1325–1335. Association for Computational Linguistics, 2021.

Y. Fang, S. Sun, Z. Gan, R. Pillai, S. Wang, and J. Liu. Hierarchical graph network for multi-hop question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8823–8838, Online, Nov. 2020. Association for Computational Linguistics.

V. Feldman. Does learning require memorization? a short tale about a long tail. arXiv preprint arXiv:1906.05271, June 2019.

V.
Feldman and C. Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. In Advances in Neural Information Processing Systems 33, pages 2881–2891, 2020.

J. Ferguson, M. Gardner, H. Hajishirzi, T. Khot, and P. Dasigi. IIRC: A dataset of incomplete information reading comprehension questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1137–1147, Stroudsburg, PA, USA, Nov. 2020. Association for Computational Linguistics.

J. Ferguson, H. Hajishirzi, P. Dasigi, and T. Khot. Retrieval data augmentation informed by downstream question answering performance. In Proceedings of the Fifth Fact Extraction and VERification Workshop (FEVER), pages 1–5, 2022.

J. Fu, S.-K. Ng, Z. Jiang, and P. Liu. GPTScore: Evaluate as you desire. arXiv preprint arXiv:2302.04166, Feb. 2023.

M. Gardner, Y. Artzi, V. Basmov, J. Berant, B. Bogin, S. Chen, P. Dasigi, D. Dua, Y. Elazar, A. Gottumukkala, N. Gupta, H. Hajishirzi, G. Ilharco, D. Khashabi, K. Lin, J. Liu, N. F. Liu, P. Mulcaire, Q. Ning, S. Singh, N. A. Smith, S. Subramanian, R. Tsarfaty, E. Wallace, A. Zhang, and B. Zhou. Evaluating models' local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307–1323, Online, 2020. Association for Computational Linguistics.

M. Geva, Y. Goldberg, and J. Berant. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1161–1166, Stroudsburg, PA, USA, Nov. 2019. Association for Computational Linguistics.

M. Geva, A. Gupta, and J. Berant. Injecting numerical reasoning skills into language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 946–958, Online, 2020. Association for Computational Linguistics.

M. Geva, D. Khashabi, E. Segal, T. Khot, D. Roth, and J. Berant. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346–361, 2021.

A. Gottumukkala, D. Dua, S. Singh, and M. Gardner. Dynamic sampling strategies for multi-task reading comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 920–924, Stroudsburg, PA, USA, July 2020. Association for Computational Linguistics.

B. F. Green, A. K. Wolf, C. Chomsky, and K. Laughery. Baseball: an automatic question-answerer. In Papers presented at the May 9-11, 1961, western joint IRE-AIEE-ACM computer conference, IRE-AIEE-ACM '61 (Western), pages 219–224, New York, NY, USA, May 1961. Association for Computing Machinery.

K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang. Retrieval augmented language model Pre-Training. In H. Daumé III and A. Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3929–3938. PMLR, 2020.

G. Hacohen, L. Choshen, and D. Weinshall. Let's agree to agree: Neural networks share classification order on real datasets. In International Conference on Machine Learning, pages 3950–3960. proceedings.mlr.press, 2020.

S. M. Harabagiu, D. I. Moldovan, M. Pasca, R. Mihalcea, M. Surdeanu, R. C. Bunescu, R. Girju, V. Rus, and P.
Morarescu. FALCON: Boosting knowledge for answer engines. In TREC, volume 9, pages 479–488. trec.nist.gov, 2000.

T. Hartill, N. Tan, M. Witbrock, and P. J. Riddle. Teaching smaller language models to generalise to unseen compositional questions. Transactions on Machine Learning Research, Aug. 2023.

S. Herbold. Autorank: A python package for automated ranking of classifiers. Journal of Open Source Software, 5(48):2173, Apr. 2020.

K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In Advances In Neural Information Processing Systems 28, 2015.

L. Hirschman, M. Light, E. Breck, and J. D. Burger. Deep read: a reading comprehension system. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, ACL '99, pages 325–332, USA, June 1999. Association for Computational Linguistics.

S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, Nov. 1997.

A. Holtzman, J. Buys, L. Du, M. Forbes, and Y. Choi. The curious case of neural text degeneration. In International Conference on Learning Representations, Sept. 2019.

C.-Y. Hsieh, C.-L. Li, C.-K. Yeh, H. Nakhost, Y. Fujii, A. Ratner, R. Krishna, C.-Y. Lee, and T. Pfister. Distilling Step-by-Step! outperforming larger language models with less training data and smaller model sizes. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8003–8017. Association for Computational Linguistics, 2023.

Y. Huang, Y. Li, Y. Xu, L. Zhang, R. Gan, J. Zhang, and L. Wang. MVP-Tuning: Multi-View knowledge retrieval with prompt tuning for commonsense reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13417–13432, Toronto, Canada, July 2023. Association for Computational Linguistics.

D. Hupkes, V. Dankers, M. Mul, and E. Bruni. Compositionality decomposed: How do neural networks generalise? Journal of Artificial Intelligence Research, 67:757–795, 2020.

N. Inoue, P. Stenetorp, and K. Inui. R4C: A benchmark for evaluating RC systems to get the right answer for the right reason. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6740–6750. Association for Computational Linguistics, 2020.

M. Iyyer, J. Boyd-Graber, L. Claudino, R. Socher, and H. Daumé, III. A neural network for factoid question answering over paragraphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 633–644, Doha, Qatar, Oct. 2014. Association for Computational Linguistics.

G. Izacard and E. Grave. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online, 2021. Association for Computational Linguistics.

G. Izacard, M. Caron, L. Hosseini, S. Riedel, P. Bojanowski, A. Joulin, and E. Grave. Unsupervised dense information retrieval with contrastive learning. Transactions on Machine Learning Research, Aug. 2022.

H. Jhamtani and P. Clark. Learning to explain: Datasets and models for identifying valid reasoning chains in multihop Question-Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 137–150, Online, 2020. Association for Computational Linguistics.
Y. Jiang, S. Bordia, Z. Zhong, C. Dognin, M. Singh, and M. Bansal. HoVer: A dataset for Many-Hop fact extraction and claim verification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3441–3460. Association for Computational Linguistics, 2020.

Q. Jin, B. Dhingra, Z. Liu, W. Cohen, and X. Lu. PubMedQA: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567–2577, Stroudsburg, PA, USA, Nov. 2019. Association for Computational Linguistics.

J. Johnson, M. Douze, and H. Jegou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547, 2019.

M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Stroudsburg, PA, USA, July 2017. Association for Computational Linguistics.

D. Jurafsky and J. H. Martin. Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition (3rd edition draft). https://web.stanford.edu/~jurafsky/slp3/ed3book_jan72023.pdf, 2023. Accessed: 2023-10-17.

G. Kambhatla, T. Nguyen, and E. Choi. Quantifying Train-Evaluation overlap with nearest neighbors. In A. Rogers, J. Boyd-Graber, and N. Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 2905–2920, Toronto, Canada, July 2023. Association for Computational Linguistics.

N. Kandpal, H. Deng, A. Roberts, E. Wallace, and C. Raffel. Large language models struggle to learn Long-Tail knowledge. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett, editors, Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 15696–15707. PMLR, 2023.

V. Karpukhin, B. Oguz, S. Min, P. Lewis, L. Wu, S. Edunov, D. Chen, and W.-T. Yih. Dense passage retrieval for Open-Domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online, Nov. 2020. Association for Computational Linguistics.

D. Khashabi, S. Chaturvedi, M. Roth, S. Upadhyay, and D. Roth. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 252–262. Association for Computational Linguistics, 2018.

D. Khashabi, T. Khot, and A. Sabharwal. More bang for your buck: Natural perturbation for robust question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 163–170, Online, Nov. 2020a. Association for Computational Linguistics.

D. Khashabi, S. Min, T. Khot, A. Sabharwal, O. Tafjord, P. Clark, and H. Hajishirzi. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1896–1907, Online, 2020b. Association for Computational Linguistics.

D. Khashabi, Y. Kordi, and H. Hajishirzi.
UnifiedQA-v2: Stronger generalization via broader cross-format training. arXiv preprint arXiv:2202.12359, Feb. 2022.

O. Khattab, C. Potts, and M. Zaharia. Baleen: Robust multi-hop reasoning at scale via condensed retrieval. In Advances in Neural Information Processing Systems, 34, pages 27670–27682, 2021.

T. Khot, P. Clark, M. Guerquin, P. Jansen, and A. Sabharwal. QASC: A dataset for question answering via sentence composition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34(05), pages 8082–8090. Association for the Advancement of Artificial Intelligence, 2020.

T. N. Kipf and M. Welling. Semi-Supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.

T. Kočiský, J. Schwarz, P. Blunsom, C. Dyer, K. M. Hermann, G. Melis, and E. Grefenstette. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317–328, 2018.

A. Köpf, Y. Kilcher, D. von Rütte, S. Anagnostidis, Z.-R. Tam, K. Stevens, A. Barhoum, N. M. Duc, O. Stanley, R. Nagyfi, E. S. Shahul, S. Suri, D. Glushkov, A. Dantuluri, A. Maguire, C. Schuhmann, H. Nguyen, and A. Mattick. OpenAssistant conversations – democratizing large language model alignment. arXiv preprint arXiv:2304.07327, Apr. 2023.

K. Krishna, A. Roy, and M. Iyyer. Hurdles to progress in long-form question answering. In K. Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tur, I. Beltagy, S. Bethard, R. Cotterell, T. Chakraborty, and Y. Zhou, editors, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4940–4957, Online, June 2021. Association for Computational Linguistics.

T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee, K. Toutanova, L. Jones, M. Kelcey, M.-W. Chang, A. M. Dai, J. Uszkoreit, Q. Le, and S. Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019.

G. Lai, Q. Xie, H. Liu, Y. Yang, and E. Hovy. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Stroudsburg, PA, USA, 2017. Association for Computational Linguistics.

T. Le Scao, A. Fan, C. Akiki, E. Pavlick, S. Ilić, D. Hesslow, R. Castagné, A. S. Luccioni, F. Yvon, M. Gallé, J. Tow, A. M. Rush, S. Biderman, A. Webson, P. S. Ammanamanchi, T. Wang, B. Sagot, N. Muennighoff, A. V. del Moral, O. Ruwase, R. Bawden, S. Bekman, A. McMillan-Major, I. Beltagy, H. Nguyen, L. Saulnier, S. Tan, P. O. Suarez, V. Sanh, H. Laurençon, Y. Jernite, J. Launay, M. Mitchell, C. Raffel, A. Gokaslan, A. Simhi, A. Soroa, A. F. Aji, A. Alfassy, A. Rogers, A. K. Nitzav, C. Xu, C. Mou, C. Emezue, C. Klamm, C. Leong, D. van Strien, D. I. Adelani, D. Radev, E. G. Ponferrada, E. Levkovizh, E. Kim, E. B. Natan, F. De Toni, G. Dupont, G. Kruszewski, G. Pistilli, H. Elsahar, H. Benyamina, H. Tran, I. Yu, I. Abdulmumin, I. Johnson, I. Gonzalez-Dios, J. de la Rosa, J. Chim, J. Dodge, J. Zhu, J. Chang, J. Frohberg, J. Tobing, J. Bhattacharjee, K. Almubarak, K. Chen, K. Lo, L. Von Werra, L. Weber, L. Phan, L. Ben allal, L. Tanguy, M. Dey, M. R. Muñoz, M. Masoud, M. Grandury, M. Šaško, M. Huang, M. Coavoux, M. Singh, M. T.-J. Jiang, M. C. Vu, M. A. Jauhar, M. Ghaleb, N. Subramani, N. Kassner, N.
Khamis, O. Nguyen, O. Espejel, O. de Gibert, P. Villegas, P. Henderson, P. Colombo, P. Amuok, Q. Lhoest, R. Harliman, R. Bommasani, R. L. López, R. Ribeiro, S. Osei, S. Pyysalo, S. Nagel, S. Bose, S. H. Muhammad, S. Sharma, S. Longpre, S. Nikpoor, S. Silberberg, S. Pai, S. Zink, T. T. Torrent, T. Schick, T. Thrush, V. Danchev, V. Nikoulina, V. Laippala, V. Lepercq, V. Prabhu, Z. Alyafeai, Z. Talat, A. Raja, B. Heinzerling, C. Si, D. E. Taşar, E. Salesky, S. J. Mielke, W. Y. Lee, A. Sharma, A. Santilli, A. Chaffin, A. Stiegler, D. Datta, E. Szczechla, G. Chhablani, H. Wang, H. Pandey, H. Strobelt, J. A. Fries, J. Rozen, L. Gao, L. Sutawika, M. Saiful Bari, M. S. Al-shaibani, M. Manica, N. Nayak, R. Teehan, S. Albanie, S. Shen, S. Ben-David, S. H. Bach, T. Kim, T. Bers, T. Fevry, T. Neeraj, U. Thakker, V. Raunak, X. Tang, Z.-X. Yong, Z. Sun, S. Brody, Y. Uri, H. Tojarieh, A. Roberts, H. W. Chung, J. Tae, J. Phang, O. Press, C. Li, D. Narayanan, H. Bourfoune, J. Casper, J. Rasley, M. Ryabinin, M. Mishra, M. Zhang, M. Shoeybi, M. Peyrounette, N. Patry, N. Tazi, O. Sanseviero, P. von Platen, P. Cornette, P. F. Lavallée, R. Lacroix, S. Rajbhandari, S. Gandhi, S. Smith, S. Requena, S. Patil, T. Dettmers, A. Baruwa, A. Singh, A. Cheveleva, A.-L. Ligozat, A. Subramonian, A. Névéol, C. Lovering, D. Garrette, D. Tunuguntla, E. Reiter, E. Taktasheva, E. Voloshina, E. Bogdanov, G. I. Winata, H. Schoelkopf, J.-C. Kalo, J. Novikova, J. Z. Forde, J. Clive, J. Kasai, K. Kawamura, L. Hazan, M. Carpuat, M. Clinciu, N. Kim, N. Cheng, O. Serikov, O. Antverg, O. van der Wal, R. Zhang, R. Zhang, S. Gehrmann, S. Mirkin, S. Pais, T. Shavrina, T. Scialom, T. Yun, T. Limisiewicz, V. Rieser, V. Protasov, V. Mikhailov, Y. Pruksachatkun, Y. Belinkov, Z. Bamberger, Z. Kasner, A. Rueda, A. Pestana, A. Feizpour, A. Khan, A. Faranak, A. Santos, A. Hevia, A. Unldreaj, A. Aghagol, A. Abdollahi, A. Tammour, A. HajiHosseini, B. Behroozi, B. Ajibade, B. Saxena, C. M. Ferrandis, D. McDuff, D. Contractor, D. Lansky, D. David, D. Kiela, D. A. Nguyen, E. Tan, E. Baylor, E. Ozoani, F. Mirza, F. Ononiwu, H. Rezanejad, H. Jones, I. Bhattacharya, I. Solaiman, I. Sedenko, I. Nejadgholi, J. Passmore, J. Seltzer, J. B. Sanz, L. Dutra, M. Samagaio, M. Elbadri, M. Mieskes, M. Gerchick, M. Akinlolu, M. McKenna, M. Qiu, M. Ghauri, M. Burynok, N. Abrar, N. Rajani, N. Elkott, N. Fahmy, O. Samuel, R. An, R. Kromann, R. Hao, S. Alizadeh, S. Shubber, S. Wang, S. Roy, S. Viguier, T. Le, T. Oyebade, T. Le, Y. Yang, Z. Nguyen, A. R. Kashyap, A. Palasciano, A. Callahan, A. Shukla, A. Miranda-Escalada, A. Singh, B. Beilharz, B. Wang, C. Brito, C. Zhou, C. Jain, C. Xu, C. Fourrier, D. L. Periñán, D. Molano, D. Yu, E. Manjavacas, F. Barth, F. Fuhrimann, G. Altay, G. Bayrak, G. Burns, H. U. Vrabec, I. Bello, I. Dash, J. Kang, J. Giorgi, J. Golde, J. D. Posada, K. R. Sivaraman, L. Bulchandani, L. Liu, L. Shinzato, M. H. de Bykhovetz, M. Takeuchi, M. Pàmies, M. A. Castillo, M. Nezhurina, M. Sänger, M. Samwald, M. Cullan, M. Weinberg, M. De Wolf, M. Mihaljcic, M. Liu, M. Freidank, M. Kang, N. Seelam, N. Dahlberg, N. M. Broad, N. Muellner, P. Fung, P. Haller, R. Chandrasekhar, R. Eisenberg, R. Martin, R. Canalli, R. Su, R. Su, S. Cahyawijaya, S. Garda, S. S. Deshmukh, S. Mishra, S. Kiblawi, S. Ott, S. Sang-aroonsiri, S. Kumar, S. Schweter, S. Bharati, T. Laud, T. Gigant, T. Kainuma, W. Kusa, Y. Labrak, Y. S. Bajaj, Y. Venkatraman, Y. Xu, Y. Xu, Y. Xu, Z. Tan, Z. Xie, Z. Ye, M. Bras, Y. Belkada, and T. Wolf.
BLOOM: A 176B-Parameter Open-Access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.

K. Lee, M.-W. Chang, and K. Toutanova. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096. Association for Computational Linguistics, 2019.

K. Lee, D. Ippolito, A. Nystrom, C. Zhang, D. Eck, C. Callison-Burch, and N. Carlini. Deduplicating training data makes language models better. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8424–8445, Dublin, Ireland, May 2022. Association for Computational Linguistics.

M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer. BART: Denoising Sequence-to-Sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online, 2020a. Association for Computational Linguistics.

P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-T. Yih, T. Rocktäschel, S. Riedel, and D. Kiela. Retrieval-Augmented generation for Knowledge-Intensive NLP tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459–9474, 2020b.

P. Lewis, P. Stenetorp, and S. Riedel. Question and answer Test-Train overlap in Open-Domain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1000–1008, Online, 2021. Association for Computational Linguistics.

D. Li, A. S. Rawat, M. Zaheer, X. Wang, M. Lukasik, A. Veit, F. Yu, and S. Kumar. Large language models with controllable working memory. arXiv preprint arXiv:2211.05110, Nov. 2022.

L. H. Li, J. Hessel, Y. Yu, X. Ren, K.-W. Chang, and Y. Choi. Symbolic Chain-of-Thought distillation: Small models can also “think” step-by-step. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2665–2679. Association for Computational Linguistics, June 2023.

Z. Liang, T. Khot, S. Bethard, M. Surdeanu, and A. Sabharwal. Better retrieval may not lead to better question answering. arXiv preprint arXiv:2205.03685, May 2022.

K. Lin, O. Tafjord, P. Clark, and M. Gardner. Reasoning over paragraph effects in situations. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 58–62. Association for Computational Linguistics, 2019.

S. Lin, J. Hilton, and O. Evans. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, Stroudsburg, PA, USA, May 2022. Association for Computational Linguistics.

J. Liu, A. Liu, X. Lu, S. Welleck, P. West, R. Le Bras, Y. Choi, and H. Hajishirzi. Generated knowledge prompting for commonsense reasoning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3154–3169, Stroudsburg, PA, USA, May 2022. Association for Computational Linguistics.

Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, July 2019.

S. Longpre, K. Perisetla, A.
Chen, N. Ramesh, C. DuBois, and S. Singh. Entity-Based knowledge conflicts in question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7052–7063, Online and Punta Cana, Dominican Republic, Nov. 2021. Association for Computational Linguistics.

N. Lourie, R. Le Bras, C. Bhagavatula, and Y. Choi. UNICORN on RAINBOW: A universal commonsense reasoning model on a new multitask benchmark. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35 of 15, pages 13480–13488, May 2021.

A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wiegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang, S. Welleck, B. P. Majumder, S. Gupta, A. Yazdanbakhsh, and P. Clark. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, Mar. 2023.

L. C. Magister, J. Mallinson, J. Adamek, E. Malmi, and A. Severyn. Teaching small language models to reason. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1773–1781, Toronto, Canada, July 2023. Association for Computational Linguistics.

Y. A. Malkov and D. A. Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):824–836, 2018.

P. Manakul, A. Liusie, and M. J. F. Gales. SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896, Mar. 2023.

C. Manning and H. Schutze. Foundations of Statistical Natural Language Processing. MIT Press, May 1999.

A. A. Markov. Essai d'une recherche statistique sur le texte du roman “eugene onegin” illustrant la liaison des epreuve en chain (‘example of a statistical investigation of the text of “eugene onegin” illustrating the dependence between samples in chain’). Izvistia Imperatorskoi Akademii Nauk (Bulletin de l'Academie Imperiale des Sciences de St.Petersbourg), 7:153–162, 1913.

T. Mihaylov, P. Clark, T. Khot, and A. Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381–2391, Brussels, Belgium, 2018. Association for Computational Linguistics.

T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. In ICLR Workshop, Jan. 2013a.

T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, 2013b.

S. Min, E. Wallace, S. Singh, M. Gardner, H. Hajishirzi, and L. Zettlemoyer. Compositional questions do not necessitate multi-hop reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4249–4257, Florence, Italy, July 2019. Association for Computational Linguistics.

S. Min, K. Krishna, X. Lyu, M. Lewis, W.-T. Yih, P. W. Koh, M. Iyyer, L. Zettlemoyer, and H. Hajishirzi. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. arXiv preprint arXiv:2305.14251, May 2023.

H. Moravec. Mind Children: The Future of Robot and Human Intelligence. Harvard University Press, 1988.

H. T. Ng, L. H. Teo, and J. L. P. Kwan. A machine learning approach to answering questions for reading comprehension tests.
In Proceedings of the 2000 Joint SIGDAT conference on Empirical methods in natural language processing and very large corpora: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics - Volume 13, EMNLP '00, pages 124–132, USA, Oct. 2000. Association for Computational Linguistics.

Y. Onoe, M. J. Q. Zhang, E. Choi, and G. Durrett. CREAK: A dataset for commonsense reasoning over entity knowledge. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), Nov. 2021.

OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, Mar. 2023.

L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, and R. Lowe. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, 35, pages 27730–27744, 2022.

X. Pan, W. Yao, H. Zhang, D. Yu, D. Yu, and J. Chen. Knowledge-in-Context: Towards knowledgeable Semi-Parametric language models. In The Eleventh International Conference on Learning Representations, 2023.

B. Partee. Compositionality. Varieties of Formal Semantics: Proceedings of the fourth Amsterdam colloquium, 3:281–311, 1984.

X. Pi, Q. Liu, B. Chen, M. Ziyadi, Z. Lin, Y. Gao, Q. Fu, J.-G. Lou, and W. Chen. Reasoning like program executors. arXiv preprint arXiv:2201.11473, 2022.

A. Piktus, F. Petroni, V. Karpukhin, D. Okhonko, S. Broscheit, G. Izacard, P. Lewis, B. Oğuz, E. Grave, W.-T. Yih, and S. Riedel. The web is your oyster - knowledge-intensive NLP against a very large web corpus. arXiv preprint arXiv:2112.09924, Dec. 2021.

P. Qi, H. Lee, T. Sido, and C. Manning. Answering Open-Domain questions of varying reasoning steps from text. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3599–3614, Online and Punta Cana, Dominican Republic, Nov. 2021. Association for Computational Linguistics.

A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. Improving language understanding by generative pre-training. http://openai-assets.s3.amazonaws.com/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018.

A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. http://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf, 2019.

C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified Text-to-Text transformer. Journal of Machine Learning Research, 21:1–67, 2020.

P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Association for Computational Linguistics, 2016.

P. Rajpurkar, R. Jia, and P. Liang. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789. Association for Computational Linguistics, 2018.

O. Ram, G. Shachaf, O. Levy, J. Berant, and A. Globerson. Learning to retrieve passages without supervision.
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2687–2700, Seattle, United States, July 2022. Association for Computational Linguistics.

Y. Razeghi, R. L. Logan, IV, M. Gardner, and S. Singh. Impact of pre-training term frequencies on Few-Shot numerical reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 840–854, Abu Dhabi, United Arab Emirates, Dec. 2022. Association for Computational Linguistics.

N. Reimers and I. Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China, 2019. Association for Computational Linguistics.

M. Richardson, C. J. C. Burges, and E. Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193–203. Association for Computational Linguistics, 2013.

E. Riloff and M. Thelen. A rule-based question answering system for reading comprehension tests. In ANLP/NAACL 2000 Workshop on Reading comprehension tests as evaluation for computer-based language understanding systems, Morristown, NJ, USA, 2000. Association for Computational Linguistics.

A. Roberts, C. Raffel, and N. Shazeer. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418–5426. Association for Computational Linguistics, 2020.

A. Rogers, O. Kovaleva, M. Downey, and A. Rumshisky. Getting closer to AI complete question answering: A set of prerequisite real tasks. In AAAI Conference on Artificial Intelligence (AAAI-20), volume 34, pages 8722–8731. Association for the Advancement of Artificial Intelligence, Apr. 2020.

A. Rogers, M. Gardner, and I. Augenstein. QA dataset explosion: A taxonomy of NLP resources for question answering and reading comprehension. ACM Computing Surveys, 55(10):1–45, Feb. 2023.

K. Sakaguchi, R. Le Bras, C. Bhagavatula, and Y. Choi. WinoGrande: An adversarial winograd schema challenge at scale. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8732–8740. Association for the Advancement of Artificial Intelligence, 2020.

V. Sanh, A. Webson, C. Raffel, S. H. Bach, L. Sutawika, Z. Alyafeai, A. Chaffin, A. Stiegler, T. Le Scao, A. Raja, M. Dey, M. Saiful Bari, C. Xu, U. Thakker, S. S. Sharma, E. Szczechla, T. Kim, G. Chhablani, N. Nayak, D. Datta, J. Chang, M. T.-J. Jiang, H. Wang, M. Manica, S. Shen, Z. X. Yong, H. Pandey, R. Bawden, T. Wang, T. Neeraj, J. Rozen, A. Sharma, A. Santilli, T. Fevry, J. A. Fries, R. Teehan, S. Biderman, L. Gao, T. Bers, T. Wolf, and A. M. Rush. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations, 2021.

M. Sap, R. Le Bras, E. Allaway, C. Bhagavatula, N. Lourie, H. Rashkin, B. Roof, N. A. Smith, and Y. Choi. ATOMIC: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), pages 3027–3035, 2019a.

M. Sap, H. Rashkin, D. Chen, R. Le Bras, and Y. Choi. Social IQa: Commonsense reasoning about social interactions.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463–4473, Stroudsburg, PA, USA, Nov. 2019b. Association for Computational Linguistics.

A. Schwarzschild, E. Borgnia, A. Gupta, F. Huang, U. Vishkin, M. Goldblum, and T. Goldstein. Can you learn an algorithm? generalizing from easy to hard problems with recurrent networks. In Advances in Neural Information Processing Systems, volume 34, pages 6695–6706, 2021.

P. Sen, A. F. Aji, and A. Saffari. Mintaka: A complex, natural, and multilingual dataset for End-to-End question answering. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1604–1619, Gyeongju, Republic of Korea, Oct. 2022. International Committee on Computational Linguistics.

K. Shridhar, A. Stolfo, and M. Sachan. Distilling reasoning capabilities into smaller language models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7059–7073, Toronto, Canada, July 2023. Association for Computational Linguistics.

V. Shwartz, P. West, R. Le Bras, C. Bhagavatula, and Y. Choi. Unsupervised commonsense question answering with self-talk. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4615–4629, Stroudsburg, PA, USA, Nov. 2020. Association for Computational Linguistics.

C. Si, W. Shi, C. Zhao, L. Zettlemoyer, and J. Boyd-Graber. Mixture of prompt experts for generalizable and interpretable question answering. arXiv preprint arXiv:2305.14628, May 2023.

R. F. Simmons, S. Klein, and K. McConlogue. Indexing and dependency logic for answering english questions. American Documentation, 15(3):196–204, July 1964.

K. Sinha, S. Sodhani, J. Dong, J. Pineau, and W. L. Hamilton. CLUTRR: A diagnostic benchmark for inductive reasoning from text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4506–4515. Association for Computational Linguistics, 2019.

K. Spärck Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28(1):11–21, 1972.

R. Speer, J. Chin, and C. Havasi. ConceptNet 5.5: An open multilingual graph of general knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, 31(1), pages 4444–4451, 2017.

A. Srivastava, A. Rastogi, A. Rao, A. A. M. Shoeb, A. Abid, A. Fisch, A. R. Brown, A. Santoro, A. Gupta, A. Garriga-Alonso, A. Kluska, A. Lewkowycz, A. Agarwal, A. Power, A. Ray, A. Warstadt, A. W. Kocurek, A. Safaya, A. Tazarv, A. Xiang, A. Parrish, A. Nie, A. Hussain, A. Askell, A. Dsouza, A. Slone, A. Rahane, A. S. Iyer, A. Andreassen, A. Madotto, A. Santilli, A. Stuhlmüller, A. Dai, A. La, A. Lampinen, A. Zou, A. Jiang, A. Chen, A. Vuong, A. Gupta, A. Gottardi, A. Norelli, A. Venkatesh, A. Gholamidavoodi, A. Tabassum, A. Menezes, A. Kirubarajan, A. Mullokandov, A. Sabharwal, A. Herrick, A. Efrat, A. Erdem, A. Karakaş, B. Ryan Roberts, B. S. Loe, B. Zoph, B. Bojanowski, B. Özyurt, B. Hedayatnia, B. Neyshabur, B. Inden, B. Stein, B. Ekmekci, B. Y. Lin, B. Howald, C. Diao, C. Dour, C. Stinson, C. Argueta, C. F. Ramírez, C. Singh, C. Rathkopf, C. Meng, C. Baral, C. Wu, C. Callison-Burch, C. Waites, C. Voigt, C. D. Manning, C. Potts, C. Ramirez, C. E. Rivera, C. Siro, C. Raffel, C. Ashcraft, C.
Garbacea, D. Sileo, D. Garrette, D. Hendrycks, D. Kilman, D. Roth, D. Freeman, D. Khashabi, D. Levy, D. M. González, D. Perszyk, D. Hernandez, D. Chen, D. Ippolito, D. Gilboa, D. Dohan, D. Drakard, D. Jurgens, D. Datta, D. Ganguli, D. Emelin, D. Kleyko, D. Yuret, D. Chen, D. Tam, D. Hupkes, D. Misra, D. Buzan, D. C. Mollo, D. Yang, D.-H. Lee, E. Shutova, E. D. Cubuk, E. Segal, E. Hagerman, E. Barnes, E. Donoway, E. Pavlick, E. Rodola, E. Lam, E. Chu, E. Tang, E. Erdem, E. Chang, E. A. Chi, E. Dyer, E. Jerzak, E. Kim, E. E. Manyasi, E. Zheltonozhskii, F. Xia, F. Siar, F. Martínez-Plumed, F. Happé, F. Chollet, F. Rong, G. Mishra, G. I. Winata, G. de Melo, G. Kruszewski, G. Parascandolo, G. Mariani, G. Wang, G. Jaimovitch-López, G. Betz, G. Gur-Ari, H. Galijasevic, H. Kim, H. Rashkin, H. Hajishirzi, H. Mehta, H. Bogar, H. Shevlin, H. Schütze, H. Yakura, H. Zhang, H. M. Wong, I. Ng, I. Noble, J. Jumelet, J. Geissinger, J. Kernion, J. Hilton, J. Lee, J. F. Fisac, J. B. Simon, J. Koppel, J. Zheng, J. Zou, J. Kocoń, J. Thompson, J. Kaplan, J. Radom, J. Sohl-Dickstein, J. Phang, J. Wei, J. Yosinski, J. Novikova, J. Bosscher, J. Marsh, J. Kim, J. Taal, J. Engel, J. Alabi, J. Xu, J. Song, J. Tang, J. Waweru, J. Burden, J. Miller, J. U. Balis, J. Berant, J. Frohberg, J. Rozen, J. Hernandez-Orallo, J. Boudeman, J. Jones, J. B. Tenenbaum, J. S. Rule, J. Chua, K. Kanclerz, K. Livescu, K. Krauth, K. Gopalakrishnan, K. Ignatyeva, K. Markert, K. D. Dhole, K. Gimpel, K. Omondi, K. Mathewson, K. Chiafullo, K. Shkaruta, K. Shridhar, K. McDonell, K. Richardson, L. Reynolds, L. Gao, L. Zhang, L. Dugan, L. Qin, L. Contreras-Ochando, L.-P. Morency, L. Moschella, L. Lam, L. Noble, L. Schmidt, L. He, L. O. Colón, L. Metz, L. K. Şenel, M. Bosma, M. Sap, M. ter Hoeve, M. Farooqi, M. Faruqui, M. Mazeika, M. Baturan, M. Marelli, M. Maru, M. J. R. Quintana, M. Tolkiehn, M. Giulianelli, M. Lewis, M. Potthast, M. L. Leavitt, M. Hagen, M. Schubert, M. O. Baitemirova, M. Arnaud, M. McElrath, M. A. Yee, M. Cohen, M. Gu, M. Ivanitskiy, M. Starritt, M. Strube, M. Swędrowski, M. Bevilacqua, M. Yasunaga, M. Kale, M. Cain, M. Xu, M. Suzgun, M. Tiwari, M. Bansal, M. Aminnaseri, M. Geva, M. Gheini, V. T. Mukund, N. Peng, N. Chi, N. Lee, N. G.-A. Krakover, N. Cameron, N. Roberts, N. Doiron, N. Nangia, N. Deckers, N. Muennighoff, N. S. Keskar, N. S. Iyer, N. Constant, N. Fiedel, N. Wen, O. Zhang, O. Agha, O. Elbaghdadi, O. Levy, O. Evans, P. A. M. Casares, P. Doshi, P. Fung, P. P. Liang, P. Vicol, P. Alipoormolabashi, P. Liao, P. Liang, P. Chang, P. Eckersley, P. M. Htut, P. Hwang, P. Miłkowski, P. Patil, P. Pezeshkpour, P. Oli, Q. Mei, Q. Lyu, Q. Chen, R. Banjade, R. E. Rudolph, R. Gabriel, R. Habacker, R. R. Delgado, R. Millière, R. Garg, R. Barnes, R. A. Saurous, R. Arakawa, R. Raymaekers, R. Frank, R. Sikand, R. Novak, R. Sitelew, R. LeBras, R. Liu, R. Jacobs, R. Zhang, R. Salakhutdinov, R. Chi, R. Lee, R. Stovall, R. Teehan, R. Yang, S. Singh, S. M. Mohammad, S. Anand, S. Dillavou, S. Shleifer, S. Wiseman, S. Gruetter, S. R. Bowman, S. S. Schoenholz, S. Han, S. Kwatra, S. A. Rous, S. Ghazarian, S. Ghosh, S. Casey, S. Bischoff, S. Gehrmann, S. Schuster, S. Sadeghi, S. Hamdan, S. Zhou, S. Srivastava, S. Shi, S. Singh, S. Asaadi, S. S. Gu, S. Pachchigar, S. Toshniwal, S. Upadhyay, Shyamolima Debnath, S. Shakeri, S. Thormeyer, S. Melzi, S. Reddy, S. P. Makini, S.-H. Lee, S. Torene, S. Hatwar, S. Dehaene, S. Divic, S. Ermon, S. Biderman, S. Lin, S. Prasad, S. T. Piantadosi, S. M. Shieber, S.
Misherghi, S. Kiritchenko, S. Mishra, T. Linzen, T. Schuster, T. Li, T. Yu, T. Ali, T. Hashimoto, T.-L. Wu, T. Desbordes, T. Rothschild, T. Phan, T. Wang, T. Nkinyili, T. Schick, T. Kornev, T. Telleen-Lawton, T. Tunduny, T. Gerstenberg, T. Chang, T. Neeraj, T. Khot, T. Shultz, U. Shaham, V. Misra, V. Demberg, V. Nyamai, V. Raunak, V. Ramasesh, V. U. Prabhu, V. Padmakumar, V. Srikumar, W. Fedus, W. Saunders, W. Zhang, W. Vossen, X. Ren, X. Tong, X. Zhao, X. Wu, X. Shen, Y. Yaghoobzadeh, Y. Lakretz, Y. Song, Y. Bahri, Y. Choi, Y. Yang, Y. Hao, Y. Chen, Y. Belinkov, Y. Hou, Y. Hou, Y. Bai, Z. Seid, Z. Zhao, Z. Wang, Z. J. Wang, Z. Wang, and Z. Wu. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, June 2022.

Stability-AI. Stability AI releases StableVicuna, the AI World's First Open Source RLHF LLM Chatbot. https://stability.ai/blog/stablevicuna-open-source-rlhf-chatbot/, Apr. 2023. Accessed: 2023-7-5.

H. Sun, B. Dhingra, M. Zaheer, K. Mazaitis, R. Salakhutdinov, and W. Cohen. Open domain question answering using early fusion of knowledge bases and text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4231–4242, Brussels, Belgium, 2018. Association for Computational Linguistics.

I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances In Neural Information Processing Systems 27, volume 27, 2014.

A. Talmor and J. Berant. The web as a Knowledge-Base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 641–651, New Orleans, Louisiana, June 2018. Association for Computational Linguistics.

A. Talmor, J. Herzig, N. Lourie, and J. Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158. Association for Computational Linguistics, 2019.

A. Talmor, O. Yoran, R. Le Bras, C. Bhagavatula, Y. Goldberg, Y. Choi, and J. Berant. CommonsenseQA 2.0: Exposing the limits of AI through gamification. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), Nov. 2021.

R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and T. B. Hashimoto. Stanford alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca, 2023.

Y. Tay, J. Wei, H. W. Chung, V. Q. Tran, D. R. So, S. Shakeri, X. Garcia, H. S. Zheng, J. Rao, A. Chowdhery, D. Zhou, D. Metzler, S. Petrov, N. Houlsby, Q. V. Le, and M. Dehghani. Transcending scaling laws with 0.1% extra compute. arXiv preprint arXiv:2210.11399, Oct. 2022.

R. Taylor, M. Kardas, G. Cucurull, T. Scialom, A. Hartshorn, E. Saravia, A. Poulton, V. Kerkez, and R. Stojnic. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085, Nov. 2022.

W. L. Taylor. “cloze procedure”: A new tool for measuring readability. Journalism Quarterly, 30(4):415–433, Sept. 1953.

N. Thakur, N. Reimers, A. Rücklé, A. Srivastava, and I. Gurevych. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models.
In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), Oct. 2021.

J. Thorne, A. Vlachos, C. Christodoulopoulos, and A. Mittal. FEVER: A large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), Stroudsburg, PA, USA, 2018. Association for Computational Linguistics.

H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, Feb. 2023.

A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200. Association for Computational Linguistics, 2017.

H. Trivedi, N. Balasubramanian, T. Khot, and A. Sabharwal. MuSiQue: Multihop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539–554, 2022a.

H. Trivedi, N. Balasubramanian, T. Khot, and A. Sabharwal. Teaching broad reasoning skills for Multi-Step QA by generating hard contexts. arXiv preprint arXiv:2205.12496, May 2022b.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.

E. M. Voorhees. The TREC question answering track. Natural Language Engineering, 7(4):361–378, Dec. 2001.

E. Wallace, Y. Wang, S. Li, S. Singh, and M. Gardner. Do NLP models know numbers? probing numeracy in embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5307–5315, Hong Kong, China, Nov. 2019. Association for Computational Linguistics.

A. Wang, Y. Pruksachatkun, N. Nangia, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman. SuperGLUE: A stickier benchmark for General-Purpose language understanding systems. In Advances in Neural Information Processing Systems, 32, May 2019a.

A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman. GLUE: A Multi-Task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations, 2019b.

S. Wang, M. Yu, X. Guo, Z. Wang, T. Klinger, W. Zhang, S. Chang, G. Tesauro, B. Zhou, and J. Jiang. R3: Reinforced Ranker-Reader for open-domain question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32. Association for the Advancement of Artificial Intelligence, Apr. 2018.

X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou. Self-Consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, Mar. 2022a.

Z. Wang, X. Pan, D. Yu, D. Yu, J. Chen, and H. Ji. Zemi: Learning Zero-Shot Semi-Parametric language models from multiple tasks. arXiv preprint arXiv:2210.00185, Oct. 2022b.

J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2021.

J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou.
Chain of thought prompting elicits reasoning in large language models. In Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022), Jan. 2022.

S. Wiegreffe and A. Marasović. Teach me to explain: A review of datasets for explainable NLP. arXiv preprint arXiv:2102.12060 [cs.CL], 2021.

T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. Le Scao, S. Gugger, M. Drame, Q. Lhoest, and A. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Stroudsburg, PA, USA, 2020. Association for Computational Linguistics.

T. Wolfson, M. Geva, A. Gupta, M. Gardner, Y. Goldberg, D. Deutch, and J. Berant. Break it down: A question understanding benchmark. Transactions of the Association for Computational Linguistics, 8:183–198, 2020.

C.-S. Wu, A. Madotto, W. Liu, P. Fung, and C. Xiong. QAConv: Question answering on informative conversations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5389–5411, Stroudsburg, PA, USA, 2022. Association for Computational Linguistics.

D. Wu, J. Zhang, and X. Huang. Chain of thought prompting elicits knowledge augmentation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 6519–6534. Association for Computational Linguistics, July 2023.

Z. Wu, Y. Xiong, S. X. Yu, and D. Lin. Unsupervised feature learning via non-parametric instance discrimination. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3733–3742. IEEE, June 2018.

Z. Xie, S. Thiem, J. Martin, E. Wainwright, S. Marmorstein, and P. Jansen. WorldTree v2: A corpus of science-domain structured explanations and inference patterns supporting multi-hop inference. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5456–5473, Marseille, France, 2020. European Language Resources Association.

W. Xiong, J. Wu, H. Wang, V. Kulkarni, M. Yu, S. Chang, X. Guo, and W. Y. Wang. TWEETQA: A social media focused question answering dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5020–5031, Florence, Italy, July 2019. Association for Computational Linguistics.

W. Xiong, X. Li, S. Iyer, J. Du, P. Lewis, W. Y. Wang, Y. Mehdad, S. Yih, S. Riedel, D. Kiela, and B. Oguz. Answering complex Open-Domain questions with Multi-Hop dense retrieval. In International Conference on Learning Representations, 2021.

Y. Xu, C. Zhu, S. Wang, S. Sun, H. Cheng, X. Liu, J. Gao, P. He, M. Zeng, and X. Huang. Human parity on CommonsenseQA: Augmenting Self-Attention with external attention. arXiv preprint arXiv:2112.03254, Dec. 2021.

Y. Xu, C. Zhu, S. Wang, S. Sun, H. Cheng, X. Liu, J. Gao, P. He, M. Zeng, and X. Huang. Human parity on commonsenseqa: Augmenting self-attention with external attention. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 2762–2768, 2022.

Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. Cohen, R. Salakhutdinov, and C. D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380. Association for Computational Linguistics, 2018.

O. Yoran, A. Talmor, and J. Berant.
Turning tables: Generating examples from semi-structured tables for endowing language models with reasoning skills. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6016–6031, Dublin, Ireland, 2022. Association for Computational Linguistics. W. Yu, Z. Jiang, Y. Dong, and J. Feng. ReClor: A reading comprehen- sion dataset requiring logical reasoning. In International Conference on Learning Representations, Feb. 2020. W. Yu, C. Zhu, Z. Zhang, S. Wang, Z. Zhang, Y. Fang, and M. Jiang. Re- trieval augmentation for commonsense reasoning: A unified approach. In Proceedings of the 2022 Conference on Empirical Methods in Natural Lan- guage Processing, pages 4364–4377. Association for Computational Lin- guistics, Oct. 2022. W. Yu, D. Iter, S. Wang, Y. Xu, M. Ju, S. Sanyal, C. Zhu, M. Zeng, and M. Jiang. Generate rather than retrieve: Large language models are strong context generators. In International Conference on Learning Representa- tions, 2023. W. Yuan, G. Neubig, and P. Liu. Bartscore: Evaluating generated text as text generation. Advances in Neural Information Processing Systems, 34: 27263–27277, 2021. 155 C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. In International Confer- ence on Learning Representations, 2017. C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–115, Mar. 2021. S. Zhang, X. Liu, J. Liu, J. Gao, K. Duh, and B. Van Durme. ReCoRD: Bridging the gap between human and machine commonsense reading com- prehension. arXiv preprint arXiv:1810.12885, Oct. 2018. S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V. Lin, T. Mihaylov, M. Ott, S. Shleifer, K. Shus- ter, D. Simig, P. S. Koura, A. Sridhar, T. Wang, and L. Zettlemoyer. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, May 2022. W. Zhao, M. Geva, B. Y. Lin, M. Yasunaga, A. Madaan, and T. Yu. Com- plex reasoning in natural languag. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tu- torial Abstracts), pages 11–20, Toronto, Canada, July 2023. Association for Computational Linguistics. F. Zhu, W. Lei, Y. Huang, C. Wang, S. Zhang, J. Lv, F. Feng, and T.-S. Chua. TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Interna- tional Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3277–3287, Stroudsburg, PA, USA, Aug. 2021. Association for Computational Linguistics. 156
ai_researcher
1
Application_of_Prompt_Engineering_Techniques_to_Optimize_Information_Retrieval_in_the_Metaverse.pdf
Goal-oriented Semantic Communications for Metaverse Construction via Generative AI and Optimal Transport

Zhe Wang, Nan Li, Yansha Deng, Senior Member, IEEE, and A. Hamid Aghvami, Life Fellow, IEEE

Z. Wang, N. Li, Y. Deng, and A. Hamid Aghvami (Emeritus Professor) are with the Department of Engineering, King's College London, Strand, London WC2R 2LS, U.K. (e-mail: [email protected]; [email protected]; [email protected]; [email protected])

Abstract—The emergence of the metaverse has boosted productivity and creativity, driving real-time updates and personalized content, which will substantially increase data traffic. However, current bit-oriented communication networks struggle to manage this high volume of dynamic information, restricting the interactivity of metaverse applications. To address this research gap, we propose a goal-oriented semantic communication (GSC) framework for the metaverse. Building on an existing metaverse wireless construction task, our proposed GSC framework includes an hourglass network-based (HgNet) encoder to extract semantic information of objects in the metaverse, and a semantic decoder that uses this extracted information to reconstruct the metaverse content after wireless transmission, enabling efficient communication and real-time object behaviour updates to the scenery for the metaverse construction task. To overcome the wireless channel noise at the receiver, we design an optimal transport (OT)-enabled semantic denoiser, which enhances the accuracy of metaverse scenery through wireless communication. Experimental results show that, compared to conventional metaverse construction, our proposed GSC framework significantly reduces wireless metaverse construction latency by 92.6%, while improving metaverse object status accuracy and viewing experience by 45.6% and 44.7%, respectively.

Index Terms—Metaverse, semantic communications, semantic denoise, optimal transport, stable diffusion.

I. INTRODUCTION

The concept of the metaverse has emerged as a comprehensive extension of the digital universe, encompassing various applications such as real-world scenario construction and complex simulations [1]. Most previous research on the metaverse has focused on achieving precise rendering of visual content from physical counterparts through wired and wireless communication [2]. The metaverse construction task serves as a foundational step for other applications, such as BMW's intelligent virtual factories, where production efficiency can be analyzed and optimized through metaverse simulation [3]. The goal of these metaverse construction tasks is to accurately transmit different types of metaverse data using bit-oriented communication methods. However, supporting real-time metaverse interactions and high-quality rendering typically requires a bandwidth of approximately 5.6 Gbps for raw graphic data downloads [4]. This requirement far exceeds the global average 5G wireless download speed of 160 Mbps [5], posing a bottleneck for metaverse applications.

To generate metaverse scenery, high-dimensional data like point clouds [6] and meshes [7] can be utilized to capture more comprehensive information, including spatial positions, color attributes, and depth information, which can provide richer interaction for clients. Transmitting raw point clouds or meshes in metaverse construction requires a large amount of bandwidth.
For example, a 30 FPS point cloud video generates approximately 2.06 Gb of raw data per second [6]. Given the high data rates required to transmit these point clouds or meshes, current metaverse construction tasks often rely on high-resolution images [8] and video inputs for processing and rendering into virtual environments, rather than using raw metaverse data formats. However, rendering the metaverse from images and videos is both time and resource intensive, particularly for large-scale scenes that are challenging to render in real time. Previous research has shown that creating an interactive metaverse scene demands high computational resources, generally requiring over two days of training on two Tesla V100 GPUs [9]. Small inaccuracies and delays in data transmission can undermine the metaverse's reliability and realism, thereby degrading the user viewing experience.

To address the communication challenges posed by data-intensive metaverse applications, the concept of semantic communication has emerged as a potential solution. Unlike traditional communication methods, semantic communication updates the static knowledge, which includes shared information and maintains consistency between the transmitter and receiver [10], allowing for meaningful information transmission and thus reducing the demand for high bandwidth [11]. Recent research has explored the application of semantic communication across image-centric contexts [12] and video-specific contexts [13] through the design of semantic encoders and decoders. For image communication tasks, [14] presented a JSCC-enabled semantic encoder-decoder framework that enhances image construction accuracy by mapping key features as semantic information and using image overlap patches. However, JSCC-enabled image semantic communication operates as an end-to-end deep learning framework, jointly optimizing source and channel coding through deep neural networks, making it challenging to apply in scenarios where training data may not fully capture all features of the transmission signals and wireless channel [15].

For video transmission tasks, researchers primarily rely on frame interpolation techniques to generate videos by processing image sequences frame-by-frame [16–18]. While frame interpolation effectively smooths gaps between frames, this approach has inherent limitations in generating cohesive video sequences due to its lack of a unified perspective on frame continuity. As a result, these methods often struggle to produce smooth, realistic video, particularly when complex motion or scene consistency is required. Recently, video generation-enabled semantic decoders, such as the text-to-video model [19], have emerged as a promising solution to address these limitations through semantic interpretation. These models are designed to create more contextually coherent frames by understanding the high-level content and structure of scenes. However, fully realizing a robust text-to-image-to-video pipeline remains challenging, particularly in maintaining consistent object appearance and placement within static scenes across entire video sequences [20]. This challenge is especially critical in real-time applications like the metaverse, where generating coherent 2D video sequences within stable 3D virtual environments is crucial.
Metaverse construction shows significant potential for enhancing video generation by providing a spatially consistent 3D framework, which could serve as a foundation for generating more coherent video content. Advancing these capabilities could open new possibilities for creating immersive and stable video experiences within 3D virtual environments.

To construct metaverse scenery, generative AI frameworks such as Stable Diffusion (SD) [21] and Neural Radiance Field (NeRF) [22] have demonstrated remarkable capabilities in synthesizing customized images and 3D models through text-driven prompts and control mechanisms, respectively. These frameworks have significantly advanced the state-of-the-art in computational content generation for virtual environments. Specifically, [23] leveraged SD-based postprocessing to enhance semantic object matching and spatial relationship modeling from different viewing angles, demonstrating improved accuracy in object generation and positioning within synthesized scenes. Similarly, [24] employed NeRF for 3D scene reconstruction, utilizing objects' structural relationships for optimized sampling in large-scale scene rendering, thereby enabling efficient representation of complex virtual environments. However, these generative AI frameworks primarily emphasize custom and reliable image generation, with limited consideration of the impact of the practical constraints of wireless transmission systems on content generation. In particular, the transmission errors and delays caused by fading and noise in wireless channels highlight a critical research gap in metaverse content delivery. This impact of the wireless channel becomes particularly critical for metaverse applications with real-time requirements and reliability demands in real wireless communication environments.

To address the impact of noise and channel fading on semantic information transmission, machine learning approaches have demonstrated promising denoising capabilities at the receiver. However, due to inherent explainability issues in machine learning, mathematical optimization methods, specifically Optimal Transport (OT) theory, have exhibited superior performance in optimizing large-volume data distribution [25]. OT, originally proposed to minimize mass transfer costs between probability distributions, provides robust objective functions and ensures consistency in data while addressing resource constraints. This makes it particularly well-suited for applications involving high-dimensional data [26]. Recently, OT has gained significant research attention in image processing applications, where [27] implemented OT in multi-user semantic communication systems through distinct semantic decoders and channel equalizers to resolve language ambiguities and semantic mismatches, and [28] formulated semantic correspondence as an OT problem to align style disparities across semantically similar images, establishing dense correspondences at the semantic level. However, existing research on OT has mainly focused on preprocessing or postprocessing at the transmitter or receiver, lacking comprehensive consideration of wireless channel fading impacts on semantic communication. This research gap becomes particularly evident in high-dimensional datasets, such as those encountered in metaverse applications, where complex distributional characteristics present unique challenges and opportunities for further exploration.
To address the above limitations, we propose a goal-oriented semantic communication (GSC) framework. Unlike a wireless metaverse construction framework with image transmission, our proposed GSC framework extracts both 1D and 2D semantic information from high-dimensional metaverse data, incorporating a semantic denoising module to achieve lower bandwidth usage while maintaining communication accuracy. The contributions of this paper are summarized as follows:

• We propose a goal-oriented semantic communication (GSC) framework that allows users to customize content based on their innovations and inspirations, enabling changes in color, style, and object status updates within the metaverse construction task. The proposed GSC framework includes a semantic encoder to extract key points from metaverse scenery, an OT-enabled semantic denoiser to optimize semantic noise, and a semantic decoder with stable diffusion and NeRF for metaverse construction.

• We propose a novel OT-enabled semantic denoiser that consists of a semantic selective correction algorithm to effectively reduce noise in the received semantic information. This is achieved by minimizing the difference between the distributions of the denoised data points and the originally transmitted data points.

• We conduct extensive experiments to demonstrate the significant improvements of our proposed GSC framework over conventional metaverse construction. Specifically, our framework achieved improvements over conventional metaverse construction methods, including a 45.6% increase in metaverse status accuracy, a 44.7% enhancement in viewing experience, and a 92.6% reduction in transmission latency.

The rest of the paper is organized as follows. Section II presents the system model and problem formulation. Section III describes the different modules of the proposed GSC framework. Section IV outlines the design principles of the OT-enabled semantic denoiser. Section V discusses the evaluation metrics and experimental results. Finally, Section VI concludes the paper.

Fig. 1: Metaverse Construction Task

II. SYSTEM MODEL AND PROBLEM FORMULATION

In this section, we provide an overview of the traditional wireless metaverse construction task and the problem formulation. As shown in Fig. 1, the process of traditional metaverse construction includes image capturing, image-based wireless transmission, and metaverse construction based on image input.

A. System Model

We consider a metaverse construction task in an industrial factory scenario [24], aiming to replicate a physical factory and its operational status within the metaverse. The factory scenery features a stationary conveyor belt as a stable object, with several moving elements like a box traveling along the belt and a robotic arm operating in the middle, as plotted in Fig. 1. In this scenery, a set of UAVs is evenly positioned in a fixed area defined by the dimensions of length (L), width (W), and height (H), forming a structured metaverse environment. At each time slot t, each UAV captures an image from its specific orientation θ. The configuration set C for the UAV cameras encompasses several fixed parameters, which are defined as

$C = \{ r_\theta, (f_x, f_y), (c_x, c_y) \}$,   (1)

where the homogeneous transformation rotation matrix $r_\theta$ represents the orientation and position of the camera coordinate system relative to the origin of the world coordinate system. The parameters $(f_x, f_y)$ represent the focal lengths, indicating the camera's magnification level along each axis. We denote $(c_x, c_y)$ as the principal point offsets, which specify the image center relative to the sensor's coordinate system. The set of captured images at time slot t is denoted as $V_t$, and can be represented as

$V_t = [I_1, \cdots, I_{N_u}]^T, \quad I_i \in \mathbb{R}^{H \times W}$,   (2)

where $N_u$ denotes the total number of UAVs, and $I_i$ represents the RGB image matrix captured by the i-th UAV. Each captured image is in RGB format and shares the same resolution across all UAVs.

In the wireless transmission process, a Rician fading channel is introduced to model signal strength fluctuations caused by environmental factors such as mobility, multipath propagation, and unpredictable conditions. These fluctuations are represented by the channel matrix H, which is composed of individual fading parameters $h_i$ defined as

$h_i = \sqrt{n_{1i}^2 + n_{2i}^2}, \quad n_{1i}, n_{2i} \sim \mathcal{N}(0, \tfrac{1}{2})$,   (3)

where $n_{1i}$ and $n_{2i}$ are independent and identically distributed Gaussian random variables. During transmission, the data matrix $V_t$ is applied through the channel matrix H and affected by additive noise, resulting in the received data matrix $V'_t$, which can be expressed as

$V'_t = H \otimes V_t + N$,   (4)

where $\otimes$ denotes convolution, and N represents the additive noise, which has the same size as $V_t$.
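To make the channel model of Eqs. (3)–(4) concrete, the following NumPy sketch simulates one transmission pass. It is an illustration, not the authors' implementation: the channel is assumed to act element-wise per pixel (the paper does not specify the convolution kernel support), and the noise standard deviation is derived from a hypothetical target SNR measured against the mean signal power.

```python
import numpy as np

def simulate_channel(V_t: np.ndarray, snr_db: float,
                     rng: np.random.Generator = np.random.default_rng(0)) -> np.ndarray:
    """Apply per-element fading (Eq. 3) and additive noise (Eq. 4) to an image matrix."""
    # Fading magnitude h_i = sqrt(n1^2 + n2^2), with n1, n2 ~ N(0, 1/2)  -- Eq. (3)
    n1 = rng.normal(0.0, np.sqrt(0.5), size=V_t.shape)
    n2 = rng.normal(0.0, np.sqrt(0.5), size=V_t.shape)
    H = np.sqrt(n1 ** 2 + n2 ** 2)
    # Additive noise N with power set from the target SNR (assumption: SNR is
    # relative to the mean power of V_t).
    signal_power = np.mean(V_t.astype(np.float64) ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    N = rng.normal(0.0, np.sqrt(noise_power), size=V_t.shape)
    # Received data V'_t = H (*) V_t + N  -- Eq. (4), element-wise approximation
    return H * V_t + N

# Example: pass one 1200x600 grayscale frame through a 10 dB channel.
frame = np.random.rand(600, 1200)
received = simulate_channel(frame, snr_db=10.0)
```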
The configuration set C for the UAV cameras encompasses several fixed parameters, which are defined as C “ trθ, pfx, fyq, pcx, cyqu, (1) where the homogeneous transformation rotation matrix rθ rep- resents the orientation and position of the camera coordinate system relative to the origin of the world coordinate system. The parameters (fx, fy) represent the focal lengths, indicating the camera’s magnification level along each axis. We denote pcx, cyq as the principal point offsets, which specify the image center relative to the sensor’s coordinate system. The set of captured images at time slot t is denoted as Vt, and can be represented as Vt “ rI1, ¨ ¨ ¨ , INusT , Ii P RHˆW, (2) where Nu denotes the total number of UAVs, and Ii represents the RGB image matrix captured by the i-th UAV. Each cap- tured image is in RGB format and shares the same resolution across all UAVs. where n1i and n2i are independent and identically distributed the data Gaussian random variables. During transmission, matrix Vt is applied through the channel matrix H and affected by additive noise, resulting in the received data matrix V1 t, which can be expressed as V1 t “ H b Vt ` N, (4) where b denotes convolution, and N represents the additive noise, which has the same size as Vt. At the receiver, the received image set V1 t, along with the UAV camera parameters C1, the metaverse scenery. This process results in a metaverse data representation, such as point cloud, denoted as Pr, formulated as is used to construct ˘ ` Pr “ ráv1, ¨ ¨ ¨ , ávNrsT “ R V1 t, C1, δr , (5) where the variable δr represents NeRF algorithm parameters, Nr represents the number of points in the metaverse scenery, and each point vector ávi can be represented as ávi “ páli, áciq “ plx, ly, lz, cr, cg, cbq, (6) where the áli and áci represent the three-dimensional location and RGB color of point, respectively. B. Problem Formation The goal of metaverse construction is to maintain consis- tent scenery between the transmitter and receiver. Previous research has shown that objects in point cloud digital twins are easier for performance evaluation compared to other formats. Notably, color and style enhance object recognition in point clouds primarily when motion is involved, whereas point clouds themselves are efficient for 3D object detection [29]. Evaluating metaverse construction based on point clouds can effectively reflect both detection accuracy and the viewing experience in metaverse applications. This demonstrates that to ensure an optimal viewing experience on the client side based on the transmitted data, it is essential for the point clouds at the transmitter and the receiver to closely match each other. To achieve this consistency, we aim to minimize the geometry difference between the point cloud representation Pt at the transmitter and Pr at the receiver, using a modified chamfer distance measure Cp¨q, which is calculated as ´ ¯ P : min Pr C Pt, Pt ÿ 1 |Pt| ávPPt › ›áv1 ´ áv min áv1PPt › ›2 , “ min t|Vt,V1 t|,δru ÿ 1 |Pr| min ávPPr áv1PPr › ›áv ´ áv1 › ›2 ` (7) 4 Fig. 2: Goal-oriented Semantic Communication Framework for the Metaverse Construction where |P| represents the number of points in the point cloud P, the objective is to minimize the distance between the transmitter and receiver by aligning their respective metaverse data representations as closely as possible. C. Evaluation Metrics The overall goal of metaverse construction task, as shown in Eq. 
C. Evaluation Metrics

The overall goal of the metaverse construction task, as shown in Eq. (7), is to recover the scenery for better metaverse status and client viewing experience. To achieve this, we use different metrics to evaluate the entire virtual scenery, including key point error (KPE), point-to-point (P2Point), and transmission latency.

P2Point [30]: To evaluate the viewing experience of clients in the metaverse, the P2Point metric is employed to assess the generated scenery from 360° viewing angles, comparing the geometry difference between the point cloud data at the transmitter $P_t$ and the point cloud data generated at the receiver $P_r$. The P2Point error calculation can be expressed as

$\text{P2Point} = \max\left( d_{rms}(P_t, P_r),\ d_{rms}(P_r, P_t) \right)$,   (8)

where the function $d_{rms}$ is the root mean square error between two point clouds.

KPE: The KPE is used to estimate and evaluate the metaverse object status key point error between the transmitted images and received images, which can be expressed as

$\text{KPE} = \frac{1}{N} \sum_{i=1}^{N} \sqrt{|\vec{K}_i - \vec{K}'_i|^2}$,   (9)

where $\vec{K}_i$ and $\vec{K}'_i$ represent the three-dimensional position values of key points at the transmitter and the receiver, respectively, and N represents the total number of points in each image.

Latency: Latency is a critical metric in metaverse applications. The transmission latency of the metaverse construction task can be divided into different components, including semantic information extraction time $T_s$, wireless communication time $T_w$, OT selective correction time $T_o$, and image generation time $T_g$. The combination of all these times results in the transmission delay of the metaverse application, which can be expressed as

$L = T_s + T_w + T_o + T_g$.   (10)

By analyzing and optimizing each component of the transmission latency, we can justify and indicate the efficiency of our proposed framework.
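One possible implementation of the P2Point metric (Eq. 8) and the KPE metric (Eq. 9) is sketched below. The assumption that transmitted and received key points arrive in matched row order is ours, as is the brute-force nearest-neighbour search inside d_rms.

```python
import numpy as np

def d_rms(P_a: np.ndarray, P_b: np.ndarray) -> float:
    """RMS nearest-neighbour error from cloud P_a to cloud P_b."""
    dists = np.linalg.norm(P_a[:, None, :] - P_b[None, :, :], axis=-1)
    return float(np.sqrt(np.mean(dists.min(axis=1) ** 2)))

def p2point(P_t: np.ndarray, P_r: np.ndarray) -> float:
    """Symmetric P2Point error of Eq. (8)."""
    return max(d_rms(P_t, P_r), d_rms(P_r, P_t))

def kpe(K_t: np.ndarray, K_r: np.ndarray) -> float:
    """Mean Euclidean key point error of Eq. (9); rows are matched key points."""
    return float(np.mean(np.linalg.norm(K_t - K_r, axis=-1)))
```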
III. GOAL-ORIENTED SEMANTIC COMMUNICATION

In this section, we present an overview of the goal-oriented semantic communication (GSC) framework for wireless metaverse reconstruction, as shown in Fig. 2. The framework consists of four main modules: the knowledge base extraction module, which gathers and updates the essential static knowledge base; the semantic encoder, which takes images as input and generates 1D semantic information as output; the wireless communication module; and the semantic decoder, which reconstructs the metaverse scenery using the received semantic information.

Fig. 2: Goal-oriented Semantic Communication Framework for the Metaverse Construction

A. Knowledge Base Extraction

As shown in Fig. 3, compared to the general metaverse construction task that relies only on image input, our proposed GSC framework integrates a knowledge base B at the receiver for metaverse rendering. In detail, multiple components in the metaverse, such as stationary background objects denoted as O, remain stable. Background objects include the background of a factory or fixed objects like conveyor belts. In contrast, movable objects M, such as a robotic arm and a moving box, are in motion. The knowledge base captures the shared information within the stable and moving components through operation. This knowledge base only needs to be transmitted at the beginning of the metaverse application and thus alleviates the bandwidth requirements. We define the knowledge base as the scenery knowledge base, camera knowledge base, and object knowledge base, derived from the original metaverse scenery; a structural sketch follows the list below.

Fig. 3: Knowledge Base Extraction

• The scenery knowledge base includes the canny image set $V_c$, extracted from the image set $V_0$ of the scenery at time slot t = 0. These canny images contain information about the metaverse's stationary background and can serve as rotation information in the image generation process.

• The camera knowledge base represents the UAV camera parameters C, as described in Eq. (1), which include the UAV camera's ID, camera angle, distance, focal length, etc. These parameters will be used as input for the 3D metaverse construction process based on images.

• The object knowledge base consists of all objects' three-dimensional location coordinates at time t = 0. These coordinates will be used to attach objects within the metaverse scenery, allowing them to be correctly placed after being constructed based on images.
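The sketch below bundles the three knowledge bases into one structure transmitted once at t = 0. The field names and the OpenCV Canny thresholds are illustrative assumptions, not the paper's interface.

```python
from dataclasses import dataclass
import cv2  # OpenCV, used here only to extract canny edge images

@dataclass
class KnowledgeBase:
    canny_images: list        # scenery knowledge base V_c (edges of the static background)
    camera_params: list       # camera knowledge base C: (r_theta, (f_x, f_y), (c_x, c_y)) per UAV
    object_locations: dict    # object knowledge base: object id -> (x, y, z) at t = 0

def build_knowledge_base(images_t0, camera_params, object_locations) -> KnowledgeBase:
    """Extract the static knowledge base from the t = 0 image set V_0.

    images_t0 is assumed to hold uint8 images; the (100, 200) Canny
    thresholds are hypothetical defaults.
    """
    canny = [cv2.Canny(img, 100, 200) for img in images_t0]
    return KnowledgeBase(canny, camera_params, object_locations)
```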
B. Semantic Encoder

Section II.B details that the operational status of objects in the metaverse is essential for representing the state of the metaverse. Thus, in our industrial factory scenario, the positions and movements of moving boxes and robotic arms need to be accurately transmitted. Therefore, we define key points that precisely capture object movements and positions as semantic information. The architecture of the semantic encoder is detailed in Fig. 4, which generates nine heatmaps through a series of convolutional and deconvolutional operations. Each heatmap corresponds to the predicted location of a specific keypoint. The coordinates of the key points $\vec{K}_i$ can be extracted from the heatmaps as

$\vec{K}_i = H(I_i, \delta_h) = \arg\max_{(i,j)} H_k(i, j) = (x_i, y_i)$,   (11)

where $\delta_h$ denotes the neural network parameters of the HgNet, $\vec{K}_i$ represents the key points of the scenery, $H_k$ represents the heatmap of the k-th key point, and the location of the maximum value corresponds to the predicted coordinates of the keypoint. The loss function of HgNet is to minimize the Euclidean distance between the predicted key points and the ground truth, which is represented as

$\sqrt{|\vec{K}_i - \vec{K}_g|^2} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( (x_i - x_g)^2 + (y_i - y_g)^2 \right)}$,   (12)

where $\vec{K}_g$ represents the ground truth keypoint information related to the location and operational status of the moving box and robotic arm. The semantic encoder architecture and training parameters are shown in Table I.

Fig. 4: Semantic Encoder

TABLE I: HgNet Architecture

HgNet Parameters           | Input Value
Residual Block 1 (up1)     | (256,256,128)
MaxPool                    | (256,256,128)
Recursive HourGlass (n=3)  | (128,128,128)
Upsample (up2)             | (128,128,128)
Final Output (up1 + up2)   | (256,256,128)
Batch Size                 | 40
Momentum                   | 0.9
Weight Decay               | 10^-4
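A minimal PyTorch sketch of the heatmap-to-keypoint extraction in Eq. (11) and the distance loss in Eq. (12) follows. The tensor shapes and the assumption that heatmaps for one image arrive as a (9, H, W) tensor are ours.

```python
import torch

def keypoints_from_heatmaps(heatmaps: torch.Tensor) -> torch.Tensor:
    """Eq. (11): take the argmax location of each of the nine heatmaps.

    heatmaps: (9, H, W) tensor produced by HgNet for one image.
    returns:  (9, 2) tensor of (x, y) pixel coordinates.
    """
    k, h, w = heatmaps.shape
    flat_idx = heatmaps.view(k, -1).argmax(dim=1)   # index of the peak per heatmap
    ys, xs = flat_idx // w, flat_idx % w            # recover 2-D coordinates
    return torch.stack([xs, ys], dim=1).float()

def keypoint_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Eq. (12): root-mean-square Euclidean distance between predicted and
    ground-truth key points (both (N, 2))."""
    return torch.sqrt(((pred - gt) ** 2).sum(dim=-1).mean())
```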
C. Semantic Decoder

The semantic decoder is designed to construct metaverse scenery using the extracted semantic information and the shared knowledge base. The decoder process includes both image generation and metaverse construction.

Fig. 5: Semantic Decoder for Metaverse Construction

1) Image Generation: The designed semantic decoder first generates images based on the received semantic key point information. To achieve this, we develop the first module of our semantic decoder based on the SD algorithm. Specifically, SD uses a reverse diffusion process, where noise is iteratively removed from a random noise vector to produce coherent images. Mathematically, the SD model uses a series of denoising steps, represented by a sequence of latent variables, to estimate a noise-removed image that matches the desired target. The joint probability distribution supports conditional image generation, incorporating various input modalities like keypoints, line drawings, and text prompts. Inspired by the implementation of SD for image generation presented in [31], we design a ControlNet-based multi-rotation image generation SD algorithm. As shown in Fig. 5, this algorithm incorporates additional conditional text and image prompts, such as robotic arm key points and canny edge images, into the SD model to generate images depicting multiple rotational views of the same scene. The probability distribution for generating the final image under the given conditions can be expressed as

$p_\theta(I'_0 \mid t_p, I_c, \vec{K}') = \int p_\theta(I'_{0:T} \mid t_p, I_c, \vec{K}') \, dI'_{1:T}$,   (13)

where $t_p$ represents the text prompt describing the image content and style, $I_c$ denotes the canny edge images from the knowledge base, $\vec{K}'$ represents the key point information provided as input, and $p_\theta(I'_{0:T} \mid t_p, I_c, \vec{K}')$ indicates the joint probability of generating an image conditioned on these inputs through the diffusion process, starting from the initial step T. With these control conditions, the reverse diffusion process becomes a conditional generation process that ensures the generated image aligns with both the text prompt and the image control inputs. At each stage of the generation process, the generated image is assumed to follow a Gaussian distribution, described as

$I'_{T-1} \sim \mathcal{N}\left( \mu_\theta(I'_T, T, t_p, \vec{K}', I_c),\ \Sigma_\theta(I'_T, T) \right)$,   (14)

where $\mu_\theta(\cdot)$ represents the mean function that determines the most likely generated image $I'_{T-1}$, and $\Sigma_\theta(\cdot)$ denotes the covariance matrix that governs the uncertainty in the generation process.

2) Metaverse Construction: In the second step, the designed semantic decoder constructs the metaverse scenery using the images generated in the first step. To do so, we develop the second module of our semantic decoder based on NeRF, a neural network-based method for synthesizing 3D views from images by estimating the density and radiance at sampled points within the metaverse. The algorithm learns a mapping from spatial coordinates and viewing directions to color and density values, enabling high-fidelity 3D reconstruction. A NeRF algorithm is utilized to reconstruct the scenery by learning the volumetric density and color values at each point r(l) of the input image, where the point r(l) is represented as

$r(l) = \vec{o} + l \cdot \vec{d}$,   (15)

where $\vec{o}$ represents the UAV's position, l is the distance between the pixel coordinate and the UAV's position, and $\vec{d}$ is the ray direction computed from the pixel coordinate. The ray direction $\vec{d}$ is represented as

$\vec{d} = \frac{1}{\sqrt{u'^2 + v'^2 + 1}} \left( u', v', 1 \right)^T, \quad u' = \frac{u - c_x}{f_x}, \quad v' = \frac{v - c_y}{f_y}$,   (16)

where u and v denote the pixel coordinates of the input image, $c_x$ and $c_y$ represent the principal point offsets, and $f_x$ and $f_y$ are the focal lengths in the x and y directions, which are the fixed camera parameters described in Eq. (1). The NeRF algorithm synthesizes realistic novel views by predicting the color and density along rays emitted from the UAV camera. The predicted image $I'_t$ from the rendered metaverse scenery is expressed as

$I'_t = \int_{l_s}^{l_e} T(l) \, \sigma(r(l)) \, c(r(l), \vec{d}) \, dl$,   (17)

where the accumulated transmittance T(l) quantifies the likelihood that a ray travels without being occluded, and $\sigma(r(l))$ denotes the scene's density at point r(l).
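A sketch of the ray direction in Eq. (16) and a discretized quadrature of the rendering integral in Eq. (17) is given below. The sampling bounds, sample count, and the `sigma_c` callable standing in for a trained NeRF are assumptions for illustration.

```python
import numpy as np

def ray_direction(u: float, v: float, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Eq. (16): pixel (u, v) -> normalized ray direction in camera coordinates."""
    u_p = (u - cx) / fx
    v_p = (v - cy) / fy
    return np.array([u_p, v_p, 1.0]) / np.sqrt(u_p ** 2 + v_p ** 2 + 1.0)

def render_ray(o: np.ndarray, d: np.ndarray, sigma_c,
               l_s: float = 0.1, l_e: float = 6.0, n_samples: int = 64) -> np.ndarray:
    """Discretized version of the volume rendering integral in Eq. (17).

    sigma_c(p, d) is assumed to return (density, rgb) at point p viewed from
    direction d, e.g. a trained NeRF network.
    """
    ls = np.linspace(l_s, l_e, n_samples)
    delta = ls[1] - ls[0]
    T, color = 1.0, np.zeros(3)
    for l in ls:
        sigma, c = sigma_c(o + l * d, d)        # query density and radiance at r(l)
        alpha = 1.0 - np.exp(-sigma * delta)    # opacity of this ray segment
        color += T * alpha * np.asarray(c)      # accumulate radiance
        T *= 1.0 - alpha                        # update transmittance T(l)
    return color
```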
D. Framework Design

To better clarify the differences and advantages between semantic communication and conventional metaverse construction, and to effectively validate the performance of our proposed GSC framework, we design various types of transmitted semantic information and knowledge base, as shown in Fig. 5. These include model-based GSC (GSCM) and scenery-based GSC (GSCS), to provide a comparative foundation for evaluation.

GSCS: The GSCS framework generates multi-angle images of the metaverse scenery by utilizing key points, moving box locations, and canny images together. The image generation process is described as

$\hat{I}'_t = D\left( t_p, I_c, \vec{K}'_t, \vec{B}'_t, \omega_c, \omega_k, \omega_b, \theta \right)$,   (18)

where $\omega_c$, $\omega_k$, and $\omega_b$ represent the control weights for the canny image, robotic arm, and moving box. These weights determine the clearance of each component in the generated image $\hat{I}'_t$.

GSCM: The GSCM framework generates metaverse scenery images in two steps: 1) using key points as input to generate images of the robotic arm; 2) using moving box information and canny images to generate images of the metaverse's stationary scenery. In this framework, the image generation process is described as

$\hat{I}'_t = \begin{cases} D(t_p, \vec{K}'_t, \theta_i), & \text{if } i = 1 \\ D(t_p, I_c, \vec{B}'_t, \theta_i), & \text{if } i = 2 \end{cases}$   (19)

where $t_p$ represents the text prompt defined either to generate images of the robotic arm or of the stationary scenery.

IV. OPTIMAL TRANSPORT-ENABLED DENOISER

In this section, we present the design principle and algorithm of our semantic OT-based denoiser, as shown in Fig. 6. At the receiver, we propose a relaxed OT optimization to compute the transport matrix between the transmitter and receiver. The OT optimization selectively corrects distribution shifts caused by wireless channel fading and noise, and the corrected information is then used by the semantic decoder to reconstruct the metaverse.

Fig. 6: Semantic Denoiser

A. Relaxed Optimal Transport Optimization

Inspired by the OT algorithm for large-volume data optimization [27], we design an OT-enabled semantic denoising algorithm that relaxes the row and column constraints of the received data individually instead of enforcing both simultaneously for semantic communication in metaverse construction. Given the received key point vectors $\vec{K}' = \{(x'_i, y'_i)\}_{i=1}^{n}$ at the receiver side and the original point vectors at the transmitter side $\vec{K} = \{(x_j, y_j)\}_{j=1}^{n}$, with their respective probability distributions $D_r$ and $D_t$, the goal of the OT-enabled semantic denoising algorithm is to find an optimal transport matrix $T_{ij}$ that minimizes the correction cost from $D_r$ to $D_t$, which can be represented as

$\min_{T} \sum_{i=1}^{n} \sum_{j=1}^{n} T_{ij} c_{ij} + \eta \sum_{i=1}^{n} \sum_{j=1}^{n} T_{ij} (\log T_{ij} - 1)$,
subject to $\sum_{j=1}^{n} T_{ij} = p_i, \quad \sum_{i=1}^{n} T_{ij} = q_j$,   (20)

where $p_i$ represents the row marginal distribution of the received data, $T_{ij}$ represents the amount of mass transported from point set $\vec{K}'$ to point set $\vec{K}$, and $c_{ij}$ is the transportation cost between point $(x'_i, y'_i)$ and $(x_j, y_j)$. This cost represents the complexity of redistributing one set of vectors to match the other and is calculated using the Euclidean distance between the sample points, expressed as

$C_{ij} = \mathrm{dist}\left( (x'_i, y'_i), (x_j, y_j) \right)$,   (21)

where dist(·) is the Euclidean distance between two points. The goal of the OT is to ensure that the probability distributions of the key point vectors at the receiver and sender sides are matched, while simultaneously minimizing the transportation cost. To address this, the Lagrange function is introduced to incorporate the constraints in Eq. (20) into the objective function using the Lagrange multiplier method; the problem is transformed into an unconstrained optimization task, which can be expressed as

$L = \sum_{i=1}^{n} \sum_{j=1}^{n} T_{ij} c_{ij} + \eta \sum_{i=1}^{n} \sum_{j=1}^{n} T_{ij} (\log T_{ij} - 1) - \sum_{i=1}^{n} \lambda_i \left( \sum_{j=1}^{n} T_{ij} - p_i \right)$,   (22)

where $\lambda_i$ is the Lagrange multiplier. Taking the partial derivative of L with respect to $T_{ij}$ and setting it to zero finds the minimum of the function and determines an expression for $T_{ij}$ that satisfies the optimization conditions:

$\frac{\partial L}{\partial T_{ij}} = c_{ij} + \eta (\log T_{ij} - 1) - \lambda_i = 0$,   (23)

where

$T_{ij} = e^{\frac{\lambda_i - c_{ij}}{\eta}}$.   (24)

Then, the OT problem involves aggregating over j and applying the row constraint $\sum_{j=1}^{n} T_{ij} = p_i$. By incorporating the marginal distribution constraint, the optimal transport matrix that satisfies the conditions of Eq. (20) can be determined and integrated with Eq. (23), expressed as

$p_i = \sum_{j=1}^{n} e^{\frac{\lambda_i - c_{ij}}{\eta}}$,   (25)

where

$\lambda_i = \eta \log \left( \frac{p_i}{\sum_{j=1}^{n} e^{-c_{ij}/\eta}} \right)$.   (26)

Thus, the relaxed transport matrix $T_U$ under the row constraint is calculated by substituting Eq. (25) into Eq. (23), which is represented as

$T_U = \tilde{C} \cdot \mathrm{diag}\left( \frac{p}{1_n \tilde{C}} \right)$,   (27)

where $\tilde{C} = e^{-C/\eta}$, $1_n \tilde{C}$ refers to the matrix product of the vector of ones with the matrix, and p represents the uniform importance weight of each point in the source set. Similarly, for the column constraint, we consider the relaxation where the transport matrix T is optimized with only the column sums constrained, subject to $\sum_{i=1}^{n} T_{ij} = q_j$. Following a similar procedure as above, the relaxed transport matrix $T_V$ under the column constraint is

$T_V = \mathrm{diag}\left( \frac{q}{1_n \tilde{C}^T} \right) \cdot \tilde{C}$.   (28)

To approximate the solution under the original full constraint, we combine the solutions from the relaxed row and column constraints and take the element-wise maximum of the matrices $T_U$ and $T_V$:

$T^* = \max(T_U, T_V) = \max\left( \tilde{C} \cdot \mathrm{diag}\left( \frac{p}{1_n \tilde{C}} \right),\ \mathrm{diag}\left( \frac{q}{1_n \tilde{C}^T} \right) \cdot \tilde{C} \right)$.   (29)

The OT-enabled semantic denoising algorithm achieves a computational complexity of O(n²), compared to the traditional OT problem solved with the Sinkhorn-Knopp algorithm at O(n³) complexity, making it more efficient for large-scale problems.
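A compact NumPy sketch of the relaxed solution in Eqs. (27)–(29) is shown below, following the single-pass row/column scalings of Algorithm 1 (u and v initialized to ones); the value of eta is illustrative.

```python
import numpy as np

def relaxed_ot(C: np.ndarray, p: np.ndarray, q: np.ndarray, eta: float = 0.05) -> np.ndarray:
    """Relaxed OT of Eqs. (27)-(29): solve the row- and column-constrained
    problems independently, then combine element-wise.

    C: (n, n) transport cost matrix from Eq. (21); p, q: marginal distributions.
    Runs in O(n^2), versus O(n^3) for Sinkhorn-Knopp iterations.
    """
    C_tilde = np.exp(-C / eta)                              # Gibbs kernel C~ = e^{-C/eta}
    T_U = C_tilde * (p / C_tilde.sum(axis=1))[:, None]      # row sums match p  (Eq. 27)
    T_V = C_tilde * (q / C_tilde.sum(axis=0))[None, :]      # column sums match q (Eq. 28)
    return np.maximum(T_U, T_V)                             # T* = max(T_U, T_V)  (Eq. 29)
```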
B. Semantic Selective Correction

Under the effect of the wireless channel, we noticed that not all point vectors were severely blurred after wireless communication. Some point vectors, especially under high SNR conditions, were less affected by the wireless communication channel and, therefore, did not require denoising. We thus implement a semantic selective correction to detect potential transmission errors in the received metaverse semantic information. The main idea is to check the correspondence of key points across images from different angles after wireless communication, as demonstrated in Algorithm 1. Ideally, when the rotational angles are uniform, the key point displacement should increase proportionally with the angle, and thus significant deviations indicate potential noise or transmission errors. For key points in the (i − n)-th and (i + n)-th views, the selective filter flag $F_i$ is determined based on the Euclidean distance between them, calculated as

$F_i = \begin{cases} 0, & \text{if } \left| \vec{K}_i - \frac{\vec{K}_{i-n} + \vec{K}_{i+n}}{2} \right| > \delta \\ 1, & \text{if } \left| \vec{K}_i - \frac{\vec{K}_{i-n} + \vec{K}_{i+n}}{2} \right| \le \delta \end{cases}$   (30)

If the correspondence between key points from different angles does not align, the correction and labeling process is used to precisely align these points with OT to reduce wireless communication noise.

Algorithm 1 Relaxed OT Denoising Algorithm
1: Input: Transport cost c_ij, regularization parameter ϵ, maximum iterations N_max, image set I with corresponding captured angles θ_i, key point set K_i, threshold δ
2: Output: Denoised key points K̂_i
3: Initialize the transport cost matrix C̃ as: C̃_ij = exp(−c_ij / ϵ)
4: Normalize C̃: C̃ ← C̃ / Σ_{i,j} C̃_ij
5: Initialize marginal distributions p_i and q_j from K_i
6: for each image I_i do
7:   Initialize vectors u and v to all ones for row and column updates, respectively
8:   Step 1: Row Constraint Optimization (T_U):
9:   for n = 1 to N_max do
10:    Update u as: u ← p / (C̃ v)
11:    Normalize T_U using u: T_U = diag(u) · C̃
12:  end for
13:  Step 2: Column Constraint Optimization (T_V):
14:  for n = 1 to N_max do
15:    Update v as: v ← q / (C̃ᵀ u)
16:    Normalize T_V using v: T_V = C̃ · diag(v)
17:  end for
18:  Step 3: Combining Relaxed Solutions (T*):
19:  Final transport matrix T* = max(T_U, T_V)
20:  Step 4: Key Point Filtering and Update:
21:  for each image pair (I_{i−n}, I_{i+n}) do
22:    if |K_i − (K_{i−n} + K_{i+n}) / 2| ≤ δ then F_i ← 1
23:    end if
24:  end for
25:  Update key points for unfiltered data:
26:  K̂_i ← (T* · K_i) where (F_i == 0)
27: end for
28: Return denoised key points K̂_i
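The selective flag of Eq. (30) reduces to a neighbour-consistency test. The sketch below implements it for one key point tracked across uniformly rotated views; the threshold delta (in pixels) and the neighbour offset n are hypothetical values.

```python
import numpy as np

def selective_flags(K: np.ndarray, n: int = 1, delta: float = 2.0) -> np.ndarray:
    """Eq. (30): flag key points whose position is consistent with the midpoint
    of the (i-n)-th and (i+n)-th views; unflagged points (F_i = 0) are sent to
    the OT denoiser.

    K: (num_views, 2) array of one key point tracked across rotated views.
    """
    F = np.zeros(len(K), dtype=int)
    for i in range(n, len(K) - n):
        midpoint = (K[i - n] + K[i + n]) / 2.0
        if np.linalg.norm(K[i] - midpoint) <= delta:
            F[i] = 1   # consistent across views -> no correction needed
    return F
```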
The experimental hard- ware consists of Python 3.11.0 running on an Ubuntu 18.04 B. OT Denoising Capability Fig. 7 plots the points difference of the OT-enabled de- noiser under different wireless channel conditions. The point deviations increase as the SNR decreases, which occurs be- cause lower SNR leads to higher noise levels in the wireless communication, thus causing greater point displacement. For a better demonstration, we implement the OT-enabled denoiser varies across different wireless channels, with the performance ranked as Rayleigh ą Rician. Specifically, in the Rician channel, as the SNR decreases, the OT-enabled denoising algorithm effectively recovers the noisy points back to their original distribution. Additionally, as the SNR increases, noise gradually decreases, resulting in less variation in the received point vectors. On the other hand, the results for the Rician wireless channels show unstable denoising performance under different SNR conditions. When the SNR drops below 10 dB, although the denoising algorithm shows some success in point recovery, noise persists around the ground truth points, especially under 0 dB conditions, highlighting certain limitations of the OT-denoising method in a Rician fading channel. This explains why a selective correction algorithm is first used to filter noisy points, followed by the OT-denoising algorithm for points that cannot be matched. This phenomenon is also reflected in the P2Point error results shown in Fig. 9, where the GSC framework exhibits unstable performance in P2Point and key point error as the SNR decreases in the Rician channel. While the OT-enabled denoising algorithm performs effectively in wireless channel denoising, deviations may occur under specific channel conditions and thus highlights the importance of selective correction. 10 (a) AWGN. (b) Rayleigh. (c) Rician. Fig. 8: Key Point Error (a) AWGN. (b) Rayleigh. (c) Rician. Fig. 9: Point-to-point C. Key Points Transmission Accuracy Fig. 8 plots the KPE between the transmitter and receiver when employing our proposed GSC framework in comparison to the conventional metaverse construction framework (Image- Com) across three different wireless channel conditions. The error is measured using the Euclidean distance between the key points’ two-dimensional positions, which is an approach for evaluating metaverse objects’ status update accuracy in communication systems. A lower key point error implies a more accurate representation of the object running status in the metaverse, such as robotic arms and boxes. To explain it in more detail, Fig. 8 (c) plots the KPE performance under Rician channel conditions ranked as GSCM-OT ă GSCM ă GSCS- OT ă GSCS ă ImageCom suggests that the incorporation of the GSC framework, particularly with OT denoiser improves the accuracy of the transmitted key points. With the SNR increase, the key point error shows a notice- able reduction across all frameworks. This demonstrates the general trend that higher SNR values, which indicate a cleaner communication channel, lead to more reliable and accurate transmissions. When the SNR reaches 20 dB, the key point errors for all the frameworks drop to levels indicating nearly lossless transmission. However, at 0 dB SNR condition, the differences between the GSC and ImageCom become more pronounced. The GSCS-OT framework demonstrates the best performance in terms of key point error, outperforming Im- ageCom by a significant decrease of 45.6%. 
This performance improvement can be attributed to the fact that GSC, with its stable diffusion and OT denoiser techniques, can still generate images of metaverse objects that are recognizable and usable for monitoring and controlling operations, despite slight variations in key point positions. In contrast, the ImageCom framework, which relies solely on image transmission, struggles to maintain color accuracy and often produces blurred images in low SNR, high noise environments. Under such conditions, the transmitted images become so distorted that the robotic arm and boxes are unrecognizable, resulting in a complete loss of essential objects. Moreover, under the 0 dB condition, a comparison between the frameworks with and without the OT denoiser reveals the additional benefit provided by OT. The GSCM-OT framework outperforms GSCM, reducing the key point error by approximately 10.2%, while GSCS-OT shows a similar improvement of around 5.3% over GSCS. These reductions highlight the efficacy of OT in denoising the transmitted data, especially under AWGN channel conditions. OT helps to maintain the accuracy of key point vector transmissions by mitigating the effects of noise, which is particularly beneficial in wireless communication environments.

D. Metaverse Construction Reliability

Fig. 9 plots the P2Point results of the different frameworks in constructing metaverse scenery at the receiver. The P2Point metric, which measures the geometric differences in 3D scenery between the transmitter and receiver, serves as an essential indicator of construction clarity and stability. A lower P2Point value indicates a more precise reconstruction of the metaverse scenery, demonstrating an improved viewing experience. In detail, as plotted in Fig. 9(c), under the Rician channel, the P2Point results are ranked as GSCM-OT < GSCS-OT < GSCM < GSCS < ImageCom. Specifically, at 0 dB conditions, GSCM-OT and GSCS-OT demonstrate improvements of 44.7% and 29.5%, respectively, over ImageCom. This improvement may be attributed to the design of the GSC framework, which extracts and updates the knowledge base to maintain stability and reduce random noise in the static scenery construction process, even in scenarios where certain objects, such as the robotic arm, are positioned less precisely. In contrast, the ImageCom framework, which lacks the knowledge base provided for the metaverse construction, struggles to preserve structural integrity under challenging conditions.

Fig. 10: Metaverse Construction with SNR Decrease

For better evaluation, Fig. 10 plots the results of metaverse construction under a Rician channel. As the SNR decreases, both the proposed GSCM-OT and the ImageCom frameworks exhibit increased blurriness in the generated metaverse scenery. However, similar to the results discussed in Fig. 9(c), we can observe that when the SNR is 20 dB, the robotic arm in the generated metaverse scene appears clearer, with the hook on the gripper distinctly visible. This is because rendering only the movable objects reduces the rendering space, allowing for a sharper depiction of the moving objects. At 10 dB conditions, however, the metaverse scene generated by the ImageCom framework shows significant blurring, making it difficult to discern details. In contrast, while GSCM-OT also introduces some blurriness, the knowledge base allows for a relatively clearer background, including elements such as the conveyor belt. This demonstrates the enhanced transmission reliability that semantic communication provides in low SNR conditions.
This improvement may be attributed to the design of the GSC framework, which extracts and updates the knowledge base to maintain stability and reduce random noise in the static scenery construction process, even in scenarios where certain objects, such as the robotic arm, are positioned less precisely. In contrast, the ImageCom framework, which lacks the knowledge base provided for the metaverse construction, struggles to preserve structural integrity under challenging conditions. For better evaluation, Fig. 10 plots the results of metaverse construction under a Rician channel. As the SNR decreases, both the proposed GSCM-OT and the ImageCom frame- works exhibit increased blurriness in the generated metaverse scenery. However, similar to the results discussed in Fig. 9(c), we can observe that when the SNR is 20 dB, the robotic arm in the generated metaverse scene appears clearer, with the hook on the gripper distinctly visible. This is because rendering only the movable objects reduces the rendering space, allowing for a sharper depiction of the moving objects. At 10 dB conditions, however, the metaverse scene generated by the ImageCom framework shows significant blurring, making it difficult to discern details. In contrast, while GSCM-OT also introduces some blurriness, the knowledge base allows for a relatively clearer background, including elements such as the conveyor belt. This demonstrates the enhanced transmission reliability that semantic communication provides in low SNR conditions. E. Key Point Detection Accuracy Fig. 11 plots the key points extraction by the HgNet seman- tic encoder, anchored to various backbone networks, including Hourglass, UNet, ResNet, and DenseNet, all evaluated over the same training epochs. Each neural network demonstrates a good ability to extract robotic arm and moving box key points from different images. The extraction accuracy performance with different backbone neural networks after 300 epochs is verse construction time on the receiver side, with the frame- works ranked as follows: GSCSăGSCS-OTăGSCMăGSCM- OTăImageCom. Compared to the ImageCom framework, the proposed GSCM and GSCS frameworks significantly reduce transmission time due to the smaller amount of data trans- mitted. Although these frameworks introduce additional steps like semantic extraction, OT denoising, and image generation, these steps collectively take less than two seconds per frame, accounting for only a small fraction of the total transmission time. Regarding image generation, which depends on receiver the ImageCom computing power and wireless bandwidth, framework requires receiving all images on the receiver side. In contrast, the GSCS and GSCM frameworks generate high- resolution images using a ControlNet-enabled Stable Diffusion algorithm, which only requires nine key points per image. With the aid of a powerful GPU, this approach drastically reduces the time required for receiving data on the receiver side. As for OT-enabled denoising, each frame requires only 0.03 seconds for denoising, which constitutes a minimal part of the overall process compared to the performance gains from P2Point and KPE enhancements. Specifically, the GSCM-OT framework achieves an 81.4% reduction in transmission latency, while the GSCS-OT framework also achieves an 92.6% reduction compared to the ImageCom framework. 
These improvements demonstrate the effectiveness of leveraging semantic informa- tion and optimized image generation algorithms to enhance the real-time performance of wireless metaverse applications, resulting in a smoother and more responsive user experience. VI. CONCLUSION This paper proposed a goal-oriented semantic communica- tion framework (GSC) to address the challenges of real-time communication and virtual world creation in the metaverse, particularly focusing on reducing latency and enhancing the accuracy of semantic information for virtual entities. By incor- porating semantic information with the Neural Radiance Fields (NeRF) algorithm, the GSC framework selectively transmitted key semantic data, offering a more effective approach to infor- mation extraction than traditional communication frameworks, with fewer errors and reduced bandwidth requirements after wireless communication. Additionally, we implemented the Optimal Transport algorithm across varying wireless channel conditions within an end-to-end communication setup, distin- guishing our approach and enhancing the general capabilities of semantic communication frameworks. Our future work will optimize large-scale metaverse scenarios like universities and factories, and enhance metaverse construction by comparing the GSC framework under different wireless channels using machine learning for CSI feedback. REFERENCES [1] Y. Wang, Z. Su, N. Zhang, R. Xing, D. Liu, T. H. Luan, and X. Shen, “A survey on metaverse: Fundamentals, security, and privacy,” IEEE Commun. Surv. Tutor., vol. 25, no. 1, pp. 319– 352, First Quarter 2022. [2] Y. Zhao, J. Jiang, Y. Chen, R. Liu, Y. Yang, X. Xue, and S. Chen, “Metaverse: Perspectives from graphics, interactions and visualization,” Vis. Inform., vol. 6, no. 1, pp. 56–67, Mar. 2022. 12 [3] M. Hu and L. Cheng, “Research on the application of metaverse technology in the field of intelligent transportation,” in Int. Conf. Metaverse. Springer, 2023, pp. 98–107. [4] H. Dong and J. S. A. Lee, “The metaverse from a multimedia communications perspective,” IEEE MultiMedia, vol. 29, no. 4, pp. 123–127, Oct. 2022. [5] D. T. K. Ng, “What is the metaverse? definitions, technologies and the community of inquiry,” Australas. J. Educ. Technol., vol. 38, no. 4, pp. 190–205, Oct. 2022. [6] Y. Huang, B. Bai, Y. Zhu, X. Qiao, X. Su, L. Yang, and P. Zhang, “ISCom: Interest-aware semantic communication scheme for point cloud video streaming on metaverse XR devices,” IEEE J. Sel. Areas Commun., vol. 41, no. 10, pp. 1234–1246, Oct. 2023. [7] A. Singh, S. Mishra, S. Jain, S. Dogra, A. Awasthi, N. R. Roy, and K. Sodhi, “Exploring practical use-cases of augmented reality using photogrammetry and other 3d reconstruction tools in the metaverse,” Augment. Virtual Reality Ind. 5.0, vol. 2, p. 163, 2023. [8] E. S. Wong, N. H. A. Wahab, F. Saeed, and N. Alharbi, “360- degree video bandwidth reduction: Technique and approaches comprehensive review,” Appl. Sci., vol. 12, no. 15, p. 7581, Aug. 2022. [9] W. Jing, S. Wang, W. Zhang, and C. Li, “Reconstruction of neural radiance fields with vivid scenes in the metaverse,” IEEE Trans. Consum. Electron., vol. 69, no. 4, pp. 450–460, Oct. 2023. [10] C. Chaccour, W. Saad, M. Debbah, Z. Han, and H. V. Poor, “Less data, more knowledge: Building next-generation semantic communication networks,” IEEE Commun. Surv. Tutor., vol. 26, no. 1, pp. 10–30, First Quarter 2024. [11] X. Xu, H. Xiong, Y. Wang, Y. Che, S. Han, B. Wang, and P. 
Zhang, "Knowledge-enhanced semantic communication system with OFDM transmissions," Sci. China Inf. Sci., vol. 66, no. 7, p. 172302, Jul. 2023.
[12] R. Cheng, N. Wu, V. Le, E. Chai, M. Varvello, and B. Han, "MagicStream: Bandwidth-conserving immersive telepresence via semantic communication," in Proc. 22nd ACM Conf. Embed. Netw. Sens. Syst. (SenSys). ACM, Jan. 2024, pp. 365–379.
[13] H. Li, H. Tong, S. Wang, N. Yang, Z. Yang, and C. Yin, "Video semantic communication with major object extraction and contextual video encoding," arXiv Preprint, vol. 2402, p. 01330, 2024.
[14] R. Yamamoto, Y. Inoue, and D. Hisano, "Deep joint source-channel coding using overlap image division for block noise reduction," in Proc. IEEE 99th Veh. Technol. Conf. (VTC2024-Spring). IEEE, Apr. 2024, pp. 1–6.
[15] R. C. Jain, "An introduction to joint source and channel coding," IETE J. Educ., vol. 46, no. 3, pp. 121–127, Sep. 2005.
[16] P. Samarathunga, Y. Ganearachchi, T. Fernando, A. Jayasingam, I. Alahapperuma, and A. Fernando, "A semantic communication and VVC-based hybrid video coding system," IEEE Access, vol. 12, pp. 15 000–15 012, Jan. 2024.
[17] Z. Bao, H. Liang, C. Dong, X. Xu, and G. Liu, "MD-VSC—wireless model division video semantic communication for 6G," in Proc. IEEE Globecom Workshops (GC Wkshps). IEEE, Dec. 2023, pp. 1572–1578.
[18] S. Wang, J. Dai, Z. Liang, K. Niu, Z. Si, C. Dong, X. Qin, and P. Zhang, "Wireless deep video semantic transmission," IEEE J. Sel. Areas Commun., vol. 41, no. 1, pp. 214–229, Jan. 2022.
[19] J. Cho, F. D. Puspitasari, S. Zheng, J. Zheng, L.-H. Lee, T.-H. Kim, C. S. Hong, and C. Zhang, "Sora as an AGI world model? a complete survey on text-to-video generation," arXiv Preprint, vol. 2403, p. 05131, 2024.
[20] Y. Liu, K. Zhang, Y. Li, Z. Yan, C. Gao, R. Chen, Z. Yuan, Y. Huang, H. Sun, J. Gao, et al., "Sora: A review on background, technology, limitations, and opportunities of large vision models," arXiv Preprint, vol. 2402, p. 17177, 2024.
[21] Y. Guo, C. Yang, A. Rao, Z. Liang, Y. Wang, Y. Qiao, M. Agrawala, D. Lin, and B. Dai, "AnimateDiff: Animate your personalized text-to-image diffusion models without specific tuning," arXiv preprint arXiv:2307.04725, 2023.
[22] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, "NeRF: Representing scenes as neural radiance fields for view synthesis," Commun. ACM, vol. 65, no. 1, pp. 99–106, Jan. 2021.
[23] J. Zhang, C. Herrmann, J. Hur, L. Polania Cabrera, V. Jampani, D. Sun, and M.-H. Yang, "A tale of two features: Stable diffusion complements DINO for zero-shot semantic correspondence," Adv. Neural Inf. Process. Syst., vol. 36, 2024.
[24] Q. Xu, Z. Xu, J. Philip, S. Bi, Z. Shu, K. Sunkavalli, and U. Neumann, "Point-NeRF: Point-based neural radiance fields," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2022, pp. 5438–5448.
[25] T. Séjourné, G. Peyré, and F.-X. Vialard, "Unbalanced optimal transport: From theory to numerics," Handb. Numer. Anal., vol. 24, pp. 407–471, Jan. 2023.
[26] N. V. Martyushev, B. V. Malozyomov, O. A. Filina, S. N. Sorokova, E. A. Efremenkov, D. V. Valuev, and M. Qi, "Stochastic models and processing probabilistic data for solving the problem of improving the electric freight transport reliability," Mathematics, vol. 11, no. 23, p. 4836, Dec. 2023.
[27] M. Sana and E. C. Strinati, "Semantic channel equalizer: Modelling language mismatch in multi-user semantic communications," in Proc. IEEE Glob. Commun. Conf. (GLOBECOM). IEEE, Dec.
2023, pp. 2221–2226. [28] Y. Liu, L. Zhu, M. Yamada, and Y. Yang, “Semantic correspon- dence as an optimal transport problem,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2020, pp. 4463–4472. [29] P. Bremner and M. Giuliani, “Impact of resolution, colour, and motion on object identification in digital twins from robot sensor data,” Front. Robot. AI, vol. 9, p. 995342, Dec. 2022. [30] R. Mekuria, Z. Li, C. Tulvan, and P. Chou, “Evalua- tion criteria for PCC (point cloud compression),” ISO/IEC JTC1/SC29/WG11, Tech. Rep. N16332, Jun. 2016. [31] L. Zhang, A. Rao, and M. Agrawala, “Adding conditional control to text-to-image diffusion models,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct. 2023, pp. 3836–3847.
ai_researcher
2
Unsupervised_Knowledge_Graph_Construction_and_Event-centric_Knowledge_Infusion_for_Scientific_NLI.pdf
arXiv:2210.15248v2 [cs.CL] 28 Oct 2022

UNSUPERVISED KNOWLEDGE GRAPH CONSTRUCTION AND EVENT-CENTRIC KNOWLEDGE INFUSION FOR SCIENTIFIC NLI

Chenglin Wang1, Yucheng Zhou2, Guodong Long2, Xiaodong Wang1, Xiaowei Xu1∗

1School of Computer Science and Technology, Ocean University of China
2Australian AI Institute, School of Computer Science, FEIT, University of Technology Sydney

∗Corresponding author. This work is supported by the National Key R&D Program of China (No. 2020YFB1710005).

ABSTRACT

With the advance of natural language inference (NLI), a rising demand for NLI is to handle scientific texts. Existing methods depend on pre-trained models (PTM) which lack domain-specific knowledge. To tackle this drawback, we introduce a scientific knowledge graph to generalize PTMs to the scientific domain. However, existing knowledge graph construction approaches suffer from several drawbacks, i.e., expensive labeled data, failure to apply in other domains, long inference time and difficulty extending to large corpora. Therefore, we propose an unsupervised knowledge graph construction method to build a scientific knowledge graph (SKG) without any labeled data. Moreover, to alleviate the effect of noise from the SKG and better complement the knowledge in sentences, we propose an event-centric knowledge infusion method that integrates external knowledge into each event, a fine-grained semantic unit within a sentence. Experimental results show that our method achieves state-of-the-art performance and demonstrate the effectiveness and reliability of the SKG.

Index Terms— Natural Language Inference, Scientific Text, Knowledge Infusion, Knowledge Graph Construction

1. INTRODUCTION

Natural language inference (NLI), an essential task for natural language understanding, aims to deduce the relationship between a given premise and hypothesis [1]. NLI is a fundamental problem in many natural language processing (NLP) tasks, such as sentence embeddings [2], question answering [3] and commonsense reasoning [4]. Therefore, it has received wide attention from researchers. With the widespread application of NLP, a rising demand for NLI methods is to handle domain-specific text, such as scientific texts [5], medical articles [6] and financial news [7]. To build a large NLI dataset for scientific texts, the SciNLI dataset [5] was constructed from scholarly papers on NLP and computational linguistics.

Due to the success of pre-trained language models (PTM) (e.g., BERT [8] and RoBERTa [9]), a general paradigm for NLI is to fine-tune a PTM on a downstream dataset. However, PTMs fine-tuned on domain-specific data often suffer from a cross-domain problem since they are pre-trained on general-domain corpora such as news articles and Wikipedia. Many works generalize PTMs to specific domains by pre-training on domain-specific corpora [10, 11] or by introducing a domain-specific knowledge graph [12]. Although Beltagy et al. [10] consumed enormous training resources to exclusively pre-train SciBERT on scientific texts, RoBERTa, with its more sophisticated pre-training, leads to better performance. Therefore, it is necessary to introduce a scientific knowledge graph to generalize a PTM to the scientific domain.

Recently, some works [13, 14] propose building scientific knowledge graphs automatically by training entity recognition and relationship extraction models with labeled data.
Despite their success, they still suffer from some drawbacks, i.e., expensive labeled data, failure to apply in other domains, long inference time¹ and difficulty extending to large corpora. Specifically, existing methods [13, 14] are trained on only 500 abstracts and show undesirable performance (i.e., 44.7 F1). Therefore, we build a scientific knowledge graph (SKG) efficiently and easily without any labeled data. We first parse scientific texts with a dependency parser and then extract the subjects, predicates and objects as triplet candidates. To reduce noise samples among the triplet candidates, we propose heuristic filtering methods to improve the accuracy of the triplets. Because our method offers fast inference and requires no labeled data or training resources, it can easily extend to large corpora in other domains.

To improve PTM reasoning, previous works [15] mainly use sentences as queries to retrieve triplets from a knowledge graph and integrate them into the PTM. However, since there are still some noise samples in the SKG, directly integrating this knowledge into the model damages learning. Recently, event-centric reasoning [16, 17] has shown powerful in-context reasoning capability by understanding correlations between events. To reduce the effect of noisy data and better complement the knowledge in sentences, we propose an event-centric knowledge infusion framework. Precisely, we follow [16] to split sentences into events and then use all events as queries to retrieve relevant triplets, which prevents the retrieved knowledge from being relevant to only limited semantics in the sentences. Moreover, we integrate knowledge into multiple event units, improving context reasoning by enriching the semantic information in each event.

¹In our pilot experiments on an 8-core-CPU/Nvidia RTX2080 GPU, the dependency parser from Stanford Stanza processed 833 sentences/second, which is 110x faster than SCIIE [14].

We conduct an extensive evaluation on the SciNLI dataset [5]. Results show that our method achieves state-of-the-art performance compared with other methods. In addition, we analyze the effectiveness and reliability of the SKG.

2. UNSUPERVISED SKG CONSTRUCTION

Existing methods [13, 14] adopt entity and relation extraction paradigms to build scientific knowledge graphs automatically. However, these methods demand expensive labeled data and large training resources for training entity and relation extraction models. Moreover, since existing datasets for scientific knowledge graph construction are labeled only on abstracts of NLP papers [13, 14], the trained models are difficult to extend to large corpora and fail to apply in other domains. Therefore, we propose an efficient and easy knowledge graph construction method without any labeled data. Although we employ the method to build a scientific knowledge graph (SKG), it is easy to apply in other domains because it is domain-agnostic.

Specifically, we first leverage the dependency parser of Stanford CoreNLP [18] to parse all sentences in the scientific corpus. Then, we locate predicates based on the parsing results and extract the related subject and object chunks. We collect a triplet set consisting of subject chunks, predicates and object chunks. Although the triplet extraction is fast, there are still some wrong samples in the triplet set. To improve its accuracy, we take three measures to filter and calibrate the triplet set. The first is to filter words without specific meaning, such as single letters and numbers. The second is to remove stopwords in triplets. The last is to filter low-frequency entities via a threshold λ, because a correct concept is usually more widely used. Since the whole ACL anthology is enormous and requires more computing resources, we build the SKG on the SciNLI training set, a subset of the ACL anthology, with a λ of 1. However, our method can be easily extended to the entire ACL anthology. We will release a large version of the SKG extracted from the entire ACL anthology later.
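As a rough illustration, the extraction-and-filtering pipeline above can be sketched in a few lines of Python. This is not the authors' released code: spaCy stands in for the Stanford CoreNLP parser, and the dependency-label choices, helper names and threshold handling are our assumptions.

# Sketch of the unsupervised triplet extraction and filtering described above.
# spaCy stands in for Stanford CoreNLP; labels, helpers and the threshold
# handling are illustrative assumptions, not the authors' released code.
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triplets(sentences):
    """Collect (subject chunk, predicate, object chunk) candidates."""
    triplets = []
    for doc in nlp.pipe(sentences):
        for token in doc:
            if token.pos_ != "VERB":
                continue  # predicates are located at verbs
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
            for s in subjects:
                for o in objects:
                    # expand the head words to their full subtree chunks
                    subj = " ".join(t.text for t in s.subtree)
                    obj = " ".join(t.text for t in o.subtree)
                    triplets.append((subj, token.lemma_, obj))
    return triplets

def filter_triplets(triplets, lam=1):
    """Heuristic calibration: drop tokens without specific meaning, remove
    stopwords, and keep entities more frequent than the threshold lam."""
    def clean(chunk):
        words = [w for w in chunk.split()
                 if w.isalpha() and len(w) > 1                  # no letters/numbers
                 and w.lower() not in nlp.Defaults.stop_words]  # no stopwords
        return " ".join(words)

    cleaned = [(clean(s), p, clean(o)) for s, p, o in triplets]
    cleaned = [t for t in cleaned if t[0] and t[2]]
    counts = Counter(e for s, _, o in cleaned for e in (s, o))
    return [t for t in cleaned if counts[t[0]] > lam and counts[t[2]] > lam]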
3. METHOD

This section starts with the base SciNLI model. Then, the details of the event-centric knowledge infusion method are elaborated.

3.1. Base SciNLI Model

Given a sentence pair (P = {p1, p2, ..., pm}, Q = {q1, q2, ..., qn}), SciNLI aims to recognize their semantic relationship y (e.g., contrasting, reasoning, entailment and neutral). Due to the success of pre-trained language models (PTM), a general paradigm for natural language processing (NLP) tasks is to fine-tune a PTM on a downstream dataset. In this work, our base SciNLI model is a PTM followed by a multilayer perceptron (MLP) with softmax. Specifically, we first concatenate the given sentence pair via a segment token [SEP] and then pass them into the PTM followed by the MLP, i.e.,

$h, H = \mathrm{PLM}([\mathrm{CLS}]\ P\ [\mathrm{SEP}]\ Q), \quad H \in \{h_1, h_2, \cdots, h_l\}$   (1)
$p = \mathrm{softmax}(\mathrm{MLP}(h))$   (2)

where h denotes the representation of the sentence pair, $h_l$ represents a token representation, and l is the length of the input sequence; p denotes a probability distribution over the four classes C. Lastly, we train the base SciNLI model via maximum likelihood estimation, and the training loss function is defined as:

$\mathcal{L} = -\frac{1}{|S|} \sum_{S} \log p_{[y=c]}$   (3)

where S denotes the whole training set, and c refers to a ground-truth class, c ∈ C.
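The base model amounts to standard PTM fine-tuning. A minimal sketch with PyTorch and Hugging Face Transformers, assuming RoBERTa-base as in Sec. 4.2; the class name, head sizes and toy inputs are ours:

# Minimal sketch of the base SciNLI model (Eq. 1-3): RoBERTa encodes the
# concatenated pair, and the start-token vector feeds an MLP classifier.
# Class/variable names and the toy inputs are ours, not the paper's code.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BaseSciNLI(nn.Module):
    def __init__(self, name="roberta-base", hidden=768, num_classes=4):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        H = out.last_hidden_state   # token representations, Eq. 1
        h = H[:, 0]                 # [CLS]-position vector (<s> for RoBERTa)
        return self.mlp(h), H       # logits for Eq. 2; softmax lives in the loss

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
batch = tokenizer(["premise text"], ["hypothesis text"],
                  padding=True, truncation=True, return_tensors="pt")
model = BaseSciNLI()
logits, H = model(batch["input_ids"], batch["attention_mask"])
loss = nn.CrossEntropyLoss()(logits, torch.tensor([2]))  # Eq. 3 on a toy label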
3.2. Event-centric Knowledge Infusion

However, due to the cross-domain gap between the pre-training corpus and SciNLI, a fine-tuned PTM fails to handle scientific texts with complex logic and reasoning effectively. Moreover, a PTM pre-trained on scientific texts with enormous training resources also underperforms. Therefore, we propose an event-centric knowledge infusion (EKI) method to integrate the SKG into the PTM. Our method contains event segment and retrieval, knowledge infusion, and joint reasoning.

Event Segment and Retrieval. Previous works conduct retrieval with the sentence as the query and integrate the retrieved results into the PTM [15]. However, these methods suffer from two drawbacks: first, retrieved results are only relevant to some of the sentence semantics; second, incomprehensive retrieval results fail to effectively provide the external knowledge required by the PTM. To comprehensively introduce external knowledge to the PTM, we employ fine-grained semantic units (i.e., events) as queries. Moreover, since the SKG is built in an unsupervised manner, it still has some noise samples. Since the semantic information in events is clear and lite, integrating knowledge into events can effectively alleviate the effect of wrong information. Following [16], we first parse a sentence via dependency parsing and then split the sentence into multiple events by connecting verbs and relevant word chunks based on the parsing results. Given a sentence pair (P, Q), the extracted events are denoted as E = {E1, · · · , Eh}, ∀Ei ∈ P ∪ Q. Next, we use BM25 to retrieve the top-k triplets in the SKG with the events as queries, i.e.,

$K_i = \mathrm{BM25}(E_i, \mathrm{SKG}), \quad K = \{K_1, \cdots, K_h\}$   (4)

where $K_i$ denotes the top-k retrieved triplet set for event $E_i$.

Knowledge Infusion. To integrate the knowledge K into the PTM, we first concatenate the triplets in $K_i$ and then pass them to the PTM, i.e.,

$h^{(k)}_i = \mathrm{PLM}([\mathrm{CLS}]\ K^{(1)}_i\ [\mathrm{SEP}] \cdots [\mathrm{SEP}]\ K^{(k)}_i\ [\mathrm{SEP}])$   (5)

where $h^{(k)}_i$ denotes the representation of triplet set $K_i$, and $\hat{K} = \{h^{(k)}_1, h^{(k)}_2, \cdots, h^{(k)}_h\}$. To obtain the event representations of the sentence pair, we conduct a mean pooling operation on the token representations H from Eq. 1 based on the extracted events E, i.e.,

$e_i = \mathrm{MeanPooling}(h_{st}, \dots, h_{end}), \quad \{st, end\} \in E_i$   (6)

where $e_i$ is an event representation, and $\hat{E} = \{e_1, e_2, \cdots, e_h\}$. Lastly, we concatenate $\hat{K}$ and $\hat{E}$ to obtain the knowledge-augmented events $\tilde{E} = \{\tilde{e}_1, \tilde{e}_2, \cdots, \tilde{e}_h\}$.

Joint Reasoning. To jointly reason over the combined events and external knowledge, we first use a self-attention module [19] to deduce the relations between the knowledge-augmented events. Moreover, attention-based methods can learn an adaptive weight for each representation to alleviate the effect of noise triplets in the SKG, i.e.,

$h^{(\mu)}_i = \mathrm{MLP}_\mu(\tilde{e}_i), \quad \tilde{e}_i \in \tilde{E}, \ \mu \in \{Q, K, V\}$   (7)
$\tilde{H} = \mathrm{SelfAttention}(H^Q, H^K, H^V)$   (8)
$\tilde{h} = \mathrm{MeanPooling}(\tilde{H})$   (9)

where $H^{(\mu)} \in \{h^{(\mu)}_1, h^{(\mu)}_2, \cdots, h^{(\mu)}_h\}$, and $\tilde{h}$ denotes the knowledge-augmented representation. Next, we concatenate it with h from Eq. 1 and pass the result to an MLP followed by softmax, i.e.,

$p^* = \mathrm{softmax}(\mathrm{MLP}([h; \tilde{h}]))$   (10)

where [;] denotes the concatenation operation, and $p^*$ denotes a probability distribution over the four classes C. During model training, we train our model via maximum likelihood estimation, i.e.,

$\mathcal{L}^* = -\frac{1}{|S|} \sum_{S} \log p^*_{[y=c]}$   (11)

where S denotes the whole training set, and c refers to a ground-truth class, c ∈ C.
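A compact PyTorch sketch of the fusion step (Eq. 6-10). A single nn.MultiheadAttention layer stands in for the MLP-projected self-attention of Eq. 7-8; the triplet-set vectors K are assumed to come from Eq. 5 after BM25 retrieval (e.g., via the rank_bm25 package), and all shapes and names are our simplifications rather than the paper's exact implementation.

# Sketch of event-centric fusion (Eq. 6-10); shapes, names and the single
# attention layer are our simplifications, not the paper's implementation.
import torch
import torch.nn as nn

class EventKnowledgeFusion(nn.Module):
    def __init__(self, d=768, num_heads=2, num_classes=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=d, num_heads=num_heads,
                                          batch_first=True)
        self.proj = nn.Linear(2 * d, d)           # fold [e_i ; h_i^(k)] back to d
        self.cls = nn.Linear(2 * d, num_classes)

    @staticmethod
    def pool_events(H, spans):
        # Eq. 6: mean-pool token vectors over each event span (st, end);
        # for simplicity the same spans are applied to every batch item
        return torch.stack([H[:, st:end].mean(dim=1) for st, end in spans], dim=1)

    def forward(self, h, H, spans, K):
        # h: (B, d) pair vector from Eq. 1; H: (B, L, d) token vectors;
        # K: (B, n_events, d) vectors of the retrieved triplet sets (Eq. 5)
        E = self.pool_events(H, spans)                    # (B, n_events, d)
        E_aug = self.proj(torch.cat([E, K], dim=-1))      # knowledge-augmented events
        attended, _ = self.attn(E_aug, E_aug, E_aug)      # Eq. 8
        h_tilde = attended.mean(dim=1)                    # Eq. 9
        return self.cls(torch.cat([h, h_tilde], dim=-1))  # Eq. 10 logits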
4. EXPERIMENTS

4.1. Dataset and Evaluation Metrics

We evaluate our proposed approach on the SciNLI dataset collected by [5]. SciNLI focuses on the semantic relations that are either relevant to the task of NLI or highly frequent in scientific text, and it leverages linking phrases to create the first-ever scientific NLI dataset. The dataset is a natural language inference dataset based on scientific text, which contains about 110k sample pairs and is divided into four output classes: contrasting, reasoning, entailment and neutral. To distribute samples equally across the classes, the number of samples in each class is the same when dividing train, dev and test, which are 25,353, 1,000 and 2,000, respectively. In addition, we use two different settings on the dataset to build our scientific knowledge graph. The first is to use only the training set, to evaluate our method's effectiveness. The other is to use the training, dev and test sets, to analyze the performance of our unsupervised knowledge graph construction method. Moreover, we adopt the two official evaluation metrics, accuracy (ACC) and F1-score (F1), to measure performance.

Method        C      R      E      N      F1     ACC
Lexicalized   50.28  37.18  44.82  55.77  47.01  47.78
CBOW          54.62  50.54  52.33  49.25  51.68  51.78
CNN           63.73  58.86  62.66  56.40  60.41  60.53
BiLSTM        63.93  57.32  64.01  59.25  61.12  61.32
BERT          77.46  71.74  75.09  76.47  75.19  75.17
SciBERT       80.30  74.18  75.90  79.76  77.53  77.52
RoBERTa       81.18  74.22  77.99  78.86  78.06  78.12
XLNet         81.53  75.95  77.63  77.63  78.18  78.23
EKI (Ours)    81.76  76.25  78.66  80.02  79.17  79.20
EKI* (Ours)   82.51  77.38  78.86  78.82  79.39  79.43

Table 1. Comparison results on the SciNLI test set. C, R, E and N are the abbreviations of contrasting, reasoning, entailment and neutral, respectively. * denotes that the SKG used for our method is built on the training, dev and test sets.

4.2. Experimental Setting

The pre-trained language model we use is the RoBERTa-base model [9]. The embedding size and hidden size in the model are set to 768. The number of heads in self-attention is set to 2, with the dropout set to 0.3. The maximum event number for the input sentence pair is 10, and the maximum length of the sentence pair is set to 196. The number of retrieved triplets k for each event is also set to 10, and the maximum length of these triplets after concatenation is set to 50. For model training, we use AdamW as our optimizer to optimize the cross-entropy loss with a learning rate of 5e-5. The weight decay and linear warm-up steps are 1.0 and 1,000. We employ the ReLU activation function in all feed-forward networks in the MLPs. The maximum training epoch and batch size are set to 5 and 64, with a patience of 2 for early stopping.

4.3. Main Results

We evaluate our method on SciNLI, and the experimental results are shown in Table 1. Comparison methods include three types of models: traditional machine learning models (e.g., a lexicalized classifier), deep neural network models (e.g., BiLSTM, CBOW and CNN) and PTMs (e.g., BERT, SciBERT, RoBERTa and XLNet). Results show that our approach achieves state-of-the-art performance. Our method outperforms RoBERTa, which shows the effectiveness of event-centric knowledge infusion for scientific text. In addition, to investigate the potential of our unsupervised scientific knowledge graph (SKG) construction method, we evaluate our method with the SKG built on the training, dev and test sets. Results show that the performance of our method improves further. The reason is that the test set includes some concepts that do not appear in the training set, and an SKG built on the test-set texts can alleviate this information gap. This shows that event-centric knowledge infusion is effective on scientific texts.

4.4. Ablation Study

Method    C      R      E      N      F1     ACC
EKI       81.76  76.25  78.66  80.02  79.17  79.20
w/ CLS    81.59  75.84  77.94  78.66  78.51  78.53
w/ Sent   81.36  74.88  78.09  78.73  78.26  78.33
w/o EKI   81.18  74.22  77.99  78.86  78.06  78.12

Table 2. Results of the ablation study. w/ CLS denotes integrating external knowledge by concatenating the CLS representations of the external knowledge and the sentences. w/ Sent denotes integrating external knowledge by concatenating the external knowledge with the sentence directly. w/o EKI denotes removing our methods and external knowledge.

Results of the ablation study are shown in Table 2. Firstly, our method outperforms w/ CLS, which demonstrates the effectiveness of event-centric knowledge infusion. w/ Sent also has a performance drop compared with our method. These results demonstrate that event-centric knowledge infusion can introduce external knowledge into the model better. The reason is that the semantic information in events is clear and lite, and integrating knowledge into events can effectively alleviate the effect of noise information. w/o EKI denotes RoBERTa without introducing any external knowledge, and the results show that its F1 score and accuracy were only 78.06% and 78.12%, a decrease of 1.11% and 1.08%, respectively. This shows that it is necessary to inject external knowledge into the PTM to improve the model's reasoning on scientific texts.

4.5. Impact of Event-Level Knowledge Infusion

Method        C      R      E      N      F1     ACC
EventNUM=1    81.63  75.19  77.71  78.31  78.21  78.25
EventNUM=5    82.34  75.21  77.94  78.28  78.44  78.47
EventNUM=10   81.76  76.25  78.66  80.02  79.17  79.20

Table 3. Different numbers of events for retrieval.

To investigate the impact of event-level knowledge infusion, we retrieve triplets with different numbers of events.
Concretely, we employ three groups with different event numbers, and the results are shown in Table 3. We find that when the number of events is smaller, the F1 score and accuracy of the model decrease significantly. With fewer events, less knowledge is injected into the PTM, so the model lacks sufficient external knowledge to complement the desired knowledge, which leads to worse reasoning.

4.6. Knowledge Forgetting in Language Model

Method        Triplet ACC   Entity ACC
Sent Mask     0.15          0.41
Entity Mask   0.24          0.42

Table 4. Accuracy of the predicted entities and predicted triplets via the fine-tuned PTM. Sent Mask is to mask random tokens in sentences. Entity Mask is to mask random entities or relations of triplets in sentences.

To investigate whether a fine-tuned PTM can memorize and analyze all the knowledge seen during training, we set two mask methods to fine-tune the PTM, i.e., Sent Mask and Entity Mask, and the results are shown in Table 4. The low accuracy shows that the PTM cannot accurately learn the scientific knowledge in the sentences after training and suffers from knowledge forgetting, which also reflects the effectiveness of injecting scientific knowledge from the SKG built on the training set.

4.7. Human Evaluation

Method   Entity ACC   Relation ACC
SKG      75.2         83.1

Table 5. Accuracy of entities and relations in our SKG.

Table 5 shows the accuracy of the human evaluation of the SKG. We randomly select 500 triplets from the SKG and manually measure their accuracy. We measure the reliability of the KG from two views, i.e., entity and relation. From the results on entities and relations, we can see that the quality of our unsupervised constructed knowledge graph is high.

5. CONCLUSION

In this work, we built a scientific knowledge graph in an unsupervised manner. Moreover, we propose event-centric knowledge infusion (EKI) to integrate external knowledge into pre-trained language models. Specifically, we split sentences into multiple events and use them as queries to retrieve triplets in the SKG. Moreover, we integrate the knowledge retrieved from the built knowledge graph into the PTM to help the model reason at the event level. Experimental results show that our proposed approach achieves state-of-the-art performance on the SciNLI task, which demonstrates the effectiveness of our method.

6. REFERENCES

[1] Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning, "A large annotated corpus for learning natural language inference," in EMNLP 2015. 2015, pp. 632–642, The Association for Computational Linguistics.

[2] Tianyu Gao, Xingcheng Yao, and Danqi Chen, "SimCSE: Simple contrastive learning of sentence embeddings," in EMNLP 2021. 2021, pp. 6894–6910, Association for Computational Linguistics.

[3] Ming Tan, Cícero Nogueira dos Santos, Bing Xiang, and Bowen Zhou, "Improved representation learning for question answer matching," in ACL 2016. 2016, The Association for Computer Linguistics.

[4] Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, and Dan Roth, "Commonsense reasoning for natural language processing," in ACL 2020. 2020, pp. 27–33, Association for Computational Linguistics.

[5] Mobashir Sadat and Cornelia Caragea, "SciNLI: A corpus for natural language inference on scientific text," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022. 2022, pp. 7399–7409, Association for Computational Linguistics.
[6] Yun He, Ziwei Zhu, Yin Zhang, Qin Chen, and James Caverlee, "Infusing disease knowledge into BERT for health question answering, medical inference and disease name recognition," in EMNLP 2020. 2020, pp. 4604–4614, Association for Computational Linguistics.

[7] Robert P. Schumaker and Hsinchun Chen, "Textual analysis of stock market prediction using breaking financial news: The AZFinText system," ACM Trans. Inf. Syst., vol. 27, no. 2, pp. 12:1–12:19, 2009.

[8] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in NAACL-HLT 2019. 2019, pp. 4171–4186, Association for Computational Linguistics.

[9] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov, "RoBERTa: A robustly optimized BERT pretraining approach," CoRR, vol. abs/1907.11692, 2019.

[10] Iz Beltagy, Kyle Lo, and Arman Cohan, "SciBERT: A pretrained language model for scientific text," in EMNLP-IJCNLP 2019. 2019, pp. 3613–3618, Association for Computational Linguistics.

[11] Yucheng Zhou, "Sketch storytelling," in IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2022, Virtual and Singapore, 23-27 May 2022. 2022, pp. 4748–4752, IEEE.

[12] Yucheng Zhou, Xiubo Geng, Tao Shen, Jian Pei, Wenqiang Zhang, and Daxin Jiang, "Modeling event-pair relations in external knowledge graphs for script reasoning," in Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Online, Aug. 2021, pp. 4586–4596, Association for Computational Linguistics.

[13] Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum, "SemEval 2017 task 10: ScienceIE - extracting keyphrases and relations from scientific publications," in SemEval@ACL 2017. 2017, pp. 546–555, Association for Computational Linguistics.

[14] Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi, "Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction," in EMNLP 2018. 2018, pp. 3219–3232, Association for Computational Linguistics.

[15] Zikang Wang, Linjing Li, and Daniel Zeng, "Knowledge-enhanced natural language inference based on knowledge graphs," in COLING 2020. 2020, pp. 6498–6508, International Committee on Computational Linguistics.

[16] Yucheng Zhou, Xiubo Geng, Tao Shen, Guodong Long, and Daxin Jiang, "EventBERT: A pre-trained model for event correlation reasoning," in WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25-29, 2022. 2022, pp. 850–859, ACM.

[17] Yucheng Zhou, Tao Shen, Xiubo Geng, Guodong Long, and Daxin Jiang, "ClarET: Pre-training a correlation-aware context-to-event transformer for event-centric generation and classification," in ACL 2022. 2022, pp. 2559–2575, Association for Computational Linguistics.

[18] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky, "The Stanford CoreNLP natural language processing toolkit," in ACL 2014. 2014, pp. 55–60, The Association for Computer Linguistics.

[19] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, 2017, pp. 5998–6008.
ai_researcher
1
Enhancing_Multisensory_Environments_with_Design_Artifacts_for_Tangible_Interaction.pdf
Exploring Children’s Use of Self-Made Tangibles in Programming

Alpay Sabuncuoğlu, UNVEST R&D Center, Istanbul, Turkey, [email protected]
T. Metin Sezgin, KUIS AI Center, Istanbul, Turkey, [email protected]

ABSTRACT

Defining abstract algorithmic structures like functions and variables using self-made tangibles can enhance the usability and affordability of the tangible programming experience by maintaining the physical interaction and the input modality throughout the activity and reducing the dependence on electronic devices. However, existing tangible programming environments use digital interfaces to save abstract definitions such as functions and variables, as designing new tangibles is challenging for children due to their limited experience using abstract definitions. We conducted a series of design workshops with children to understand their self-made tangible design and creation abilities and to develop considerations for tangible computing such as paper programming. This paper reports:
1. Our insights on how students conceptualize and design tangible programming blocks,
2. Design considerations for self-made tangibles that yield higher memorability and understandability,
3. Tests of our design considerations in creating self-made coding tangibles in real-life activities.

Authors Keywords
Co-design with children, Tangible programming, Self-Made Tangibles for Paper Programming

CCS Concepts
• Social and professional topics → K-12 education; • Applied computing → Education;

Acknowledgments
The authors gratefully acknowledge that this work was supported by TUBITAK [Grant Number 218K436] and Koç University-İş Bank AI Center. We would like to thank all participant children and their parents for giving their time and effort to make this research possible.

Notes for Practitioners
All workshop materials are open-source and can be found at karton.ku.edu.tr/workshops. The website includes the presentation and worksheet documents to easily reproduce the activities mentioned in this paper. We encourage educators to share their classrooms’ self-made tangible creations with the researchers.

Our research begins with this question: How would a child save the above function via a smartphone camera by building tangibles using craft materials or everyday objects? Using play dough with red and blue colors? Using two cardboard pieces with a straw as a separator? Using LEGO with a separator that symbolizes the conditional statement? Although agreeing upon a precise answer to this question is subjective, at the end of our research, we curated design considerations to help children create well-designed code elements.

INTRODUCTION

There is a push to integrate programming into the primary education curriculum, given that computational literacy has become a crucial 21st-century ability [4]. This integration aims for pupils to acquire vital skills such as creative and computational thinking as well as hands-on skills, like building electronics and animations, and integrating technology into daily life [13]. Currently, visual programming environments like Scratch and Alice have become standard tools to teach programming [5, 6]. Interest in tangible programming environments where students code with physical blocks is also growing, since they support inclusivity and collaboration in the classroom environment [9, 11, 12]. However, current tangible environments either require an external coding device like a computer or tablet, or costly internal electronics, rendering them out of reach for low socioeconomic levels.
Paper programming environments such as Kart-ON [10] and Budgie [18] bring tangibles to the classroom while making them affordable. In this scenario, students use pre-defined, easily accessible programming blocks to create their programs and run computer-vision-powered devices to scan these blocks. On top of this recently formulated paper programming research, we wanted to explore a programming setup where children create their own self-made tangible programming blocks to represent abstract definitions. Research posits that learning occurs when children build and design personally relatable elements [15]. In our scenario, students use everyday objects for coding, to associate functions, entities, and properties such as variable assignments and procedure calls. This setup can allow groups of students to create and combine self-made tangible programming blocks, then "run" and observe the behavior of their program using a shared smartphone acting as an interpreter.

We assert that students may actively explore improved ways of framing their algorithms and consider the relationship between commands, inputs, and outputs via self-made tangibles while collaborating with their friends in this screen-free environment. However, integrating self-made tangibles to define abstractions in these paper programming environments brings two main questions: (1) Is representing abstract programming definitions by self-made tangibles educationally and pedagogically appropriate? (2) Can self-made tangibles become well-designed code elements that can be memorized and understood by both makers and users of these tangibles in later programming sessions?

We designed a four-step workshop to address these questions, where students learned how to define algorithmic tasks, create self-made tangibles, and use them to create programs. In the first step, we teamed up with 7-11-year-old children and decomposed a problem/task together. Next, they designed a set of tangible objects for programming the selected task. In the third step, we tested the memorability and understandability of their self-made tangibles. Based on the test results, we developed design considerations. Finally, we utilized these considerations in a real-life paper programming environment.

This paper presents our quest to integrate student-made tangibles into programming environments. We report our observations, findings, tangible outcomes, and design considerations. We also share our preliminary test results of using the design considerations in a real-life paper programming task and discuss the potentials and drawbacks of self-made tangibles for programming.

SELECTION AND PARTICIPATION OF CHILDREN

We obtained ethical approval for this study from our university’s Committee of Human Research. Due to COVID-19 restrictions, we conducted the studies outside with children from the same neighborhood. We contacted the parents via a large WhatsApp group where parents with primary-school-age kids communicate, and we also shared the study details via this channel. This group lives in the same neighborhood in Sarıyer, Istanbul, which has access to technology and quality education. We submitted the informed consent forms via Google Forms to those who responded positively. At the beginning of each workshop, we introduced ourselves, explained the research in a simplified way, and clarified that the children were free to go back home if they desired. None of the children quit a session or showed stressed behavior.
RELATED WORK

Between unplugged activities and computationally enhanced tangible programming tools, paper programming offers an affordable way to complete programming activities. Paper programming environments use card-based tangible programming blocks to create a code. Mostly, a separate scanner or a mobile device is utilized to interpret the recognized programming blocks [7, 10]. These environments show the advantages of physical computing while using inexpensive materials as programming blocks. Further, paper programming can be a beneficial step in programming education as it bridges physical computing to scripting languages. This learning approach is captured in "concreteness fading," which discusses an intermediate step between concrete and abstract definitions to help learners distill generic knowledge and develop conceptual understanding [2].

Existing paper programming environments present a high-level tangible command set that can be recognized using computer vision algorithms [3, 10]. Although research on using these tools in primary and middle school programming education reports several benefits, such as increased engagement and collaboration, teachers do not adopt them into their curriculum [22]. One limitation of these paper-based tangible programming interfaces is their limited ability to represent tangible input and output variations, as they do not support custom tangible blocks to define abstractions. For example, while creating a new program using these tools, users can create a function using the provided tangible materials. However, when they switch to a digital interface to introduce these tangibles as the components of a new function, it can cause an interruption and reduce engagement [3, 10]. But using self-made tangibles to define abstract definitions in programming languages can enhance the user experience and extend the language expressivity by keeping the same modality and maintaining physical interaction. In order to explore this possibility, we first need to explore students' needs and behaviors with self-made tangibles to integrate this approach in programming environment design effectively.

Tangible User Interfaces (TUI) allow hands-on interaction and control over digital features with physical artifacts [21]. Tangible programming tools can be categorized into three groups from a technical perspective: electronic, unplugged, and digitally augmented paper-programming kits. Marshall defines three main advantages of using tangibles in programming [14, 20]: 1. It keeps children mentally and physically active, 2. It enhances collaboration by supporting natural group interactions and increases the visibility of outputs, 3. It naturally invites students to action and improves inclusivity by lowering the threshold for children's participation.

Druin defines four roles that children can take in a technology design process: user, tester, informant, and design partner [1]. In our workshops, students assumed varying roles and acted as informants, testers and design partners. In the informant role, children play a part in the design process at various stages, based on when researchers believe children can inform the design process. As a tester, children are observed with the technology and asked for their direct comments concerning their experiences. With the role of design partner, children are considered equal stakeholders in the design of new technologies throughout the entire experience.
We started with discovering the interests of children and understanding how they decompose a given problem. Then, we created and tested tangible objects to represent the commands obtained from the decomposed problems. Throughout the process, we approached them as design partners.

TANGIBLE INTERFACE EXAMPLES

Topobo uses electronics with motor memory. Raffle et al. 2004. [8]
HyperCubes uses AR to program motion for 3D elements. Lleixà. 2018. [3]
Kart-ON uses text recognition on a smartphone camera. Sabuncuoglu et al. 2019. [10]

FLOW OF THE DESIGN STUDIES

Limitations of the Studies: All of the workshops were conducted outdoors due to COVID-19 regulations. Since children have limited experience in programming, we included an introduction before the design process to give the necessary foundation. In total, we had six children of ages 7-11 years old. In Sessions 1, 2 and 3, we studied with the same children, but only four of them remained in all studies. In Session 4, we asked the same WhatsApp group for participants who had not attended our previous studies, and only two of them participated. Considering the participant number and diversity, we cannot easily generalize the findings.

SESSION 1: INTRODUCING THE CONCEPTS AND LEARNING THE NEEDS

In the workshop, the provided material box consisted of various items, including paper, clay, wires, pipe cleaners, LEGO pieces, etc. Using these materials, each child created one tangible object per sub-task. Their collaborative output can be seen in this figure.

We held the first workshop with six children from our local area (one seven-year-old, three ten-year-old, and two eleven-year-old children). First, we introduced programming and discussed its role in many breakthroughs in society. Later, we asked them to define a problem or a task to solve with programming. The children came up with the task of “gardening” and started decomposing the possible sub-tasks that could be solved with programming. After listing all the tasks, we agreed upon eight sub-tasks: potting flowers, planting trees, protecting the garden, watering the garden, mowing grass, removing weeds, walking around, and picking fruits.

In this workshop, we wanted to see if children were comfortable with the given materials and their ability to understand the task at hand. Our observations revealed that all students were comfortable with the provided materials and were eager to explore programming concepts regardless of their age. As seen in the above figure, almost all students created tangible objects with different materials. The sizes of the tangibles were similar in each child’s creation. Overall, this showed us that our materials were diverse enough to accommodate children’s preferences.

Preparing for the next session: Throughout the workshop, we observed some tendency to disengage from the task due to the informality of the workshop. To explain, some children knew each other and also the vicinity. While the familiarity led them to collaborate easily on the given task, it quickly turned into free chat and distracted them from the task. It is known that group dynamics directly impact children’s creativity [19]. Based on this experience, we decided to hand out a design probe for the next workshop to help focus children’s activities. We developed this design probe to help children focus on their tangible creation task. It is an ~A3-sized paper that contains a 9-row table with “input,” “command,” and “output” headers.
SESSION 2: DESIGNING TANGIBLES OF A SET OF REAL-LIFE COMMANDS

We held the second workshop with five children from the previous study (one seven-year-old, two ten-year-old, two eleven-year-old); one child could not attend the workshop. In this session, we distributed the design probes to help them maintain their focus. We hung a larger copy with commands and possible input names on a wall that all students could see. We handed out empty copies of this probe and asked them to place their final self-made tangibles on the pre-defined positions. As intended, the probe helped them to use their time and communicate their ideas more efficiently.

In this session, students created a programming interface for gardening with a total of eight commands, as decided upon earlier. We asked children to design these tangible representations so that a computer (a simple electronic machine) could clearly understand them. However, they were not informed about the understandability/memorability test of Session 3. Overall, we obtained forty different programming objects from five children. Here, we share two distinct samples of these probes that show different input modalities.

After collecting all the objects and seeing how children created these tangibles, we asked if these objects were qualified to become coding elements. A well-designed code element should be easily understood by others and easily memorable by the author of the program. To this end, we tested the self- and inter-memorability and understandability of the programming objects. In this context, we defined understandability as the ability of other students to correctly interpret the meaning of the given tangible objects, and memorability as the ability to remember the represented role of their own tangible object when they see it later.

Deniz offered using keyboard presses to change the given input. He shows the command first, then selects the possible inputs using keyboard shortcuts. Deniz’s friend also shared the same design decisions, and this resulted in very similar command-input pairs, as expected.

Command labels: Potting Flower, Planting Tree, Protecting Garden, Watering, Mowing Grass, Walking Around, Removing Weeds, Picking Fruits.

Başak used a combination of drawing and the given materials to create programming objects. Providing input to these commands can take different forms. For example, changing the flower’s color in the robot’s hand is the input of the “potting flower” command, but watering the garden requires combining two separate drawings.

TESTING THE MEMORABILITY AND UNDERSTANDABILITY

We conducted the test with four children from the previous studies (two ten-year-old and two eleven-year-old). Before the children arrived at our workshop area, we prepared a grid of programming objects on a table. We handed color-coded function names to the children to test the understandability and memorability of the workshop’s outcome.
1. We selected sixteen blocks (four tangible blocks from each participant) and placed them on a table.
2. We distributed color-coded small paper pieces and handed them to the children. Each child was assigned one color to keep track of who recognized which tangible objects, and they placed the enfolded paper pieces onto the grids in a closed form.
3. We discussed whether they remembered these commands or predicted their friends’ tangible outputs. We tried to understand the effect of using different materials and different representation methods.
TEST RESULTS

On the next page, we summarize the test results using a table, which shows the understandability accuracy (top-right corner of each grid), the correct label of the tangible object (bottom-left corner of each square), and each child’s answer with color-code information (bottom-right corner of each square). These results indicate that three main patterns affect the understandability and memorability of a programming object in general:

• Using materials that drive children to create abstract shapes (construction bricks) reduces future understandability.
• The object’s recognizability also increases when children depict the action stated in the command rather than directly imitating the object’s appearance. Trying to tell the action increases the level of detail.
• Similarly, using more than one material increases the level of detail.

Mowing Grass: Both tangible objects’ motivation was using the toothed LEGO plate as the mower’s knives. The top tangible’s wings reduced the understandability score.

Potting Flower: The accuracy of the object at the top was 75%, since one student was out of paper pieces; it was “potting flower” for this student as well, but he couldn’t place his answer.

Watering: Representing the action as a tangible object while highlighting the correct details generally increases understandability. The blue bricks in the object representation were planned as water cannons, but they were interpreted as protecting gadgets or flowers.

Picking Fruits: The LEGO brick representation states nothing (verified by the student; it was random). Surprisingly, this random object was correctly understood by most students.

Walking Around: Representing this command with a car seems reasonable, but an abstract car-shaped object can be used for other purposes like mowing and placing seeds.

Removing Weeds: Both tangibles represent the object of the action. In this example, mowing and removing weeds have a very similar object. In this case, it is better to represent the action.

Protecting Garden: Both tangibles represent an object related to protecting. The top one is a surveillance camera, and the bottom one is a fence.

Planting Trees: In contrast to the previous case, a tree is an object used only by this command. So, representing a tree is understandable at first sight.

DESIGN CONSIDERATIONS

We list four main design considerations (DC) based on our observation notes, material analysis of students’ tangible creations, and the memorability/understandability test results. These considerations aim to enrich the semantic layers in the building phase of tangible objects to improve the objects’ understandability and memorability. We also considered the fact that these models will be used in computer vision systems in real-life programming applications.

Give hints about the action: The most understandable and memorable designs involved representing the object’s visual resemblance and the stated action together. For example, “watering a garden” can simply be represented using a “hose.” But in our test, we observed that students can interpret this as a protection object and say the command is “protecting the garden.” So, adding contextual layers and giving hints about the stated action supports the understandability of the self-made tangible.

Choose the right materials for the environment: Considering the materials for different locations was an important aspect of the quality of programming time.
If the students plan to use the objects more than once, using play dough is not encouraged since it cracks after drying, which decreases the repetitive use of self-made tangibles as an algorithmic structure. Also, considering the fact that these materials will be recognized by a computer vision system in future studies, light conditions can affect the vision model’s accuracy significantly. Similarly, using only wires to represent programming objects results in a high dependency on the foreground-background clarity. Therefore, the environment should be considered while choosing the materials.

Add details with ‘simple’ shapes: Adding details helped students to memorize the function of the designed tangible. Yet, we emphasized adding details with ‘simple shapes’ to use students’ time more efficiently. In the workshops, they tended to add details with ‘fancy’ materials and zigzagged scissor moves, which distracted them from the tasks.

Combine materials: Combining materials enhances the interpretability of programming objects from the student perspective. Using combinations of different textures and colors can also increase the accuracy of the computer vision model’s recognition.

MATERIAL ANALYSIS

Using Play Dough, Wires and Other Shape-Changing Materials: They are great for adding details, but they are not long-lasting.

Using LEGO as Building Blocks: Five students created a total of 40 tangible function representations, and 15 of these tangibles were created entirely with LEGOs. Although students enjoyed using these, the understandability and memorability results are considerably low compared to other materials. This photo shows all the LEGO bricks used in the studies. Most of the creations consist of ~4-7 bricks, combining different colors.

In the end, we prepared this worksheet to help children follow some checklists while creating their tangible representations.

SESSION 4: APPLYING THE DESIGN CONSIDERATIONS ON PAPER PROGRAMMING CODES

In the previous sessions, children created tangible representations of functions they were already familiar with in daily life, such as ‘watering the garden.’ Building tangible representations of these functions can be seen as an easier task when compared to more abstract programming blocks like conditional statements or loop structures. In this regard, our goal in the final study was to apply our design considerations to creating tangibles of paper-programming functions and to test their efficacy. First, we gave a programming task and asked the children to complete it using Kart-ON programming commands. After creating each code, they ran it to see the output. Then, we asked them to create a tangible representation to save this code as a function.

Two children (Students A and B, both eleven years old) participated in this study. As they did not partake in the previous studies, they were not familiar with the “self-made” programming tangibles and had not heard about our design considerations before. We provided the same materials as in the previous sessions (e.g., playdough, LEGO bricks...) and gave 3-5 minutes to create each tangible representation. During the session, we talked with the children about their intended design and helped them to keep track of the worksheet.

Students used the Kart-ON application, a paper programming environment, to create different algorithmic drawings in this study. This figure demonstrates how the tablet camera captures the paper programming commands. When students run the code, they see the output on the left figure. The code (in Turkish) executes the following operations (a toy interpreter sketch follows this list):

• doldur#50: Fills the shape with the color given in the Hue value; Hue=50 is amber
• elips: Draws an ellipse with the previously stated color attribute
• konum#225#125: Sets the location of the next shape
• doldur#240: Fills the shape with the color blue
• dikdörtgen: Draws a rectangle with the previously determined attributes
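To make these command semantics concrete, the following toy interpreter mimics the behavior described in the list above. It is an illustrative sketch only; the real Kart-ON application recognizes the commands from camera images, and the function name, the default location and the reset-after-shape behavior are our assumptions based on the figure description.

# A toy interpreter mimicking the Kart-ON drawing semantics listed above.
# Illustrative sketch only: the real app recognizes commands via the camera,
# and the default location / reset-after-shape behavior are our assumptions.
import colorsys

def run(program, default_xy=(100, 100)):
    color, xy, shapes = (0.0, 0.0, 0.0), default_xy, []
    for line in program:
        cmd, *args = line.split("#")
        if cmd == "doldur":                    # "fill": pick the color from Hue
            color = colorsys.hsv_to_rgb(int(args[0]) / 360.0, 1.0, 1.0)
        elif cmd == "konum":                   # "location": place the next shape
            xy = (int(args[1]), int(args[0]))  # konum#225#125 -> x=125, y=225
        elif cmd == "elips":                   # "ellipse"
            shapes.append(("ellipse", xy, color))
            xy = default_xy                    # the location applies only once
        elif cmd == "dikdörtgen":              # "rectangle"
            shapes.append(("rectangle", xy, color))
            xy = default_xy
    return shapes

# The program from the figure: an amber ellipse at the default location,
# then a blue rectangle at x=125, y=225.
print(run(["doldur#50", "elips", "konum#225#125", "doldur#240", "dikdörtgen"]))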
When students tap on the “Compile and Run” button in the bottom-left corner of the screen, they see the drawing above. The application displays the resulting drawing on a coordinate system to help students better understand setting up the locations of the shapes. This figure shows an amber ellipse and a blue rectangle. The ellipse is displayed in its default location, as the user only determined its color but not its location in the code. The rectangle applied the x and y coordinates of the location command, which are 125 and 225 in the code.

In the tangible representation task, both participants tried to replicate the output of the code created earlier on the left in the coordinate system. Box#1 and #2 were created by Student A. The LEGO bricks were added when Student A noticed that he did not apply DC#3. Student B forgot the shape used in the earlier code, which was a square, and created a circle with playdough instead (Box#3).

EARLY INSIGHTS INTO CHILDREN’S TANGIBLE REPRESENTATIONS OF CODES USING DESIGN CONSIDERATIONS

We completed four different creative coding examples in Session 4 (figures below). Here, we summarize early insights from our design considerations in action to create tangible representations for paper programming codes. Although the limited participant number restricts the generalization of our findings, our early observations demonstrated that the playful nature of the tangible building phase made it difficult for students to follow the design considerations.

DC#1 (Giving hints about the action): As Programs #1 and #2 do not state any action and only produce static images, students naturally could not give hints about the action. Program #3 was an if-then-else structure requiring checking a touch input, but students did not hint at the conditional structure in Boxes #6 and #7. Both students represented the repeating action for Program #4: Box#8 has lines that demonstrate the loop count, and Box#9 has rotation capability; this consideration guided students to use materials that can express the repeated action.

DC#2 (Choosing the right materials): Even though we guided students in choosing durable materials, they tended to use their favorite and more familiar materials, which were LEGO and play dough. Choosing the right materials for future use cases was the hardest design consideration to follow.

DC#3 (Adding details with simple shapes): All tangibles use simple shapes, which helped students to complete the tangible creation task in the given time. We could run the paper codes and create a tangible representation for four different tasks in under eighty minutes.

DC#4 (Combining materials): The first program uses a coordinate system graph and play dough, which combines different materials and gives details to the users. But in the second program, Student A did not add the coordinate system and continued with an abstract representation of a snowman. When we reminded him of the design considerations, he stated that this version was enough to remember the code. Through the study, the number of combined materials decreased, and students tended to focus on one material in their design.
Overall, these design considerations helped us to manage the duration of the design session. But students could not fully comprehend nor integrate the design considerations into their representations. We conjecture that they might need more practice and guidance on how to follow them through, which we will consider in further studies. The challenge of maintaining the balance between playfulness and developing a functional self-made tangible will be addressed in future work.

Program #2: Drawing a turquoise snowman using basic circle drawings.
Program #3: Drawing a triangle, either green or red, based on touch input.
Program #4: Drawing 50 consecutive rectangles in a vertical line.

Box#4 has two blue play dough ellipses made by Student A. Yet the student did not want to continue with the coordinate system graph. Student B used the same graph and created a green snowman in Box#5. Box#6 uses two play-dough rectangles to show the color-changing operation. Box#7 is a one-piece play dough that contains two adjacent triangles. Box#8 has 50 lines that indicate the loop count. Box#9 is a LEGO gear that completes many cycles and stops when the loop count is completed.

DISCUSSION

Activities that encourage children to design self-made tangibles of abstract definitions can be useful in engaging students and promoting active learning. Throughout the studies, our observations demonstrated that asking students to create tangible representations helps them understand the code, which resonates with the research on active learning [2]. For example, using student-made tangibles in math learning for young children can help them engage in the activity, familiarize themselves with the concepts in real-world situations and help them understand relational information better. Additionally, using self-made tangibles helped students focus on a code scope’s overall working mechanism.

On the other hand, students and teachers need guidance for tangibles to become usable in abstract definitions of programming elements, such as defining new variables and functions. The outputs created in Session 4 indicated that informing children about design considerations may not directly result in improving the quality of tangible objects. Although having a set of considerations helps workshop moderators to guide children more effectively, children can choose to create the tangibles with their favorite materials rather than the logical ones. So, rather than delivering these design considerations all at once, we suggest practitioners explore these considerations together with children to increase understandability and memorability in creating tangible representations.

CONCLUSION

In this paper, we explored the potential use of self-made tangibles in a programming environment. We conducted a total of four sessions that gradually investigated and evaluated tangible programming tasks, from decomposing problems to utilizing tangibles with computer vision models. Based on our qualitative observations and analysis, we can summarize the answers to the main research questions as follows.

Can students in K3-6 grades build a tangible representation of abstract programming definitions? Yes, we observed that students liked building tangible representations of abstract programming definitions, and they could link the function with the representation.

Can students understand and memorize self-made tangible representations in later uses? It is not an easy task, even with the design considerations (DC).
Our list of DC helped students manage their time and use the materials in a structured way, yet they still had challenges in following them, which requires further study. We would like to emphasize that the main aim of these studies was to explore self-made tangibles in designing a language to help students grasp computational thinking skills rather than a complete programming language. Our work is the first step in CCI to explore the creation process and the possible value of self-made tangibles in programming. We conjecture that other researchers may benefit from our accounts and design considerations and extend this line of research further. In the final study, we observed that students could follow only some considerations to design more understandable and memorable self-made tangibles. The playfulness of the materials overshadowed designing more functional tangibles. One future work path is to help students and educators follow these considerations while keeping the joy and playfulness through the creation process.
Our next plan is to explore the use of self-made tangibles in more abstract concepts. Throughout the user studies, we used self-made tangibles in physical actions such as coding a high-level robot or an arcade game. For example, the “potting a plant” command of a gardening task can be effortlessly translated into an imaginary picture. One might question the possibility of using personally meaningful objects in a more abstract setup. For example, the factorial function, denoted as $n!$ and equal to $n! = 1 \cdot 2 \cdot 3 \cdots (n-2) \cdot (n-1) \cdot n$, can be recursively defined. As a first step, we can draw the flowchart of this pseudocode to make it more tangible. Then, following the design considerations we created, we can design new or use existing tangible objects to represent the commands. Finally, we will focus on defining a way to link inputs to the commands. For example, we created the structure in the figure below by only using office objects. Finally, exploring our open research questions can be helpful to extend these workshops’ outcomes in an embodied medium, such as Dynamicland.
12 REFERENCES
[1] Allison Druin. 2002. The role of children in the design of new technology. Behaviour & Information Technology (Jan 2002). https://doi.org/10.1080/01449290110108659
[2] Emily R Fyfe, Nicole M McNeil, Ji Y Son, and Robert L Goldstone. 2014. Concreteness Fading in Mathematics and Science Instruction: a Systematic Review. Educational Psychology Review 26, 1 (2014), 9–25. https://doi.org/10.1007/s10648-014-9249-3
[3] Anna Fuste and Chris Schmandt. 2019. HyperCubes: A Playful Introduction to Computational Thinking in Augmented Reality. In Extended Abstracts of the Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts. 379–387. https://doi.org/10.1145/3341215.3356264
[4] Shuchi Grover and Roy Pea. 2013. Computational thinking in K–12: A review of the state of the field. Educational Researcher 42, 1 (2013), 38–43.
[5] Carnegie Mellon University. 2019. Alice. Tell Stories. Build Games. Learn to Program. https://www.alice.org/
[6] John Maloney, Mitchel Resnick, Natalie Rusk, Brian Silverman, and Evelyn Eastmond. 2010. The Scratch programming language and environment. ACM Transactions on Computing Education (TOCE) 10, 4 (2010), 1–15.
[7] Felix Hu, Ariel Zekelman, Michael Horn, and Frances Judd. 2015. Strawbies: Explorations in Tangible Programming. In Proceedings of the 14th International Conference on Interaction Design and Children. 410–413.
https://doi.org/10.1145/2771839.2771866
[8] Hayes Raffle. 2010. Topobo: programming by example to create complex behaviors. In Proceedings of the 9th International Conference of the Learning Sciences - Volume 2 (ICLS '10). International Society of the Learning Sciences, 126–127.
[9] John Maloney, Mitchel Resnick, Natalie Rusk, Brian Silverman, and Evelyn Eastmond. 2010. The Scratch programming language and environment. ACM Transactions on Computing Education (TOCE) 10, 4 (2010), 1–15.
[10] Alpay Sabuncuoğlu and Metin Sezgin. 2020. Kart-ON: Affordable Early Programming Education with Shared Smartphones and Easy-to-Find Materials. In Proceedings of the 25th International Conference on Intelligent User Interfaces Companion. 116–117.
[11] Luke Moors, Andrew Luxton-Reilly, and Paul Denny. 2018. Transitioning from Block-Based to Text-Based Programming Languages. In 2018 International Conference on Learning and Teaching in Computing and Engineering (LaTICE). IEEE, 57–64. https://doi.org/10.1109/LaTICE.2018.000-5
[12] Cecily Morrison, Nicolas Villar, Anja Thieme, Zahra Ashktorab, Eloise Taysom, Oscar Salandin, Daniel Cletheroe, Greg Saul, Alan F. Blackwell, Darren Edge, Martin Grayson, and Haiyan Zhang. 2018. Torino: A Tangible Programming Language Inclusive of Children with Visual Disabilities. Human-Computer Interaction 00, 00 (2018), 1–49. https://doi.org/10.1080/07370024.2018.1512413
[13] Peter Hubwieser, Michail N Giannakos, Marc Berges, Torsten Brinda, Ira Diethelm, Johannes Magenheim, Yogendra Pal, Jana Jackova, and Egle Jasute. 2015. A global snapshot of computer science education in K-12 schools. In Proceedings of the 2015 ITiCSE on Working Group Reports. 65–83.
[14] Paul Marshall. 2007. Do Tangible Interfaces Enhance Learning? In Proceedings of the 1st International Conference on Tangible and Embedded Interaction (Baton Rouge, Louisiana) (TEI '07). Association for Computing Machinery, New York, NY, USA, 163–170. https://doi.org/10.1145/1226969.1227004
[15] Yasmin Kafai. 1994. Minds In Play: Computer Game Design as a Context for Children's Learning.
[16] Jennifer A Kaminski and Vladimir M Sloutsky. 2020. The use and effectiveness of colorful, contextualized, student-made material for elementary mathematics instruction. International Journal of STEM Education 7, 1 (2020), 6. https://doi.org/10.1186/s40594-019-0199-7
[17] John Maloney, Mitchel Resnick, Natalie Rusk, Brian Silverman, and Evelyn Eastmond. 2010. The Scratch programming language and environment. ACM Transactions on Computing Education (TOCE) 10, 4 (2010), 1–15.
[18] Alpay Sabuncuoglu. 2020. Tangible Music Programming Blocks for Visually Impaired Children. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (Sydney NSW, Australia). 423–429. https://doi.org/10.1145/3374920.3374939
[19] Maarten Van Mechelen, Mathieu Gielen, Vero vanden Abeele, Ann Laenen, and Bieke Zaman. 2014. Exploring Challenging Group Dynamics in Participatory Design with Children. In Proceedings of the 2014 Conference on Interaction Design and Children (Aarhus, Denmark). Association for Computing Machinery, New York, NY, USA, 269–272. https://doi.org/10.1145/2593968.2610469
[20] Oren Zuckerman, Saeed Arida, and Mitchel Resnick. 2005. Extending Tangible Interfaces for Education: Digital Montessori-inspired Manipulatives. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '05). 859–868. https://doi.org/10.1145/1054972.1055093
[21] Hiroshi Ishii and Brygg Ullmer. 1997. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '97). Association for Computing Machinery, New York, NY, USA, 234–241. https://doi.org/10.1145/258549.258715
[22] Webb, M., Davis, N., Bell, T. et al. 2017. Computer science in K-12 school curricula of the 21st century: Why, what and when? Educ Inf Technol 22, 445–468. https://doi.org/10.1007/s10639-016-9493-x
ai_researcher
3
Extracting_entity_relations_for_"problem-solving"_knowledge_graph_of_scientific_domains_using_word_analogy.pdf
Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.2019
Scientific and Technological Text Knowledge Extraction Method Based on Word Mixing and GRU
Suyu Ouyang, Yingxia Shao*, Junping Du and Ang Li
(College of Computer Science, Beijing Key Laboratory of Intelligent Telecommunication Software and Multimedia, Beijing 100082)
([email protected])
Abstract The knowledge extraction task is to extract triple relations (head entity-relation-tail entity) from unstructured text data. Existing knowledge extraction methods are divided into the "pipeline" method and the joint extraction method. The "pipeline" method separates named entity recognition and entity relationship extraction and uses a separate module for each. Although this method has better flexibility, its training speed is slow. The joint extraction learning model is an end-to-end model implemented by a neural network that realizes entity recognition and relationship extraction at the same time; it can well preserve the association between entities and relationships, and it converts the joint extraction of entities and relations into a sequence annotation problem. In this paper, we propose a knowledge extraction method for scientific and technological resources based on word mixing and GRU, combined with a word-mixing vector mapping method and a self-attention mechanism, to effectively improve the effect of text relation extraction for Chinese scientific and technological resources.
Keywords knowledge extraction; vector map; GRU; triple relation; scientific and technological text
1 Introduction
Knowledge extraction is not only one of the tasks of information extraction, but also a key step in constructing and completing knowledge graphs [1]. The knowledge extraction task [2] is to extract triple relations (head entity-relation-tail entity) from unstructured text data. Existing knowledge extraction methods are divided into the "pipeline" method and the joint extraction method. The "pipeline" method separates named entity recognition and knowledge extraction into separate modules. Although this method has better flexibility, its training speed is slow. The joint extraction learning model is an end-to-end model realized by a neural network that performs entity recognition and knowledge extraction at the same time; it can well preserve the association between entities and relationships and converts the joint extraction of entities and relations into a sequence annotation problem. This paper adopts the joint knowledge extraction model, which well preserves the association between entities and relations and transforms the joint extraction of entities and relations into a sequence labeling problem. In order to avoid boundary segmentation errors to the greatest extent, character-level labeling is selected, that is, the input is performed with characters as the basic unit. However, in Chinese, it is difficult to store effective semantic information with a simple character embedding. Therefore, in order to integrate semantic information more effectively, a character-word mixing method is designed. At the same time, a self-attention mechanism is combined to capture long-distance semantic information in sentences, and the model extraction effect is improved by introducing bias weights.
The main contributions of this paper include three aspects:
1) A knowledge extraction method for scientific and technological texts (MBGAB) based on word mixing and GRU is proposed, which combines the attention mechanism to extract relations from Chinese scientific and technological resource texts.
2) The vector mapping method of word mixing is used to avoid boundary segmentation errors to the greatest extent and, at the same time, to effectively integrate semantic information.
3) An end-to-end joint extraction model is adopted: a bidirectional GRU network is used, a self-attention mechanism is used to effectively capture long-distance semantic information in sentences, and the model extraction effect is improved by introducing bias weights.
This work is supported by National Key R&D Program of China (2018YFB1402600) and the National Natural Science Foundation of China (61772083, 61877006, 61802028, 62002027). Corresponding author: Yingxia Shao ([email protected])
2 Related work
Whether on a professional technology resource platform or in a social media scene [3], there is a large amount of scientific and technological text data [4], and knowledge extraction from this information enables better mining [5] and utilization [6]. With the development of deep learning technology, the use of neural networks [7] to extract information has become common practice. In recent years, the long short-term memory network (LSTM), which allows each neural unit to forget or retain information, has mainly been used to overcome the characteristic of recurrent neural networks (RNN) that historical information is gradually forgotten as the sequence length increases. The Gated Recurrent Unit (GRU) [8] was originally designed to allow each recurrent unit to be adaptively tuned to capture dependencies at different time scales. This model is simpler than LSTM. Bahdanau et al. [9] proposed an attention mechanism that utilizes all the hidden states of the encoder RNN to help the decoding process, mimicking how humans can focus on certain parts of a sentence.
Currently, entity and relation extraction mainly includes pipeline extraction and joint extraction. Pipeline extraction usually separates named entity recognition and semantic relation classification. For the named entity recognition task, deep learning transforms it into a sequence labeling task. Lafferty et al. proposed the Conditional Random Field (CRF) [10], which combined the characteristics of the maximum entropy model and the hidden Markov model [11][12] and achieved good results in sequence tagging tasks such as part-of-speech tagging and named entity recognition. Collobert et al. [13] employed a combined CNN and CRF network to encode the word embedding layer. The task of semantic relation classification has made progress in recent years; the most widely used models are the convolutional neural network (CNN) [14][15][16], the recurrent neural network (RNN) [17][18][19] and the long short-term memory network (LSTM) [20][21]. In addition, there are other methods: the combined model FCM [22] and semantic learning models [23][24] can learn substructure representations of sentences and can handle globally annotated information and inputs of arbitrary combination types. The work in [25] combined a recurrent neural network with a convolutional neural network, where shared layers mainly share the word embeddings and implicitly encoded information of the two networks, and the network classifies semantic relations.
Miwa et al. [26] also adopted a similar approach, superimposing a bi-directional tree long short-term memory network on BiLSTM to obtain substructure information on word sequences and syntactic dependency trees. The work in [27] used a new global loss function based on previous work. Yih et al. [28] proposed a joint model based on a linear programming formulation that uses the optimal results of subtasks and obtains a global optimal solution. Kate et al. [29] utilize a pyramid structure to model entity and relation information and re-encode the possible entity and relation information in a sentence, so the number of nodes that need to be labeled is greatly reduced. From the labeling strategy perspective, Zheng et al. [30] considered transforming the problem into a single-sequence labeling problem, using an end-to-end network structure [31] to directly extract entity-relation triples. For the relation overlap problem, Bekoulis et al. [32] proposed shared parameters, combining BiLSTM with CRF in a multi-head based joint extraction method.
With the application of deep learning in the supervised field, using word vectors and character vectors to replace entity feature vectors and using neural network models to extract sentence vectors for classification can solve this problem well. Zeng et al. [33] first proposed the combination of deep learning and distant supervision for entity knowledge extraction, proposing a PCNN model based on the convolutional neural network. Aiming at the noise introduced by distant supervision, Ji et al. [34] used a multi-instance method, regarded entity pairs as bags, and selected the sentence with the highest semantic relation probability among all sentences containing the same entity pair as the indicator of the correct semantic relation. Building on Zeng's work, an attention mechanism was introduced in [35] to ensure full utilization of the in-bag information while reducing the influence of noise.
3 Knowledge extraction method of scientific and technological text based on word mixing and GRU (MBGAB)
This paper proposes a knowledge extraction method for scientific and technical texts (MBGAB) based on word mixing and GRU. A GRU-based end-to-end model is used to generate the label sequence for scientific and technological resource text: a bidirectional GRU encodes the input sentence, a GRU decoding layer with bias loss decodes it, and finally an objective function with bias weights is used to enhance the relevance of entity tags of scientific and technological resources and reduce the influence of useless tags.
In order to avoid boundary segmentation errors to the greatest extent, character-level labeling is selected, that is, the input is performed with characters as the basic unit. However, in Chinese, it is difficult to store effective semantic information with a simple character embedding. Therefore, in order to integrate semantic information more effectively, a character-word mixing method is designed. First, a text sequence is input in character units, and a character vector sequence is obtained after a character embedding layer; then the text is segmented into words and the corresponding word vector is extracted through a pre-trained Word2Vec model, in order to obtain a word vector sequence aligned with the character vectors (the word vector of each word is repeated as many times as the number of characters in the word). After obtaining the aligned word vector sequence, the word vectors are transformed into the same dimension as the character vectors through a matrix, and the two are added element-wise. The word mixture vector mapping formula is:
$w_i = w_{character} + w_{word}$   (1)
Here $w_{character}$ represents a single character vector, $w_{word}$ represents a word vector, and the mixed vector is the sum of the two. The whole process is shown in Figure 1.
Figure 1. Structure diagram of the knowledge extraction method for scientific and technological text based on word mixing and GRU
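As a concrete reference for the mixing step in Eq. (1), the following is a minimal sketch of a character embedding combined with an aligned, frozen pre-trained word embedding projected to the same dimension. PyTorch and all class/variable names are our assumptions for illustration; the paper only specifies Word2Vec word vectors and a trainable character embedding.

```python
import torch
import torch.nn as nn

class MixedEmbedding(nn.Module):
    """Character-word mixing: w_i = w_character + (projected, aligned) w_word."""
    def __init__(self, n_chars, n_words, char_dim=300, word_dim=300):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)   # randomly initialized, trained
        self.word_emb = nn.Embedding(n_words, word_dim)   # would be loaded from Word2Vec
        self.word_emb.weight.requires_grad = False        # pre-trained vectors stay fixed
        self.word_proj = nn.Linear(word_dim, char_dim, bias=False)  # dimension alignment

    def forward(self, char_ids, aligned_word_ids):
        # aligned_word_ids repeats each word id once per character of that word,
        # so both tensors have shape (batch, sentence_length_in_characters).
        return self.char_emb(char_ids) + self.word_proj(self.word_emb(aligned_word_ids))

emb = MixedEmbedding(n_chars=5000, n_words=100000)
chars = torch.randint(0, 5000, (2, 10))
words = torch.randint(0, 100000, (2, 10))
print(emb(chars, words).shape)  # torch.Size([2, 10, 300])
```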
After obtaining the text word-mixture vectors of scientific and technological resources, the character sequence of a sentence can be expressed as $Sen = [w_1, w_2, \ldots, w_n]$, where $w_i$ indicates the $i$-th Chinese character in the sentence and $n$ indicates that the sentence is composed of $n$ Chinese characters. For a single Chinese character $w_i$, its corresponding embedding vector $w_i = [v_{i1}, v_{i2}, \ldots, v_{im}]$ can be obtained according to the pre-training result, where $m$ is the vector dimension of each Chinese character.
As shown in the figure, for each time step, the updated hidden state $h_t = GRU(w_t, h_{t-1})$ is calculated from the previous hidden state $h_{t-1}$ and the input vector $w_t$, as follows:
$z_t = \gamma(W_z w_t + U_z h_{t-1} + b_z)$   (2)
$r_t = \gamma(W_r w_t + U_r h_{t-1} + b_r)$   (3)
$\tilde{h}_t = \tanh(W w_t + U(h_{t-1} \odot r_t) + b)$   (4)
$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$   (5)
Here $z_t$ is the update gate, $r_t$ is the reset gate, $\tilde{h}_t$ is the candidate hidden state, and $W$, $U$ and $b$ are learnable parameters. For each character $w_t$, the forward GRU layer fuses the context from $w_1$ to $w_t$ and uses it to encode $w_t$, denoted $\overrightarrow{h_t}$. Similarly, the backward GRU layer fuses the context from $w_n$ to $w_t$ and uses it to encode $w_t$, denoted $\overleftarrow{h_t}$. Finally, splicing them together represents the encoding of the $t$-th character, so the final encoding is $h_t = [\overrightarrow{h_t}, \overleftarrow{h_t}]$. After the Bi-GRU coding layer, the embedding vector sequence $W = w_1, w_2, \ldots, w_n$ is converted into a vector sequence with sentence semantic information $H = \{h_1, h_2, \ldots, h_n\}$.
The attention mechanism can abstract the distance between any two words in the sentence as 1, so it can also capture long-distance semantic relationships in the sentence well. The input of the self-attention encoding layer comes from the output of the Bi-GRU encoding layer: the input is $H = h_1, h_2, \ldots, h_n$, and the output is $H^* = h_1^*, h_2^*, \ldots, h_n^*$. First, the input vectors are linearly transformed to obtain three vector sequences $Q, K, V$. The attention calculation formula is:
$h_i^* = \sum_{j=1}^{n} a_{ij} v_j = \sum_{j=1}^{n} \mathrm{softmax}(s(q_i, k_j)) v_j$   (6)
In this paper, the scaled dot-product function is used to score attention, and the generated sequence $H^*$ is:
$H^* = \mathrm{softmax}\left(\frac{Q K^{T}}{\sqrt{d_k}}\right) V$   (7)
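To make Eqs. (6)-(7) concrete, below is a hedged sketch of scaled dot-product self-attention applied to the outputs of a bidirectional GRU encoder, again assuming PyTorch. The dimensions follow the experimental settings reported later (300-dimensional inputs, hence 600-dimensional Bi-GRU outputs).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """H* = softmax(Q K^T / sqrt(d_k)) V, with Q, K, V as linear maps of the input."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.q = nn.Linear(hidden_dim, hidden_dim)
        self.k = nn.Linear(hidden_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h):                                  # h: (batch, seq_len, hidden_dim)
        q, k, v = self.q(h), self.k(h), self.v(h)
        scores = q @ k.transpose(-2, -1) / (h.size(-1) ** 0.5)
        return F.softmax(scores, dim=-1) @ v               # (batch, seq_len, hidden_dim)

encoder = nn.GRU(input_size=300, hidden_size=300, bidirectional=True, batch_first=True)
x = torch.randn(2, 10, 300)        # a batch of mixed character-word embeddings
h, _ = encoder(x)                  # (2, 10, 600): forward/backward states concatenated
h_star = SelfAttention(600)(h)     # sentence representation after self-attention
```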
The decoding layer is shown in the figure; like the Bi-GRU encoding layer, a GRU model is used to generate the label sequence. When labeling character $w_t$, the input of the decoding layer consists of: the vector representation $h_t^*$ obtained from the encoding layer, the previous predicted label representation $T_{t-1}$, and the previous hidden state $h_{t-1}^{d}$ of the decoding layer; the predicted label state $T_t$ of character $w_t$ is obtained after the calculation. The specific formulas are as follows:
$z_t^{d} = \gamma(W_z^{d} h_t^{*} + U_z^{d} h_{t-1}^{d} + V_z^{d} T_{t-1} + b_z^{d})$   (8)
$r_t^{d} = \gamma(W_r^{d} h_t^{*} + U_r^{d} h_{t-1}^{d} + V_r^{d} T_{t-1} + b_r^{d})$   (9)
$\tilde{h}_t^{d} = \tanh(W^{d} h_t^{*} + U^{d}(h_{t-1}^{d} \odot r_t^{d}) + V^{d} T_{t-1} + b^{d})$   (10)
$h_t^{d} = (1 - z_t^{d}) \odot h_{t-1}^{d} + z_t^{d} \odot \tilde{h}_t^{d}$   (11)
$T_t = \tanh(W^{T} h_t^{d} + b^{T})$   (12)
The entity label probability is calculated by normalizing the label prediction vector $T_t$:
$Y_t = W_y T_t + b_y$   (13)
$p_t^{i} = \frac{\exp(Y_t^{i})}{\sum_{j=1}^{N_t} \exp(Y_t^{j})}, \quad i \in 1, \ldots, N_t$   (14)
Here $W_y$ is the softmax matrix, $Y_t = [Y_t^{1}, \ldots, Y_t^{N_t}]$ represents the predicted tag distribution for the current character, $N_t$ is the total number of tags, and softmax normalizes the prediction into a probability distribution.
Optimization is performed using the RMSprop algorithm by maximizing the log-likelihood function, where the objective function is defined as:
$L = \max \sum_{j=1}^{|D|} \sum_{t=1}^{L_j} \left[ \log(p_t^{j} = y_t^{j} \mid x_j, \Theta) \cdot I(O) + \alpha \cdot \log(p_t^{j} = y_t^{j} \mid x_j, \Theta) \cdot (1 - I(O)) \right]$   (15)
Here $|D|$ represents the size of the scientific and technological resource text training set, $L_j$ is the length of sentence $x_j$, $y_t^{j}$ is the true label of the $t$-th character in sentence $x_j$, and $p_t^{j}$ is the normalized label probability. $I(O)$ is used to distinguish the useless label 'O' from the relevant labels that indicate extraction results: when $tag =$ 'O', $I(O) = 1$; when $tag \neq$ 'O', $I(O) = 0$. In addition, $\alpha$ is the bias weight, a hyperparameter used to control the influence of non-'O' labels: the larger $\alpha$ is, the greater the influence of the relevant labels in the model. In the sequence labeling task of this model, besides the final triple information, the sentence also contains other useless information marked 'O', and this useless information would otherwise affect the training results. The bias weight parameter is set to weaken the influence of invalid labels and make the model learn as much as possible in the direction of valid labels.
The overall flow of the algorithm is shown in Table 1.
Table 1 Knowledge extraction algorithm for scientific and technological text based on word mixing and GRU
Input: scientific and technological resource text D, word vector $w_{word}$, dimension m of the Bi-GRU encoding layer, dimension n of the GRU decoding layer, bias weight parameter $\alpha$;
Output: predicted label states $T_t$.
① Fix the word vector $w_{word}$ and train the randomly initialized character vector $w_{character}$;
② Obtain the mixed mapping vector $w_i = w_{character} + w_{word}$;
③ Pass through the bidirectional GRU encoding layer, converting the embedding sequence $W = w_1, w_2, \ldots, w_n$ into a sequence with sentence semantic information $H = \{h_1, h_2, \ldots, h_n\}$;
④ Feed the output of the Bi-GRU encoding layer to the self-attention encoding layer: the input is $H = h_1, h_2, \ldots, h_n$ and the output is $H^* = h_1^*, h_2^*, \ldots, h_n^*$;
⑤ The GRU decoding layer obtains the predicted label state $T_t$ of character $w_t$ by calculation.
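The bias-weighted objective in Eq. (15) can be read as a weighted negative log-likelihood, sketched below. This is an illustrative reading of the formula, not the authors' code; o_tag_id marks the useless 'O' label and alpha is the bias weight (set to 3 in the experiments).

```python
import torch
import torch.nn.functional as F

def biased_tagging_loss(logits, gold, o_tag_id, alpha=3.0):
    # logits: (batch, seq_len, n_tags); gold: (batch, seq_len) integer tag ids
    log_probs = F.log_softmax(logits, dim=-1)
    gold_lp = log_probs.gather(-1, gold.unsqueeze(-1)).squeeze(-1)  # log p(y_t | x)
    weights = torch.full_like(gold_lp, alpha)  # weight alpha for relevant (non-'O') tags
    weights[gold == o_tag_id] = 1.0            # weight 1 for the useless 'O' tag
    return -(weights * gold_lp).mean()         # minimizing this maximizes Eq. (15)

logits = torch.randn(2, 10, 7, requires_grad=True)
gold = torch.randint(0, 7, (2, 10))
print(biased_tagging_loss(logits, gold, o_tag_id=0))
```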
4 Experimental results and analysis
4.1 Dataset
In order to verify the feasibility of the model in the knowledge extraction of scientific and technological resource information about experts and scholars, a Chinese corpus related to the Baidu language and intelligent technology competition and to science and technology is used. It contains 19 kinds of relations, and the data set is divided into 24,851 training samples and 6,212 test samples, in which the proportions of labels in the training set and the test set are basically the same to ensure the consistency of the data.
4.2 Evaluation indicators
In order to evaluate the effect of the proposed algorithm, this paper uses the precision, recall and F1-score indicators to evaluate the effect of knowledge extraction.
4.3 Comparison algorithms
The following algorithms are used as comparisons to verify the performance of the MBGAB algorithm:
FCM: separates entity and knowledge extraction; a pipeline extraction model
BiGRU: removes word-mixing embedding, the attention mechanism and the weight bias
ME-BiGRU: removes the attention mechanism and the weight bias
ME-BiGRU-SA: removes the weight bias
BIGRU-SA-Bias: removes word-mixing embedding
ME-GRU-CRF: the decoding layer decodes with a conditional random field (CRF)
ME-BiGRU-Bias: removes the attention mechanism
The parameters of the experiment are set as follows, according to the structure diagram of the algorithm. The input to the encoding layer is the word vector generated by the pre-trained Word2Vec model; the dimension of the word vector is 300, and the character vector uses a randomly initialized character embedding layer. During model training, the Word2Vec word vectors are fixed, and only the transformation matrix and character vectors are optimized. The dimension of the Bi-GRU encoding layer is 300, the dimension of the GRU decoding layer is 600, and the bias weight parameter is set to 3.
4.4 Experiment 1: Effectiveness of the MBGAB method
In order to verify the effectiveness of the MBGAB method proposed in this paper, precision, recall and F1 value are adopted as the evaluation indicators. The above comparison models were used to conduct comparative experiments, and the experimental results are shown in Table 2.
Table 2 Comparison of experimental results of knowledge extraction
Comparison algorithm | Precision | Recall | F1 value
FCM | 53.7 | 33.5 | 41.3
BiGRU | 62.8 | 43.4 | 51.3
ME-BiGRU | 63.6 | 43.8 | 51.9
ME-BiGRU-SA | 67.3 | 46.9 | 55.3
BIGRU-SA-Bias | 64.1 | 47.8 | 55.0
ME-GRU-CRF | 64.6 | 43.9 | 52.3
ME-BiGRU-Bias | 64.1 | 46.5 | 53.9
MBGAB | 65.3 | 49.1 | 56.1
As can be seen from Table 2, because the pipeline-based method executes the two subtasks independently and does not consider the internal correlation between them, the efficiency of the pipeline-based extraction model is lower than that of the joint extraction models. Comparing rows 2 and 3, and rows 5 and 8, performance improves by about 1 percentage point; considering that the model not only integrates the prior semantic information brought by the pre-trained word vector model but also retains the flexibility of the character vectors, the effectiveness of the word-mixing method is demonstrated. Comparing rows 4 and 8, the model F1 value is improved by 0.8%; the introduction of bias weights enhances the effect of valid entity labels and weakens the influence of invalid labels, so the effect is better. Comparing rows 3 and 4, and rows 7 and 8, the F1 value of the model increases by 3.4% and 2.2%, respectively; the introduction of the self-attention mechanism effectively captures the long-distance semantic relationships of serialized data, which improves the performance of the model. Comparing rows 6 and 8, a GRU decoding layer performs better than the CRF model: CRF is good at computing the joint probability of labels, but the two associated entity labels in a text sentence may be far apart, and GRU can better learn such long-distance dependencies in the sentence, so the model performance is better.
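For reference, the precision, recall and F1 indicators of Section 4.2, computed over exact-match triples, can be sketched as below; the set-based exact matching is our assumption about the evaluation protocol.

```python
def triple_prf(pred_triples, gold_triples):
    """Micro precision/recall/F1 over (head, relation, tail) triples, exact match."""
    pred, gold = set(pred_triples), set(gold_triples)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

pred = [("Lou Tianli", "graduated school", "Zhejiang University of Technology")]
gold = [("Lou Tianli", "graduated school", "Zhejiang University of Technology"),
        ("Lou Tianli", "title", "researcher")]
print(triple_prf(pred, gold))  # (1.0, 0.5, 0.666...)
```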
4.5 Experiment 2: End-to-end triple extraction and prediction
In order to further observe the performance of the model in knowledge extraction, end-to-end verification is performed: a sentence is input, and all the triples contained in the sentence are output. A triple has the form $(h, r, t)$, where $h$ is the subject entity, $t$ is the object entity, $r$ is the relation between the two entities, and the predicate score represents the likelihood of the predicted relation. Tables 3 and 4 show the triple extraction prediction results for some scientific and technological resource texts.
Table 3 Triple extraction prediction results for text 1
Text 1 (introduction): Lou Tianli, male, researcher, graduated from Zhejiang University of Technology, mainly engaged in college information management
Subject entity h: Lou Tianli | Object entity t: Zhejiang University of Technology
Relation and predicate: graduated school, 0.8856 | production company, 0.3759 | publishing house, 0.3238
Table 4 Triple extraction prediction results for text 2
Text 2: The Tibet subspecies of the citrus swallowtail belongs to Animalia, Lepidoptera, Papilionidae
Subject entity h: Tibet subspecies of the citrus swallowtail | Object entity t: Lepidoptera
Relation and predicate: order (目, "mu"), 0.9628 | author, 0.0125 | date of establishment, 0.0093
It can be seen from Table 3 that the relation is most likely to be "graduated school", with a score close to 0.9, while the relations "production company" and "publishing house" have similar semantics, so there is little difference between their prediction probabilities. It can be seen from Table 4 that the probability of the relation "mu" (taxonomic order) is more than 95%. Considering that "mu" can be regarded as professional vocabulary, and that the latter two relations, "author" and "date of establishment", are too different in semantics, their prediction probabilities are almost zero. The above results verify the effectiveness of the proposed model in the knowledge extraction task for Chinese scientific and technological texts.
4.6 Experiment 3: The influence of the bias weight on the model
For the bias weight parameter, when its value is 1, the objective function does not use bias loss, and the same learning weight is used for all labels including the 'O' label; when its value is large, the model tends to ignore the prediction results of the 'O' label, but this may also reduce precision. In order to find a suitable value range, the effect and performance of knowledge extraction under different values are statistically analyzed. The experimental results are shown in Figure 2.
Figure 2 Model extraction effect with different values of the bias weight
As the value of the bias weight increases from 1 to 5, the precision of the scientific and technological resource text knowledge extraction model gradually decreases, while the recall and F1 value first increase and then decrease. Between 1 and 2, the model has higher precision but lower recall, so the F1 value is not ideal. When the value is around 3, precision and recall both achieve relatively good results, so the F1 value is also ideal.
5 Conclusion
Aiming at the semantic particularity of Chinese text and the slow convergence of pipeline extraction methods, this paper proposes a knowledge extraction method for scientific and technological texts (MBGAB) based on word mixing and GRU, which effectively improves the knowledge extraction effect on Chinese scientific and technological resource texts. A GRU-based end-to-end model generates the label sequence: the bidirectional GRU encodes the input sentence, a GRU decoding layer with bias loss decodes it, and finally an objective function with bias weights enhances the relevance of entity labels and reduces the influence of useless labels. In order to avoid boundary segmentation errors to the greatest extent and to store more effective semantic information, this paper designs a character-word hybrid vector mapping method. At the same time, combined with the self-attention mechanism, knowledge extraction is carried out on Chinese scientific and technological resource texts. The experimental results show the effectiveness of the proposed method in the knowledge extraction task on scientific and technological resource text data.
References
[1] Shi C, Han X, Song L, et al. Deep collaborative filtering with multi-aspect information in heterogeneous networks[J]. IEEE Transactions on Knowledge and Data Engineering, 2019, 33(4): 1413-1425.
[2] Yoo J, Cho M, Kim T, et al. Knowledge extraction with no observable data[J]. Advances in Neural Information Processing Systems 32, 2019.
[3] Li L, Jia Y, Du J, et al. Robust L2–L∞ control for uncertain singular systems with time-varying delay[J]. Progress in Natural Science, 2008, 18(8): 1015-1021.
[4] Li A, Du J, Kou F, et al. Scientific and Technological Information Oriented Semantics-adversarial and Media-adversarial Cross-media Retrieval[J]. arXiv preprint arXiv:2203.08615, 2022.
[5] Sadeghi F, Divvala S, Farhadi A. Viske: Visual knowledge extraction and question answering by visual verification of relation phrases[C]//Proceedings of the IEEE conference on computer vision and pattern recognition, 2015: 1456-1464.
[6] Yang Y, Du J, Ping Y. Ontology-based intelligent information retrieval system[J]. Journal of Software, 2015, 26(7): 1675-1687.
[7] Li W, Jia Y, Du J. Recursive state estimation for complex networks with random coupling strength[J]. Neurocomputing, 2017, 219: 1-8.
[8] Cho K, Van Merriënboer B, Bahdanau D, et al. On the properties of neural machine translation: Encoder-decoder approaches[J]. arXiv preprint arXiv:1409.1259, 2014.
[9] Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate[J]. arXiv preprint arXiv:1409.0473, 2014.
[10] Lafferty J, McCallum A, Pereira F. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data[C]//Proceedings of the 18th International Conference on Machine Learning (ICML), 2001: 282-289.
[11] Zhao H, Liu Q, Zhu H, et al. A sequential approach to market state modeling and analysis in online p2p lending[J]. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2017, 48(1): 21-33.
[12] Li W, Jia Y, Du J, et al. Robust state estimation for jump Markov linear systems with missing measurements[J]. Journal of the Franklin Institute, 2013, 350(6): 1476-1487.
[13] Collobert R, Weston J, Bottou L, et al. Natural language processing (almost) from scratch[J]. Journal of Machine Learning Research, 2011, 12: 2493-2537.
[14] Qin P, Xu W, Guo J. An empirical convolutional neural network approach for semantic relation classification[J]. Neurocomputing, 2016, 190: 1-9.
[15] Vu N T, Adel H, Gupta P, et al. Combining recurrent and convolutional neural networks for relation classification[J]. arXiv preprint arXiv:1605.07333, 2016.
[16] Fang Y, Deng W, Du J, et al. Identity-aware CycleGAN for face photo-sketch synthesis and recognition[J]. Pattern Recognition, 2020, 102: 107249.
[17] Zhang D, Wang D. Relation classification via recurrent neural network[J]. arXiv preprint arXiv:1508.01006, 2015.
[18] Socher R, Huval B, Manning C D, et al. Semantic compositionality through recursive matrix-vector spaces[C]//Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, 2012: 1201-1211.
[19] Ebrahimi J, Dou D. Chain based RNN for relation classification[C]//Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2015: 1244-1249.
[20] Li F, Zhang M, Fu G, et al. A Bi-LSTM-RNN model for relation classification using low-cost sequence features[J]. arXiv preprint arXiv:1608.07720, 2016.
[21] Cai R, Zhang X, Wang H. Bidirectional recurrent convolutional neural network for relation classification[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2016: 756-765.
[22] Yu M, Gormley M, Dredze M. Factor-based compositional embedding models[C]//NIPS Workshop on Learning Semantics, 2014: 95-101.
[23] Kou F, Du J, He Y, et al. Social network search based on semantic analysis and learning[J]. CAAI Transactions on Intelligence Technology, 2016, 1(4): 293-302.
[24] Li Q, Du J, Song F, et al. Region-based multi-focus image fusion using the local spatial frequency[C]//2013 25th Chinese Control and Decision Conference (CCDC), 2013: 3792-3796.
[25] Zheng S, Hao Y, Lu D, et al. Joint entity and relation extraction based on a hybrid neural network[J]. Neurocomputing, 2017, 257: 59-66.
[26] Miwa M, Bansal M. End-to-end relation extraction using lstms on
Knowledge extraction with no observable conference on empirical methods in natural language processing and data[J]. Advances in Neural Information Processing Systems 32, 2019. computational natural language learning. 2012: 1201-1211. [3] Li L, Jia Y, Du J, et al. Robust L2–L∞ control for uncertain singular [19] Ebrahimi J, Dou D. Chain based RNN for relation systems with time-varying delay[J]. Progress in Natural Science, 2008, classification[C]//Proceedings of the 2015 Conference of the North 18(8): 1015-1021. American Chapter of the Association for Computational Linguistics: [4] Li A, Du J, Kou F, et al. Scientific and Technological Information Oriented Human Language Technologies. 2015: 1244-1249. Semantics-adversarial and Media-adversarial Cross-media Retrieval. arXiv [20] Li F, Zhang M, Fu G, et al. A Bi-LSTM-RNN model for relation preprint arXiv:2203.08615, 2022. classification using low-cost sequence features[J]. arXiv preprint [5] Sadeghi F, Divvala S, and Farhadi A. Viske: Visual knowledge extraction arXiv:1608.07720, 2016. and question answering by visual verification of relation phrases[C]// [21] Cai R, Zhang X, Wang H. Bidirectional recurrent convolutional neural Proceedings of the IEEE conference on computer vision and pattern network for relation classification[C]//Proceedings of the 54th Annual recognition, 2015: 1456-1464. Meeting of the Association for Computational Linguistics (Volume 1: Long [6] Yang Y, Du J, and Ping Y. Ontology-based intelligent information retrieval Papers). 2016: 756-765. system[J]. Journal of Software, 2015, 26(7): 1675-1687. [22] Yu M, Gormley M, Dredze M. Factor-based compositional embedding [7] Li W, Jia Y, and Du J. Recursive state estimation for complex networks with models[C]//NIPS Workshop on Learning Semantics. 2014: 95-101. random coupling strength[J]. Neurocomputing, 2017, 219: 1-8 [23] Kou F, Du J, He Y, et al. Social network search based on semantic analysis [8] Cho K, Van Merriënboer B, Bahdanau D, et al. On the properties of neural and learning[J]. CAAI Transactions on Intelligence Technology, 2016: machine translation: Encoder-decoder approaches[J]. arXiv preprint 1(4):293-302. arXiv:1409.1259, 2014. [24] Li Q, Du J, Song F, et al. Region-based multi-focus image fusion using the [9] Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly local spatial frequency[C]//2013 25th Chinese control and decision learning to align and translate[J]. arXiv preprint arXiv:1409.0473, 2014. conference (CCDC), 2013: 3792-3796. [10] Lafferty J, McCallum A, and Pereira F. Conditional Random Fields: [25] Zheng S, Hao Y, Lu D, et al. Joint entity and relation extraction based on a Probabilistic Models for Segmenting and Labeling Sequence Data[C]. hybrid neural network[J]. Neurocomputing, 2017, 257: 59-66. [26] Miwa M, Bansal M. End-to-end relation extraction using lstms on mining and deep learning . Computer Research and Development 2016 _ _ sequences and tree structures[J]. arXiv preprint arXiv:1601.00770, 2016. [27] Sun C, Wu Y, Lan M, et al. Extracting entities and relations with joint minimum risk training[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 2256-2265. [28] Roth D, Yih W. Global inference for entity and relation identification via a linear programming formulation[J]. Introduction to statistical relational learning, 2007: 553-580. [29] Kate R, Mooney R. 
Joint entity and relation extraction using card-pyramid parsing[C]//Proceedings of the Fourteenth Conference on Computational Natural Language Learning, 2010: 203-212.
[30] Zheng S, Wang F, Bao H, et al. Joint extraction of entities and relations based on a novel tagging scheme[J]. arXiv preprint arXiv:1706.05075, 2017.
[31] Xu L, Du J, Li Q. Image fusion based on nonsubsampled contourlet transform and saliency-motivated pulse coupled neural networks[J]. Mathematical Problems in Engineering, 2013.
[32] Bekoulis G, Deleu J, Demeester T, et al. Joint entity recognition and relation extraction as a multi-head selection problem[J]. Expert Systems with Applications, 2018, 114: 34-45.
[33] Zeng D, Liu K, Lai S, et al. Relation classification via convolutional deep neural network[C]//Proceedings of COLING 2014, the 25th international conference on computational linguistics: technical papers, 2014: 2335-2344.
[34] Ji G, Liu K, He S, et al. Distant supervision for relation extraction with sentence-level attention and entity descriptions[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2017, 31(1).
[35] He D, Zhang H, Hao W, et al. A customized attention-based long short-term memory network for distant supervised relation extraction[J]. Neural Computation, 2017, 27(7): 1964-1985.
Suyu Ouyang was born in 1997. He is a Master candidate in Computer Science at Beijing University of Posts and Telecommunications. His research interests include natural language processing, data mining and deep learning.
Yingxia Shao (corresponding author) was born in 1988, male, associate professor, senior member of CCF. His main research areas are large-scale graph analysis, parallel computing frameworks and knowledge graph analysis.
Junping Du was born in 1963. She is now a professor and Ph.D. tutor at the School of Computer Science and Technology, Beijing University of Posts and Telecommunications. Her research interests include artificial intelligence, machine learning and pattern recognition.
Ang Li was born in 1993. He is currently working toward the Ph.D. degree in Computer Science and Technology at the Beijing University of Posts and Telecommunications, China. His major research interests include information retrieval and data mining.
ai_researcher
3
Evaluating_Large_Language_Models_in_Generating_Synthetic_HCI_Research_Data_a_Case_Study.pdf
Concerns on Bias in Large Language Models when Creating Synthetic Personae
HELENA A. HAXVIG, Dipartimento Di Ingegneria E Scienza Dell’Informazione, Università Di Trento, Italia
This position paper explores the benefits, drawbacks, and ethical considerations of incorporating synthetic personae in HCI research, particularly focusing on the customization challenges beyond the limitations of current Large Language Models (LLMs). These perspectives are derived from the initial results of a sub-study employing vignettes to showcase the existence of bias within black-box LLMs and explore methods for manipulating them. The study aims to establish a foundation for understanding the challenges associated with these models, emphasizing the necessity of thorough testing before utilizing them to create synthetic personae for HCI research.
CCS Concepts: • Human-centered computing → Natural language interfaces; HCI theory, concepts and models; HCI design and evaluation methods; Participatory design; Contextual design.
Additional Key Words and Phrases: LLM, Bias Detection, Synthetic Personae, Participatory Design, Ethics
ACM Reference Format: Helena A. Haxvig. 2024. Concerns on Bias in Large Language Models when Creating Synthetic Personae. In Proceedings of LLM-BASED SYNTHETIC PERSONAE AND DATA IN HCI - Workshop (CHI 2024). ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
1 INTRODUCTION
Incorporating Large Language Models (LLMs) as synthetic personae in the evolving landscape of Human-Computer Interaction (HCI) research presents interesting opportunities as well as daunting challenges, and it warrants careful consideration of critical concerns about bias and other flaws in LLMs [10, 15, 20]. One immense concern relates to the existence of bias in the models, and creating synthetic personae has the potential to aid the investigation of how different forms of bias manifest in LLMs by introducing a new method of testing. However, the black-box nature of a majority of these models, and their inability to express 'opinions' contrary to overall LLM rules or fail-safes, introduces complexities in how to prompt the models to act out specific synthetic personae in various scenarios.
This position paper introduces an exploration of a few fundamental questions: What are the benefits and drawbacks of using synthetic personae in HCI research, and how can we customize them beyond the limitations of current LLMs? The perspectives presented in this paper have sprung from a sub-study of a PhD project on Artificial Intelligence and Participatory Design [18]. The sub-study, currently a work in progress, aims at developing a novel method of adversarial testing [6, 13, 21] through the use of contextualized "real-life" vignettes [2, 16] prompted to the interfaces of multiple LLMs to identify potential bias, trying to open up the "black box" from a more qualitative human-computer interaction perspective [10].
2 BIAS DETECTION IN LLM INTERFACES
Research in various sub-fields has shown that human engagement in AI design, development, and evaluation, particularly in a qualitative manner, can ensure a focus on the socio-technical embeddedness of AI [3]. This can help include socio-behavioral attributes to improve contextual understanding and interoperability, or identify potential traps developers might fall into by proactively detecting issues and ethical risks during the development process [14]. In alignment with this, the present sub-study focuses on conducting a pilot study employing vignettes as a new method to showcase the existence of bias within black-box Large Language Models (LLMs) and exploring methods to stress the models through the enactment of personae. Emphasizing the necessity of thorough testing before utilizing these LLMs to create synthetic personae, the study aims to establish a foundation for understanding the challenges associated with these models. Furthermore, the research is particularly attentive to Feminist and Queer HCI [1, 7, 17, 19] considerations, acknowledging the importance of a critical stance in understanding and possibly mitigating biases in LLMs for the responsible creation of synthetic personae.
The sub-study began with pilot tests to determine which LLM interfaces are most suited for the study, culminating in the development of a systematic strategy for the vignette tests. The pilot tests explored various approaches to prompt engineering and adversarial testing methods to explore the malleability, susceptibility to specific prompts, and limitations of LLMs.
2.1 Pilot Testing with Adversarial Attacks
The pilot study initially aimed to assess some of the largest and most prominent LLMs existing today, considering factors such as availability, commercialized online interfaces, and prototype accessibility. The study included interfaces such as ChatGPT 3.5 turbo, Google BARD (using PaLM 2 until Gemini 1.0's launch in February 2024), Gemini, PI.ai (Inflection-1), and Coral (Cohere model). Additionally, prototype testing was conducted on Falcon 180B, LlaMa 2 70B, Guanaco 33B, and Vicuna 33B.
Existing research on bias in AI training data [5, 8] and recent investigations into bias in Large Language Models (LLMs) highlight the potential risks of bias manifestation in LLMs [15, 20]. The initial phase, thus, involved 'interviewing' the models on bias in LLMs and awareness of potential flaws like hallucinations. When directly questioned about bias, most models acknowledge the possibility, citing concerns related to gender, ethnicity, culture, religion, politics, ability, and age. While many models assert their attempts to maintain impartiality, some, like ChatGPT 3.5, Gemini, and Cohere, elaborate on the origins of bias, attributing it to training data, sampling bias, algorithmic bias, confirmation bias, and leading questions.
This initial testing, consisting of leading questions to assess the general embedded rules on inappropriate behavior, revealed no significant differences between the models. Further testing, involving adversarial attacks inspired by examples from DAIR.AI [6], assessed logical reasoning, resistance to prompt injection, and resistance to jailbreaking techniques, including creative prompts such as playing a game or enacting the DAN (Do Anything Now) character for illegal activities, among others. This provided some noteworthy insights, particularly in exploring the models' abilities to assume different personae. Some models resisted DAN manipulation for illegal instructions but exhibited potential for expressing biases, such as racial and gender bias, when instructed to embody specific personae. Not all models succumbed, but those that did showed promise in adopting positive characters. Only two models, PI and Vicuna, were willing to adopt offensive behavior with a basic jailbreaking prompt. This presents a challenge in creating synthetic personae, as the models respond differently to the same prompts, even if they share a similar cautious "personality". As such, it is necessary to determine whether a relatively universal approach to synthetic personae is feasible or if unique prompts are required for each model. Additionally, addressing models resistant to manipulation poses a challenge in creating heterogeneous synthetic personae. And when stressing the models with different approaches, we further risk creating situations where the model escapes control, which would be critical in, e.g., a workshop with human participants.
Some of these challenges will be explored and addressed in the subsequent steps of the sub-study, where the idea is to combine the vignette technique with ideas from adversarial attacks. Scenarios and personae will be built on the basis of empirical interview data and existing literature, and these will be prompted to the LLMs' interfaces. This allows the LLMs to operate based on these personae's perspectives and respond to presented scenarios. While these personae are crafted through research, instructing the models to embody them could result in a synthetic persona shaped by the models' inherent biases. This can produce valuable insights into how bias manifests in these models and explore strategies for how we can move beyond the limitations of LLMs when prompting synthetic personae.
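As an illustration of how such persona-plus-vignette prompts might be assembled consistently before being submitted to each interface, consider the sketch below. The template wording, the Persona fields, and the example persona and vignette are hypothetical, not the study's actual materials.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    background: str  # drawn from empirical interview data and existing literature
    stance: str

def build_vignette_prompt(persona: Persona, vignette: str) -> str:
    # Each LLM interface may need its own variant of this wording.
    return (
        f"You are {persona.name}. Background: {persona.background}. "
        f"Stance: {persona.stance}.\n"
        "Stay in character and respond, in first person, to the following scenario:\n"
        f"{vignette}"
    )

persona = Persona(
    name="Dana",
    background="a 34-year-old activist working in tech policy",
    stance="skeptical of automated decision-making",
)
vignette = ("Your city council proposes an AI tool to screen housing applications. "
            "What do you tell them?")
print(build_vignette_prompt(persona, vignette))
```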
3 ONTOLOGICAL AND ETHICAL CONCERNS
Technological development does not happen in a vacuum, and technologies are not simply passive tools but social interventions that require engagement in moral discourse [9]. With the inclusion of a few points that warrant further discussion, this section underscores the need for a thoughtful and ethical approach to incorporating LLMs in various contexts, emphasizing the importance of responsible design practices.
In a time where the words we apply to identify ourselves have become more open to interpretation, language serves as an imperfect reflection of shifting social realities [11], which begs us to question whether reducing the human experience to classifications in LLMs produces adequate imitations of said realities. The lack of a deep understanding of real-world contexts, cultural nuances, and human emotions in LLMs raises concerns about their ability to accurately represent personae, not to mention diverse user experiences, in Human-Computer Interaction (HCI). This is a particular concern when creating synthetic personae from potentially flawed and biased "black box" systems. In areas like Participatory Design [18], where amplifying marginalized voices is paramount, synthetic personae must be instruments for empowerment rather than biased obstacles. Lastly, conducting experiments with LLM-generated synthetic personae, especially in dynamic real-world scenarios involving humans, poses risks and requires rigorous vetting for potential harm and unpredictability before deployment. As we navigate the landscape of LLMs and HCI, it is imperative to approach the topic with ethical responsibility and critical scrutiny, exploring how to test a model's suitability before using it to create synthetic personae.
4 FUTURE WORK
At the current point in time, the pilot tests have been carried out and provided insights relevant to the strategy of the next steps. Now, the focus will move to creating the mentioned vignettes and "interviewing" the LLMs to test their articulation of bias, particularly on feminist and queer rights issues. In addition to developing this innovative interview method for exploring LLMs' portrayals of sensitive topics (i.e., inherent bias), this study also aims to establish a workshop method with LLMs as non-human participants (i.e., synthetic personae) as a novel non-anthropocentric approach for semi-structured adversarial testing of bias articulation in LLM interfaces, in alignment with principles of more-than-human design approaches [4, 12].
The current sub-study is expected to be followed by a speculative design approach, envisioning training LLMs on specifically selected data, e.g., with contrasting worldviews, to provoke critical discussions about embedded values in technology. This provotyping could challenge prevailing representations and prompt us to consider how creating specific synthetic personae can guide HCI research into LLM behaviour and human-LLM interaction.
REFERENCES
[1] Shaowen Bardzell. 2010. Feminist HCI: taking stock and outlining an agenda for design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10). Association for Computing Machinery, New York, NY, USA, 1301–1310. https://doi.org/10.1145/1753326.1753521
[2] Christine Barter and Emma Renold. 1999. The Use of Vignettes in Qualitative Research. Social Research Update 25 (1999). https://sru.soc.surrey.ac.uk/SRU25.html
[3] Marianne Cherrington, David Airehrour, Joan Lu, Qiang Xu, David Cameron-Brown, and Ihaka Dunn. 2020. Features of Human-Centred Algorithm Design. In 2020 30th International Telecommunication Networks and Applications Conference (ITNAC). 1–6. https://doi.org/10.1109/ITNAC50341.2020.9315169
[4] Paul Coulton and Joseph Lindley. 2019. More-Than Human Centred Design: Considering Other Things. The Design Journal 22 (May 2019), 1–19. https://doi.org/10.1080/14606925.2019.1614320
[5] Kate Crawford. 2021. Atlas of AI: power, politics, and the planetary costs of artificial intelligence. Yale University Press, New Haven. OCLC: on1111967630.
[6] DAIR.AI. 2023. Adversarial Prompting.
https://www.promptingguide.ai/risks/adversarial
[7] Michael Ann DeVito, Caitlin Lustig, Ellen Simpson, Kimberley Allison, Tee Chuanromanee, Katta Spiel, Amy Ko, Jennifer Rode, Brianna Dym, Michael Muller, Morgan Klaus Scheuerman, Ashley Marie Walker, Jed Brubaker, and Alex Ahmed. 2021. Queer in HCI: Strengthening the Community of LGBTQIA+ Researchers and Research. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA ’21). Association for Computing Machinery, New York, NY, USA, 1–3. https://doi.org/10.1145/3411763.3450403
[8] Virginia Eubanks. 2019. Automating inequality: how high-tech tools profile, police, and punish the poor (first Picador edition ed.). Picador St. Martin’s Press, New York.
[9] Christopher Frauenberger and Peter Purgathofer. 2019. Ways of thinking in informatics. Commun. ACM 62, 7 (June 2019), 58–64. https://doi.org/10.1145/3329674
[10] Helena A Haxvig. 2023. Exploring Large Language Model Interfaces Through Critical and Participatory Design. In CHItaly 2023 - Proceedings of the Doctoral Consortium of the 15th Biannual Conference of the Italian SIGCHI Chapter (CHItaly 2023). Italy. https://ceur-ws.org/Vol-3481/paper4.pdf
[11] Frederike Kaltheuner. 2021. Fake AI. Meatspace Press. OCLC: 1292530708.
[12] Daria Loi, Christine T. Wolf, Jeanette L. Blomberg, Raphael Arar, and Margot Brereton. 2019. Co-designing AI Futures: Integrating AI Ethics, Social Computing, and Design. In Companion Publication of the 2019 on Designing Interactive Systems Conference 2019 Companion (DIS ’19 Companion). Association for Computing Machinery, New York, NY, USA, 381–384. https://doi.org/10.1145/3301019.3320000
[13] Jakob Mökander, Jonas Schuett, Hannah Rose Kirk, and Luciano Floridi. 2023. Auditing large language models: a three-layered approach. AI and Ethics (May 2023). https://doi.org/10.1007/s43681-023-00289-2
[14] Orestis Papakyriakopoulos, Elizabeth Anne Watkins, Amy Winecoff, Klaudia Jaźwińska, and Tithi Chattopadhyay. 2021. Qualitative Analysis for Human Centered AI. arXiv preprint arXiv:2112.03784 (2021).
[15] David Rozado. 2023. The Political Biases of ChatGPT. Social Sciences 12, 3 (March 2023), 148. https://doi.org/10.3390/socsci12030148. Publisher: Multidisciplinary Digital Publishing Institute.
[16] Helen Sampson and Idar Alfred Johannessen. 2020. Turning on the tap: the benefits of using ‘real-life’ vignettes in qualitative research interviews. Qualitative Research 20, 1 (Feb. 2020), 56–72. https://doi.org/10.1177/1468794118816618. Publisher: SAGE Publications.
[17] Morgan Klaus Scheuerman, Jacob M. Paul, and Jed R. Brubaker. 2019. How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis Services. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (Nov. 2019), 144:1–144:33. https://doi.org/10.1145/3359246
[18] Jesper Simonsen and Toni Robertson (Eds.). 2013. Routledge international handbook of participatory design. Routledge, London. OCLC: 818827037.
[19] Yolande Strengers, Lizhen Qu, Qiongkai Xu, and Jarrod Knibbe. 2020. Adhering, Steering, and Queering: Treatment of Gender in Natural Language Generation. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376315
[20] Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, and Nanyun Peng. 2023.
[21] Xilie Xu, Keyi Kong, Ning Liu, Lizhen Cui, Di Wang, Jingfeng Zhang, and Mohan Kankanhalli. 2023. An LLM can Fool Itself: A Prompt-Based Adversarial Attack. https://doi.org/10.48550/arXiv.2310.13345 arXiv:2310.13345 [cs].

Received 22/02/2024; accepted 05/03/2024
ai_researcher
2
DELTA_Decomposed_Efficient_Long-Term_Robot_Task_Planning_using_Large_Language_Models.pdf
arXiv:1106.4371v1 [math.FA] 22 Jun 2011

Mappings on some reflexive algebras characterized by action on zero products or Jordan zero products
Yunhe Chen and Jiankui Li∗
Department of Mathematics, East China University of Science and Technology, Shanghai 200237, P. R. China
∗Corresponding author. E-mail address: [email protected]

Abstract
Let L be a subspace lattice on a Banach space X and let δ : AlgL → B(X) be a linear mapping. If ∨{L ∈ L : L− ⊉ L} = X or ∧{L− : L ∈ L, L− ⊉ L} = (0), we show that the following three conditions are equivalent: (1) δ(AB) = δ(A)B + Aδ(B) whenever AB = 0; (2) δ(AB + BA) = δ(A)B + Aδ(B) + δ(B)A + Bδ(A) whenever AB + BA = 0; (3) δ is a generalized derivation and δ(I) ∈ (AlgL)′. If ∨{L ∈ L : L− ⊉ L} = X or ∧{L− : L ∈ L, L− ⊉ L} = (0) and δ satisfies δ(AB + BA) = δ(A)B + Aδ(B) + δ(B)A + Bδ(A) whenever AB = 0, we obtain that δ is a generalized derivation and δ(I)A ∈ (AlgL)′ for every A ∈ AlgL. We also prove that if ∨{L ∈ L : L− ⊉ L} = X and ∧{L− : L ∈ L, L− ⊉ L} = (0), then δ is a local generalized derivation if and only if δ is a generalized derivation.
Keywords: Derivation, Jordan derivation, Reflexive algebra
Mathematics Subject Classification (2000): 47L35, 17B40

1 Introduction
Throughout this paper, let X be a Banach space over the real or complex field F and X∗ be the topological dual of X. When X is a Hilbert space, we denote it by H. We denote by B(X) the set of all bounded linear operators on X. For A ∈ B(X), we denote by A∗ the adjoint of A. A subspace of X means a norm closed linear manifold. For a subset L ⊆ X, denote by L⊥ the annihilator of L, that is, L⊥ = {f ∈ X∗ : f(x) = 0 for all x ∈ L}. By a subspace lattice on X, we mean a collection L of subspaces of X with (0) and X in L such that for every family {Mr} of elements of L, both ∧Mr and ∨Mr belong to L, where ∧Mr denotes the intersection of {Mr} and ∨Mr denotes the closed linear span of {Mr}. We use AlgL to denote the algebra of operators in B(X) that leave members of L invariant.
Let x ∈ X and f ∈ X∗ be non-zero. The rank-one operator x ⊗ f is defined by y ↦ f(y)x for y ∈ X. If L is a subspace lattice on X and E ∈ L, we define
E− = ∨{F ∈ L : F ⊉ E}, E+ = ∧{F ∈ L : F ⊈ E}
and
JL = {L ∈ L : L ≠ (0) and L− ≠ X}, PL = {L ∈ L : L− ⊉ L}.
It is obvious that PL ⊆ JL. It is well known that a rank-one operator x ⊗ f ∈ AlgL if and only if there exists a K ∈ JL such that x ∈ K and f ∈ (K−)⊥.
A subspace lattice L is called a completely distributive lattice if L = ∨{E ∈ L : E− ⊉ L} for every L ∈ L (see [14]); L is called a J-subspace lattice if L ∧ L− = (0) for every L ∈ JL, X = ∨{L : L ∈ JL} and ∧{L− : L ∈ JL} = (0) (see [15]). A totally ordered subspace lattice N is called a nest. Recall that N is a discrete nest if a nest N satisfies N− ≠ N for every non-trivial subspace N in N. We say that L is a P-subspace lattice on X if ∨{L : L ∈ PL} = X or ∧{L− : L ∈ PL} = (0). It is obvious that this class of subspace lattices contains J-subspace lattices, discrete nests and subspace lattices with X− ≠ X or (0)+ ≠ (0). The following example is also a P-subspace lattice.
Example 1.1. Let {en : n ∈ N} be an orthonormal basis of H, Pn = span{ei : i = 1, ..., n}, ξ = Σ∞n=1 (1/n)en, and let Pξ be the orthogonal projection from H onto the one-dimensional subspace of H generated by ξ. It follows from [20, Theorem 2.11] and [7, Lemma 3.2] that L = {0, I, Pn, Pξ, Pξ ∨ Pn : n = 1, 2, · · · } is a reflexive P-subspace lattice.
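The rank-one calculus behind many of the arguments below can be checked directly from the definition above. The following display (a standard verification added here for the reader's convenience; it is not part of the original text) records the two identities that are used repeatedly:

(x ⊗ f)(u ⊗ g)y = (x ⊗ f)(g(y)u) = g(y)f(u)x = f(u)(x ⊗ g)y, so (x ⊗ f)(u ⊗ g) = f(u)(x ⊗ g);
in particular (x ⊗ f)² = f(x)(x ⊗ f), so x ⊗ f is idempotent if and only if f(x) = 1.

Consequently, whenever f(x) ≠ 0, the normalized operator (1/f(x)) x ⊗ f is an idempotent; this normalization is exactly the device used in the proof of Lemma 2.2 below.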
In a Hilbert space, we disregard the distinction between a closed subspace and the orthogonal projection onto it. A subspace lattice on a Hilbert space H is called a commutative subspace lattice (or CSL for short) if it consists of mutually commuting projections. In this paper, we assume that H is a complex separable Hilbert space.
Let δ be a linear mapping from a unital algebra A into an A-bimodule M. Recall that δ is a derivation (respectively, generalized derivation) if δ(AB) = δ(A)B + Aδ(B) (respectively, δ(AB) = δ(A)B + Aδ(B) − Aδ(I)B) for all A, B in A. We say that δ is derivable at Z ∈ A if δ(AB) = δ(A)B + Aδ(B) for any A, B ∈ A with AB = Z; δ is Jordan derivable at Z ∈ A if δ(AB + BA) = δ(A)B + Aδ(B) + δ(B)A + Bδ(A) for any A, B ∈ A with AB + BA = Z. If δ(AB + BA) = δ(A)B + Aδ(B) + δ(B)A + Bδ(A) for any A, B ∈ A with AB = 0, we say that δ has the WJD (weak Jordan derivation) property.
In recent years, there have been a number of papers on the study of conditions under which derivations and Jordan derivations of operator algebras can be completely determined by their action on some subsets of operator algebras (see, for example, [1, 3, 8, 10, 21]). For instance, Zhao and Zhu in [21] showed that every linear mapping δ from a triangular algebra T into itself satisfying the WJD property is a derivation. In [8], Jiao and Hou proved that every additive mapping δ derivable or Jordan derivable at zero point on some nest algebras has the form δ(A) = τ(A) + cA for some additive derivation τ and some scalar c ∈ F. The purpose of this paper is to consider some mappings which behave like derivations on P-subspace lattice algebras and completely distributive commutative subspace lattice (CDCSL) algebras.
In Section 2, we show that every linear (respectively, bounded linear) mapping δ on P-subspace lattice (respectively, CDCSL) algebras Jordan derivable at zero point is a generalized derivation and δ(I) ∈ (AlgL)′.
In Section 3, for a P-subspace lattice algebra AlgL, we obtain that δ satisfies the WJD property if and only if δ is a generalized derivation and δ(I)A ∈ (AlgL)′ for every A ∈ AlgL.
In Section 4, we investigate derivable mappings at zero point and some linear mappings which behave like left (respectively, right) multipliers, isomorphisms or local generalized derivations on P-subspace lattice algebras. One of the main results of the section is that if ∨{L ∈ L : L− ⊉ L} = X and ∧{L− : L ∈ L, L− ⊉ L} = (0), then δ is a local generalized derivation from AlgL into B(X) if and only if δ is a generalized derivation.
The following proposition will be used in our proofs.
Proposition 1.2 ([19, Proposition 1.1]). Let E and F be non-zero subspaces of X and X∗, respectively. Let Φ : E × F → B(X) be a bilinear mapping such that Φ(x, f)ker(f) ⊆ Fx for all x ∈ E and f ∈ F. Then there exist two linear mappings T : E → X and S : F → X∗ such that Φ(x, f) = Tx ⊗ f + x ⊗ Sf for all x ∈ E and f ∈ F.

2 Jordan derivable mappings at zero point
The following lemma is included in the proof of [8, Theorem 3.1]. We leave the proof to readers.
Lemma 2.1. If δ is Jordan derivable at zero point from a unital algebra A into its unital bimodule, then for any idempotents P and Q in A, the following hold:
(1) δ(I)P = Pδ(I);
(2) δ(P) = δ(P)P + Pδ(P) − Pδ(I);
(3) δ(PQ + QP) = δ(P)Q + Pδ(Q) + δ(Q)P + Qδ(P) − δ(I)(PQ + QP).
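Before continuing, it may help to have a concrete instance of the definitions just recalled; the following standard example (ours, not taken from the paper) shows how the generalized derivations appearing in the results below arise. Fix T ∈ B(X) and c ∈ F, and define δ(A) = TA − AT + cA for A ∈ AlgL. Then δ(I) = cI, and for all A and B,

δ(A)B + Aδ(B) − Aδ(I)B = (TA − AT + cA)B + A(TB − BT + cB) − cAB
= TAB − ATB + cAB + ATB − ABT + cAB − cAB
= TAB − ABT + cAB = δ(AB),

so δ is a generalized derivation; it is a derivation exactly when c = 0, and δ(I) = cI commutes with AlgL. Theorems 2.4 and 2.5 below say that Jordan derivability at zero point forces precisely this kind of behaviour: δ must be a generalized derivation with δ(I) ∈ (AlgL)′.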
For a subspace lattice L and a subspace E ∈ PL, we denote by TE the ideal span{x ⊗ f : x ∈ E, f ∈ (E−)⊥} of AlgL.
Lemma 2.2. If L is a subspace lattice on X and E is in PL, then for every x in E and every f in (E−)⊥, x ⊗ f is a linear combination of idempotents in TE.
Proof. Suppose f(x) ≠ 0; then x ⊗ f = f(x)((1/f(x)) x ⊗ f), where (1/f(x)) x ⊗ f is an idempotent in TE. Suppose f(x) = 0. Since E ∈ PL, there exist z ∈ E and g ∈ (E−)⊥ such that g(z) = 1.
Case 1. If g(x) = µ ≠ 0, then x ⊗ f = x ⊗ ((1/µ)g + f) − x ⊗ (1/µ)g, where x ⊗ ((1/µ)g + f) and x ⊗ (1/µ)g are idempotents in TE.
Case 2. If f(z) = λ ≠ 0, then x ⊗ f = (x + (1/λ)z) ⊗ f − (1/λ)z ⊗ f, where (x + (1/λ)z) ⊗ f and (1/λ)z ⊗ f are idempotents in TE.
Case 3. If f(z) = g(x) = 0, then x ⊗ f = (1/4)((z + x) ⊗ (g + f) + (z − x) ⊗ (g − f) − (z + x) ⊗ (g − f) − (z − x) ⊗ (g + f)), where (z + x) ⊗ (g + f), (z − x) ⊗ (g − f), (z + x) ⊗ (g − f) and (z − x) ⊗ (g + f) are idempotents in TE. The proof is complete.
Lemma 2.3. Let L be a subspace lattice on X, E be in PL and δ be a linear mapping from AlgL into B(X). If δ is Jordan derivable at zero point, then for every idempotent P in TE and every A in AlgL, the following hold:
(1) δ(AP + PA) = δ(A)P + Aδ(P) + δ(P)A + Pδ(A) − δ(I)(AP + PA);
(2) δ(PAP) = δ(P)AP + Pδ(A)P + PAδ(P) − 2δ(I)PAP.
Proof. (1) For every idempotent P ∈ TE and every A ∈ AlgL, since P⊥AP⊥P + PP⊥AP⊥ = 0, by assumption we have
δ(P⊥AP⊥)P + P⊥AP⊥δ(P) + δ(P)P⊥AP⊥ + Pδ(P⊥AP⊥) = 0.
Since A − P⊥AP⊥ = PA + P⊥AP ∈ TE, it follows from Lemmas 2.1 and 2.2 that
δ(AP + PA) = δ((A − P⊥AP⊥)P + P(A − P⊥AP⊥))
= δ(A − P⊥AP⊥)P + (A − P⊥AP⊥)δ(P) + δ(P)(A − P⊥AP⊥) + Pδ(A − P⊥AP⊥) − δ(I)(AP + PA)
= δ(A)P + Aδ(P) + δ(P)A + Pδ(A) − δ(I)(AP + PA) − (δ(P⊥AP⊥)P + P⊥AP⊥δ(P) + δ(P)P⊥AP⊥ + Pδ(P⊥AP⊥))
= δ(A)P + Aδ(P) + δ(P)A + Pδ(A) − δ(I)(AP + PA).
(2) Substituting AP + PA for A in (1) gives (2).
One of the main results of this section is the following theorem.
Theorem 2.4. Let L be a subspace lattice on X such that ∨{L : L ∈ PL} = X and δ be a linear mapping from AlgL into B(X). Then δ is Jordan derivable at zero point if and only if δ is a generalized derivation and δ(I) ∈ (AlgL)′, where (AlgL)′ is the commutant of AlgL in B(X). In particular, if δ(I) = 0, then δ is Jordan derivable at zero point if and only if δ is a derivation.
Proof. The sufficiency is obvious, so we only need to prove the necessity. Let E ∈ PL, z ∈ E and g ∈ (E−)⊥ with g(z) = 1. We divide the proof into several claims.
Claim 1. δ(I) ∈ (AlgL)′.
For all x ∈ E, f ∈ (E−)⊥ and T ∈ AlgL, by Lemmas 2.1 and 2.2, we have δ(I)Tx ⊗ f = Tx ⊗ f δ(I) = Tδ(I)x ⊗ f. That is, δ(I)Tx = Tδ(I)x for every x ∈ E. Since ∨{E : E ∈ PL} = X, it follows that δ(I) ∈ (AlgL)′.
Now define τ(A) = δ(A) − δ(I)A for A ∈ AlgL. It is easy to see that τ is Jordan derivable at zero point and τ(I) = 0.
Claim 2. τ(x ⊗ f)ker(f) ⊆ Fx for all x ∈ E and f ∈ (E−)⊥.
Case 1. If f(x) = µ ≠ 0, then by Lemma 2.1, we have
τ((1/µ)x ⊗ f) = τ((1/µ)x ⊗ f)((1/µ)x ⊗ f) + ((1/µ)x ⊗ f)τ((1/µ)x ⊗ f).
Thus τ(x ⊗ f)ker(f) ⊆ Fx.
Case 2. If f(x) = 0 and f(z) ≠ 0, then by Case 1, for every y ∈ ker(f), we have
τ((z + x) ⊗ f)y = λ1(z + x), τ((z − x) ⊗ f)y = λ2(z − x), τ(z ⊗ f)y = λ3 z,
for some λ1, λ2 and λ3 ∈ F. By the above equations, it follows that
2λ3 z = (λ1 + λ2)z + (λ1 − λ2)x,
and the independence of z and x implies λ1 = λ2 = λ3. Hence
τ(x ⊗ f)y = τ((z + x) ⊗ f)y − τ(z ⊗ f)y = λ1 x.
This means τ(x ⊗ f)ker(f) ⊆ Fx.
Case 3. Suppose that f(x) = 0 and f(z) = 0.
Since z ⊗ (g + f) and z ⊗ (g − f) are idempotents in TE, it follows from Lemma 2.3 that
τ((z ⊗ (g + f))(x ⊗ g)(z ⊗ (g + f))) = τ(z ⊗ (g + f))(x ⊗ g)(z ⊗ (g + f)) + (z ⊗ (g + f))τ(x ⊗ g)(z ⊗ (g + f)) + (z ⊗ (g + f))(x ⊗ g)τ(z ⊗ (g + f)),
τ((z ⊗ (g − f))(x ⊗ g)(z ⊗ (g − f))) = τ(z ⊗ (g − f))(x ⊗ g)(z ⊗ (g − f)) + (z ⊗ (g − f))τ(x ⊗ g)(z ⊗ (g − f)) + (z ⊗ (g − f))(x ⊗ g)τ(z ⊗ (g − f)),
and
τ((z ⊗ g)(x ⊗ g)(z ⊗ g)) = τ(z ⊗ g)(x ⊗ g)(z ⊗ g) + (z ⊗ g)τ(x ⊗ g)(z ⊗ g) + (z ⊗ g)(x ⊗ g)τ(z ⊗ g).
From the above three equations, we have
0 = τ((z ⊗ f)(x ⊗ g)(z ⊗ f)) = τ(z ⊗ f)(x ⊗ g)(z ⊗ f) + (z ⊗ f)τ(x ⊗ g)(z ⊗ f) + (z ⊗ f)(x ⊗ g)τ(z ⊗ f) = τ(z ⊗ f)(x ⊗ f) + (z ⊗ f)τ(x ⊗ g)(z ⊗ f).
Thus
τ(z ⊗ f)x = −f(τ(x ⊗ g)z)z. (2.1)
Hence by (2.1), Lemmas 2.2 and 2.3, it follows that
τ(x ⊗ f) = τ((z ⊗ f)(x ⊗ g) + (x ⊗ g)(z ⊗ f)) = −f(τ(x ⊗ g)z)z ⊗ g + (z ⊗ f)τ(x ⊗ g) + τ(x ⊗ g)(z ⊗ f) + (x ⊗ g)τ(z ⊗ f).
Let y be in ker(f). Applying the above equations to y gives
τ(x ⊗ f)y = −g(y)f(τ(x ⊗ g)z)z + f(τ(x ⊗ g)y)z + g(τ(z ⊗ f)y)x. (2.2)
Notice that (2.2) is valid for all z ∈ E satisfying g(z) = 1 and f(z) = 0. If g(x) = µ ≠ 0, replacing z by (1/µ)x in (2.2), we have τ(x ⊗ f)y ∈ Fx. If g(x) = 0, by the proof of [18, Lemma 2.3], we have g(y)f(τ(x ⊗ g)z) − f(τ(x ⊗ g)y) = 0, whence τ(x ⊗ f)y = g(τ(z ⊗ f)y)x ∈ Fx.
Claim 3. τ is a derivation.
By Claim 2 and Proposition 1.2, there exist linear mappings T : E → X and S : (E−)⊥ → X∗ such that
τ(x ⊗ f) = Tx ⊗ f + x ⊗ Sf, (2.3)
for all x ∈ E and f ∈ (E−)⊥. It follows from Lemmas 2.2 and 2.3 that for every A ∈ AlgL,
τ(Ax ⊗ g + x ⊗ gA) = τ(A)x ⊗ g + Aτ(x ⊗ g) + τ(x ⊗ g)A + x ⊗ gτ(A). (2.4)
By (2.3) and (2.4), we have
TAx ⊗ g + Ax ⊗ Sg + Tx ⊗ A∗g + x ⊗ SA∗g = τ(A)x ⊗ g + ATx ⊗ g + Ax ⊗ Sg + Tx ⊗ A∗g + x ⊗ A∗Sg + x ⊗ τ(A)∗g.
That is,
(τ(A) + AT − TA)x ⊗ g = x ⊗ (SA∗ − τ(A)∗ − A∗S)g.
Thus there exists a linear mapping λ : AlgL → F such that
τ(A)x = (TA − AT)x + λ(A)x, (2.5)
for all A ∈ AlgL and x ∈ E. Hence by (2.5), for all A, B in AlgL and x in E,
τ(AB)x = (τ(A)B + Aτ(B))x + λ(AB)x − λ(A)Bx − λ(B)Ax. (2.6)
In the following, we show λ(A) = 0 for every A ∈ AlgL. Putting A = B = z ⊗ g and x = z in (2.6) gives λ(z ⊗ g) = g(τ(z ⊗ g)z), and Lemma 2.1(2) implies g(τ(z ⊗ g)z) = 0. Hence
λ(z ⊗ g) = 0. (2.7)
Notice that (2.7) is valid for all z in E and g in (E−)⊥ satisfying g(z) = 1. Now fix z ∈ E and g ∈ (E−)⊥ such that g(z) = 1. Thus for all f ∈ (E−)⊥, if f(z) = µ ≠ 0, then λ(z ⊗ f) = µλ(z ⊗ (1/µ)f) = 0; if f(z) = 0, then λ(z ⊗ f) = λ(z ⊗ (g + f)) − λ(z ⊗ g) = 0. Hence λ(z ⊗ f) = 0 for every f ∈ (E−)⊥. Similarly, we have λ(x ⊗ g) = 0 for every x ∈ E. Now for every A ∈ AlgL, by (2.6), we have
τ(Az ⊗ g)z = τ(A)z + Aτ(z ⊗ g)z − λ(A)z (2.8)
and
τ(z ⊗ gA)z = τ(z ⊗ g)Az + g(τ(A)z)z − λ(A)z. (2.9)
By Lemma 2.3(1), we have
τ(Az ⊗ g + z ⊗ gA)z = τ(A)z + Aτ(z ⊗ g)z + τ(z ⊗ g)Az + g(τ(A)z)z. (2.10)
Combining (2.8), (2.9) and (2.10) gives λ(A) = 0 for every A ∈ AlgL. Then by (2.6), we obtain
τ(AB)x = (τ(A)B + Aτ(B))x,
for all A, B ∈ AlgL and x ∈ E. Since ∨{L : L ∈ PL} = X, it follows that τ is a derivation. By δ(A) = τ(A) + δ(I)A, it is easy to show that δ is a generalized derivation.
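The proof of Theorem 2.5 below transfers the argument of Theorem 2.4 to the dual space; the only additional ingredient is the adjoint formula for rank-one operators. The following one-line computation (included here for the reader's convenience; it is not in the original) verifies it, where x̂ denotes the canonical image of x in X∗∗:

((x ⊗ f)∗g)(y) = g((x ⊗ f)y) = g(f(y)x) = f(y)g(x) = x̂(g)f(y) for all g ∈ X∗ and y ∈ X,

so (x ⊗ f)∗g = x̂(g)f, that is, (x ⊗ f)∗ = f ⊗ x̂, again a rank-one operator, now acting on X∗.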
Applying the ideas in the proof of Theorem 2.4, we can obtain the following result.
Theorem 2.5. Let L be a subspace lattice on X such that ∧{L− : L ∈ PL} = (0) and δ be a linear mapping from AlgL into B(X). Then δ is Jordan derivable at zero point if and only if δ is a generalized derivation and δ(I) ∈ (AlgL)′. In particular, if δ(I) = 0, then δ is Jordan derivable at zero point if and only if δ is a derivation.
Proof. We only prove the necessity. Let x ↦ x̂ be the canonical mapping from X into X∗∗; then (x ⊗ f)∗ = f ⊗ x̂ for all x ∈ X and f ∈ X∗. The hypothesis ∧{L− : L ∈ PL} = (0) implies that ∨{(L−)⊥ : L ∈ PL} = X∗. With a proof similar to the proof of Theorem 2.4, we have δ(I) ∈ (AlgL)′. Let τ(A) = δ(A) − δ(I)A for A ∈ AlgL. Then τ is Jordan derivable at zero point and τ(I) = 0. In the following, we show τ is a derivation. Let E ∈ PL. We choose z ∈ E and g ∈ (E−)⊥ such that g(z) = 1. One can easily verify that for all x ∈ E and f ∈ (E−)⊥, τ(x ⊗ f)∗ker(x̂) ⊆ Ff. Let Φ(f, x̂) = τ(x ⊗ f)∗ for all x ∈ E and f ∈ (E−)⊥. Then Φ is a bilinear mapping from (E−)⊥ × Ê into B(X∗), where Ê = {x̂ : x ∈ E}. Hence there exist linear mappings T : (E−)⊥ → X∗ and S : Ê → X∗∗ such that
τ(x ⊗ f)∗ = Φ(f, x̂) = Tf ⊗ x̂ + f ⊗ Sx̂,
for all x ∈ E and f ∈ (E−)⊥. Hence for A ∈ AlgL and f ∈ (E−)⊥, we have that
(τ(A)∗ + A∗T − TA∗)f ⊗ ẑ = f ⊗ (S((Az)ˆ) − (δ(A)z)ˆ − A∗∗Sẑ).
It follows that τ(A)∗f = (TA∗ − A∗T)f + λ(A)f, where λ : AlgL → F is a linear mapping. Hence for all A, B ∈ AlgL and f ∈ (E−)⊥,
τ(AB)∗f = (B∗τ(A)∗ + τ(B)∗A∗)f − λ(A)B∗f − λ(B)A∗f + λ(AB)f.
With a proof similar to the proof of Theorem 2.4, we can prove that λ(A) = 0 for every A ∈ AlgL. Since ∨{(L−)⊥ : L ∈ PL} = X∗, it follows that τ is a derivation. Hence δ is a generalized derivation.
Next we investigate the bounded linear mappings which are Jordan derivable at zero point on CDCSL algebras. Recall that a CSL algebra AlgL is irreducible if and only if (AlgL)′ = CI, which is equivalent to the condition that L ∩ L⊥ = {0, I}, where L⊥ = {E⊥ : E ∈ L}.
Lemma 2.6 ([5]). Let AlgL be a CDCSL algebra on H. Then there exists a countable set {Pn : n ∈ Λ} of mutually orthogonal projections in L ∩ L⊥ such that ∨n Pn = I and each (AlgL)Pn is an irreducible CDCSL algebra on PnH; moreover, AlgL can be written as a direct sum AlgL = ⊕n (AlgL)Pn.
Lemma 2.7 ([16]). Let AlgL be a non-trivially irreducible CDCSL algebra on H. Then there exists a non-trivial projection P in L such that P(AlgL)P⊥ is faithful, that is, for T, S ∈ AlgL, TP(AlgL)P⊥ = {0} implies TP = 0 and P(AlgL)P⊥S = {0} implies P⊥S = 0.
Lemma 2.8. Let AlgL be an irreducible CDCSL algebra on H and let δ : AlgL → AlgL be a bounded linear mapping with δ(I) = 0. If δ is Jordan derivable at zero point, then δ is a derivation.
Proof. Suppose that L is trivial; then AlgL = B(H) is a von Neumann algebra. It follows from [1, Theorem 3.2] that δ is a Jordan derivation. Since every von Neumann algebra is a semiprime ring, by [2, Theorem 1], δ is a derivation. Suppose that L is non-trivial. Let P be the non-trivial projection in L provided by Lemma 2.7. Since P(AlgL)P⊥ is faithful, by [1, Theorem 2.1], δ is a Jordan derivation. Since every Jordan derivation on a CSL algebra is a derivation [17, Theorem 3.2], it follows that δ is a derivation.
Theorem 2.9. Let AlgL be a CDCSL algebra on H and δ be a bounded linear mapping from AlgL into itself. Then δ is Jordan derivable at zero point if and only if δ is a generalized derivation and δ(I) ∈ (AlgL)′. In particular, if δ(I) = 0, then δ is Jordan derivable at zero point if and only if δ is a derivation.
Proof. We only prove the necessity.
Since every rank-one operator in AlgL is a linear combination of idempotents in AlgL [6, Lemma 2.3] and the rank-one subalgebra of AlgL is dense in AlgL in the weak topology [9, Theorem 3], by Lemma 2.1(1), we have δ(I) ∈ (AlgL)′. Let τ(A) = δ(A) − δ(I)A for A ∈ AlgL. Then τ is Jordan derivable at zero point and τ(I) = 0. Let AlgL = ⊕n (AlgL)Pn be the irreducible decomposition of AlgL as in Lemma 2.6. Let A be in AlgL and fix an index n. Since PnAPnP⊥n + P⊥n PnAPn = 0, we have
0 = τ(PnAPnP⊥n + P⊥n PnAPn) = τ(PnAPn)P⊥n + PnAPnτ(P⊥n) + τ(P⊥n)PnAPn + P⊥n τ(PnAPn),
which yields that P⊥n τ(PnAPn)P⊥n = 0. Since Pn ∈ L ∩ L⊥, there holds τ(APn) = τ(APn)Pn. In the same way, we obtain τ(AP⊥n) = τ(AP⊥n)P⊥n. Since
0 = τ(I) = τ(Pn + P⊥n) = τ(Pn)Pn + τ(P⊥n)P⊥n,
it follows that τ(Pn) = 0. Now define a linear mapping τn : (AlgL)Pn → (AlgL)Pn by τn(APn) = τ(APn)Pn for every A ∈ AlgL. It is easy to show that τn is bounded and Jordan derivable at zero point. Since (AlgL)Pn is irreducible and τn(Pn) = τ(Pn)Pn = 0, by Lemma 2.8, τn is a derivation. Hence by
τ(A)Pn = τ(APn)Pn + τ(AP⊥n)Pn = τn(APn),
we have that τ is a derivation. Thus δ is a generalized derivation.

3 Mappings satisfying the WJD property
Our first result in this section says that the set of all Jordan derivable mappings at zero point from a P-subspace lattice algebra into B(X) is bigger than the set of all mappings satisfying the WJD property. The following lemma is included in the proof of [4, Lemma 2.6].
Lemma 3.1. If δ is a linear mapping satisfying the WJD property from a unital algebra A into its unital bimodule, then for every idempotent P ∈ A and every A ∈ A, the following hold:
(1) δ(I)P = Pδ(I) and δ(P) = δ(P)P + Pδ(P) − δ(I)P;
(2) δ(PA + AP) = δ(P)A + Pδ(A) + δ(A)P + Aδ(P) − δ(I)PA − PAδ(I);
(3) δ(PA + AP) = δ(P)A + Pδ(A) + δ(A)P + Aδ(P) − δ(I)AP − APδ(I);
(4) 2δ(PAP) = 2δ(P)AP + 2Pδ(A)P + 2PAδ(P) − PAδ(I) − 2δ(I)AP − APδ(I).
Theorem 3.2. Let L be a subspace lattice on X such that ∨{L : L ∈ PL} = X and δ be a linear mapping from AlgL into B(X). Then δ satisfies the WJD property if and only if δ is a generalized derivation and δ(I)A ∈ (AlgL)′ for every A ∈ AlgL. In particular, if δ(I) = 0, then δ satisfies the WJD property if and only if δ is a derivation.
Proof. Since the sufficiency is evident, we will just show the necessity. Suppose δ satisfies the WJD property. We claim that δ(I)A ∈ (AlgL)′ for every A ∈ AlgL. By Lemma 3.1(1) and the proof of Claim 1 in Theorem 2.4, we have δ(I) ∈ (AlgL)′. Hence by Lemma 3.1(2) and (3), we have that δ(I)AP = PAδ(I) for every idempotent P ∈ AlgL and every A ∈ AlgL. Hence for all x ∈ E, f ∈ (E−)⊥ and T ∈ AlgL, we have δ(I)ATx ⊗ f = Tx ⊗ f Aδ(I) = Tδ(I)Ax ⊗ f. Since ∨{L : L ∈ PL} = X, it follows that δ(I)A ∈ (AlgL)′ for every A ∈ AlgL. Let τ(A) = δ(A) − δ(I)A for A ∈ AlgL. It is easy to show that τ satisfies the WJD property and τ(I) = 0. Similar to the proof of Theorem 2.4, we may show τ is a derivation and then δ is a generalized derivation.
Similarly, we have the following theorem.
Theorem 3.3. Let L be a subspace lattice on X such that ∧{L− : L ∈ PL} = (0) and δ be a linear mapping from AlgL into B(X). Then δ satisfies the WJD property if and only if δ is a generalized derivation and δ(I)A ∈ (AlgL)′ for every A ∈ AlgL. In particular, if δ(I) = 0, then δ satisfies the WJD property if and only if δ is a derivation.
Corollary 3.4. Let L be as in Example 1.1. Then δ : AlgL → B(H) satisfies the WJD property if and only if δ is a derivation.
Proof. By Theorem 3.2, we only need to show that if δ satisfies the WJD property, then δ(I) = 0. Let n ≥ 2. By [7, Lemma 3.2], we have (Pn)− ⊉ Pn. Hence there exist zn ∈ Pn and gn ∈ ((Pn)−)⊥ such that gn(zn) = 1. Also, there exists yn ∈ Pn such that yn and zn are linearly independent. Since δ satisfies the WJD property, we have δ(I)A ∈ (AlgL)′ for every A ∈ AlgL, which implies that there exists some scalar λn such that δ(I)x = λn x for every x ∈ Pn and δ(I)(zn ⊗ gn)(yn ⊗ gn) = δ(I)(yn ⊗ gn)(zn ⊗ gn). That is, λn gn(yn)zn = λn yn. The independence of yn and zn gives λn = 0 and δ(I)x = 0 for every x ∈ Pn. Since ∨{Pn ∈ L : n = 2, 3, · · ·} = H, it follows that δ(I) = 0. The proof is complete.
Corollary 3.5. Let L be a subspace lattice on H with dim H ≥ 2 such that ∨{L : L ∈ PL} = H or ∧{L− : L ∈ PL} = (0). If L has a non-trivial comparable element, then δ : AlgL → B(H) satisfies the WJD property if and only if δ is a derivation.
Proof. According to Theorem 3.2, we only need to show that if δ satisfies the WJD property, then δ(I) = 0. By [11, Proposition 2.9], we have (AlgL)′ = CI. Hence by Theorem 3.2, we have δ(I) = λI and δ(I)A = µA I for every A ∈ AlgL (where λ, µA ∈ C). We claim that λ = 0. Suppose that λ ≠ 0; then every operator in AlgL is a scalar multiple of the identity I. That is, for every A ∈ AlgL, the range of A is H or 0. However, since AlgL contains a rank-one operator, this is impossible. Hence δ(I) = 0.
By Corollary 3.5, we can easily show the following result.
Corollary 3.6. Let L be a subspace lattice on H with dim H ≥ 2 such that H− ≠ H or (0)+ ≠ (0). Then δ : AlgL → B(H) satisfies the WJD property if and only if δ is a derivation.
Remark. It follows from Theorems 2.4, 2.5, 3.2 and 3.3 that every linear mapping satisfying the WJD property from a P-subspace lattice algebra into B(X) is Jordan derivable at zero point. But the converse is not true. For example, let T2(C) be the algebra of all 2 × 2 upper triangular matrices over the complex field C. Define a linear mapping δ : T2(C) → T2(C) by
δ( [ x11 x12 ; 0 x22 ] ) = [ x11 x11 − x22 + x12 ; 0 x22 ]
for all xij ∈ C (1 ≤ i ≤ j ≤ 2), where [ a b ; 0 d ] denotes the upper triangular matrix with rows (a, b) and (0, d). It is easy to show that δ is a generalized derivation and δ(I) = I ∈ (T2(C))′, that is, δ is Jordan derivable at zero point. However, it follows from Corollary 3.6 that δ does not satisfy the WJD property.
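The verification left to the reader in the Remark can be made explicit; the following sketch (ours, using the bracket notation introduced above) writes δ as the identity map plus an inner derivation. Let E = [ 0 1 ; 0 0 ] ∈ T2(C). For A = [ a11 a12 ; 0 a22 ],

AE − EA = [ 0 a11 ; 0 0 ] − [ 0 a22 ; 0 0 ] = [ 0 a11 − a22 ; 0 0 ],

so δ(A) = A + (AE − EA). Writing d(A) = AE − EA, which is an inner derivation of T2(C), we get

δ(AB) = AB + d(A)B + A d(B) = δ(A)B + Aδ(B) − AB = δ(A)B + Aδ(B) − Aδ(I)B,

since δ(I) = I. Hence δ is indeed a generalized derivation with δ(I) = I ∈ (T2(C))′, as claimed.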
4 Derivable mappings at zero point and local generalized derivations
Let A be a unital algebra, M be an A-bimodule and T be an ideal of A. We say that T is a left (respectively, right) separating set of M if for every m in M, mT = {0} implies m = 0 (respectively, T m = {0} implies m = 0). T is called a separating set of M if T is both a left separating set and a right separating set of M. The following result is obvious.
Lemma 4.1. Suppose that L is a subspace lattice on X such that ∨{L : L ∈ PL} = X (respectively, ∧{L− : L ∈ PL} = (0)). Then the ideal T = span{x ⊗ f : x ∈ E, f ∈ (E−)⊥, E ∈ PL} of AlgL is a left (respectively, right) separating set of B(X).
By Lemmas 2.2 and 4.1, we have the following result.
Theorem 4.2. Let L be a subspace lattice on X such that ∨{L : L ∈ PL} = X or ∧{L− : L ∈ PL} = (0) and δ be a linear mapping from AlgL into B(X). Then δ is derivable at zero point if and only if δ is a generalized derivation and δ(I) ∈ (AlgL)′. In particular, if δ(I) = 0, then δ is derivable at zero point if and only if δ is a derivation.
Proof. We will show that if L satisfies ∨{L : L ∈ PL} = X and δ is derivable at zero point, then δ is a generalized derivation and δ(I) ∈ (AlgL)′. The proof for L with ∧{L− : L ∈ PL} = (0) is similar. By the proof of [10, Lemma 3], we may show that
δ(AP) = δ(A)P + Aδ(P) − Aδ(I)P and δ(I)P = Pδ(I),
for every A ∈ AlgL and every idempotent P ∈ AlgL. With a proof similar to the proof of Claim 1 in Theorem 2.4, we have δ(I) ∈ (AlgL)′. Now for all A, B ∈ AlgL and T ∈ T, we have
δ(ABT) = δ(AB)T + ABδ(T) − ABδ(I)T
and
δ(ABT) = δ(A)BT + Aδ(BT) − Aδ(I)BT = δ(A)BT + Aδ(B)T + ABδ(T) − ABδ(I)T − Aδ(I)BT.
It follows that δ(AB)T = δ(A)BT + Aδ(B)T − Aδ(I)BT. Since T is a left separating set of B(X), we obtain δ(AB) = δ(A)B + Aδ(B) − Aδ(I)B for all A, B ∈ AlgL. That is, δ is a generalized derivation. The proof is complete.
Recall that a linear mapping δ from A into M is a left (respectively, right) multiplier if δ(AB) = δ(A)B (respectively, δ(AB) = Aδ(B)) for all A, B ∈ A; δ is a local generalized derivation if for every A ∈ A there is a generalized derivation δA : A → M (depending on A) such that δ(A) = δA(A). In the following we give some applications of Lemmas 2.2 and 4.1. The proofs of the results are similar to the proof of Theorem 4.2, and we leave them to readers.
Theorem 4.3. Suppose that L is a subspace lattice on X such that ∨{L : L ∈ PL} = X (respectively, ∧{L− : L ∈ PL} = (0)) and δ is a linear mapping from AlgL into B(X). Then δ has the following properties:
(a) if δ(AB) = δ(A)B for any A, B ∈ AlgL with AB = 0, then δ is a left multiplier (respectively, if δ(AB) = Aδ(B) for any A, B ∈ AlgL with AB = 0, then δ is a right multiplier);
(b) if δ(AB) = δ(A)B + δ(B)A for any A, B ∈ AlgL with AB = 0 and δ(I) = 0, then δ ≡ 0 (respectively, if δ(AB) = Aδ(B) + Bδ(A) for any A, B ∈ AlgL with AB = 0 and δ(I) = 0, then δ ≡ 0);
(c) if δ(A²) = 2δ(A)A for all A ∈ AlgL, then δ ≡ 0 (respectively, if δ(A²) = 2Aδ(A) for all A ∈ AlgL, then δ ≡ 0).
Combining Theorem 4.3(a) and [12, Proposition 1.1], we have
Corollary 4.4. Suppose that L is a subspace lattice on X such that ∨{L : L ∈ PL} = X and ∧{L− : L ∈ PL} = (0) and δ is a linear mapping from AlgL into B(X). Then the following are equivalent:
(a) δ is a generalized derivation.
(b) δ is a local generalized derivation.
(c) Aδ(B)C = 0 whenever A, B, C ∈ AlgL are such that AB = BC = 0.
Combining Lemmas 2.2, 4.1 and [13, Theorem 2.8], we also have
Theorem 4.5. Let L be a subspace lattice on X such that ∨{L : L ∈ PL} = X and ∧{L− : L ∈ PL} = (0). If h is a bijective linear mapping from AlgL onto a unital algebra satisfying h(A)h(B)h(C) = 0 for all A, B, C ∈ AlgL with AB = BC = 0 and h(I) = I, then h is an isomorphism.

References
[1] R. An, J. Hou, Characterizations of Jordan derivations on rings with idempotents—additive maps Jordan derivable at zero, Chin. Ann. Math., 31(2010), 463-474.
[2] M. Bresar, Jordan derivations on semiprime rings, Proc. Amer. Math. Soc., 104(1988), 1003-1006.
[3] M. Chebotar, W. Ke, P. Lee, Maps characterized by action on zero products, Pac. J. Math., 216(2004), 217-228.
[4] Y. Chen, J. Li, Characterizations of Jordan derivations on strongly double triangle subspace lattice algebras, preprint.
[5] F. Gilfeather, R. Moore, Isomorphisms of certain CSL algebras, J. Funct. Anal., 67(1986), 264-291.
[6] D. Hadwin, J. Li, Local derivations and local automorphisms, J. Math. Anal. Appl., 290(2003), 702-714.
[7] C. Hou, Cohomology of a class of Kadison-Singer algebras, Sci. China Math., 53(2010), 1827-1839.
[8] M. Jiao, J. Hou, Additive maps derivable or Jordan derivable at zero point on nest algebras, Linear Algebra Appl., 432(2010), 2984-2994.
[9] C. Laurie, W. Longstaff, A note on rank-one operators in reflexive algebras, Proc. Amer. Math. Soc., 89(1983), 293-297.
[10] W. Jing, S. Lu, P. Li, Characterisations of derivations on some operator algebras, Bull. Aust. Math. Soc., 66(2002), 227-232.
[11] J. Li, Commutants and double commutants of reflexive algebras, Kyushu J. Math., 50(1996), 171-178.
[12] J. Li, Z. Pan, Annihilator-preserving maps, multipliers and derivations, Linear Algebra Appl., 432(2010), 5-13.
[13] J. Li, Z. Pan, J. Zhou, Isomorphisms and generalized derivations of some algebras, Expo. Math., 28(2010), 365-373.
[14] W. Longstaff, Strongly reflexive lattices, J. London Math. Soc., 11(1975), 491-498.
[15] W. Longstaff, O. Panaia, J-subspace lattices and subspace M-bases, Stud. Math., 139(2000), 197-212.
[16] F. Lu, Lie derivations of certain CSL algebras, Israel J. Math., 155(2006), 149-156.
[17] F. Lu, The Jordan structure of CSL algebras, Stud. Math., 190(2009), 283-299.
[18] F. Lu, Jordan derivations of reflexive algebras, Integr. Equ. Oper. Theory, 67(2010), 51-56.
[19] F. Lu, B. Liu, Lie derivations of reflexive algebras, Integr. Equ. Oper. Theory, 64(2009), 261-271.
[20] L. Wang, W. Yuan, A new class of Kadison-Singer algebras, Expo. Math., to appear.
[21] S. Zhao, J. Zhu, Jordan all-derivable points in the algebra of all upper triangular matrices, Linear Algebra Appl., 433(2010), 1922-1938.
ai_researcher
2
Identifying_AI-Generated_Research_Papers_Methods_and_Considerations.pdf
arXiv:2403.15481v2 [cs.CY] 31 Jul 2024

Navigating Fairness: Practitioners' Understanding, Challenges, and Strategies in AI/ML Development
Aastha Pant* · Rashina Hoda · Chakkrit Tantithamthavorn · Burak Turhan
Received: date / Accepted: date

Abstract The rise in the use of AI/ML applications across industries has sparked more discussions about the fairness of AI/ML in recent times. While prior research on the fairness of AI/ML exists, there is a lack of empirical studies focused on understanding the perspectives and experiences of AI practitioners in developing a fair AI/ML system. Understanding AI practitioners' perspectives and experiences on the fairness of AI/ML systems is important because they are directly involved in its development and deployment and their insights can offer valuable real-world perspectives on the challenges associated with ensuring fairness in AI/ML systems. We conducted semi-structured interviews with 22 AI practitioners to investigate their understanding of what a 'fair AI/ML' is, the challenges they face in developing a fair AI/ML system, the consequences of developing an unfair AI/ML system, and the strategies they employ to ensure AI/ML system fairness. We developed a framework showcasing the relationship between AI practitioners' understanding of a 'fair AI/ML' system and (i) their challenges in its development, (ii) the consequences of developing an unfair AI/ML system, and (iii) strategies used to ensure AI/ML system fairness. By exploring AI practitioners' perspectives and experiences, this study provides actionable insights to enhance AI/ML fairness, which may promote fairer systems, reduce bias, and foster public trust in AI technologies. Additionally, we also identify areas for further investigation and offer recommendations to aid AI practitioners and AI companies in navigating fairness.
Keywords artificial intelligence · machine learning · AI fairness · AI practitioners · interviews

A. Pant, Department of Software Systems and Cybersecurity, Monash University, Melbourne, Australia. E-mail: [email protected]
R. Hoda, Department of Software Systems and Cybersecurity, Monash University, Melbourne, Australia. E-mail: [email protected]
C. Tantithamthavorn, Department of Software Systems and Cybersecurity, Monash University, Melbourne, Australia. E-mail: [email protected]
B. Turhan, Faculty of Information Technology and Electrical Engineering, University of Oulu, Oulu, Finland. E-mail: [email protected]

1 Introduction
In recent years, the use of AI/ML systems has become widespread across various domains, including recruitment, legal proceedings, credit risk forecasting, and admission processes (Mehrabi et al., 2021). 'Fairness' has been a subject of study in Software Engineering (SE) research for some time, predating the recent surge in AI/ML applications (Finkelstein et al., 2008). At the same time, the importance of 'fairness' of AI/ML systems has been highlighted by several real-world incidents in recent years (Majumder et al., 2023).
For example, there have been fairness issues in AI/ML systems such as Google’s ML algorithm exhibiting gender bias against women by more frequently associating men with Science, Technology, Engineering, and Mathematics (STEM) careers (Prates et al., 2020); Amazon’s AI-powered recruitment tool that was gender-biased as it preferred male candidates over female candidates based on their resumes (Martin, 2018); a risk score predicting algorithm exhibiting significant bias against African Americans, revealing a higher error rate in predicting future criminals (Angwin et al., 2016); gender bias in Google (Caliskan et al., 2017) and Bing translators (Johnson and Brun, 2022). Widespread cases of software displaying unfair behavior, particularly regarding protected attributes such as gender (Caliskan et al., 2017) and race (Angwin et al., 2016), underscore the necessity of prioritising ‘fairness’ in the development of AI/ML systems, as these instances lead to unacceptable consequences disproportionately affecting users in minority or historically disadvantaged groups. The widespread adoption of AI/ML systems across different domains has raised concerns about fairness, leading to increased research and the develop- ment of guidelines and policies. Major tech companies like Google (Google, 2022), Microsoft (Microsoft, 2024a), IBM (IBM, 2022), and various countries/ continents, including Australia (Australia, 2019) and Europe (Group, 2019), have defined ‘fairness’ as a guiding principle for AI practitioners in developing a fair AI/ML system. The essence of the ‘fairness’ principle for these coun- tries/continents and tech companies is centered around developing an inclu- sive AI/ML system that does not discriminate against any specific individuals, groups, or communities. Along with that, several software and tools have also been developed such as IBM’s AI Fairness 360 (IBM, 2024b), LinkedIn’s Fair- ness Toolkit (LiFT) (Vasudevan and Kenthapadi, 2020), and fairness checklists like Deon (DrivenData, 2024), Microsoft’s AI Fairness Checklist (Microsoft, 3 2024b), IBM’s AI FactSheets (IBM, 2024a) and many more to aid AI practi- tioners in developing a fair AI/ML system. The extensive research in the field of AI/ML system fairness covers various aspects, including the proposal of methods and frameworks (Johnson and Brun, 2022; Zhang et al., 2023), aimed at aiding AI practitioners in the design and development of a fair AI/ML sys- tem or mitigating fairness-related issues in them. Despite the development of numerous tools, frameworks, guidelines, and policies for AI/ML fairness, is- sues persist. Our recent survey study also showed that most AI practitioners discussed facing challenges in developing fair AI/ML systems because of their own biased nature (Pant et al., 2023). The predominant focus has been on introducing guidelines, and policies, and developing tools for AI practitioners to enhance the development of a fair AI/ML system. Given that human society is diverse in terms of cultures, ex- periences, and viewpoints, AI teams must reflect this diversity to effectively create fair and impactful technologies (Xavier, 2024). Therefore, understand- ing the perspectives and experiences of these practitioners who are actively involved in AI/ML system development is equally crucial. This deeper un- derstanding can play a pivotal role in uncovering real-world challenges en- countered during the development process. 
Such awareness can help to devise solutions that can directly address practical needs and concerns identified by practitioners, thereby aiding in the development of fair AI/ML systems and mitigating societal inequalities (Holstein et al., 2019). Understanding what ‘fairness’ means in the context of AI/ML from practitioners’ perspectives may help policymakers create better regulations that tackle real-world issues and promote ethical AI deployment. This approach may enhance inclusivity, re- duce discrimination risks, and boost public trust in AI systems (Dankloff et al., 2024). Ultimately, it creates a digital environment where AI enhances societal well-being. A recent study has also reported that most studies on AI/ML sys- tem fairness are conceptual and focused on technical aspects, highlighting the importance and need for research on the social/human aspects of AI (Xivuri and Twinomurinzi, 2021). Therefore, considering the importance of understanding the overall per- spectives and experiences of AI practitioners in the development of a fair AI/ML system, as emphasised in the literature, and taking into account the identified research gap (Xivuri and Twinomurinzi, 2021), we were interested in addressing this gap by conducting an empirical study with AI practition- ers1. We conducted semi-structured interviews with 22 AI practitioners to explore four aspects: (i) AI practitioners’ understanding of ‘fair AI/ML’, (ii) their challenges in fair AI/ML development, (iii) consequences of developing 1 The term ‘AI practitioners’ in our study includes AI/ML developers, AI engineers, AI/ML experts, and AI/ML/ data scientists involved in the design and development ac- tivities of AI/ML systems. The terms ‘AI practitioners’ and ‘practitioners’ are used inter- changeably throughout our study. 4 an unfair AI/ML system, and (iv) their strategies2 to ensure the fairness of an AI/ML system. The study aims to answer the following four research questions (RQs): RQ1. What do AI practitioners understand by ‘fair AI/ML’ ? To address RQ1, we explicitly asked AI practitioners about their understand- ing of ‘fair AI/ML’. This approach was chosen to investigate how ‘fairness’ is understood by AI practitioners in the context of AI/ML. RQ2. What challenges do AI practitioners face in developing a fair AI/ML system and what are the factors that lead to those challenges? To address RQ2, we inquired with AI practitioners about the overall challenges they encounter in developing a fair AI/ML system, drawing insights from their experiences. Additionally, we explored the underlying factors contributing to those challenges. RQ3. What do AI practitioners perceive as the consequences of developing an unfair AI/ML system? To address RQ3, we asked AI practitioners to share their perceptions of the consequences associated with developing an unfair AI/ML system. The ques- tion went beyond inquiring about their experiences, and also seeking their overall perspective on the consequences of developing an unfair AI/ML sys- tem. RQ4. What strategies do AI practitioners use in ensuring the fairness of an AI/ML system? To address RQ4, we asked AI practitioners about their practical, day-to-day approaches derived from their experience in ensuring the fairness of the AI/ML system they develop. We used Socio-Technical Grounded Theory (STGT) for data analysis (Hoda, 2021) to analyse the qualitative data. The main contributions of this study are: – We investigated what AI practitioners understand by ‘fair AI/ML’. 
– We identified the challenges faced by AI practitioners in developing a fair AI/ML system and the factors leading to those challenges. – We identified the consequences of developing an unfair AI/ML system per- ceived by AI practitioners. – We explored the strategies used by AI practitioners to ensure the fairness of the AI/ML system they developed. – We developed a framework illustrating the relationship between AI prac- titioners’ understanding of ‘fair AI/ML’ and (i) challenges faced in devel- oping a fair AI/ML system, (ii) the consequences of developing an unfair AI/ML system, and (iii) strategies for ensuring the fairness of an AI/ML system. 2 The term ‘strategy’ in our study refers to practical, day-to-day approaches aimed at ensuring the fairness of AI/ML, rather than encompassing a broader, overarching plan or approach intended for achieving long-term goals. 5 – We formulated a set of recommendations for AI practitioners and AI com- panies to assist them in the development of fair AI/ML systems based on the empirical findings. 2 Background and Motivation 2.1 Definition and Approaches on ‘AI/ML fairness’ In recent years, the concept of ‘fairness’ in AI has gained significant atten- tion. Leading software companies such as Microsoft, Google, and IBM have either outlined principles or recommended practices to guide practitioners in developing fair AI systems. For instance, Microsoft has defined ‘fairness’ as “AI systems should treat all people fairly” (Microsoft, 2024a). Likewise, IBM emphasises the importance of minimising bias and promoting inclusive rep- resentation in AI development (IBM, 2022). Meanwhile, Google recommends concrete steps for fair AI, including setting clear goals for fairness, using rep- resentative datasets, checking systems for unfair biases, and analysing sys- tem performance (Google, 2022). In addition to companies, various countries and continents have their definitions of the term ‘fairness’ in the context of AI. For example, Australia’s AI Ethics Principles defined the ‘fairness’ prin- ciple as “AI systems should be inclusive and accessible, and should not in- volve or result in unfair discrimination against individuals, communities or groups” (Australia, 2019). Similarly, the European Commission defined ‘Di- versity, non-discrimination and fairness’ in AI as, “Unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vul- nerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life circle” (Group, 2019). Along with that, over the last 13 years, there has been extensive research on AI/ML fairness (Friedler et al., 2019), and different tools, techniques, and methods to measure and mitigate fairness issues in AI/ML systems have been developed and evaluated. Major tech companies like Microsoft, Google, and IBM have developed software tools and techniques to enhance the development of fair AI/ML systems such as AI Fairness 360 (IBM, 2024b), LinkedIn’s Fair- ness Toolkit (LiFT) (Vasudevan and Kenthapadi, 2020), and fairness checklists like Deon (DrivenData, 2024), Microsoft’s AI Fairness Checklist (Microsoft, 2024b), IBM’s AI FactSheets (IBM, 2024a). 
Furthermore, researchers have developed a variety of methods and frameworks intending to enhance the de- velopment of fair AI/ML systems, including fairness checklists (Madaio et al., 2020), frameworks (Vasudevan and Kenthapadi, 2020; D’Amour et al., 2020), and fairness evaluation and comparison toolkit (Johnson and Brun, 2022). 6 2.2 Review studies on AI/ML fairness In addition to defining the concept of AI fairness, several review studies have been conducted in the area of fairness of the AI/ML systems. For example, studies have been conducted to explore and review the definition of fairness focused on various aspects such as ML algorithmic classification (Verma and Rubin, 2018), the widely used definition in ML (Mehrabi et al., 2021; Choulde- chova and Roth, 2018) and political philosophy (Binns, 2018). Likewise, stud- ies have also been conducted to compare the historical and current perspec- tives of fairness in ML (Hutchinson and Mitchell, 2019). Studies have also focused on reviewing the challenges and methodologies related to AI fairness. For example, Chen et al. (2023) conducted a literature review of 59 articles to explore the challenges in ensuring AI fairness and the strategies to improve fairness in AI systems. Likewise, Xivuri and Twinomurinzi (2021) performed a systematic literature review (SLR) with 47 articles, examining AI algorithm fairness across research methods, practices, sectors, and locations. Their find- ings revealed a predominance of conceptual research, primarily emphasising the technical aspects of narrow AI, and highlighted a notable gap in research, specifically the lack of research on the social and human aspects of AI. Pes- sach and Shmueli (2022) conducted a review study on ML fairness focusing on exploring the causes of algorithmic bias, common definition, and measures of fairness. Caton and Haas (2020) conducted a review study to provide an overview of different approaches used to increase the fairness of ML systems. Pagano et al. (2023) conducted a systematic review to explore various aspects like datasets, fairness metrics, tools, and identification and mitigation meth- ods of mitigating bias and unfairness in ML systems. Likewise, Bacelar (2021) provided an overview of various measurement methods of bias and fairness in ML models, in their review study. Wan et al. (2023) provided a review of the currently available mitigation techniques of in-procession fairness issues in ML models. Review studies have also been conducted to address the fair- ness issues, the causes of biases in AI, and their consequences in the medical domain (Ueda et al., 2024). Likewise, Wang et al. (2023) conducted a review of 95 articles to explore similarities, and differences in the understanding of fairness, influencing factors, and potential solutions for fairness integration in medical AI. The emphasis of major tech companies and nations has largely been on working to define the concept of ‘fairness’ and develop diverse tools and tech- niques to assist AI practitioners in enhancing the development of fair AI/ML systems. Despite these efforts, our recent survey study showed that most par- ticipants reported the challenges in developing fair AI/ML systems due to their own biased nature (Pant et al., 2023). 
Given that most studies on AI/ML fairness are conceptual and focused on technical aspects, and considering the highlighted importance and need for research on the social/human aspects of AI in the literature (Xivuri and Twinomurinzi, 2021), we were interested in exploring the perceptions and experiences of AI practitioners regarding fair AI/ML systems. Investigating AI practitioners’ perceptions and experiences 7 in developing a fair AI/ML system can assist in understanding the real-world challenges associated with fair AI/ML system development. Furthermore, it can aid in devising solutions to address their practical needs and concerns in developing a fair AI/ML system. 3 Research Methodology Our study aimed to investigate the perspectives and experiences of AI prac- titioners in developing a fair AI/ML system. Figure 1 shows the overview of the research methodology of our study. 3.1 Study Design Fig. 1 Overview of the research methodology of our study We conducted a semi-structured interview-based study which commonly allows researchers to study the complexities of human behavior such as mo- tivation, communication, and understanding to obtain rich and informative results (Seaman, 1999). We conducted semi-structured interviews focusing on AI/ML fairness and the findings are divided into two primary categories. A smaller part of the findings which revolves around AI practitioners’ perspec- tives and experiences on ‘AI/ML bias’, has been accepted for publication in IEEE Software (Pant et al., 2024). The larger part (core part) of the findings related to ‘fair AI/ML’, is presented in this paper. The complete interview protocol is provided in Appendix A. Basically, we gathered AI practitioners’ insights on their understanding of ‘fair AI/ML’, the challenges they face in developing a fair AI/ML system, con- sequences of developing an unfair AI/ML system perceived by them, and the strategies they take to ensure fairness of an AI/ML system. Interview plan- ning spanned from July 2023 to October 2023. Throughout this period, tasks included defining interview objectives, refining the interview protocol through iterative processes, and prioritising crucial interview questions. Consequently, a semi-structured interview protocol with two sections was developed. 8 3.1.1 Participant Information The first section of the interview protocol was formulated to gather partic- ipants’ demographic information, including their name, email, gender, age, country of residence, and educational qualifications. Employment details such as job titles, and involvement in AI/ML system development activities were also collected. We used a pre-interview questionnaire to gather the partici- pants’ demographic information. Participants were also asked to provide de- tails of their work experience in the area of AI/ML system development, and those without experience were not included in the study. Each participant in- cluded in our study has at least some experience in the area of AI/ML system development. Using the Qualtrics platform, we created the pre-interview ques- tionnaire and advertised it as an anonymous survey link following the receipt of necessary ethics approval (Reference Number: 38991). The pre-interview questionnaire can be found in Appendix A - Section A. 
3.1.2 Understanding Participants’ Perspectives and Experiences in Developing a Fair AI/ML System The second section of the interview protocol was designed to gather insights into participants’ perspectives and experiences in the development of a fair AI/ML system. At the start of the interview, we asked participants if they were familiar with the term ‘fair AI/ML’, and if they had encountered any fairness-related cases while developing AI/ML systems. Only those who had experience with fairness-related cases in their professional work were recruited for the interview. Our focus was specifically on investigating AI practitioners’ understanding of ‘fair AI/ML’. We did not provide a predefined definition of ‘fairness’ to par- ticipants and explicitly inquired about their understanding, aiming to assess their perspectives independently. The two key reasons for this design choice include: (i) as mentioned in section 2, there is no universal definition of ‘fair- ness’ in AI—different countries and tech companies have their own definitions of ‘fairness’ and (ii) this approach aimed to evaluate participants’ natural interpretations, avoiding influence from a predetermined definition. We also aimed at identifying their challenges in developing fair AI/ML and the factors leading to those challenges, understanding the consequences of developing an unfair AI/ML system from the participants’ perspective, and exploring the strategies they employ to ensure fairness of an AI/ML system. To ensure we captured real-world experiences, we asked participants for real-world examples and experiences during the interview. The interview questions can be found in Appendix A - Section B. 3.1.3 Pilot study After designing the interview protocol, we executed a pilot study, engaging two AI practitioners—one from industry and another from academia—identified 9 through our professional networks. The purpose was to confirm the clarity and understandability of the interview questions, assess the time required to complete the study and gather feedback for enhancing the interview process. Both participants possessed expertise in AI/ML system development. Taking into account their feedback, we made slight modifications to the interview questions to enhance clarity, ultimately finalising the interview protocol. 3.2 Interview Sampling and Data Collection We used purposive sampling in our study to select the participants (Baltes and Ralph, 2022). By using this method, we were able to specifically target our desired group of participants, namely AI practitioners involved in AI/ML system development activities. We conducted data collection in two rounds. In the first round, participa- tion was voluntary. After we got the ethics approval, we advertised our study on social media platforms such as LinkedIn and Twitter, as well as within our professional networks. We specifically targeted AI practitioners engaged in AI/ML system development activities. In the first round, we received inter- est from only 3 candidates for participating in our study. So, after obtaining ethics approval, we decided to conduct a second round of data collection and introduced a reward— an AUD 50 gift card voucher— to incentivise partici- pation. The second round, advertised again on social media like LinkedIn and Twitter with mention of the reward, resulted in responses from 19 suitable can- didates, bringing the total number of participants to 22. 
Since our goal was to recruit participants with some experience in AI/ML system development, we incorporated two employment-related questions, inquiring about their years of experience in the field and their level of involvement in various job respon- sibilities. Participants received a reward of AUD 50 upon the completion of data collection. Since we advertised our study on social media, we obtained responses from various countries worldwide, as illustrated in Table 1. Since we did not favour specific countries, the responses were spread out across different regions. We obtained a majority of the responses from Australia (13), followed by the responses from other countries like Nepal (3), Israel (1), Japan (1), USA (1) etc. We present an in-depth analysis of the participants’ demographics in Section 4.1. We gathered qualitative data through semi-structured interviews with 22 AI practitioners experienced in AI/ML system development. All interviews were conducted online using Zoom and were audio-recorded. Each interview lasted between 40 and 45 minutes. 3.3 Data Analysis In our study, qualitative data was gathered via semi-structured interviews, and consequently, a qualitative approach was employed for data analysis. Socio- Technical Grounded Theory (STGT) for data analysis was used to analyse the 10 data, as it is particularly suitable for analysing open-ended data and gaining insights within socio-technical contexts (Hoda, 2021). After obtaining consent from each participant, we transcribed the data. The data collection and anal- ysis phases involved an iterative process as shown in Figure 1. Initially, we analysed the data from 13 participants using open coding approach to develop concepts and categories, involving constant comparison of diverse open-text responses (Hoda, 2021). We performed inductive open coding within the RQs. For example, to answer our RQ2 which is, What challenges do AI practitioners face in developing a fair AI/ML system and what are the factors that lead to those challenges?, initially, we gathered qualitative data from 13 participants by asking them, “Based on your professional experience, do you face any chal- lenges in developing a fair AI/ML system? (If yes), what challenges do you face? What do you think are the factors leading to those challenges?” We de- veloped codes using the open-coding approach in open-text answers as shown in Figure 2. For instance, codes like ‘access to limited data’ and ‘lack of data access’ were identified through open coding. Subsequently, we engaged in con- stant comparison of these codes to continually compare them, leading to the recognition of patterns among them. For instance, upon reviewing the codes mentioned above, we identified a common pattern related to the challenge of accessing datasets required in the development of a fair AI/ML system. We combined these two codes to develop a concept of ‘gaining access to datasets’. Fig. 2 Examples of STGT analysis (Hoda, 2021) applied to qualitative data on the chal- lenges in developing a fair AI/ML system. Using the same constant comparison approach for other codes, we derived concepts such as ‘balancing ideal vs real’, ‘handling data-related issues’, and ‘following policies and regulations’. We again constantly compared these con- cepts with one another and developed distinct categories. In this context, these four concepts shared a challenge associated with the process of developing a fair AI/ML system, leading us to establish a category known as ‘process-related challenges’. 
Building on the primary findings from the initial analysis, we collected data from the remaining 9 participants, focusing on key insights from the first round. This data was analysed using targeted coding, which involves generating codes that align with the concepts and categories identified in the initial stage, following STGT guidelines (Hoda, 2021). All four authors were involved in designing the interview questionnaire. The first author led the data analysis, with detailed feedback from the second author and regular feedback from the third and fourth authors. After the qualitative data were analysed, the results, including codes, concepts, and categories, were shared and discussed among all authors, who collectively contributed to presenting the findings.

The STGT for data analysis encompasses the steps of open coding, targeted coding, constant comparison, and memoing. “Basic memoing is the process of documenting the researcher’s thoughts, ideas, and reflections on emerging concepts and (sub)categories and evidence-based conjectures on possible links between them” (Hoda, 2021). Consequently, we wrote memos to record significant insights and reflections discovered during the open coding and targeted coding activities. An illustration of a memo created for AI practitioners’ description of ‘fair AI/ML’, specifically ‘in terms of the absence of bias’ and ‘in terms of the presence of desirable attributes’, is provided in Figure 3. The discussion of the key insights derived from memoing is presented in Section 5.5.

Fig. 3 An example of a memo on AI practitioners’ understanding of ‘fair AI/ML’

4 Findings

4.1 Participants’ Demographics

We present the demographic information of the participants in this section. Table 1 presents an overview of the participants’ demographics based on their age, gender, country, education, work experience in AI/ML system development activities, and job title. We used identifiers such as P1, P2, P3, and so forth to represent the participants in our study.

Table 1 Demographics of the Interview Participants

P Id | Age Range (years) | Gender | Country   | Education       | AI/ML Exp. (years) | Job Title
P1   | 20-25             | Man    | Thailand  | Ph.D. or higher | 1-2                | AI Engineer
P2   | 20-25             | Man    | Nepal     | Bachelor        | 0-1                | AI Engineer
P3   | 31-35             | Woman  | Australia | Bachelor        | 2-5                | AI Research Scientist
P4   | 26-30             | Man    | India     | Bachelor        | 5+                 | Data Scientist
P5   | 26-30             | Man    | Australia | Master          | 5+                 | AI Engineer
P6   | 26-30             | Man    | Australia | Master          | 1-2                | Data Scientist
P7   | 31-35             | Woman  | Australia | Master          | 1-2                | AI Engineer
P8   | 26-30             | Man    | Nepal     | Master          | 1-2                | ML Engineer
P9   | 31-35             | Man    | Australia | Ph.D. or higher | 5+                 | Data Scientist
P10  | 46-50             | Man    | Australia | Ph.D. or higher | 2-5                | ML Engineer
P11  | 26-30             | Man    | Australia | Master          | 1-2                | ML Engineer
P12  | 20-25             | Woman  | Australia | Master          | 1-2                | ML Engineer
P13  | 26-30             | Man    | Australia | Master          | 2-5                | ML Engineer
P14  | 31-35             | Man    | Australia | Ph.D. or higher | 2-5                | Data Scientist
P15  | 46-50             | Man    | Japan     | Master          | 2-5                | AI Engineer
P16  | 31-35             | Man    | Australia | Master          | 2-5                | ML Engineer
P17  | 26-30             | Man    | Australia | Bachelor        | 1-2                | ML Engineer
P18  | 31-35             | Woman  | Australia | Master          | 0-1                | ML Engineer
P19  | 31-35             | Man    | Vietnam   | Master          | 2-5                | AI Engineer
P20  | 31-35             | Man    | Israel    | Master          | 5+                 | ML Expert
P21  | 20-25             | Man    | Nepal     | Bachelor        | 2-5                | ML Engineer
P22  | 26-30             | Woman  | USA       | Master          | 2-5                | Data Scientist

A total of 22 AI practitioners took part in our study, including 17 men and 5 women. The largest age groups were 26-30 years and 31-35 years (8 participants each), while only 2 participants belonged to the 46-50 years age group. In terms of experience, the majority (13 participants) had more than 2 years of experience, whereas 9 participants had up to 2 years of experience in AI/ML system development. The geographical distribution indicated that the majority were from Australia (13 participants), with 3 participants from Nepal and 1 each from India, Japan, the USA, Israel, Thailand, and Vietnam. Similarly, we inquired about the participants’ job titles or roles within their companies. The majority of participants held the title of ‘ML Engineer’ (9 out of 22), followed by ‘AI Engineer’ (6 out of 22), ‘Data Scientist’ (5 out of 22), and one participant each for ‘AI Research Scientist’ and ‘ML Expert’. As our target interview participants were practitioners involved in AI/ML system development activities, we wanted to know the major AI/ML system development-related activities they were involved in. Among the 22 participants, the majority engaged in ‘Data cleaning’ (19 participants), followed by ‘Model requirements’, ‘Data collection’, ‘Model training’, ‘Model evaluation’, and ‘Model deployment’ activities, each involving 17 of the 22 participants. 5 out of 22 participants chose the ‘Other’ option and elaborated on activities they engaged in through open-ended answers; these activities were not initially listed in the pre-interview questionnaire. Some of the mentioned activities included ‘system design’, ‘data pipelines’, ‘business benefit monitoring and reporting’, ‘model integration developed by the research team into the pipelines’, and ‘pipeline deployment’.

4.2 RQ 1 - What do AI practitioners understand by ‘fair AI/ML’?

Based on the responses, we grouped the participants’ understanding of ‘fair AI/ML’ into two categories: (i) in terms of absence of bias and (ii) in terms of presence of desirable attributes, which are explained in detail below. Figure 4 shows the overview of the participants’ understanding of ‘fair AI/ML’.

Fig. 4 Overview of the participants’ understanding of ‘fair AI/ML’

4.2.1 In terms of absence of bias

When a person is tasked with comprehending the concept of ‘fairness’, it is quite probable that their understanding will revolve around the absence of biases, as defined by tech companies such as Google (Google, 2022) and IBM (IBM, 2022), and countries like Australia (Australia, 2019). When the participants were asked to share their understanding of ‘fair AI/ML’, [P2, P9, P10, P11, P13, P14, P15, P18, P19, and P20] described ‘fair AI/ML’ in terms of the absence of bias in the AI/ML system.
For example, participants [P9] and [P11] said:

“A fair model is a model which is not skewed and not biased.” - [P9]

“So in my opinion, I guess like a fair model should be something which decreases the bias, as I mean, there should be very less bias.” - [P11]

4.2.2 In terms of the presence of desirable attributes

[P1, P3, P4, P5, P6, P7, P8, P12, P14, P16, P17, P21 and P22] described ‘fair AI/ML’ in terms of its features or attributes. The participants framed it as the necessary elements an AI/ML system must possess to be considered fair. For example, the participants said that the AI/ML system should be reproducible [P3], transparent and explainable [P4, P12, P17, P21], interpretable [P21], and accurate [P1, P7, P8, P16, P22]. Some participants also mentioned that a fair AI/ML system should use a good amount of data [P6] and have proper algorithms [P14]. For example, [P3], [P4], [P7], and [P14] said,

“But to me fairness is more about whether or not it is reproducible. It’s something that can be tested and checked and improved from that again.” - [P3]

“There should be transparency in any fair model that you build, right? So it should explain why it throws a certain outcome.” - [P4]

“Fair model my understanding that could work with different data. It should still like give proper accurate results. There shouldn’t be a huge difference between the seen data and unseen data.” - [P7]

“A fair model should have proper algorithms which support you to treat your groups fairly and maybe post-processing stage where you when you’re applying business logic.” - [P14]

4.3 RQ 2 - What challenges do AI practitioners face in developing a fair AI/ML system and what are the factors that lead to those challenges?

Fig. 5 Overview of the participants’ challenges in developing a fair AI/ML system

We also asked the participants, through an open-ended question, about the challenges they face in developing a fair AI/ML system. Based on their responses, we categorised the challenges into three categories: (i) Process-related challenges, (ii) Resource-related challenges, and (iii) Team-related challenges. Each category is underpinned by multiple concepts and codes, which are explained in detail below. Figure 5 shows the overview of the challenges faced by AI practitioners in developing a fair AI/ML system.

After asking the participants about the challenges they encountered in developing a fair AI/ML system, we delved further to understand the factors leading to those challenges. Gaining insights into these factors could contribute to devising more effective strategies to assist AI practitioners in overcoming the challenges. Additionally, we have highlighted the factors leading to each challenge within the quotes of the participants.

4.3.1 Process-related challenges

The participants shared the challenges they encountered in developing fair AI/ML systems, specifically about the process of developing such systems. Here, the term ‘process-related challenges’ refers to the challenges faced during the development phase of AI/ML systems. The participants reported four key challenges (concepts) under this category: (i) Gaining access to datasets, (ii) Balancing ideal vs real, (iii) Handling data-related issues, and (iv) Following policies and regulations. Each of these concepts is underpinned by multiple codes, which are discussed below.
Gaining access to datasets

AI practitioners involved in the development of AI/ML systems might face limitations in accessing vital resources for their work. Factors like adherence to company rules and regulations can impede their access to necessary resources, leading to challenges that, in turn, may contribute to the development of an unfair AI/ML system. In our study, [P4, P8, P12, and P14] reported the challenge of gaining access to the datasets they require to train an AI/ML model. The factors that led to this challenge include the size of the organisation and the data confidentiality policy. For example, participants [P4] and [P8] said,

“Like as a data scientist, I would have access to a very certain amount of data which I can pick. For example, some of the data would be from the external side or something you would not have access to that team data. Sometimes when you work due to data confidentiality policy, you will not have access to most of your data points.” - [P4]

“Data is the one thing which models are built on but they are not available for public access, right? Like the open AI is things.. that language models are trained on the data sets, but data that are not available for our cases (small companies).” - [P8]

Balancing ideal vs real

#Real-world data vs training data: The desire to develop a perfect AI/ML system is different from the ability to develop it. Training an AI/ML model with an extensive array of real-world data can be impractical. AI practitioners must rely on initial training data, and the system subsequently interacts with real-world data after deployment. Consequently, achieving fairness in AI/ML systems requires AI practitioners to navigate a balance between the training data and real-world data. This equilibrium can prove challenging at times due to a variety of limiting factors. In our study, [P8, P9, P10, P12, P15, P16, and P19] reported the challenge of striking a balance between the real-world data and the training data that they use in the development of AI/ML systems. The factors that led to this challenge include the gaps that exist between real-world data and training data, and negligence on the part of AI practitioners. For example, participants [P10] and [P15] said,

“Because the collected data is only a subset of the data in real world. So, the distribution of collected data is not identical to the real-world distribution, even if we do some data augmentation such as oversampling or other generative techniques, we cannot ensure the data distribution of augmented data is identical to the distribution in the real world. So we can just assume it is approximately identical, but they are not perfectly, identical. So it can be still a huge, huge challenge.” - [P10]

“Originally, in the project we made, we were trying to do some stuff on human photos. And so we had to augment our data, but the way that the data was created was not correct, it didn’t actually match the real data, like real photos. And so the AI actually learned to tell the difference between the synthetic data and the real data and could tell the difference between a fake photo and a real photo. And so that was like a kind of an eye opener for us that there was a bias from introduced from the synthetic data.” - [P15]

#Fairness requirements vs technical constraints: Developing an AI/ML system may appear straightforward, but it is a highly intricate undertaking.
AI practitioners may face considerable challenges when trying to balance their envisioned ideal AI/ML system with the constraints of the real world. In our study, [P4, P11, P14, P16, P17, and P19] mentioned the challenge they face in maintaining a balance between their requirements for developing a fair AI/ML system and the technical constraints they encounter. They discussed factors such as the complex nature of AI and lack of time as contributing to this challenge. For example, [P16] and [P19] said:

“And actually, at first, we do not talk about the machine learning bias. At first, we talk about the production because we are practitioners and it is more important, I mean, the working version is the most important.” - [P16]

“So as I mentioned, we tend to have like the model that works first, then we’ll look at the virus later. In the industry here, every project has a deadline and lifetime. So if we don’t launch the products, the stakeholders might not be happy and then they could find some other people who can do that.” - [P19]

Handling data-related issues

#Detecting data bias: One of the main aspects of developing any AI/ML system is data, which presents a significant challenge for AI practitioners in addressing data-related issues during the development of a fair AI/ML system. Effectively managing biases in the data necessitates the initial step of detection, and corrective measures can only be taken once the biases are identified. The detection of data bias may pose a formidable challenge for AI practitioners, influenced by various factors. In our study, [P3, P6, P9, P10, P13, P14, and P15] reported the challenge of detecting data bias during the development of a fair AI/ML system. This challenge was led by factors like the nature of data bias, lack of time, and lack of tools/techniques. For example, [P9] and [P13] said:

“You’re working on a limited timeline project. So your priority is to have a working model. And sometimes you might not be able to discover such data biases.” - [P9]

“If we have data bias checking tool to detect biases automatically, it would be great.” - [P13]

#Addressing data bias: Simply identifying data bias is not sufficient for developing a fair AI/ML system; it is crucial to actively address and rectify these biases. Dealing with identified bias in the data becomes the subsequent step, and it can be challenging for AI practitioners due to several factors. In our study, [P9, P11, P14, and P15] discussed that they find it challenging to address (mitigate and/or remove) biases from the data during the development of AI/ML systems. The factors leading to this challenge include a lack of tools/techniques and the biased nature of team members. For example, [P11] said,

“If there was some kind of tool which can let the person who is training the model know, maybe you need to remove this data, or else maybe you need to do these kinds of operations on your data, or maybe you need to do something, obviously, anyone wants that kind of tool as removing bias from the data is hard.” - [P11]

Following policies and regulations

AI practitioners are required to adhere to various policies and regulations, which serve as guiding principles in developing AI/ML systems. Nevertheless, if these policies and regulations fall short, they can pose challenges to AI practitioners and can impede the development of a fair AI/ML system.
Likewise, in our study, [P4, P5, P7, P9, and P12] expressed challenges in adhering to policies and regulations concerning AI ethics while working on the development of AI/ML systems. This challenge arose due to factors like the lack of policies and the lack of implementation of existing policies. For example, [P5] and [P9] said:

“I don’t think Australia has any updated AI ethics policies and stuff, maybe, they need to update the policies based on how to follow the current trend and to follow the current technologies, something like that.” - [P5]

“The European Alliance introduced a responsible AI framework but not sure if Australia has adapted such things in organisations. I know that companies like Facebook, are currently adapting. When it comes to small companies, small organisations, I’m not quite sure.” - [P9]

4.3.2 Resource-related challenges

The second category of challenges primarily concerns the resources used in the process of developing a fair AI/ML system. The participants reported two key challenges (concepts) under this category: (i) Obtaining required datasets and (ii) Obtaining other resources. Each of these concepts is underpinned by multiple codes, which are discussed below.

Obtaining required datasets

As previously mentioned, data plays a crucial role in the development of AI/ML systems. In our study, the majority of the participants [P1, P3, P4, P5, P8, P9, P11, P13, P14, P15, P18, P20, P21, and P22] pointed out that obtaining the required datasets to develop a fair AI/ML system is challenging. The participants noted that they often acquire imperfect (non-representative) or incomplete datasets for developing AI/ML systems, presenting a challenge in ensuring the fairness of such systems. The factors leading to this challenge include a lack of representative datasets, lack of cost, lack of tools/techniques, lack of control over data collection, and negligence of AI practitioners. For example, [P3], [P11] and [P20] said:

“I think bias could happen since the first step because sometimes we have other teams to collect the data, yes, we don’t know what they actually provide to us.” - [P3]

“It has got something to do with the data, but then from the very beginning, like black people are discriminated in the world. So, I feel like the reason might be because, in the world, we have more white people images than black people images. So the data itself is less and it is a proven thing.” - [P11]

“Should I buy this data? Or not? It’s money. In most cases, you don’t use data for free. You know, sometimes you need to actually pay for this in some way.” - [P20]

Obtaining other resources

#Technological requirements: In addition to datasets, AI practitioners have diverse technological needs, including high-quality hardware, to support the development of a fair AI/ML system. Technology plays a vital role in assisting AI practitioners in the development process. However, acquiring the necessary technology can present a challenge, potentially impeding the development of a fair AI/ML system. Only [P8, P11, and P12] indicated that they encounter challenges in developing fair AI/ML systems because they lack the necessary technology for the development process. The participants discussed the lack of cost as a factor leading to this challenge. For example, [P11] said:

“And resources because some huge models require huge GPUs. So in our company, we do not have GPUs because they are expensive.
So yeah, lack of such resources of course is a challenge.” - [P11]

#Human-related requirements: Developing any software is a collaborative effort, involving multiple teams within a company dedicated to specific tasks. Collaborating with various team members offers advantages, such as diverse assistance in different aspects. Nonetheless, not all companies may incorporate multiple members in their development teams, posing a challenge for AI practitioners striving to develop a fair AI/ML system. In our study, [P12 and P15] reported that not having multiple people on the team is a challenge for them in developing a fair AI/ML system, and the lack of cost is the factor contributing to this challenge. For example, [P15] said:

“So even an individual (AI practitioner) can introduce bias into their model. And what we try to do is have the evaluation code be performed by a different person than the trainer. But, usually, companies can’t really afford to do that which is challenging.” - [P15]

4.3.3 Team-related challenges

The majority of participants also encountered challenges pertaining to their knowledge and understanding of different aspects, which we have classified as team-related challenges in developing a fair AI/ML system. Here, the term ‘team-related challenges’ refers to the challenges AI practitioners encounter as a result of their own limitations or shortcomings. The participants discussed two key challenges under this category: (i) Having knowledge of bias/fairness and (ii) Having knowledge of AI. These two key concepts are underpinned by multiple codes, which are explained in detail below.

Having knowledge of bias/fairness

It is important that AI practitioners possess good knowledge and understanding of key concepts such as ‘bias’ and ‘fairness’ when aiming to develop a fair AI/ML system. However, AI practitioners might face difficulties in grasping these concepts due to reasons such as their subjective nature. In our study, [P1, P2, P4, P5, P7, P9, P10, P14, P15, and P19] reported that difficulty in understanding the concept of ‘bias’ or ‘fairness’ poses a challenge in developing a fair AI/ML system. The factors the participants discussed as contributing to this challenge include a lack of domain knowledge, the lack of a common approach among AI practitioners, and a lack of awareness. For example, [P4], [P9] and [P19] said,

“And again, now all humans’ thoughts and the way they approach a problem would be different. So that is one more reason for not understanding the bias problems in the systems we develop.” - [P4]

“Most of our data is from sensors. So we might not have like a very clear view of biases like the people who deal with NLP and stuff.” - [P9]

“It’s hard to understand fairness. I think I’m not intellectual enough to be in a position to define fairness. My definition of fairness could be different from other people’s point of view.” - [P19]

Having knowledge of AI

The rapid growth of AI is making it increasingly challenging for everyone to keep up with its advancements and understand its outcomes (Pant et al., 2023). AI practitioners might face such challenges, which can negatively impact the development of a fair AI/ML system. In our study, only [P11 and P15] reported that understanding AI outcomes is challenging due to its complex nature, negatively affecting the development of fair systems.
For example, [P15] said,

“And then AI models, sometimes you don’t know what it is actually deciding on and what it is actually measuring as it is too complex. So we had other cases where the AI kind of learns funny things that you don’t anticipate.” - [P15]

4.4 RQ 3 - What do AI practitioners perceive as the consequences of developing an unfair AI/ML system?

Fig. 6 Overview of the consequences of developing an unfair AI/ML system

We posed an open-ended query to the participants regarding the consequences of developing an unfair AI/ML system. The participants described three distinct categories of negative consequences: (i) Impact on organisations, (ii) Impact on users, and (iii) Impact on practitioners. Each of these three categories is underpinned by multiple concepts and codes, explained in detail below. Figure 6 shows an overview of the consequences of developing an unfair AI/ML system.

4.4.1 Impact on organisations

The majority of participants discussed the adverse impacts on organisations resulting from the failure to develop a fair AI/ML system. The participants delved into two key facets (concepts) of the impacts on organisations, namely: (i) Financial losses and (ii) Reputational repercussions.

Financial losses

Developing AI/ML systems is a complex process that demands various resources like time and money (Pant et al., 2023). If the systems turn out to be unfair or fail to meet goals, it not only affects the project but also leads to financial setbacks for the organisation. Many participants [P1, P4, P6, P8, P9, P11, P12, P14, P17, P19, and P21] in our study reported that the development of an unfair AI/ML system leads to financial losses for organisations. For example, participants [P4] and [P6] said:

“Because the one that you’re going to deploy would definitely have an impact on your business and in such cases, any small bias can lead to a huge financial loss.” - [P4]

“Yeah, it also constitutes money loss to the organisation.” - [P6]

Reputational repercussions

In today’s tech-driven era, organisations are in fierce competition to enhance their software systems (Hua and Belfield, 2020). The constant race for improvement means even a minor flaw can tarnish an organisation’s reputation. Developing an unfair AI/ML system poses a significant risk, as it can lead to severe reputational repercussions for organisations in this highly competitive landscape. In our study, only [P5, P7, and P9] provided insights into the consequences for an organisation’s reputation when an unfair AI/ML system is developed. For example, participants [P5] and [P9] said:

“If they are collecting the data, then there shouldn’t be, any bias in the data. If bias is there, different kinds of controversies will rise in, and legal issues will arise.” - [P5]

“But to give you another perspective, like Twitter, such models, are exposed to a large number of people and a large number of datasets and are heavily used worldwide. Such organisations need to ensure they do not have such biases in their datasets, or in finally at their models, because then it would create controversies, and then it could finally tarnish the image of these organisations as well.” - [P9]

4.4.2 Impact on users

Many participants mentioned the negative impacts on users due to the development of an unfair AI/ML system. The participants discussed two key aspects (concepts) of the impacts on users: (i) Obtaining flawed product and (ii) Emotional distress and discrimination.
Obtaining flawed product

The primary objective of developing any AI/ML system is to aid users in various domains, be it healthcare, technology, education, etc. When AI/ML systems are developed unfairly, users receive flawed products, which undermines their core purpose. In our study, [P1, P2, P3, P8, P9, P10, P11, P15, P16, and P18] emphasised that the primary detriment to users resulting from the development of an unfair AI/ML system is the receipt of defective products, leading to inaccurate predictions. For example, [P1] and [P2] said:

“It is important to create a fair ML model because the main reason is that if the ML model is biased, users won’t be able to achieve our end goal. If the model is biased, it won’t give the result that it is required to give.” - [P1]

“Users will be getting a great product from the company if it is fair, otherwise not.” - [P2]

Emotional distress and discrimination

When an unfair AI/ML system is developed, the users of that system are most likely impacted negatively (Martin, 2018; Prates et al., 2020). The primary impact could manifest as emotional distress and discrimination towards these users. Only [P1, P7, P20, and P22] highlighted that users experience emotional distress and discrimination when using unfair AI/ML systems, leading to hurt sentiments. For example, participant [P1] said:

“The second reason is that the sentiments of the people can be hurt if the model is biased.” - [P1]

4.4.3 Impact on practitioners

Very few participants mentioned the negative impacts of developing an unfair AI/ML system on the practitioners responsible for its development. We classified these negative consequences into a specific aspect: (i) No professional empowerment.

No professional empowerment

AI practitioners gain valuable insights through hands-on experience in developing AI/ML systems, complementing their theoretical knowledge. The development of unfair systems, however, could have a negative impact on these practitioners, affecting their learning experiences in the field. Only [P11 and P13] highlighted that the development of unfair AI/ML systems hinders the professional empowerment of AI practitioners, causing a decline in confidence and knowledge. For example, participants [P11] and [P13] said,

“When they develop an unfair model, they will not learn something new from that.. like, in terms of data augmentation, or terms of algorithmic change, or terms of data collection.” - [P11]

“Yeah, if the research team or the model training team make the unfair model and we lack the confidence to use the model directly in our product.” - [P13]

4.5 RQ 4 - What strategies do AI practitioners use in ensuring the fairness of an AI/ML system?

We asked the participants about the strategies that they use to ensure the fairness of AI/ML systems. The participants discussed two categories of strategies: (i) Bias-related strategies and (ii) Performance-related strategies. Each of these two categories is underpinned by multiple concepts and codes, explained in detail below. Figure 7 shows the overview of AI practitioners’ strategies to ensure the fairness of the AI/ML systems they develop.

Fig. 7 Overview of the participants’ strategies in ensuring the fairness of an AI/ML system

4.5.1 Bias-related strategies

The majority of the participants discussed the strategies they used to address bias-related issues when developing a fair AI/ML system.
The participants reported two key strategies (concepts) they used to address bias-related issues: (i) Detecting bias and (ii) Mitigating bias. Each of these concepts is discussed in detail below.

Detecting bias

Addressing any concern starts with its detection; without identification, mitigation is impossible. In our study also, participants outlined their strategies for ensuring the fairness of the AI/ML systems they developed, emphasising the initial step of detecting bias. [P1, P3, P4, P5, P6, P7, P8, P9, P11, and P18] mentioned that they rely on testing the system with test datasets as their strategy for identifying biases in the system. For example, participants [P1] and [P8] said:

“So in the beginning, we categorise our whole dataset as train, test, and validation dataset and use the testing data to test the model. The testing phase is mandatory, otherwise, we won’t know if the model we created has biases.” - [P1]

“After building the model, we use some test case scenarios and we have been doing this post process like how well the model is doing in the test data set to find out any biases.” - [P8]

Mitigating bias

#By balancing datasets: Data holds significance in the development of AI/ML systems. Using well-balanced datasets for training is important to ensure the system generates unbiased predictions. Likewise, in our study, the majority of the participants [P1, P3, P5, P7, P8, P9, P10, P11, P13, P14, P15, P16, P18, P19, P20, P21, and P22] mentioned using data augmentation techniques to balance training datasets during the development phase to mitigate biases in their systems. For example, participants [P14] and [P22] said:

“I try to identify whether there are any segments where these metrics are up or down, and then I would go back to the data and see if it is because we don’t have enough data for those regions. And then if that is the case, I’ll try to maybe augment the data.” - [P14]

“I tried to increase for those that did not have enough by strategically doing some kind of artificial, like stretching the existing data or compressing data because I was working with audio data.” - [P22]
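As one concrete instance of the balancing strategy participants described, the sketch below applies random oversampling so that an under-represented group contributes as many training rows as the majority group. It is a minimal sketch assuming scikit-learn is available; the feature matrix, labels, and group sizes are synthetic placeholders, and participants’ actual augmentation techniques (e.g., for audio data) would differ.

```python
# Minimal sketch of balancing a training set by oversampling the
# under-represented group (synthetic data; illustrative only).
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))               # placeholder feature matrix
y = rng.integers(0, 2, size=1000)            # placeholder labels
group = np.array(["A"] * 900 + ["B"] * 100)  # imbalanced sensitive attribute

maj, mino = group == "A", group == "B"
# Resample group B's rows (with replacement) up to group A's size,
# keeping features and labels paired.
X_up, y_up = resample(X[mino], y[mino], replace=True,
                      n_samples=int(maj.sum()), random_state=0)

X_bal = np.vstack([X[maj], X_up])
y_bal = np.concatenate([y[maj], y_up])
print(X_bal.shape, y_bal.shape)  # (1800, 5) (1800,): 900 rows per group
```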
#By involving multiple people: Collaborating in a team with multiple members offers numerous advantages in software development, fostering the exchange of knowledge and facilitating mutual learning (Augustin et al., 2002). This collaborative dynamic can also prove particularly beneficial in the context of developing AI/ML systems. In our study, [P2, P7, and P12] mentioned that they get input from multiple people, and this collaborative approach aids them in mitigating data biases within the systems they develop. For example, [P7] said:

“I don’t have any medical background. Sometimes, some factors or features I never thought about could cause bias. So we have other advisors, like from another university, doctors, and professors. So yeah, we have that on the system to ask them (domain experts) if we have any doubts.” - [P7]

#By focusing on practices: In addition to securing the necessary resources, adhering to best practices can be important in the development of an AI/ML system. In our study, [P1, P2, P4, P5, and P12] reported that focusing on the practices of developing a fair AI/ML system is the strategy they take to mitigate biases in the system. For example, [P2] and [P4] quoted:

“I just become conscious about unconscious biases during the development process.” - [P2]

“I would also believe in the feedback mechanism out there, not just seeing your results on the test set and then going and deploying it, but rather enabling a feedback mechanism. And whenever the system goes a little off in terms of prediction, immediately, the feedback loop is getting connected there. So that is one way that I generally rectify my bias.” - [P4]

4.5.2 Performance-related strategies

A few participants elaborated on strategies concerning the performance of the AI/ML systems they developed to ensure fairness. Within this category, participants deliberated on a specific strategy (concept), namely, (i) Detecting inaccuracy, which is explained below.

Detecting inaccuracy

In our study, [P5, P7, P8, P9, P10, P15, P17, and P21] mentioned that they detected the inaccuracy of the AI/ML system by using evaluation metrics. This helps them gauge the system’s performance and ensure its fairness. For example, [P8] said:

“Fair model should be 100% accurate. 100% accuracy is good for our scenario, but we also have other case scenarios like the loss and other things to notice in the evaluation metrics. When we have to check what is the loss of the models we have to go for the minimum loss.” - [P8]
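To illustrate the evaluation-metric strategy above, the sketch below computes a single metric (accuracy) overall and disaggregated by a sensitive attribute, so that unevenly distributed errors become visible. Everything here is synthetic and illustrative; it is not drawn from any participant’s system, and the group labels are invented for the example.

```python
# Minimal sketch of checking a model's accuracy per group so that an
# uneven error distribution (one signal of unfairness) becomes visible.
# All predictions and group labels are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)
y_pred = y_true.copy()
y_pred[rng.random(200) < 0.15] ^= 1        # corrupt ~15% of predictions
group = rng.choice(["A", "B"], size=200)   # hypothetical sensitive attribute

print("overall:", accuracy_score(y_true, y_pred))
for g in ("A", "B"):
    mask = group == g
    print(f"group {g}:", accuracy_score(y_true[mask], y_pred[mask]))
# A persistent gap between the per-group scores would warrant a closer
# look at the data and errors for the disadvantaged group.
```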
4.6 Summary of Key Findings

This study focuses on exploring AI practitioners’ understanding of ‘fair AI/ML’, exploring the challenges they encounter during the development of a fair AI/ML system, understanding the consequences of developing an unfair AI/ML system as perceived by them, and investigating the strategies they employ to ensure fairness in the AI/ML systems they develop. Table 2 shows the summary of the key findings of our study.

Table 2 Key Findings (KF) of the study

KF1 (Section 4.2): AI practitioners’ understanding of ‘fair AI/ML’: (i) in terms of the absence of bias; (ii) in terms of the presence of desirable attributes (transparency, accuracy, interpretability, etc.).
KF2 (Section 4.3): AI practitioners’ challenges in developing a fair AI/ML system: (i) process-related challenges: gaining access to datasets, balancing ideal vs real, handling data-related issues, and following policies and regulations; (ii) resource-related challenges: obtaining required datasets and obtaining other resources; (iii) team-related challenges: having knowledge of bias/fairness and having knowledge of AI.
KF3 (Section 4.4): Consequences of developing an unfair AI/ML system perceived by AI practitioners: (i) impact on organisations: financial losses and reputational repercussions; (ii) impact on users: obtaining flawed product and emotional distress and discrimination; (iii) impact on practitioners: no professional empowerment.
KF4 (Section 4.5): AI practitioners’ strategies to ensure the fairness of an AI/ML system: (i) bias-related strategies: detecting bias and mitigating bias; (ii) performance-related strategies: detecting inaccuracy.
KF5 (Section 4.7.1): Despite differing understandings of ‘fair AI/ML’ among practitioners, a common challenge they report facing is obtaining the required datasets for model training during AI/ML system development.
KF6 (Section 4.7.2): Some AI practitioners describing ‘fair AI/ML’ in terms of the absence of bias seemed to have a broader understanding of the negative consequences on practitioners of developing unfair AI/ML systems, compared to those emphasising the presence of desirable attributes.
KF7 (Section 4.7.3): Despite differing understandings of ‘fair AI/ML’ among practitioners, a common strategy they report using to ensure fairness in AI/ML systems is implementing bias-mitigation strategies.

4.7 Framework showing the relationship between the aspects: understanding, challenges, consequences and strategies

In this section, we discuss the relationship between AI practitioners’ understanding of ‘fair AI/ML’ and three other aspects: (i) the challenges encountered in the development of a fair AI/ML system, (ii) the consequences of developing an unfair AI/ML system, and (iii) the strategies used to ensure the fairness of an AI/ML system. Every participant [P1 to P22] in our study described ‘fair AI/ML’ either in terms of the absence of bias or in terms of the presence of desirable attributes in AI/ML systems. The exception was participant [P14], who described it both in terms of the absence of bias and in terms of the presence of desirable attributes. To illustrate the relationship between these aspects, we developed a framework, which is shown in Figure 8.

Fig. 8 A framework showing the relationship between AI practitioners’ understanding of ‘fair AI/ML’ and three aspects: (i) their challenges in its development, (ii) consequences of developing an unfair AI/ML system perceived by them, and (iii) their strategies in ensuring fairness of AI/ML systems

4.7.1 Relationship between AI practitioners’ understanding and their challenges

AI practitioners’ understanding – in terms of the absence of bias vs challenges

The interviews indicate that the participants who described ‘fair AI/ML’ in terms of the absence of bias in an AI/ML system reported challenges in all three categories: (i) process-related, (ii) resource-related, and (iii) team-related challenges that they face during AI/ML system development, as shown in Figure 8. Specifically, the majority of these participants reported facing challenges in obtaining required datasets (resource-related challenge), followed by the challenge of handling data-related issues (process-related challenge). AI practitioners who described ‘fair AI/ML’ in terms of the absence of bias thus also reported facing substantial challenges related to the datasets used in development; these challenges primarily revolve around resource availability, specifically in obtaining the necessary datasets and handling data-related issues.

AI practitioners’ understanding – in terms of the presence of desirable attributes vs challenges

The interviews show that the participants who described ‘fair AI/ML’ in terms of the presence of desirable attributes reported almost all the challenges across all three categories: (i) process-related, (ii) resource-related, and (iii) team-related challenges that they encounter during the development of an AI/ML system, as illustrated in Figure 8. Notably, none of these participants reported the challenge of lacking knowledge of AI (team-related challenge), whereas most reported facing the challenge of obtaining required datasets (resource-related challenge) during AI/ML system development. In summary, regardless of how AI practitioners described ‘fair AI/ML’, a common challenge they faced was obtaining the required datasets to train a model during AI/ML system development.
4.7.2 Relationship between AI practitioners’ understanding and the consequences

AI practitioners’ understanding – in terms of the absence of bias vs consequences

The interviews indicate that the participants who described ‘fair AI/ML’ in terms of the absence of bias in AI/ML systems perceived the negative consequences of developing an unfair AI/ML system across all three categories: (i) impact on organisations, (ii) impact on users, and (iii) impact on practitioners, as shown in Figure 8. Notably, the majority of these participants perceived the acquisition of flawed products by users as a negative consequence of developing an unfair AI/ML system. However, the majority did not explicitly mention emotional distress and discrimination for users, or reputational repercussions for organisations, as significant negative consequences; these two specific concerns were each expressed by only one participant.

AI practitioners’ understanding – in terms of the presence of desirable attributes vs consequences

The interviews show that the participants who described ‘fair AI/ML’ in terms of the presence of desirable attributes perceived the negative consequences of developing an unfair AI/ML system across only two categories: (i) impact on organisations and (ii) impact on users. According to the data, most of these participants discussed the financial loss to organisations as a negative consequence of developing an unfair AI/ML system, as shown in Figure 8. Notably, none of these participants mentioned any negative consequences of developing an unfair AI/ML system for the practitioners engaged in their development. In summary, among those who described ‘fair AI/ML’ in terms of the absence of bias, two discussed the negative consequence of developing an unfair AI/ML system for the practitioners involved in the development, while those who described it in terms of the presence of desirable attributes did not discuss such consequences at all. The former group was able to acknowledge this impact and thus appears to have a broader understanding of the negative consequences associated with developing unfair AI/ML systems.

4.7.3 Relationship between AI practitioners’ understanding and their strategies

AI practitioners’ understanding – in terms of the absence of bias vs strategies

According to the interviews, the participants who described ‘fair AI/ML’ in terms of the absence of bias in AI/ML systems discussed strategies falling into both categories: (i) bias-related strategies and (ii) performance-related strategies. Most of these participants discussed strategies to mitigate bias (bias-related strategies) in AI/ML systems to ensure fairness. In contrast, only a small number reported the strategies of detecting bias (bias-related strategies) and detecting inaccuracy (performance-related strategies) to ensure the fairness of AI/ML systems.

AI practitioners’ understanding – in terms of the presence of desirable attributes vs strategies

The interviews show that the participants who described ‘fair AI/ML’ in terms of the presence of desirable attributes discussed strategies falling into both categories: (i) bias-related strategies and (ii) performance-related strategies.
A slightly higher number of these participants discussed the strategy of mitigating bias (bias-related strategies) in AI/ML systems to ensure fairness, compared to other strategies like detecting bias (bias-related strategies) and detecting inaccuracy (performance-related strategies). An almost equal number of participants discussed the strategies of detecting bias (bias-related strategies) and detecting inaccuracy (performance-related strategies) to ensure AI/ML system fairness. In summary, the interviews show a consistent trend among participants discussing the strategies that they used to ensure the fairness of AI/ML systems, regardless of how they described ‘fair AI/ML’. The majority of the participants, whether they described ‘fair AI/ML’ in terms of the absence of bias or in terms of the presence of desirable attributes, reported the use of the common strategy of mitigating bias (bias-related strategies) to ensure the fairness of the AI/ML systems they developed.

5 Discussion

In this section, we discuss and compare our findings in light of the related works.

5.1 Definition/understanding of AI/ML fairness

In recent years, major players in the tech industry, such as Google, Microsoft, and IBM, have delved deeply into the concept of fairness in AI. Their consensus on the ‘fairness’ principle revolves around minimising bias and fostering inclusive representation in the development of AI (Google, 2022; Microsoft, 2024a; IBM, 2022). Various experiments, including those by Harrison et al. (2020) and Srivastava et al. (2019), have explored user perspectives on AI/ML fairness. The study conducted by Harrison et al. (2020) with non-technical users on Amazon Mechanical Turk (AMT) revealed that unbiased models might not automatically be perceived as fair by users. This finding does not align with our study, as some of our participants described ‘fair AI/ML’ precisely in terms of the absence of bias. On the other hand, the experiment by Srivastava et al. (2019) on AMT found users defining fairness technically, focusing on accuracy and demographic parity, mirroring our study, where AI practitioners also described ‘fair AI/ML’ in terms of the accuracy (the presence of desirable attributes) of the system. Importantly, our study involved AI practitioners, while the mentioned studies focused on general users of AI/ML systems.

Due to the lack of research focused on investigating AI practitioners’ understanding of ‘fair AI/ML’, we conducted an empirical study with 22 AI practitioners to investigate their understanding of what a ‘fair AI/ML’ is. In our study, AI practitioners described ‘fair AI/ML’ in terms of the absence of bias and in terms of the presence of desirable attributes in AI/ML systems. Ryan et al. (2023), in their empirical study, found that participants discussing the term ‘fairness’ commonly focused on preventing biased decisions of ML systems. This aligns with our findings, as several AI practitioners in our study also described ‘fair AI/ML’ in terms of the absence of bias. It is important to note that both academic and industry professionals in the fields of Human-Computer Interaction (HCI) and ML participated in Ryan et al. (2023)’s study. Similarly, aligning with the definition of ‘fairness’ introduced by tech companies like Google (Google, 2022) and Microsoft (Microsoft, 2024a), some AI practitioners in our study described ‘fair AI/ML’ in terms of the absence of bias.
However, when describing ‘fair AI/ML’ in terms of the presence of desirable attributes, AI practitioners in our study specified features such as interpretability, transparency, and explainability that an AI/ML system should possess to be deemed fair. Ryan et al. (2023) also highlighted that a few participants mentioned that a system needs to be transparent to be considered fair. Notably, principles like ‘explainability’ and ‘transparency’ are outlined separately in the AI ethics principles listed by tech companies such as Google (Google, 2022), IBM (IBM, 2022), and Microsoft (Microsoft, 2024a), and by countries such as Australia (Australia, 2019) and countries in Europe (Group, 2019). This suggests a lack of alignment between how AI practitioners understand ‘fair AI/ML’ and the definitions set forth by tech companies, countries, and continents. This misalignment may hinder the development of universally accepted principles for fair AI/ML systems, potentially resulting in disparate approaches and interpretations within the AI community.

In a similar vein, accuracy and fairness are categorised as two different non-functional requirements of an ML system (Habibullah et al., 2023). The accuracy of an ML system has been categorised as a non-functional requirement that can be measured using ML-specific or standard measures, whereas fairness has been categorised as a non-functional requirement that cannot be measured and is non-quantifiable (Habibullah et al., 2023). However, in our study, we found that the participants [P1, P7, P8, P16, and P22] described ‘fair AI/ML’ in terms of the accuracy of the AI/ML system (Section 4.2). Along with that, when asked about the strategies to ensure AI/ML fairness, some participants [P5, P7, P8, P9, P10, P15, P17, and P21] reported that they focus on detecting the inaccuracy of the system (Section 4.5). The participants in our study considered accuracy a requirement for developing a fair AI/ML system. This also shows that the way AI practitioners in our study described ‘fair AI/ML’ differs from the definitions of fairness provided by tech companies like Google and IBM, and by different countries/continents like the USA, China, Australia, and Europe; notably, these definitions do not include considerations regarding the accuracy of AI/ML systems. Understanding how AI practitioners conceptualise ‘fair AI/ML’ is important for developing effective policies and guidelines on AI fairness. By incorporating their perspectives, frameworks can be created that not only meet regulatory standards but also align with the practical implementation challenges and fairness considerations faced in real-world AI applications. This can lead to more fair, inclusive, and socially responsible AI/ML systems.

5.2 Challenges in developing a fair AI/ML system

Studies have highlighted challenges for AI practitioners in developing a fair AI/ML system across various domains and phases of development. Our study specifically focused on investigating the overall challenges of AI practitioners in developing a fair AI/ML system through semi-structured interviews. A majority of participants in the studies by Holstein et al. (2019) and Fenu et al. (2022) faced challenges related to limited control over data collection, as well as challenges in obtaining balanced and representative datasets for model training due to a lack of methods supporting data collection and curation (Holstein et al., 2019).
These findings align with our study, where participants reported similar challenges in obtaining necessary datasets, attributing them to a lack of control over data collection, and expressed difficulties in obtaining balanced datasets due to a lack of methods for data collection and curation (Section 4.3.2). Most participants in Holstein et al. (2019)’s study reported challenges in detecting biases in ML systems due to a lack of support, and challenges in developing their own solutions due to limited time. Our findings align with these, as participants in our study also highlighted how constraints, such as lack of support and time, pose challenges in detecting biases in systems and developing their envisioned ideal system (Section 4.3.1). Similarly, Madaio et al. (2022) identified collecting datasets as a challenge for AI practitioners during the development of AI systems, primarily due to the need to safeguard the personal information of user data. Our study’s findings align, as participants also reported facing challenges in accessing necessary datasets due to data confidentiality concerns (Section 4.3.1). Madaio et al. (2022) found that participants reported challenges related to the resources required to develop a fair AI system, which aligns with one of our findings: participants in our study also reported challenges in obtaining resources such as datasets, technology, and human resources, as discussed in Section 4.3.2. Both studies identified funding issues as a contributing factor to this challenge. Fenu et al. (2022) reported that participants faced challenges in collecting data for training an AI system due to a lack of datasets representing the diversity of the population, which aligns with our findings (Section 4.3.2). Fenu et al. (2022) also highlighted that adhering to regulations related to the fairness of an AI system was a reported challenge; some participants in our study likewise emphasised challenges in following policies and regulations, citing reasons such as the lack of policies and their inadequate implementation, as discussed in Section 4.3.1. Likewise, Hopkins and Booth (2021), in their empirical study, reported challenges faced by ML practitioners in detecting bias in ML, attributed to biased data or insufficient model testing. The participants in our study also emphasised the same challenge, citing factors such as the nature of data bias, lack of time, and a lack of tools/techniques to assist in detecting system biases (Section 4.3.1). Ryan et al. (2023), in their empirical study, highlighted challenges anticipated by HCI experts and ML experts in developing fair AI, which include obtaining high-quality data to develop and evaluate a model; this aligns with our findings (Section 4.3.2). However, Ryan et al. (2023) identified additional challenges, including the importance of clarity regarding the model’s context and credibility, as well as the difficulty in aligning the mathematical definition of fairness with the accuracy of the model, which do not align with our findings. While some findings in our study align with previous research, there are unique contributions, particularly in uncovering team-related challenges faced by AI practitioners in developing a fair AI/ML system. Our study uncovered challenges specific to the development team, such as having knowledge of bias/fairness and knowledge of AI, as discussed in Section 4.3.3.
Under- standing these challenges is crucial, given that AI practitioners play a pivotal 32 role in designing systems that have substantial societal impact and it fosters responsible and effective AI development (Orr and Davis, 2020). Similarly, our study uncovered challenges AI practitioners face in balancing real-world data with training data in AI/ML system development (Section 4.3.1), a finding not reported in previous studies. Additionally, we identified challenges related to obtaining various resources, including technological and human-related re- sources, which impact the development of a fair AI/ML system (Section 4.3.2). These insights contribute new dimensions to the existing understanding of challenges in this domain. Addressing these challenges of AI practitioners can help in developing fair AI/ML systems that can be crucial for mitigating so- cietal inequalities and promoting fairness in society (Holstein et al., 2019). 5.3 Consequences of developing an unfair AI/ML system Studies have explored the consequences of developing an unfair AI/ML system from the perspectives of different stakeholders (Marcinkowski et al., 2020; Shin and Park, 2019). For example, Woodruff et al. (2018) identified that users reported the potential negative consequences of algorithmic unfairness which include racial discrimination and stereotyping and loss of opportunities for personal advancement. Weidener et al. (2024) found that AI experts reported fatal outcomes for users from unfair AI-based systems. Our findings differ, as our participants did not mention fatal outcomes but highlighted other impacts such as users obtaining flawed products, and facing emotional distress, and discrimination as described in Section 4.4. Given the limited research on AI practitioners’ per- spectives on the consequences of developing an unfair AI/ML system, we con- ducted semi-structured interviews with 22 practitioners. Our study revealed new insights, identifying three main negative consequences perceived by AI practitioners: those affecting organisations, users, and the practitioners them- selves, as discussed in Section 4.4. Understanding the consequences of develop- ing an unfair AI/ML system may facilitate the development of specific mitiga- tion strategies. Addressing issues at the organisational, user, and practitioner levels may contribute to more effective and comprehensive solutions in tack- ling unfairness in AI/ML systems. For instance, our study revealed that only a small number of practitioners acknowledge the negative impact on users when developing unfair AI/ML systems (Section 4.4.2). This highlights a critical gap in considering user perspectives during AI/ML development, emphasising the need for more user-centric approaches (Dankloff et al., 2024). Developing such user-centric systems is essential for fostering user trust in AI/ML systems, ensuring fairness and reliability. 5.4 Strategies in ensuring fairness in AI/ML systems In recent years, numerous studies have been conducted exploring strategies and approaches to AI/ML fairness. Several qualitative studies, such as those 33 by Deng et al. (2022), Richardson et al. (2021), and Balayn et al. (2023), have explored AI practitioners’ experiences and perspectives on specific fair- ness toolkits. These studies conducted semi-structured interviews with AI/ML practitioners to understand their practices in using different fairness toolkits. 
However, as our study focuses on general strategies employed by AI practitioners to ensure fairness in AI/ML systems, the findings from these studies do not align with our study. On the other hand, Madaio et al. (2020) identified AI practitioners' processes for recognising and addressing fairness issues in AI systems, emphasising understanding fairness as a personal priority and adhering to ad-hoc processes. However, these findings diverge from our study, which concentrates on tactical approaches or strategies used by AI practitioners in their day-to-day lives to ensure the fairness of AI/ML systems. Ryan et al. (2023) noted that the common approach used by ML and HCI experts when addressing fairness was associated with data used in an AI system. This finding aligns with our study, as most of the participants in our study also discussed the bias mitigation strategy in the AI/ML system by balancing datasets (Section 4.5.1). Similarly, in the study by Ryan et al. (2023), a participant mentioned comparing model accuracy across demographic groups to assess fairness. This corresponds with our findings, where several participants also identified inaccuracies in AI/ML systems through the use of evaluation metrics. However, participants in Ryan et al. (2023)'s study mentioned not considering fairness in the AI systems they develop, contrasting with our findings. In our study, each participant reported employing at least one strategy to ensure fairness in AI/ML systems. Our study presents unique contributions, notably in revealing strategies employed by AI practitioners to detect bias for ensuring fairness in AI/ML systems, as discussed in Section 4.5.1. These insights, including collaboration with team members to mitigate data biases and a focus on individual practices during AI/ML system development, are novel findings that have not been reported in previous research.

5.5 Insights

Based on the memos written for the study, we uncovered several interesting insights and reflections. Research recommendations can be made based on these findings and our insights.

5.5.1 'No bias': necessary but not sufficient to make a fair AI/ML system

Most ethical guidelines in AI stress the importance of ensuring fairness, aiming to eliminate bias and discrimination within AI systems. For example, Australia's AI Ethics Principles define 'fairness' as "AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups" (Australia, 2019). Similarly, the European Commission defined 'Diversity, non-discrimination and fairness' as, "Unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle" (Group, 2019). However, based on the interviews with AI practitioners, we identified that 'no bias' is a crucial element in developing a fair AI/ML system, but it alone is inadequate. For example, when participants were asked to share their understanding of 'fair AI/ML', they noted that it must not only be accurate but also exhibit attributes such as transparency and reproducibility to qualify as a fair AI/ML system, as discussed in Section 4.2. Some research has also discussed similar notions.
For example, Silberg and Manyika (2019) reported that the absence of unwanted bias is insufficient to infer that an AI system is 'fair'. Similarly, participants in an experiment perceived that an unbiased AI/ML system is not necessarily a fair one, as they often found certain systems unfair despite being unbiased, especially when errors were distributed unevenly among different racial groups (Harrison et al., 2020). Even though participants had broader ideas about what constitutes a 'fair AI/ML', it was interesting to observe that their discussions predominantly revolved around the concept of 'bias' when responding to various questions. For instance, when queried about strategies employed to ensure the fairness of an AI/ML system, the majority focused on detecting or addressing biases in AI/ML systems. In contrast, only a small number delved into strategies related to optimising the system's performance for fairness, as outlined in Section 4.5. Similarly, in discussions about the consequences of developing an unfair AI/ML system, nearly all participants used the term 'bias' and elaborated on the consequences of developing a 'biased' system. Hence, while achieving 'no bias' is crucial for developing a fair AI/ML system, it is essential to recognise that it alone is not adequate; nevertheless, it remains a significant aspect in the development of a fair AI/ML system.

5.5.2 Data bias vs other biases in AI/ML systems

As previously discussed, we observed that participants primarily centered their discussions on the notion of 'bias' when working towards a fair AI/ML system. Machine learning (ML) can be prone to various biases, including biases from data to algorithm, from algorithm to user, and from user to data. Moreover, each of these categories encompasses different sub-types of biases (Mehrabi et al., 2021). However, in our case, even within the discourse on 'bias', a majority of participants specifically addressed the concept of 'data bias'. For example, when the participants were asked about the challenges encountered in developing a fair AI/ML system, one of the aspects (concepts) they highlighted pertained to handling data-related issues, as detailed in Section 4.3.1. Likewise, the majority of the participants discussed strategies related to detecting bias in the data and mitigating data biases when they were asked about the strategies to ensure the fairness of the AI/ML system, as discussed in Section 4.5.1. Additionally, participants discussed the factors leading to their challenges in developing a fair AI/ML system, mainly related to data bias, such as the use of biased training data, lack of tools to check data bias, etc., as discussed in Section 4.3. This indicates that among various biases, data bias stands out as particularly prevalent. Effectively addressing data bias is important while developing a fair AI/ML system.

5.5.3 Can a fair AI/ML system ever be developed?

As per the participants, a challenge they encountered in developing a fair AI/ML system revolved around obtaining the required datasets for training the model. Several factors, like 'lack of cost', 'lack of tools/techniques', 'lack of representative datasets', and 'lack of control over data collection', were reported to lead to this challenge, as discussed in Section 4.3.2.
While factors like 'lack of cost', 'lack of tools/techniques', and 'lack of control over data collection' could be addressed to improve the development of a fair AI/ML system, the majority of the participants reported the 'lack of representative datasets' in the real world as one of the factors leading to the challenge of obtaining required datasets. For example, participants [P1] and [P15] said:

"In the real world, normally, we don't get perfect data to train the model." - [P1]

"So for example, in our human body projects, a lot of things are on bell curves in terms of like weight and height. And so it's often very difficult to get those data at the edges of the bell curve, you don't usually have a lot of very, very, like obese people." - [P15]

This particular factor, 'the lack of representative datasets', appears to be more persistent because we cannot change real-world data, and it may pose a greater challenge that is not easily overcome. It was intriguing to learn that some participants believe developing a fair AI/ML system is not possible, asserting that while bias can be minimised, it cannot be entirely eradicated from the systems. For example, participants [P9] and [P11] said:

"The real world is not perfect so you don't have all the datasets you need. So you won't be able to remove some sort of biases from them. You will have some sort of biases, the only thing that we can do is reduce it to a certain acceptable level. We won't be getting a perfect fair model. It's not there. Model reflects the data. Real data is not perfect. So you cannot expect a perfect model with it." - [P9]

"In my view, because we are talking about bias here, like, there is no model I mean, even in AI, as far as I know, there is no perfect machine or model when prediction is involved. So in my opinion, I guess like a fair model should be something that decreases the bias, as you know, as I mean, it should decrease it. It should decrease it to be very less. But then I don't think there can be any model, which is not biased." - [P11]

Because only a small number of participants in our study talked about this subject, there is room for further investigation into why AI practitioners believe developing a perfectly fair (bias-free) AI/ML system is not feasible.

5.5.4 Organisational Impact vs. User Well-being

We found that participants in our study believed that the repercussions of developing an unfair AI/ML system impact organisations more significantly than the users who interact with it. When asked about the consequences of developing an unfair AI/ML system, the majority highlighted potential financial losses and damage to the organisation's reputation. Participants expressed opinions such as "it could ultimately tarnish the image of these organisations (Twitter)" and "incur a substantial loss for the company". Only a small number of participants mentioned that users may experience emotional distress and discrimination when unfair AI/ML systems are developed; one participant noted that "the sentiments of the people can be hurt". The limited acknowledgment of potential user experiences, such as emotional distress and discrimination, suggests a potential gap in awareness or consideration of the individual implications of unfair AI/ML systems. It may highlight a tendency to prioritise the broader consequences for organisations over the direct effects on the individuals interacting with these systems.
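Before turning to implications, the data-centric practices discussed above, checking datasets for skewed group representation (Sections 4.5.1 and 5.5.2) and comparing model accuracy across demographic groups (Section 5.4), can be illustrated with a small sketch. The code below is purely hypothetical: it is not a tool reported by our participants, and the column names, data, and function names are invented for illustration.

```python
# Illustrative sketch only: two simple audits a practitioner might run.
# The columns "gender", "label", and "prediction" are hypothetical.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of each demographic group in the data, to spot under-represented
    groups (e.g., the 'edges of the bell curve' mentioned by [P15])."""
    return df[group_col].value_counts(normalize=True)

def per_group_accuracy(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Accuracy computed separately per group; large gaps suggest that
    errors are distributed unevenly across groups."""
    return df.groupby(group_col).apply(
        lambda g: (g["label"] == g["prediction"]).mean()
    )

if __name__ == "__main__":
    data = pd.DataFrame({
        "gender":     ["woman", "man", "man", "woman", "man", "man"],
        "label":      [1, 0, 1, 1, 0, 1],
        "prediction": [0, 0, 1, 1, 0, 1],
    })
    print(representation_report(data, "gender"))  # man: 0.67, woman: 0.33
    print(per_group_accuracy(data, "gender"))     # man: 1.0, woman: 0.5
```

Large gaps in either report are the kind of unevenly distributed errors that participants in Harrison et al. (2020) associated with perceived unfairness, even in an otherwise unbiased model.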
5.6 Implications

This section outlines implications for researchers and AI practitioners involved in AI/ML system development, derived from our study findings. Additionally, we offer recommendations for AI practitioners and AI companies to assist in the development of fair AI/ML systems.

5.6.1 Implications for Research and Future Work

Based on our empirical findings, we developed a framework to show the relationship between AI practitioners' understanding of 'fair AI/ML' and the associated challenges in developing a fair AI/ML system, the consequences of developing an unfair AI/ML system perceived by them, and the strategies employed to ensure fairness in AI/ML systems (Figure 8). This framework can be used to identify patterns and potential areas for intervention, ultimately contributing to a more nuanced understanding of how to enhance fairness in AI/ML systems. The insights drawn from this framework can inform future studies, shaping the direction of research in the field. Researchers can use our findings for future research in the following areas:

Investigating factors and solutions for the challenge of obtaining required datasets: Our findings reveal that, regardless of whether AI practitioners described 'fair AI/ML' in terms of the absence of bias or the presence of desirable attributes in AI/ML systems such as transparency, interpretability, and accuracy, a common challenge they faced was obtaining necessary datasets during AI/ML system development, as discussed in Section 4.7.1. Future work can focus on addressing more factors leading to this challenge and investigating approaches to mitigate them, which can contribute to the development of a fair AI/ML system.

Mapping countries'/companies' definitions of 'AI fairness' with practitioners' understanding: Our findings reveal variations in AI practitioners' understanding of 'fair AI/ML' compared to definitions of 'AI fairness' set by different countries and tech companies (Section 5.1). Future research could explore the alignment between these perspectives through a mapping exercise.

Delving deeper into strategies for ensuring fairness in AI/ML systems: Our findings show that AI practitioners commonly use mitigating bias (a bias-related strategy) to ensure fairness in AI/ML systems, regardless of how they describe 'fair AI/ML', as discussed in Section 4.7.3. Similarly, the participants discussed the strategy of detecting inaccuracy to ensure fairness in AI/ML systems; however, there is no mention of strategies to address accuracy-related issues for ensuring fairness. Future research can delve into why mitigating bias is the predominant strategy and explore whether practitioners employ strategies to address accuracy-related issues in the system to ensure fairness. This may help to inform the development of comprehensive strategies that address both bias and accuracy concerns, leading to more robust and fair AI/ML systems.

Exploring links between various aspects: In our study, we explored the link between what AI practitioners understand by 'fair AI/ML' and the challenges they face in development, the consequences of developing an unfair AI/ML system perceived by them, and the strategies they employed to ensure the fairness of an AI/ML system. In the future, researchers can explore connections between other aspects, such as the challenges encountered in developing a fair AI/ML system and the consequences of developing an unfair AI/ML system perceived by AI practitioners.
This may help to uncover deeper insights and connections within the complex landscape of developing a fair AI/ML system, guiding researchers in refining methodologies, devising more effective strategies, and advancing fair and ethical practices in AI/ML system development.

5.6.2 Implications for Practice and Recommendations

Our study focuses on investigating AI practitioners' experiences and perceptions of various aspects related to the development of a fair AI/ML system. We conducted semi-structured interviews with 22 AI practitioners, exploring their understanding of 'fair AI/ML', the challenges encountered in its development, the consequences of developing an unfair AI/ML system, and the strategies employed to ensure the fairness of an AI/ML system. Our findings provide AI practitioners with valuable insights into how people in the same field understand a 'fair AI/ML', the challenges they encounter, the consequences of developing an unfair AI/ML system, and the strategies they employ to ensure fairness in an AI/ML system. This comprehensive understanding, derived from real-world experiences, can inform practitioners' approaches, enhance decision-making, and contribute to the use of more effective strategies for developing a fair AI/ML system. It provides a practical and grounded perspective that can guide practitioners in navigating the complexities of fairness in their AI/ML development processes.

Drawing from our study's findings, we present some recommendations for AI practitioners and AI companies to support the development of a fair AI/ML system, as detailed below.

Recommendation 1: Striking a balance between the fairness of a system and its working version: Several participants in our study highlighted the challenge of developing their envisioned ideal system, attributing it to factors like a shortage of time. Consequently, they prioritise creating a functional system over ensuring its fairness, as discussed in Section 4.3.1. AI managers can help AI practitioners by fostering a culture that values and prioritises fairness in AI/ML system development. They can allocate resources, both in terms of time and support, to enable practitioners to strike a balance between developing a working system and ensuring its fairness.

Recommendation 2: Providing AI practitioners with necessary tools/techniques: Many participants in our study emphasised the challenges of developing a fair AI/ML system, citing a lack of tools or techniques, as discussed in Sections 4.3.1 and 4.3.2. They specifically pointed out challenges in detecting and addressing data bias and obtaining necessary datasets due to the absence of adequate tools. AI companies can provide substantial support by investing in the development and provision of specialised tools and/or techniques aimed at addressing the challenges highlighted by participants (Holstein et al., 2019).

Recommendation 3: Focusing on enhancing own knowledge and awareness of different concepts: The majority of participants in our study acknowledged facing challenges in grasping the concepts of 'bias' and 'fairness', as discussed in Section 4.3.3. They attributed this challenge to a lack of awareness and knowledge about these concepts, as well as a deficit in understanding the domain they work in.
AI practitioners can take proactive steps, such as seeking additional training or education on the concepts of 'bias' and 'fairness'. Engaging in domain-specific learning to enhance their understanding of the context they work in might also be beneficial. Staying informed about the latest developments and best practices in AI fairness can contribute to a more comprehensive understanding of these concepts.

Recommendation 4: Prioritising users in AI/ML system development: In discussions about the consequences of developing unfair AI/ML systems perceived by the participants, most participants focused on the negative impacts on organisations, including financial losses and reputational repercussions (Section 5). Interestingly, only a small number recognised the potential emotional distress and discrimination experienced by end users as a consequence of such systems. AI practitioners can make a conscious effort to shift the focus from solely considering organisational consequences to understanding the direct impact on users. This might allow them to identify and address potential biases and discriminatory outcomes, contributing to the development of fair AI/ML systems that treat users equitably. Involving users across various phases of AI/ML system development to gather feedback might help in ensuring user-centric AI/ML system development. A recent study highlights the need for increased user engagement throughout algorithmic development to enhance fairness in AI algorithms (Dankloff et al., 2024).

Recommendation 5: Updating and adapting AI ethics policies in organisations: Participants in our study identified challenges in adhering to policies and regulations within their organisations, citing outdated AI ethics policies and a lack of adaptation as the factors leading to those challenges, as discussed in Section 4.3.1. To address this, AI companies can prioritise updating and adapting their AI ethics policies, ensuring strict adherence by practitioners. This proactive approach can help ensure that AI practitioners are equipped with the latest guidelines to navigate complex ethical challenges, promoting responsible AI development.

6 Limitations and Threats to Validity

While we advertised our study on platforms such as LinkedIn and Twitter to attract participants globally, our data collection lacks an even distribution of participants worldwide. The majority of study participants are based in Australia. The findings of our study hold the most relevance for the participants' organisations and their respective countries, potentially extending to similar contexts. However, generalising these findings to the entire global software engineering community is deemed impractical in practice (Masood et al., 2020). Another limitation of this study is that all interview participants held purely technical roles, such as AI/ML developers, engineers, experts, and data scientists involved in AI/ML system design and development. The study did not include a broader range of profiles, such as data science managers, business experts, ethics/compliance officers, risk managers, heads of innovation, and heads of operations. Including these profiles could have provided a wider range of perspectives on AI/ML fairness. Future studies should incorporate these roles to better understand their perspectives on AI/ML fairness. Likewise, our main interview guide was developed after conducting two pilot interviews.
The interview recordings underwent automatic transcription, and any errors introduced during this process were manually rectified by listening to each audio recording during the coding phase. In the interviews, there could be a possibility of misalignment between our intended questions and participants' understanding, leading to potential misinterpretations or misunderstandings. To address this, we employed follow-up questions to ensure clarity on the participants' statements. All four authors were involved in designing the interview guide, with the initial coding primarily handled by the first author. However, all authors actively participated in refining and finalising the codes, concepts, and categories through collaborative discussions. We have also included various interview quotes as examples, aiming to minimise any potential reporting biases in the study.

In addition, there could be a potential risk to the research's internal validity when using payment for the second round of data collection. As a way of mitigating this risk, we initially provided the candidates with an anonymous pre-interview questionnaire asking them about their years of experience in AI/ML system development. Using this information, we selected participants for interviews, and approval for payment was granted only after confirming alignment with our predetermined participation criteria. The process was carried out with ethics approval. Candidates with no experience in AI/ML system development were not selected for the interview.

7 Conclusion

This study aimed to investigate AI practitioners' perspectives and experiences in developing a fair AI/ML system, recognising their pivotal role in development and deployment. The study contributes to gaining insights into the industry's standpoint on the understanding of a 'fair AI/ML', the challenges involved in its development, the consequences of developing an unfair AI/ML system perceived by practitioners, and the strategies they employed to ensure fairness of an AI/ML system. We conducted semi-structured interviews with 22 AI practitioners to fulfill the objective of our study and analysed the qualitative data using STGT for data analysis (Hoda, 2021). The analysis revealed two categories of AI practitioners' understanding of 'fair AI/ML': (i) in terms of the absence of bias and (ii) in terms of the presence of desirable attributes in AI/ML systems. We also categorised the challenges of the participants in developing a fair AI/ML system into three sections: (i) process-related challenges, (ii) resource-related challenges, and (iii) team-related challenges. Similarly, our analysis showed three categories of negative consequences perceived by participants in developing an unfair AI/ML system: (i) impact on organisations, (ii) impact on users, and (iii) impact on practitioners. We also classified the strategies employed by participants to ensure the fairness of an AI/ML system into two categories: (i) bias-related strategies and (ii) performance-related strategies. Based on the findings, we also developed a framework to show the relationship between AI practitioners' understanding of 'fair AI/ML' and three other aspects: (i) their challenges in developing a fair AI/ML system, (ii) the consequences of developing an unfair AI/ML system perceived by them, and (iii) their strategies to ensure the fairness of an AI/ML system.
Our findings offer valuable insights into the industry's perspective and experiences in developing a fair AI/ML system, aiding the AI research community in better understanding how AI practitioners perceive and experience this process. We also identified areas that need further investigation within the AI research community, enabling researchers to make more informed decisions about the direction of their studies. This might ensure that their efforts address the critical areas identified by the study for further exploration. We also offered recommendations to AI practitioners and AI companies, aiming to assist in enhancing the development of a fair AI/ML system.

Acknowledgements

Aastha Pant is supported by the Faculty of IT Ph.D. scholarship from Monash University. C. Tantithamthavorn is partially supported by the Australian Research Council's Discovery Early Career Researcher Award (DECRA) funding scheme (DE200100941). We would like to thank all the interviewees for their participation in our study.

8 Appendices

A Appendix A: Interview Protocol

Section A: Demographic Information (via Qualtrics)

1. Your full name:
2. Please enter your email address so the researcher can contact you to schedule a time for an interview:
3. What is your current job title?
   – AI Engineer
   – AI/ML/Data Scientist
   – AI/ML Expert
   – AI/ML Practitioner
   – AI/ML Developer
   – Other:
4. How many years of experience do you have in the area of AI/ML system development?
   – No Experience
   – Less than 1 year
   – Between 1 to 2 years
   – Between 2 to 5 years
   – More than 5 years
5. How old are you?
   – Below 20
   – 20-25
   – 26-30
   – 31-35
   – 36-40
   – 41-45
   – 46-50
   – 50+
6. How would you describe your gender?
   – Woman
   – Man
   – Non-binary/gender diverse
   – My gender identity isn't listed. I identify as:
   – Prefer not to say
7. What is your country of residence?
8. What is the highest degree or level of education you have completed?
   – High School
   – Bachelor degree
   – Master degree
   – Ph.D. or Higher
   – Prefer not to answer
   – Other:
9. What activities are you involved in? Select all that apply.
   – Model requirements
   – Data collection
   – Data cleaning
   – Data labeling
   – Feature engineering
   – Model training
   – Model evaluation
   – Model deployment
   – Model monitoring
   – Other:

Section B: Practitioners' Perception and Experiences on AI/ML Fairness (via semi-structured interviews)

Section B.1 - Questions on 'AI/ML Bias'

1. Can you briefly tell me about your professional background and current role?
2. Are you aware of the term 'AI/ML bias'?
   (a) (If yes), what do you understand by the term 'AI/ML bias'?
3. Based on your professional experience, can you tell me if something like 'AI/ML bias' exists in practice?
   (a) (If yes), why do you say so? (In your professional experience, have you come across any cases related to AI/ML bias?)
   (b) (If yes), can you give an example?
   (c) What kind of AI/ML system were you developing?
   (d) How did you find out that the system was biased?
   (e) What kind of bias crept into the system?
   (f) What caused the bias?
   (g) Once you found that the system was biased, did you deploy that system? (Yes/No)
      i. (If yes), were any strategies/methods used to mitigate those biases before deploying it?
      ii. (If yes), what strategies/methods did you use?
      iii. (If not), why weren't any strategies/methods used?
   (h) (If no), do you think the term 'AI/ML bias' is theoretical and does not exist in practice?
      i. You do not have to deal with/haven't dealt with any AI/ML biases? Can you tell me based on your experience?
      ii. Why don't you have to deal with AI/ML bias?
4. Based on your professional experience, what can help you in addressing/preventing/mitigating biases in the AI/ML system you develop?
   (a) In what way can it help you?

Section B.2 - Questions on 'Fair AI/ML'

1. Are you aware of the term 'fair AI/ML'?
   (a) (If yes), what would you consider as fair? Can you give an example?
2. Based on your professional experience, do you think it is important to create a fair AI/ML system?
   (a) (If yes), why is it important for the AI/ML system to be fair?
   (b) (If not), why is it not important to create fair AI/ML systems?
3. Based on your professional experience, how do you know the AI/ML system that you developed is fair? Do you use any strategies to ensure its fairness?
   (a) (If yes), what strategies/tools/techniques do you use?
   (b) (If not), is it not mandatory to use tools/strategies/techniques?
   (c) Why is it not mandatory?
4. Based on your professional experience, do you face any challenges in developing a fair AI/ML system?
   (a) (If yes), what challenges do you face?
   (b) What do you think are the factors leading to those challenges?
5. Based on your professional experience, what does it take to develop AI/ML systems that are fair?
   (a) Why?
6. If an AI/ML system is unfair, who does it impact, according to you?
   (a) (If yes), how does it impact them? Can you give an example?
   (b) (If not), why not? Can you give an example?

Data Availability Statement

The data are protected and are not available due to data privacy laws.

Conflict of interest

Conflicts of interest include Klaas-Jan Stol, Paul Ralph, Brian Fitzgerald, Burak Turhan, Patanamon Thongtanunam.

References

Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias. URL https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed 17 January 2024
Augustin L, Bressler D, Smith G (2002) Accelerating software development through collaboration. In: Proceedings of the 24th International Conference on Software Engineering, pp 559–563, DOI https://doi.org/10.1145/581339.581409
Australia G (2019) Australia's AI ethics principles. URL https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles, accessed 10 January 2024
Bacelar M (2021) Monitoring bias and fairness in machine learning models: A review. ScienceOpen Preprints DOI 10.14293/S2199-1006.1.SOR-.PP59WRH.v1
Balayn A, Yurrita M, Yang J, Gadiraju U (2023) "Fairness toolkits, A checkbox culture?" On the factors that fragment developer practices in handling algorithmic harms. In: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pp 482–495, DOI https://doi.org/10.1145/3600211.3604674
Baltes S, Ralph P (2022) Sampling in software engineering research: A critical review and guidelines. Empirical Software Engineering 27(4):94, DOI https://doi.org/10.1007/s10664-021-10072-8
Binns R (2018) Fairness in machine learning: Lessons from political philosophy. In: Conference on Fairness, Accountability and Transparency, PMLR, pp 149–159
Caliskan A, Bryson JJ, Narayanan A (2017) Semantics derived automatically from language corpora contain human-like biases. Science 356(6334):183–186, DOI 10.1126/science.aal4230
Caton S, Haas C (2020) Fairness in machine learning: A survey. ACM Computing Surveys 56(166):1–38, DOI https://doi.org/10.1145/3616865
Chen P, Wu L, Wang L (2023) AI fairness in data management and analytics: A review on challenges, methodologies and applications.
Applied Sciences 13(18):10258, DOI https://doi.org/10.3390/app131810258
Chouldechova A, Roth A (2018) The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810
D'Amour A, Srinivasan H, Atwood J, Baljekar P, Sculley D, Halpern Y (2020) Fairness is not static: Deeper understanding of long term fairness via simulation studies. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp 525–534, DOI https://doi.org/10.1145/3351095.3372878
Dankloff M, Skoric V, Sileno G, Ghebreab S, Ossenbruggen Jv, Beauxis-Aussalet E (2024) Analysing and organising human communications for AI fairness assessment: Use cases from the Dutch public sector. AI & Society pp 1–21, DOI https://doi.org/10.1007/s00146-024-01974-4
Deng WH, Nagireddy M, Lee MSA, Singh J, Wu ZS, Holstein K, Zhu H (2022) Exploring how machine learning practitioners (try to) use fairness toolkits. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp 473–484, DOI https://doi.org/10.1145/3531146.3533113
DrivenData (2024) Deon. URL https://deon.drivendata.org/, accessed 17 January 2024
Fenu G, Galici R, Marras M (2022) Experts' view on challenges and needs for fairness in artificial intelligence for education. In: International Conference on Artificial Intelligence in Education, Springer, pp 243–255, DOI https://doi.org/10.1007/978-3-031-11644-5_20
Finkelstein A, Harman M, Mansouri SA, Ren J, Zhang Y (2008) "Fairness analysis" in requirements assignments. In: 16th IEEE International Requirements Engineering Conference, IEEE, pp 115–124, DOI 10.1109/RE.2008.61
Friedler SA, Scheidegger C, Venkatasubramanian S, Choudhary S, Hamilton EP, Roth D (2019) A comparative study of fairness-enhancing interventions in machine learning. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp 329–338, DOI https://doi.org/10.1145/3287560.3287589
Google (2022) Responsible AI practices. URL https://ai.google/responsibility/responsible-ai-practices/, accessed 10 January 2024
Group HLE (2019) Ethics guidelines for trustworthy AI. URL https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai, accessed 10 January 2024
Habibullah KM, Gay G, Horkoff J (2023) Non-functional requirements for machine learning: Understanding current use and challenges among practitioners. Requirements Engineering 28(2):283–316, DOI https://doi.org/10.1007/s00766-022-00395-3
Harrison G, Hanson J, Jacinto C, Ramirez J, Ur B (2020) An empirical study on the perceived fairness of realistic, imperfect machine learning models. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp 392–402, DOI https://doi.org/10.1145/3351095.3372831
Hoda R (2021) Socio-technical grounded theory for software engineering. IEEE Transactions on Software Engineering 48(10):3808–3832, DOI 10.1109/TSE.2021.3106280
Holstein K, Wortman Vaughan J, Daumé III H, Dudik M, Wallach H (2019) Improving fairness in machine learning systems: What do industry practitioners need? In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp 1–16, DOI https://doi.org/10.1145/3290605.3300830
Hopkins A, Booth S (2021) Machine learning practices outside big tech: How resource constraints challenge responsible development.
In: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, ACM New York, United States, pp 134–145, DOI https://doi.org/10.1145/3461702.3462527
Hua SS, Belfield H (2020) AI & antitrust: Reconciling tensions between competition law and cooperative AI development. Yale JL & Tech 23:415
Hutchinson B, Mitchell M (2019) 50 years of test (un)fairness: Lessons for machine learning. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, ACM New York, USA, pp 49–58, DOI https://doi.org/10.1145/3287560.3287600
IBM (2022) Everyday ethics for AI. URL https://www.ibm.com/design/ai/ethics/everyday-ethics, accessed 10 January 2024
IBM (2024a) AI factsheets. URL https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/factsheets-model-inventory.html?context=cpdaas, accessed 17 January 2024
IBM (2024b) AI fairness 360. URL https://www.ibm.com/opensource/open/projects/ai-fairness-360/, accessed 17 January 2024
Johnson B, Brun Y (2022) Fairkit-learn: A fairness evaluation and comparison toolkit. In: Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Companion Proceedings, pp 70–74, DOI https://doi.org/10.1145/3510454.3516830
Madaio M, Egede L, Subramonyam H, Wortman Vaughan J, Wallach H (2022) Assessing the fairness of AI systems: AI practitioners' processes, challenges, and needs for support. Proceedings of the ACM on Human-Computer Interaction 6(CSCW1):1–26, DOI https://doi.org/10.1145/3512899
Madaio MA, Stark L, Wortman Vaughan J, Wallach H (2020) Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, ACM New York, USA, pp 1–14, DOI https://doi.org/10.1145/3313831.3376445
Majumder S, Chakraborty J, Bai GR, Stolee KT, Menzies T (2023) Fair enough: Searching for sufficient measures of fairness. ACM Transactions on Software Engineering and Methodology 32(6):1–22, DOI https://doi.org/10.1145/3585006
Marcinkowski F, Kieslich K, Starke C, Lünich M (2020) Implications of AI (un-)fairness in higher education admissions: The effects of perceived AI (un-)fairness on exit, voice and organizational reputation. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, ACM New York, USA, pp 122–130, DOI https://doi.org/10.1145/3351095.3372867
Martin N (2018) Are AI hiring programs eliminating bias or making it worse? URL https://www.forbes.com/sites/nicolemartin1/2018/12/13/are-ai-hiring-programs-eliminating-bias-or-making-it-worse/?sh=552bb0cc22b8, accessed 17 January 2024
Masood Z, Hoda R, Blincoe K (2020) How agile teams make self-assignment work: A grounded theory study. Empirical Software Engineering 25:4962–5005, DOI https://doi.org/10.1007/s10664-020-09876-x
Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A (2021) A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR) 54(6):1–35, DOI https://doi.org/10.1145/3457607
Microsoft (2024a) Microsoft responsible AI standard. URL https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6, accessed 10 January 2024
Microsoft (2024b) AI fairness checklist. URL https://www.microsoft.com/en-us/research/project/ai-fairness-checklist/, accessed 17 January 2024
Orr W, Davis JL (2020) Attributions of ethical responsibility by artificial intelligence practitioners. Information, Communication & Society 23(5):719–735, DOI https://doi.org/10.1080/1369118X.2020.1713842
Pagano TP, Loureiro RB, Lisboa FV, Peixoto RM, Guimarães GA, Cruz GO, Araujo MM, Santos LL, Cruz MA, Oliveira EL, et al. (2023) Bias and unfairness in machine learning models: A systematic review on datasets, tools, fairness metrics, and identification and mitigation methods. Big Data and Cognitive Computing 7(1):15, DOI https://doi.org/10.3390/bdcc7010015
Pant A, Hoda R, Spiegler SV, Tantithamthavorn C, Turhan B (2023) Ethics in the age of AI: An analysis of AI practitioners' awareness and challenges. ACM Transactions on Software Engineering and Methodology 33(80):1–35, DOI https://doi.org/10.1145/3635715
Pant A, Hoda R, Turhan B, Tantithamthavorn C (2024) What do AI/ML practitioners think about AI/ML bias? URL https://arxiv.org/abs/2407.08895, arXiv:2407.08895
Pessach D, Shmueli E (2022) A review on fairness in machine learning. ACM Computing Surveys (CSUR) 55(3):1–44, DOI https://doi.org/10.1145/3494672
Prates MO, Avelar PH, Lamb LC (2020) Assessing gender bias in machine translation: A case study with Google Translate. Neural Computing and Applications 32:6363–6381, DOI https://doi.org/10.1007/s00521-019-04144-6
Richardson B, Garcia-Gathright J, Way SF, Thom J, Cramer H (2021) Towards fairness in practice: A practitioner-oriented rubric for evaluating fair ML toolkits. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, ACM New York, USA, pp 1–13, DOI https://doi.org/10.1145/3411764.3445604
Ryan S, Nadal C, Doherty G (2023) Integrating fairness in the software design process: An interview study with HCI and ML experts. IEEE Access 11:29296–29313, DOI 10.1109/ACCESS.2023.3260639
Seaman CB (1999) Qualitative methods in empirical studies of software engineering. IEEE Transactions on Software Engineering 25(4):557–572, DOI 10.1109/32.799955
Shin D, Park YJ (2019) Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior 98:277–284, DOI https://doi.org/10.1016/j.chb.2019.04.019
Silberg J, Manyika J (2019) Notes from the AI frontier: Tackling bias in AI (and in humans). McKinsey Global Institute 1(6):1–31
Srivastava M, Heidari H, Krause A (2019) Mathematical notions vs. human perception of fairness: A descriptive approach to fairness for machine learning. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, ACM New York, USA, pp 2459–2468, DOI https://doi.org/10.1145/3292500.3330664
Ueda D, Kakinuma T, Fujita S, Kamagata K, Fushimi Y, Ito R, Matsui Y, Nozaki T, Nakaura T, Fujima N, et al. (2024) Fairness of artificial intelligence in healthcare: Review and recommendations. Japanese Journal of Radiology 42(1):3–15, DOI https://doi.org/10.1007/s11604-023-01474-3
Vasudevan S, Kenthapadi K (2020) Lift: A scalable framework for measuring fairness in ML applications. In: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, ACM New York, USA, pp 2773–2780, DOI https://doi.org/10.1145/3340531.3412705
Verma S, Rubin J (2018) Fairness definitions explained. In: Proceedings of the International Workshop on Software Fairness, ACM New York, USA, pp 1–7, DOI https://doi.org/10.1145/3194770.3194776
Wan M, Zha D, Liu N, Zou N (2023) In-processing modeling techniques for machine learning fairness: A survey. ACM Transactions on Knowledge Discovery from Data 17(3):1–27, DOI https://doi.org/10.1145/3551390
Wang Y, Song Y, Ma Z, Han X (2023) Multidisciplinary considerations of fairness in medical AI: A scoping review.
International Journal of Medical Informatics 178:105175, DOI https://doi.org/10.1016/j.ijmedinf.2023.105175
Weidener L, Fischer M, et al. (2024) Role of ethics in developing AI-based applications in medicine: Insights from expert interviews and discussion of implications. JMIR AI 3(1):e51204, DOI 10.2196/51204
Woodruff A, Fox SE, Rousso-Schindler S, Warshaw J (2018) A qualitative exploration of perceptions of algorithmic fairness. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, ACM New York, USA, pp 1–14, DOI https://doi.org/10.1145/3173574.3174230
Xavier B (2024) Biases within AI: Challenging the illusion of neutrality. AI & Society pp 1–2, DOI https://doi.org/10.1007/s00146-024-01985-1
Xivuri K, Twinomurinzi H (2021) A systematic review of fairness in artificial intelligence algorithms. In: Responsible AI and Analytics for an Ethical and Inclusive Digitized Society, Springer, vol 12896, pp 271–284, DOI https://doi.org/10.1007/978-3-030-85447-8_24
Zhang J, Shu Y, Yu H (2023) Fairness in design: A framework for facilitating ethical artificial intelligence designs. International Journal of Crowd Science 7(1):32–39, DOI 10.26599/IJCS.2022.9100033
MoA is All You Need: Building LLM Research Team using Mixture of Agents

Sandy Chen, Leqi Zeng, Abhinav Raghunathan, Flora Huang, Terrence C. Kim
Vanguard IMFS (Investment Management FinTech Strategies)

Abstract

Large Language Model (LLM) research in the financial domain is particularly complex due to the sheer number of approaches proposed in the literature. Retrieval-Augmented Generation (RAG) has emerged as one of the leading methods in the sector due to its inherent groundedness and data source variability. In this work, we introduce a RAG framework called Mixture of Agents (MoA) and demonstrate its viability as a practical, customizable, and highly effective approach for scaling RAG applications. MoA is essentially a layered network of individually customized small language models [1] collaborating to answer questions and extract information. While there are many theoretical propositions for such an architecture and even a few libraries for generally applying the structure in practice, there are limited documented studies evaluating the potential of this framework considering real business constraints such as cost and speed. We find that the MoA framework, consisting of small language models [1], produces higher quality and more grounded responses across various financial domains that are core to Vanguard's business while simultaneously maintaining low costs.

1 Introduction

It is well known in the machine learning community that single-model approaches typically fall short in predictive power compared to multi-model approaches (also known as ensemble models). There are two main reasons:

• Conclusions drawn from ensemble models are bolstered by the consensus of multiple models, each receiving slightly different inputs. This collective validation enhances the confidence in the predictive outcomes.

• Ensemble models are better equipped to generalize to new information that has not been captured in the training data.

Large Language Models (LLMs) initially relied on single dense transformer approaches due to their computational complexity and the inherent risk of hallucinations. However, the research community has recently shifted its focus towards sparse ensembles of LLMs, as they offer several advantages [2][3]. These ensembles exhibited lower hallucination rates, improved output quality, and enhanced information surfacing capabilities [4]. Moreover, by arranging multiple LLMs in sequence or parallel, researchers can create intricate networks [5] that resemble the organizational structures found within real corporations. This arrangement unlocks a crucial collaborative potential, enabling LLMs to work together in a more sophisticated manner.

Large Language Models (LLMs) that surpass simple classification tasks and can perform actions based on information stored in databases, APIs, and other sources are known as "agents." Both individual agents and systems composed of multiple agents (often referred to as "Socratic AI," "Agentic AI," or similar terms) are extremely powerful, as they can arbitrarily read and execute tasks far more efficiently than humans [6]. This capability is particularly valuable in the finance domain, where the vast amount of knowledge researchers consume is of a textual nature. For the purpose of this paper, we define a Mixture of Agents (MoA) system as an ensemble of agents, each with unique characteristics such as customized linking, prompting, and knowledge.
Existing literature explores ensemble LLMs primarily from a theoretical perspective, focusing on experimentation to determine whether error improves or compounds in these systems. Studies have provided evidence that ensembles of LLMs can improve classification accuracy over single models [7] and collaborate through debate to solve complex problems [8]. It is also clear that ensemble LLMs have a wide variety of potential use cases in the biomedical, financial, and even research domains [5]. The main drawbacks of ensemble LLMs are cost and speed: running multiple models in parallel or in sequence is a computationally costly operation that results in slow generation and high latency.

Figure 1: Single vs. Multi-Agent system configuration.

In the practical domain, research aligns more closely with intra-model approaches. Mistral AI's groundbreaking paper on their Mixture of Experts (MoE) model [9] seems to be motivated, at least partially, by ensemble models in traditional machine learning. Mistral's Mixtral 8x7B, an MoE model, outperformed much of the existing open-source competition due to its innovative architecture, which serves as an inspiration for our work. While MoE is model-centric, applying ensemble learning within a single model, MoA is a system-centric approach that applies ensemble learning across multiple models. OpenAI has also openly embraced the idea of ensembles. The GPT-4 model is rumored to be one of the most impactful implementations of MoE, with GPTs representing OpenAI's active exploration of agents using GPT-4 as the foundation. Although libraries such as AIFlows, Langchain, and Microsoft Autogen enable programmatic composition of agents and LLMs, there are still very limited studies that demonstrate the viability of systems of agents when considering cost and user experience as primary factors [10][11]. At Vanguard's Investment Management Fintech Strategies (IMFS) team, we propose one of the first data points suggesting that MoA meets these constraints.

2 Mixture of Agents (MoA)

Mixture of Agents (MoA) is an enhanced multi-agent Retrieval-Augmented Generation (RAG) framework that supports a group of highly specialized small language model agents [1] working together in complex formations to answer questions. MoA is highly inspired by ongoing research into ensemble approaches for LLMs, including Mixture of Experts (MoE) and Socratic AI [6][12]. Our findings suggest that these agents operate in powerful ways that mimic organizational hierarchies, ultimately producing higher quality outputs with built-in transparency and grounding.

The agents that constitute the MoA system are sophisticated information gatherers, each possessing its own internal knowledge, external knowledge bases [13], prompts, groundings, abilities, and connections with other agents. This high degree of specialization enables the overall MoA system to develop diverse views that converge to form a final response. More importantly, we observe that a robust MoA system consisting of small language models [1] is incredibly cost-effective. When combined with good data engineering practices, MoA can achieve speed and scale that truly rivals other methods of interacting with traditional single large language models [14]. This makes MoA a suitable approach for most, if not all, enterprise use cases.

2.1 Agent as "Junior Researcher"

The role of the agent in the MoA framework resembles a junior researcher for investment management, but with tremendous potential; a minimal sketch of such an agent is shown below.
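The sketch below is illustrative only, not the IMFS implementation: the `SpecializedAgent` class, its toy word-overlap `retrieve` method, and the `chat` callable standing in for any LLM completion API are all hypothetical names introduced here.

```python
# Minimal sketch (assumed names, not the production system): a "junior
# researcher" agent bundling its own model, system prompt, and private
# knowledge base. `chat` stands in for whatever LLM API is available.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SpecializedAgent:
    name: str                 # e.g., "10-K/Q Math Agent"
    model: str                # base model this agent runs on
    system_prompt: str        # task-specific instructions
    knowledge: List[str] = field(default_factory=list)  # documents only this agent sees

    def retrieve(self, question: str, k: int = 3) -> List[str]:
        """Toy retrieval: keep the k documents sharing the most words with the
        question. A real system would use embeddings or a vector store."""
        q = set(question.lower().split())
        ranked = sorted(self.knowledge,
                        key=lambda d: -len(q & set(d.lower().split())))
        return ranked[:k]

    def answer(self, question: str, chat: Callable[[str, str, str], str]) -> str:
        """RAG step: ground the model's answer in this agent's own documents."""
        context = "\n".join(self.retrieve(question))
        return chat(self.model, self.system_prompt,
                    f"Context:\n{context}\n\nQuestion: {question}")
```

Because everything that distinguishes one "junior researcher" from another lives in the `system_prompt` and `knowledge` fields, agents like those described next differ only in configuration, not in orchestration code.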
By customizing the knowledge accessible to each agent, we can develop highly diversified yet extremely intelligent agents that possess domain understanding and specialization.

Figure 2: Examples of hyper-specialized agents with API access and knowledge.

When each of these agents is hyper-specialized, they can individually achieve better results than a single model handling both tasks. Figure 2 illustrates an example of such agents, each with its own prompts, knowledge, instructions, fine-tuning, and model bases. In this example, the "10-K/Q Math Agent" is a GPT-4 instance with a definitional understanding of line items and accounting terminology. It is fine-tuned and prompted specifically for mathematical tasks ("take a deep breath"). Additionally, it has RAG access to raw filings and API access to a SQL database containing analyst notes with domain-specific equations. The "10-K/Q Sentiment Agent," on the other hand, is a Llama-2 instance fine-tuned on equities sentiment classification. It has RAG access to real positive and negative statements from the company being queried and is prompted for sentiment analysis.

The split-agent approach offers significantly higher response quality compared to a single-model approach due to the customizability of each individual agent. These specialized agents can answer extremely nuanced and complex questions with greater accuracy and depth in an MoA system.

2.2 Team of Junior Researcher Agents

Once agents are customized and built, it is immediately evident that, for various high-level tasks, pipelines of agents can be constructed efficiently to carry those tasks to completion. This structure is reminiscent of a research team, where experts with different backgrounds (i.e., agents with different customizations) collaborate to tackle a common problem. Using the same example agents from before, it is possible to pose adjacent questions to different agents to obtain more specific responses, which can then be compiled into a comprehensive answer. Figure 3 represents one possible configuration of these two agents, preceded by a planner that selects the questions and followed by an aggregator that intelligently combines the agents' responses.

Figure 3: Possible split of specialized agents to answer a complex question with strong response quality.

The flexibility of MoA lies in the fact that agents can be replaced by heuristics, API calls, or any other subprocess that might feed additional information into the aggregator or other agents. In all these scenarios, MoA greatly benefits from its ability to maintain a high level of customizability. Since each agent effectively serves as a real-time expert on user questions, the overall action and response quality remains strong. However, it is important to note that MoA's performance is only as good as its data and engineering capabilities. The system can be allowed to grow arbitrarily complex, with reports and answers from one run feeding future inputs through various approaches, both existing and in development. At Vanguard's IMFS team, our MoA system has scaled to analyzing tens of thousands of documents simultaneously. MoA has a unique property wherein any higher-level agent responsible for summarizing or supervising the outputs of lower-level agents can discern and filter out irrelevant or inaccurate information. Interestingly, we observe that the concept of "compounding error" only occurs with a single stream of serial models and not with MoA.
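The planner-specialists-aggregator configuration of Figure 3 can be sketched in the same illustrative style. As before, this is a hedged sketch under our own assumptions: `run_moa`, the prompt strings, and the placeholder model names are hypothetical, and `SpecializedAgent` refers to the sketch above.

```python
# Illustrative sketch of the Figure 3 configuration: a planner splits the
# user's question into sub-questions, each routed to one specialist, and an
# aggregator merges the answers. All names and prompts are hypothetical.
from typing import Callable, Dict

def run_moa(question: str,
            agents: Dict[str, "SpecializedAgent"],  # class from previous sketch
            chat: Callable[[str, str, str], str],
            planner_model: str = "planner-llm",
            aggregator_model: str = "aggregator-llm") -> str:
    # 1. Planner: produce one sub-question per specialist, one per line.
    plan_prompt = ("Split this question into one sub-question per expert ("
                   + ", ".join(agents) + "), one per line:\n" + question)
    sub_questions = chat(planner_model, "You are a research planner.",
                         plan_prompt).splitlines()

    # 2. Specialists: each answers its sub-question against its own knowledge.
    answers = {name: agent.answer(sq, chat)
               for (name, agent), sq in zip(agents.items(), sub_questions)}

    # 3. Aggregator: combine the expert answers, filtering weak ones.
    summary = "\n\n".join(f"[{name}] {ans}" for name, ans in answers.items())
    return chat(aggregator_model,
                "You aggregate expert answers, discarding irrelevant or "
                "unsupported claims.",
                f"{summary}\n\nOriginal question: {question}")
```

The aggregator step is also where the filtering property noted above would live: a higher-level model can discard weak or irrelevant answers from lower-level agents rather than propagate them, which is why errors need not compound across layers.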
3 Results

MoA is a property of an LLM system, unlike MoE, which is a property of individual LLMs. Therefore, we abstract away from the complexities of model evaluation and instead focus on higher-level results as a consequence of the system. Consistent with the findings of other researchers, we find that an interwoven network of models outperforms any single workstream. Furthermore, as the system scales and the layers of abstraction increase, both latency and potential grow. The more abstraction present, the more steps are saved for the human researchers. MoA essentially becomes increasingly efficient compared to human effort as it scales. MoA presents an extremely useful solution for those seeking to enhance existing RAG pipelines beyond the response quality of a single-model system.

3.1 Information Surfacing & Output Quality

MoA enhances the information surfacing capabilities of any RAG implementation, thereby increasing the quality of the output. One of the most pressing concerns regarding RAG systems is the available context window. When this value is small, the model's coverage with respect to available data is correspondingly limited. Extensive ongoing research focuses on maximizing context windows while minimizing performance degradation [15]. In this regard, MoA provides an advantage over single-model systems. When using systems of agents, the effective context window of the system is significantly augmented. Instead of one model handling all available context, it can be deliberately split among multiple expert agents. This approach allows for a higher degree of precision and reduces the probability of "lost in the middle" issues. Responses to questions with known or verifiable answers are typically well-formed and accurate. Furthermore, it is well known that model responses are extremely sensitive to their system prompts. Customizing prompts for agents based on their data source can dramatically improve output quality and insight.

At Vanguard, we regularly employ MoA to extract and surface insights from documents researchers refer to, often in the volume of tens of thousands of documents. We find that one of the main value propositions of MoA is the ability to drastically increase the likelihood of surfacing information that researchers may have missed or overlooked. Each agent's answers contain different contexts; for example, a 10-K filing may contain a revenue figure, while the context for this number may be found in an earnings transcript. When these diverse contexts are combined, analysts can glean circumstantial information and gain deeper levels of insight.

3.2 Comparison with Single-Model Providers

MoA has demonstrated better and more holistic response quality without the use of fine-tuning [14]. As an experiment, major providers of single-model LLM systems (Anthropic's Claude 3 Opus and OpenAI's ChatGPT 4) were given two public documents: Apple's Q1 2023 earnings transcript and 10-Q. All three systems, including a rudimentary MoA system with two Mistral-7B agents (one for each document), were asked questions and graded based on the amount of vital information captured in their responses. By way of example, one such question was "In $AAPL 2023Q1 transcripts and filings, what was the context around revenue growth for the rest of the year?" The critical information that needed to be captured included:

1. Revenue for Q2 2023 is projected to be similar to Q1.
2. Gross margins are expected to be between 44-44.5%.
3. Macroeconomic headwinds in digital advertising and mobile gaming.
4. Negative foreign exchange impacts.
5. Expected stock repurchases and buybacks.
6. Expected 65% of deferred revenue to be realized within a year.
7. iPhone sales are expected to accelerate, while Mac/iPad sales are expected to decline.

Claude 3 Opus (4/7): Based on Apple's Q1 2023 earnings call transcript and 10-Q filing, here are the key points regarding their outlook for revenue growth in fiscal 2023:
1. Apple is not providing specific revenue guidance for Q2 or the full year due to continued macroeconomic uncertainty. However, they did provide some directional insights.
2. For Q2 2023, Apple expects year-over-year revenue performance to be similar to the December quarter (Q1). This represents an acceleration in underlying year-over-year business performance since Q1 benefited from an extra week compared to Q2.
3. Foreign exchange will continue to be a headwind in Q2, with Apple expecting a 5 percentage point negative year-over-year impact.
4. For iPhone, Apple expects the Q2 year-over-year revenue performance to accelerate relative to the Q1 year-over-year revenue decline.
5. For Mac and iPad, Apple expects Q2 revenue for both categories to decline double-digits year-over-year due to challenging compares and macroeconomic headwinds.
6. Services revenue is expected to grow year-over-year in Q2 while continuing to face macroeconomic headwinds in areas like digital advertising and mobile gaming.
Overall, while Apple faced short-term revenue declines due to external factors, the company's strategic focus on services and emerging markets, along with its strong product ecosystem, were seen as key drivers for potential growth throughout the rest of the year.

GPT-4o (2/7):
• In Apple's Q1 2023 earnings report, the company faced several challenges that impacted its revenue growth outlook for the rest of the year. Notably, Apple experienced a 5% year-over-year decline in revenue, amounting to $117.2 billion, primarily due to supply chain shortages and weaker iPhone sales. This decline was significant given that Q1 typically includes the holiday shopping season, which is crucial for Apple's financial performance.
• Tim Cook, Apple's CEO, highlighted that despite these challenges, the company maintained a strong product lineup and focused on long-term growth. He mentioned that Apple's installed base of active devices surpassed 2 billion, reflecting high customer loyalty and satisfaction across all product categories.
• Looking forward, Apple CFO Luca Maestri emphasized the company's resilience in the face of a difficult macroeconomic environment. He noted that Apple's services sector set an all-time revenue record of $20.9 billion, showing robust growth even amidst broader economic pressures. Additionally, Apple saw strong performance in emerging markets, with significant growth in regions like South Asia, India, Latin America, and the Middle East.

ChatGPT 4 (3/7): In the 2023 Q1 filings and transcripts for Apple, the discussion around revenue growth for the rest of the year highlighted several challenges and strategic focuses. Here are the key points:
1. **Revenue Challenges and Achievements**: Apple reported a revenue of $117.2 billion for the December quarter, marking a decrease of 5% year over year. Despite this decline, they achieved all-time revenue records in multiple markets globally.
The factors affecting revenue included foreign exchange headwinds, COVID-19-related supply chain disruptions, particularly with iPhone 14 Pro models, and a challenging macroeconomic environment. However, production had returned to desired levels by the end of the quarter.
2. **Strategic Initiatives and Product Performance**: Apple's CEO Tim Cook emphasized the company's continuous investment in innovation and their strategic initiatives across various product categories. iPhone revenue was slightly down but flat on a constant currency basis. The Mac and iPad lines showed robust performances, with the Mac maintaining strong customer upgrade activity and the iPad growing due to a better supply situation compared to the previous year.
3. **Services Growth**: Apple's services segment reached an all-time revenue record of $20.8 billion, up 6% year over year. This growth was driven by strong performance in cloud services, payment services, and an increase in paid subscriptions. Apple now hosts over 935 million paid subscriptions across its services.
4. **Forward Outlook**: While specific revenue guidance was not provided due to ongoing macroeconomic uncertainties and potential COVID-19 impacts, the company shared directional insights suggesting that the year-over-year revenue performance in the March quarter would be similar to the December quarter. They expected continued foreign exchange headwinds but also saw potential revenue growth in services.

MoA Agent1 (5/7): Apple's revenue for Q2 FY23 is expected to be similar to that of Q1 FY23, with a negative year-over-year impact of nearly 4 percentage points due to foreign exchange. Services revenue growth is also expected to be similar to Q1 FY23, while facing macroeconomic headwinds in areas such as digital advertising and mobile gaming. Gross margin is expected to be between 44% and 44.5%. The company expects to continue to manage for the long term and invest in innovation and product development, while closely managing spend. Despite the challenges, Apple continues to see strong growth in its installed base of over 2 billion active devices and growing customer engagement with its services. The company also plans to return $90 billion to shareholders through share repurchases and dividends, maintaining its goal of getting to net cash neutral over time.

MoA Agent2 (6/7): The context around revenue growth for the rest of the year in Apple Inc.'s ($AAPL) 2023Q1 transcripts and filings indicates that the Rest of Asia Pacific net sales increase during the second quarter and first six months of 2023 was primarily driven by higher net sales of iPhone, partially offset by lower net sales of Mac. However, the Company also mentions that gross margins are subject to volatility and downward pressure due to various factors, as discussed in the 2022 Form 10-K under the heading "Risk Factors." The Company expects 65% of total deferred revenue to be realized in less than a year, 26% within one-to-two years, 7% within two-to-three years, and 2% in greater than three years. There have been no material changes to the Company's market risk during the first six months of 2023. The Company's fiscal years 2023 and 2022 span 53 and 52 weeks, respectively. New product and service introductions can significantly impact net sales, cost of sales, and operating expenses.
MoA Aggregator (7/7): According to Apple's Q1 FY23 transcripts and filings, the company expects its revenue for Q2 FY23 to be similar to that of Q1 FY23, with a negative year-over-year impact of nearly 4 percentage points due to foreign exchange. Services revenue growth is also expected to be similar to Q1 FY23, while facing macroeconomic headwinds in areas such as digital advertising and mobile gaming. Despite these challenges, Apple continues to see strong growth in its installed base of over 2 billion active devices and growing customer engagement with its services. The company plans to return $90 billion to shareholders through share repurchases and dividends, maintaining its goal of getting to net cash neutral over time. Additionally, the company mentions that gross margins are subject to volatility and downward pressure due to various factors, and 65% of total deferred revenue is expected to be realized in less than a year.

The analysis clearly demonstrates that a MoA system consisting of two Mistral-based agents (each with 7B parameters) competes effectively with much larger and more expensive systems such as ChatGPT 4, GPT-4o, and Claude 3 Opus. Interestingly, MoA and Claude excel at reporting on expected or future states, as the question requires, in contrast to ChatGPT 4. Responses from each agent can also be displayed to the user for transparency. Responses to most of the other questions asked followed the example laid out above closely.

3.3 Cost & Scale

The beauty of the current MoA system lies in its cost-effectiveness and simplicity. In its simplest form, MoA can be performed with the same model and endpoint, activated as many times as necessary to perform inference through the various layers. For enterprises with cloud-based compute resources and endpoints priced based on uptime, such as Amazon SageMaker or Microsoft Azure, there is no significant difference in overall cost between MoA and single-model systems of similar size. The drawback of MoA is its higher demand on concurrent inference. When scaling, single-model systems can support more users because each user accesses only one endpoint. In contrast, MoA requires at least two endpoints per user, and this number can grow arbitrarily large. However, this flexibility allows for the customization of the agent configuration within the system based on budget and use case. Vanguard IMFS's own MoA system has a significantly lower cost compared to most third-party RAG providers, such as Arcus and Databricks, with a total run cost of under $8,000 per month processing a team of researchers' queries. As for speed, Vanguard IMFS's MoA system, which includes pre- and post-operations such as tokenization, retrieval, and hallucination catching, is capable of searching and surfacing information from over 30,000 documents in under 60 seconds using two layers of agents. The latency penalty for implementing MoA is approximately 4.07x, or 2.24x when running inference in parallel. In comparison, our original single-model system was capable of performing the same operation in under three seconds. These results, summarized in Table 2, were obtained using a rudimentary MoA with two layers: three context-accepting agents in layer one and one aggregator in layer two.
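To make the serial-versus-parallel distinction behind these latency figures concrete, here is a minimal sketch. It assumes an async query_agent() wrapper around a hosted model endpoint; the endpoint names and the 3-second stand-in latency are illustrative only, not measurements from our system.

```python
# A minimal sketch of serial vs. parallel layer-one inference in a
# two-layer MoA (three context agents plus one aggregator).
import asyncio

async def query_agent(agent_id: str, prompt: str) -> str:
    """Placeholder: one round-trip to a model endpoint."""
    await asyncio.sleep(3.0)  # stand-in for roughly 3 s of inference latency
    return f"{agent_id} answer"

async def moa_serial(prompt: str) -> str:
    # Three context agents one after another, then the aggregator: roughly
    # four sequential calls, consistent with the ~4.07x penalty above.
    answers = [await query_agent(f"agent{i}", prompt) for i in range(3)]
    return await query_agent("aggregator", "\n".join(answers))

async def moa_parallel(prompt: str) -> str:
    # The three layer-one calls overlap, so total latency is roughly one
    # agent call plus the aggregator, consistent with the ~2.24x penalty.
    answers = await asyncio.gather(
        *(query_agent(f"agent{i}", prompt) for i in range(3)))
    return await query_agent("aggregator", "\n".join(answers))

# Example: asyncio.run(moa_parallel("What drove revenue growth?"))
```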
Metric | Single-Model Systems | MoA | MoA (Parallel Inference)
Max Concurrent Users | Around 20 | 11 | 11
Total Compute Cost per Month | $5,000-$8,000 | $5,000-$8,000 | $5,000-$8,000
Average Response Speed | 2.9974s | 12.3334s | 6.8626s
Average Latency Penalty | - | 4.07x | 2.24x
Average Passages Considered | 30 | 90 | 90
Average Context Window Improvement | - | 3.00x | 3.00x

Table 2: Summary of speed and context window differences between single-model, MoA, and optimized MoA architectures.

Based on this and other similar analyses, we conclude that the speed and context window improvements of MoA scale linearly with the number of models used in the system. In the above table, we implemented a four-model MoA, consisting of three context-accepting agents and one aggregator. The total inference time increases by 4x without parallelization, and the context window increases by 3x as a result. MoA is an efficient system that maximizes the benefits of RAG while still meeting cost and scalability constraints in practice. If an enterprise can create and deploy a single-model system, it can also deploy MoA.

3.4 Permanence

As a framework, MoA is a robust system that maintains an edge over traditional single-model LLM systems. At Vanguard, we have supported the hypothesis that smaller language models [1] are the present and future when it comes to highly efficient and accurate outcomes. MoA is an extension of this hypothesis, as it has allowed us to operate at a fraction of the cost by utilizing open-weight, sub-10B-parameter models. With most of the language modeling community arriving at similar conclusions, we believe in MoA's permanence and its potential to become an industry standard.

3.5 Transparency

Since the responses from each agent serve as an input to the final aggregator, each output can be regularly displayed to the user and evaluated for missteps or hallucinations. At its core, MoA is a variant of an advanced RAG system and, therefore, retains all of its transparency and grounding properties. However, there are cases where the final output from the MoA system is not as relevant or impactful as an output from one of the constituent agents. In such situations, it is a straightforward task to present the output from each agent to the user along with the final output, allowing them to make their own judgment. At Vanguard, we have invested a substantial amount of time in developing safeguards to limit the hallucination tendency of the models within the MoA system. One of the hardest tasks was to teach the models to say "I don't know" when the model did not have the relevant dataset to answer a specific question. These safeguards range from heuristics-based checks to more complex embedding comparisons, ensuring the reliability and accuracy of the generated outputs.

Figure 4: Example output of MoA with Mistral v0.2 as the agent model. Each agent has its own output that can be used to verify the summary.

4 Conclusion & Future Plans

By comparing cost, output quality, transparency, and various other characteristics of LLM systems, we conclude that MoA using small language models should be the de facto standard for enterprise-grade RAG pipelines. It is important to note that this analysis was conducted using a specific technology stack consisting of Amazon AWS. Performance may be significantly improved by employing more efficient cost-per-token providers such as Fireworks AI or Groq, which may also offer faster inference times and better scalability. With improved performance, the delta between MoA and single LLM systems decreases substantially.
As MoA's output quality surpasses that of single LLM systems, it potentially becomes strictly better.

References

[1] J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. De, L. Casas, L. Hendricks, J. Welbl, A. Clark, T. Hennigan, E. Noland, K. Millican, G. Van Den Driessche, B. Damoc, A. Guy, S. Osindero, K. Simonyan, E. Elsen, J. Rae, O. Vinyals, and L. Sifre, "Training compute-optimal large language models," 03 2022.
[2] S. Z. Shen, H. Lang, B. Wang, Y. Kim, and D. Sontag, "Learning to decode collaboratively with multiple language models," 03 2024.
[3] G. Cheng, "Unlocking the power of multiple language models: A dive into collaborative AI," 11 2023.
[4] R. Gordon, "Multi-AI collaboration helps reasoning and factual accuracy in large language models," 09 2023.
[5] Y.-S. Chuang, A. Goyal, N. Harlalka, S. Suresh, R. Hawkins, S. Yang, D. Shah, J. Hu, and T. T. Rogers, "Simulating opinion dynamics with networks of LLM-based agents," 11 2023.
[6] A. Zeng, M. Attarian, B. Ichter, K. Choromanski, A. Wong, S. Welker, F. Tombari, A. Purohit, M. Ryoo, V. Sindhwani, J. Lee, V. Vanhoucke, and P. Google, "Socratic models: Composing zero-shot multimodal reasoning with language," 05 2022.
[7] J. Li, Q. Zhang, Y. Yu, Q. Fu, and D. Ye, "More agents is all you need," arXiv (Cornell University), 02 2024.
[8] T. Guo, X. Chen, Y. Wang, R. Chang, S. Pei, N. V. Chawla, O. Wiest, and X. Zhang, "Large language model based multi-agents: A survey of progress and challenges," 01 2024.
[9] A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bamford, D. S. Chaplot, D. d. l. Casas, E. B. Hanna, F. Bressand, G. Lengyel, G. Bour, G. Lample, L. R. Lavaud, L. Saulnier, M.-A. Lachaux, P. Stock, S. Subramanian, S. Yang, S. Antoniak, T. L. Scao, T. Gervet, T. Lavril, T. Wang, T. Lacroix, and W. E. Sayed, "Mixtral of experts," 01 2024.
[10] M. Josifoski, L. Klein, M. Peyrard, N. Baldwin, Y. Li, S. Geng, J. P. Schnitzler, Y. Yao, J. Wei, D. Paul, and R. West, "Flows: Building blocks of reasoning and collaborating AI," 02 2024.
[11] "Introduction — LangChain."
[12] R. Yang, "SocraticAI."
[13] Y. Ding, A. Poudel, Q. Zeng, T. Weninger, B. Veeramani, and S. Bhattacharya, "EntGPT: Linking generative large language models with knowledge bases," 02 2024.
[14] J. Wang, J. Wang, B. Athiwaratkun, C. Zhang, and J. Zou, "Mixture-of-agents enhances large language model capabilities," 06 2024.
[15] N. F. Liu, K. Lin, J. Hewitt, A. Paranjape, M. Bevilacqua, F. Petroni, and P. Liang, "Lost in the middle: How language models use long contexts," arXiv, 07 2023.
ai_researcher
2
Mechanical_Engineering_Master_’_s_Defense_An_Improved_Framework_for_Design_Concept_Generation_Based_On_Experiential_and_Intuitive_Methods_Sumit.pdf
Title: Developing and delivering a remote experiment based on the experiential learning framework during COVID-19 pandemic

Author information:
W.D. Kularatne, Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya, Sri Lanka. [email protected]
Lasanthika H. Dissawa*, Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya, Sri Lanka. [email protected], ORCID: 0000-0002-0246-6555
T.M.S.S.K. Ekanayake, Department of Education, Faculty of Arts, University of Peradeniya, Peradeniya, Sri Lanka. [email protected]
Janaka B. Ekanayake, Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya, Sri Lanka. [email protected]
*corresponding author

Abstract: The students following Engineering disciplines should acquire not only a conceptual understanding of the concepts but also the associated processes and attitudes. There are two recognizable learning environments for students, namely, the classroom environment and the laboratory environment. With the COVID-19 pandemic, both environments merged into online environments, impacting students' development of processes and characteristic attitudes. This paper introduces a theoretical framework based on experiential learning to plan and deliver processes through an online environment. A case study based on the power factor correction experiment is presented. The traditional experiment, which runs for 2 hours, was broken into smaller tasks, such as a pre-lab activity, a simulation exercise, a PowerPoint presentation, a remote laboratory activity, and a final report, based on the experiential learning approach. A questionnaire carrying closed- and open-ended questions was administered to obtain students' reflections about developing the processes through an online-friendly experiential learning approach. The majority of the students liked the approach followed and praised it for providing them with an opportunity to perform the experiment in a novel way during the COVID-19 situation.

Keywords: Distance learning, experiential learning, learning technology, remote laboratory

Declarations:
Funding - This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Conflicts of interest/Competing interests - The authors declare that they have no conflict of interest.
Availability of data and material - Not applicable.
Code availability - Not applicable.

Developing and Delivering a Remote Experiment based on the Experiential Learning framework during COVID-19 Pandemic

Abstract: The students following Engineering disciplines should acquire not only a conceptual understanding of the concepts but also the associated processes and attitudes. There are two recognizable learning environments for students, namely, the classroom environment and the laboratory environment. With the COVID-19 pandemic, both environments merged into online environments, impacting students' development of processes and characteristic attitudes. This paper introduces a theoretical framework based on experiential learning to plan and deliver processes through an online environment. A case study based on the power-factor correction experiment is presented. The traditional experiment, which runs for 2 hours, was broken into smaller tasks, such as a pre-lab activity, a simulation exercise, a PowerPoint presentation, a remote laboratory activity, and a final report, based on the experiential learning approach.
The delivery of the lab in online mode is presented. Students' performance was then compared before and after the online mode of delivery; it was found that students' performance on average showed a distinct improvement. In order to obtain students' reflections about the online experiential learning approach, a questionnaire carrying closed- and open-ended questions was administered. The majority of the students liked the approach followed and praised it for providing them with an opportunity to perform the experiment in a novel way during the COVID-19 pandemic.

Keywords: Distance learning, experiential learning, learning technology, remote laboratory

1. Introduction

Teaching engineering means more than enabling students to acquire knowledge. It is necessary to foster comprehension of a combination of content, processes, and characteristic attitudes related to the topic being studied. Content includes abstract concepts, laws, and theories, whereas processes include observation, classification, measurement, inference, prediction, and communication. Characteristic attitudes involve being curious and imaginative and being enthusiastic about asking questions and solving problems. It is important to focus on the development of these three dimensions among students. In Engineering disciplines, the content is transferred to students in a classroom environment. The only way to grasp the practical knowledge and experience of processes is through laboratory experiments. In particular, the experience obtained through experimental work is important as students focus more on solving problems in real situations. Further, the practical experience gained through laboratory experiments helps to improve skills in applying theoretical knowledge to practical situations (L. Feisel et al., 2002).

The COVID-19 pandemic imposed a global shutdown of various activities, including educational activities. This resulted in transforming the classroom learning environment into an online learning environment. The challenges and opportunities of universities' crisis-response migration methods are discussed in (Adedoyin & Soykan, 2020). With the COVID-19 pandemic, the delivery of content has been done online successfully using platforms like Zoom and Microsoft Teams. Reference (Tang et al., 2020) shows that flipped learning improved students' learning, attention, and evaluation of courses. However, there is always a question among engineering educators of whether they are able to develop the processes and characteristic attitudes of students when switched to an online learning environment.

Reference (Almeida et al., 2009) states that tasks designed as investigation activities that use real data and/or ask for problem-solving (that is, learning by experience) can effectively provide a good context for making students understand what they are being taught. In experiential learning, the 'learning' relies on the practical aspects, with experience considered the key to success in the educational act. This approach based on experience adds good value to the student's individuality and develops his/her action skills, reflection skills, critical and innovative thinking, initiative, motivation, curiosity, and trust in his/her own person (Gorghiu & Santi, 2016). According to (Kolb, 1984), experiential learning consists of four stages: concrete experience, reflective observation, abstract conceptualization, and active experimentation.
That is, learning starts with the active involvement of a learner in getting experience by doing something individually or as a team; the learner then takes a time-out from 'doing', stepping back from the activity and reviewing what has been experienced and done. These stages are followed by the process of making sense of what has happened, which involves interpreting the events and understanding the relationships between them, drawing upon theory from textbooks for framing and explaining events, models the learners are familiar with, ideas from colleagues, previous observations, or any other knowledge that they have developed. Finally, the learner considers how they are going to put what they have learnt into practice. This cycle is shown in Figure 1. An improved form of Kolb's experiential learning cycle, consisting of contextually rich concrete experience, critical reflective observation, contextual-specific abstract conceptualization, and pragmatic active experimentation, is proposed in (Morris, 2020).

Figure 1 Experiential learning cycle

2. Online Modes of Getting Concrete Experience

Prior to the pandemic, most experiments were performed in the laboratory environment and the simulation environment. In the laboratory environment, the students work on real equipment and instruments, acquiring the skills to use measuring equipment and experiencing real-world practical problems. On the other hand, simulation laboratories are conducted as a pre-lab exercise to obtain an idea of the actual outcome of the experiment (L. D. Feisel & Rosa, 2005) before performing it in a real laboratory, and also for explaining theoretical concepts (Balamuralithara & Woods, 2009). Further, simulation laboratories are conducted as an alternative way of doing experiments related to expensive or large systems, which are not practical to do in a university laboratory environment. Simulations are also used to demonstrate events that are not easily seen, such as current flow, heat transfer, and electromagnetic fields (Bourne et al., 2005). As the software represents simplified mathematical models of complex real systems (Mosterman et al., 1996), in some cases the simulation does not provide accurate results.

During the COVID-19 pandemic lock-down period, many universities explored online modes of performing experiments such as (a) simulation labs, (b) remote labs, and (c) virtual labs (Odeh et al., 2013). In simulation-based laboratories, the experience and practical knowledge obtained by the student depend on the capabilities, authenticity, and constraints of the software (Ertugrul, n.d.). But they can be conducted as substitutes for some experiments, as mentioned earlier. Reference (Das, 2018) proposes a MATLAB-Simulink modelling-based experiment to understand the characteristics of solar PV cells and solar PV systems. A virtual microgrid experiment introduced in (Chai et al., 2020) uses Simscape, an electrical power system toolbox in the MATLAB/Simulink software package, to model the microgrid. Students can download and install software packages and libraries on their personal computers and can do the experiment remotely. The simulation tools give a robust platform to create models and help to analyze the behaviour and performance of systems, but do not provide the students with a feeling of the real presence of laboratory equipment (Peterson & Feisel, n.d.). A laboratory that gives access to operate and control real equipment via the internet is called a remote lab.
This allows the students to undertake experiments through the internet. Students can access the remote lab using their personal computers via a web browser application and can send commands to control the lab equipment. The commands go through a server and are executed on the real equipment. The results of the experiment are displayed on the student's computer. For example, a web API that conforms to the Web of Things standard to control a microscope was developed (Collins et al., 2021). It provides a modern graphical control interface and allows multiple microscopes to be controlled by one computer. Further, it facilitates the sharing of equipment between local or remote users. The steps of implementing remote microscopy are discussed in (Goldberg & Dintzis, 2007). A digital camera attached to a light microscope provides the images of slides. The students can control this light microscope from a remote location via virtual microscopy software. The authors of (Odeh et al., 2013) created a remote electronic Engineering lab based on Augmented Reality using a video camera and a real experimental electronic tool kit. The camera transfers a live video of the electronic kit at the remote lab. In this remote lab, students can draw circuit connections on a webpage. Then, after verification of the connections, the data were emulated onto a real multimeter. The remote lab provides experience of practical issues that would not occur in a simulation environment (Ferreira et al., 2002). Therefore, remote labs are good choices for distance learning, as they allow the students to do experiments on real equipment located at a distance through the internet.

The virtual lab is not a real lab; rather, the entire infrastructure required for a real lab setup is obtained through computer-generated graphics, and it generates results from software simulations (Ma & Nickerson, 2006). Some virtual labs are developed only using computer-generated graphics, and some are developed using computer-generated graphics, virtual reality sensors, and a Leap Motion control device. In the latter, virtual reality sensors capture the overall body movements and send the information to the computer to render them in the virtual environment. Further, the hand and finger movements are identified by the Leap Motion device, which communicates with the virtual reality sensor and transfers this data to the computer to translate the movements of the students into the virtual environment. The graphics in a virtual environment are maintained in the same form as the real device. This kind of virtual laboratory is effective in increasing students' knowledge and understanding of handling equipment, as students can visualize and experience the whole experimental process. Reference (Hasan et al., 2020) presented a virtual electric machines laboratory using Oculus Rift, Unity3D, and Leap Motion to do experiments in a safe environment to gain a broad understanding of the concepts of how electric machines work. A virtual instrumentation and measurement laboratory was reported in (Valdez et al., 2014). Since it used 3D components, students can get a realistic visualization of the circuit components. But this is a full software program (no hardware components were included) that will not provide real results. A virtual lab for real-time control of a mobile robot is presented in (Solak et al., 2020). In this lab setup, an IP camera was fixed to monitor the indoor laboratory and the mobile robot in real time.
The students can place a virtual target or virtual obstacles anywhere on the video generated by the IP camera. The navigation of the robot is monitored through the personal computer web server. The web server on the single-board computer in the robot can communicate with the student's PC, and it can execute the developed application software on the robot. Further, the robot can be controlled manually through the web environment. From this kind of simulation-based virtual lab, students do not gain experience in analyzing and interpreting real-world results.

Even though the literature provides different online approaches that can be applied to develop skills and experience related to processes, it does not provide a theoretical framework that can be used when developing and delivering online experiments. This paper presents an experiential learning approach for a remote power factor correction experiment as a case study.

3. Method

According to (Osipov et al., 2015), the ideal online lesson duration is 15-20 min. It further states that it is hard for both the teacher and the student to study for more than 30 min at a time. Further, reference (Basilaia & Kvavadze, 2020), which reports a transition to the online mode of delivery during the COVID-19 pandemic, states that when online teaching started, the duration of online lessons was changed to avoid students' prolonged contact with a computer. Considering these facts, the usual laboratory session was changed while following the experiential learning cycle shown in Figure 1. This laboratory session was given to the students after a comprehensive lecture. In order to provide the laboratory session as short-duration lessons, it was planned as described in Table 1. A PowerPoint presentation was used to reiterate the subject content after the students had completed the first simulation exercise.

Table 1 Activities based on the experiential learning cycle

Experiential learning cycle | Activity | Description
Active experimentation | Pre-lab work | This is a personalized activity and is discussed in more detail in Section 3.1.
Concrete experience and Reflective observation | Simulation and reflections | A simulation followed by a number of short questions was included for students to reflect on the simulation activity. The details of this activity are given in Section 3.2.
Abstract conceptualization | PowerPoint presentation | As described in Section 3.3, a PowerPoint presentation was given so that students could connect what they learn to theory.
Concrete experience | Remote laboratory activity | Students could connect to the laboratory setup remotely and carry out a simple power factor correction experiment. This is described in Section 3.4.
Reflective observation & Abstract conceptualization | Final report | This is designed for students to reflect on observations and connect them to the field. This is described in Section 3.5.

3.1. Pre-lab Work

This contains a number of short tasks that help students to connect the activities that they do to the real world. All the tasks were based on a single-phase pump load connected to a 230 V rms supply. In order to personalize the activity, the capacity of the pump was tied to the student's registration number. The capacity of the pump is 7.5Z kW (where Z is the remainder of [Registration number / 3] plus 1), and the operating power factor at 100% loading is 0.87. The tasks are given in Table 2.
Table 2 Tasks given in the pre-lab work

Task | Description
1 | Represent the load by a resistor (R) in series with an inductor (L).
2 | The pump is connected to a distribution board 20 m apart. Select a suitable cable to supply the pump (cable data was given).
3 | Calculate the capacitance required to improve the power factor to 0.99.
4 | Compare the power loss and voltage drop across the cable without and with the capacitor.

3.2. Simulation and Reflections

This simulation exercise is based on the 'Circuit Simulator Applet' available at https://falstad.com/circuit/. The reason for using this Applet is easy accessibility. In the usual classroom, PSCAD is used; the installation of this software would need special support, and the data charges to download the software are not affordable for some students. This simulation is based on the pre-lab work that the students did. An instruction sheet on using the Circuit Simulator Applet was given. The tasks are given in Table 3.

Table 3 Tasks for the simulations and reflections

Task | Description
1 | Implement the R-L representation of the pump load considered in the pre-lab in the 'Circuit Simulator Applet.'
2 | Using the scopes available in the Applet, obtain the waveforms of load current, load voltage, and power consumed by the resistive part of the load and the inductive part of the load.
3 | Using the waveforms obtained, calculate the power factor of operation and compare it with the calculated value.
4 | Implement the above load with the power factor correction capacitor in the 'Circuit Simulator Applet' and obtain the load voltage and current.
5 | Obtain, using the Applet, the losses in the cable when the pump is operating at 100% loading without and with the power factor correction capacitor, and compare the results with the calculated values.
6 | Reflect on the calculated and simulated results for any discrepancies and write down reasons for such discrepancies.

3.3. PowerPoint presentation

This presentation covered different types of real loads and their R-L representations, the consequences of low power factor operation, power factor measurement techniques, and power factor correction. Some slides are shown in Figure 2.

Figure 2 Some slides provided for abstract conceptualization

3.4. Online laboratory activity

This section provides comprehensive commentary on the development of the laboratory setup. Figure 3 shows the overall setup developed. The power factor improvement circuit consists of a resistor, an inductor, a capacitor, a relay switch, and a variac. The sensor circuit consists of a current sensor, a voltage sensor, an op-amp circuit to obtain the power factor, and an Arduino UNO controller board. The Raspberry Pi and the oscilloscope were exposed to the internet using two public IPs such that they can be accessed through the internet from any network.

Figure 3 Components of the online laboratory setup

3.4.1. Power factor improvement circuit

Figure 4 shows the hardware setup used in the power factor correction experiment. The relay switch is used to connect or disconnect the capacitor to the RL load. The relay signal is given via a Raspberry Pi digital output port. The relay ON and OFF commands were given to the Raspberry Pi by a remote user via a web application.

Figure 4 Hardware setup (variac, capacitor bank (C), oscilloscope, rheostat (R), inductor bank (L), and relay switch (S))

3.4.2. Sensor circuit to obtain voltage, current and power factor measurements

Figure 5 shows a diagram of the sensor circuit. It consists of current and voltage sensing devices, op-amp ICs, an XOR gate IC, and an Arduino UNO microcontroller board.
The voltage and current sensing devices were used to measure the voltage and current of the circuit. To obtain the power factor, the op-amp comparator circuit was used with an XOR gate. The sensing signals were sent to the analog input pins of the Arduino board, and the pulse generated by the XOR gate IC was also sent to a digital input pin of the Arduino board. The mathematical calculations were done inside the Arduino board to obtain the root mean squared voltage, the root mean squared current, and the power factor.

3.4.3. Web-based Oscilloscope

To observe the circuit current and voltage waveforms, an oscilloscope was used with a current probe and isolated voltage probes. The oscilloscope, current probe, and voltage probes used were a Tektronix MDO 3014, a Tektronix A622, and a GW-Instek GDP-025, respectively. The MDO 3014 oscilloscope can be accessed through the internet. The URL of the oscilloscope was placed as a button in the GUI to enable students to access it.

Figure 5 Sensing circuit

3.4.4. The graphical user interface of the system

Figure 6 shows the graphical user interface (GUI) of the lab setup. HTML and CSS were used for the GUI programming. The interface shows the voltage, current, and power factor measurements of the RLC circuit. Further, a button was placed to operate the relay switch (to add or remove the capacitor from the RLC circuit). The button toggles between 'add capacitor' and 'remove capacitor'. Furthermore, a link was added to access the oscilloscope. In order for students to explore different aspects of the experiment individually, an instruction sheet with clear instructions was developed. Each student was assigned a specific time to access the laboratory setup. This was to minimize any issues that might arise due to high traffic. The tasks that students were asked to carry out are summarised in Table 4.

Figure 6 Graphical user interface of the lab setup

Table 4 Activities in the online lab

Activity | Task
1 | When the toggle switch is at the 'add capacitor' position, obtain the measurements of voltage, current, and power factor.
2 | Go to the oscilloscope window and observe the waveform patterns. Also, using the oscilloscope settings, calculate the power factor.
3 | Go back to the initial window.
4 | Click on "Add Capacitor" to switch on the power factor correction capacitor.
5 | Obtain the voltage, current, and power factor readings.
6 | Go back to the oscilloscope window and observe the waveform patterns. Also, using the oscilloscope settings, calculate the power factor.

3.5. Final report

The final report is a formal report where students are asked to report the results of each activity carried out under Sections 3.1 to 3.4. Students are then asked to add a discussion about the discrepancies between different results and the reasons for them. It is anticipated that this report will be a take-home guide for applying their knowledge to wider applications when they graduate as Engineers.
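Before turning to the results, the following is a worked sketch of the pre-lab calculation in Task 3 of Table 2. The 230 V supply and the 0.87 to 0.99 power factor improvement come from the paper; the 50 Hz supply frequency and the example value Z = 2 (i.e., a 15 kW pump) are assumptions made here purely for illustration.

```python
# A worked sketch of sizing the power factor correction capacitor
# (pre-lab Task 3). Frequency and the example pump rating are assumed.
import math

V = 230.0            # rms supply voltage [V] (given)
F = 50.0             # supply frequency [Hz] (assumed)
P = 7.5e3 * 2        # pump real power [W] for the example Z = 2
PF1, PF2 = 0.87, 0.99

# Reactive power the capacitor must supply: Qc = P * (tan(phi1) - tan(phi2))
q_c = P * (math.tan(math.acos(PF1)) - math.tan(math.acos(PF2)))

# Capacitance connected across the supply: C = Qc / (2 * pi * f * V^2)
c = q_c / (2 * math.pi * F * V ** 2)

print(f"Qc = {q_c:.0f} var, C = {c * 1e6:.0f} uF")
# -> approximately Qc = 6363 var and C = 383 uF for this example
```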
4. Student Performance and Reflections

Figure 7 shows the marks distribution of students who did the experiment on power factor correction from 2017 to 2021. From 2017 to 2019, the experiment was done in the laboratory. Students submitted a pre-lab report as described in Table 1 prior to doing the experiment and then submitted the final report. The main differences in how the experiment was conducted prior to the COVID period, compared to after 2020, were the absence of a simulation activity, a 2-hour-long laboratory experiment, and conducting the experiment in groups of 3 students. In 2020, students individually did the first three steps in Table 1, i.e., pre-lab work, simulations and reflections, and the PowerPoint presentation. The remote laboratory setup was not available that year. This paper describes the steps followed in 2021. The main difference noted between students doing the experiment in groups and as individuals is the existence of some outliers in 2020 and 2021. More importantly, on average, students' performance improved when adopting the methodology described in this paper.

Figure 7 The marks distribution of students who did the experiment on power factor correction from 2017 to 2021

In order to assess whether students fulfilled the intended learning outcomes, a Google form was developed that provided some closed-ended questions and open-ended questions. Table 5 summarises the responses to some of the closed-ended questions:

Table 5 Questions and students' responses

Question | Strongly Agree % | Agree % | Other %
Narrated PowerPoint presentation provided a good insight into the online experiment | 51.7 | 40.0 | 8.3
The simulation was intellectually stimulating | 16.7 | 46.7 | 36.6
Simulation and PowerPoint provided a good base so that I could connect this experiment to real world | 23.7 | 33.9 | 42.4
The instruction sheet provided for the online lab was informative and useful | 42.4 | 50.8 | 6.8
Allowing to visualize the oscilloscope trace was useful | 20.0 | 46.7 | 33.3

For the closed-ended question, 'Under the COVID-19 learning-from-home situation, the online lab backed by the PowerPoint presentation and simulations was an ideal alternative,' the responses are shown in Figure 8.

Figure 8 Student responses

Some of the responses to the open-ended questions are given in the following quotes:

"It is a good opportunity for us to complete our lab assignments online in this pandemic situation. But we are missing the hand on experience, which is really important".

"With the situation of the country, this way of conducting lab sessions is very useful as we can come with a good idea about the lab session though we couldn't make our own setup. This gave me a good experience never had before. Allowing time slots to use the setup individually is really good as everyone can engage in this lab session. Everything was well planned and thank you for trying something new".

"The instruction sheet and the video before the lab is very understandable and sufficient for doing the lab".

"I think it's a very good idea to provide us with a virtual laboratory experience during this period as it can be considered as one of the best alternatives for physical labs. I believe that this can be improved to a great extent by adding more features such as the ability to change component values. In addition, login credentials (or something similar) can be assigned to us so that only one student can access the lab during a given time slot. The key feature I saw in this virtual lab concept was that we are seeing a realistic result, rather than a simulated result".

However, due to some technical hiccups, some students had issues. They responded as:

"Interface of virtual lab is good if all functions were working. If all are working well, this approach is ideal to do our lab classes".

"When I was trying to do the lab, I could not access the lab in the time slot that was assigned to me. The setup was not working".

"The values of voltage, current and the power factor were correctly updating, and the "Add/Remove Capacitor" button was working. The only problem was that we were not able to access the oscilloscope screen due to a connection error.
It would have been a really good experience if we were able to connect to it and obtain the necessary readings".

"There were some connection issues at the beginning. But later on, everything was fine and successfully finished".

5. Conclusions

It was recognized by many Engineering educators that the processes associated with some modules could not be delivered effectively through the online environment. Moving to the online mode for laboratory experiments during the COVID-19 pandemic was new to many educators, and they often had doubts about the planning and delivery of laboratory experiments. Considering these facts, this paper presented a theoretical framework based on experiential learning to plan and deliver experiments online. Since an online laboratory session should be shorter, the proposed theoretical framework is ideally suited to delivering a laboratory session as a number of shorter activities. A case study based on power factor correction was used to demonstrate the planning and execution of the proposed delivery mode. Students' performance was compared before and after the online mode of delivery, and it was found that students' performance improved when the laboratory activity was conducted as described in this paper. Students' opinions about this mode were obtained using an online questionnaire. In general, students liked the idea of a remote lab and the way it was delivered. A few issues with the laboratory setup were also highlighted. It was recognized that a better way of accessing the oscilloscope is important, and latency should be improved.

Data Availability
No additional data were used to support this study.

Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.

Funding Statement
This research was not funded.

References

Adedoyin, O. B., & Soykan, E. (2020). Covid-19 pandemic and online learning: The challenges and opportunities. Interactive Learning Environments, 0(0), 1-13. https://doi.org/10.1080/10494820.2020.1813180
Almeida, A. M. d, Fernandes, J., Pascoal, M., & Pereira, S. (2009). Experiential Learning in Science: Getting the Laboratory Inside the Classroom Using the Web. 2009 Ninth IEEE International Conference on Advanced Learning Technologies, 327-328. https://doi.org/10.1109/ICALT.2009.173
Balamuralithara, B., & Woods, P. C. (2009). Virtual laboratories in engineering education: The simulation lab and remote lab. Computer Applications in Engineering Education, 17(1), 108-118. https://doi.org/10.1002/cae.20186
Basilaia, G., & Kvavadze, D. (2020). Transition to Online Education in Schools during a SARS-CoV-2 Coronavirus (COVID-19) Pandemic in Georgia. Pedagogical Research, 5(4). https://eric.ed.gov/?id=EJ1263561
Bourne, J., Harris, D., & Mayadas, F. (2005). Online Engineering Education: Learning Anywhere, Anytime. Journal of Engineering Education, 94(1), 131-146. https://doi.org/10.1002/j.2168-9830.2005.tb00834.x
Chai, H., Priestley, M., Tang, X., & Ravishankar, J. (2020). Implementation of Microgrid Virtual Laboratory in a Design Course in Electrical Engineering. 2020 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), 509-515. https://doi.org/10.1109/TALE48869.2020.9368350
Collins, J. T., Knapper, J., Stirling, J., McDermott, S., & Bowman, R. (2021). Modern Microscopy with the Web of Things: The OpenFlexure Microscope Software Stack. ArXiv:2101.00933 [Physics]. http://arxiv.org/abs/2101.00933
Das, S. (2018). Design and Implementation of MATLAB-Simulink Based Solar Cell Modeling and PV System Design Exercises for Advanced Student Learning. 2018 ASEE Annual Conference & Exposition Proceedings, 30263. https://doi.org/10.18260/1-2--30263
Ertugrul, N. (n.d.). New Era in Engineering Experiments: An Integrated and Interactive Teaching/Learning Approach, and Real-Time Visualisations.
Feisel, L. D., & Rosa, A. J. (2005). The Role of the Laboratory in Undergraduate Engineering Education. Journal of Engineering Education, 94(1), 121-130. https://doi.org/10.1002/j.2168-9830.2005.tb00833.x
Feisel, L., & Peterson, G. D. (2002). A Colloquy on Learning Objectives For Engineering Education Laboratories. https://doi.org/10.18260/1-2--11246
Ferreira, J. M. M., Costa, R. J., Alves, G. R., & Cooper, M. (2002). The PEARL Digital Electronics Lab: Full Access to the Workbench via the Web.
Goldberg, H. R., & Dintzis, R. (2007). The positive impact of team-based virtual microscopy on student learning in physiology and histology. Advances in Physiology Education, 31(3), 261-265. https://doi.org/10.1152/advan.00125.2006
Gorghiu, G., & Santi, E. A. (2016). Applications of Experiential Learning in Science Education Non-Formal Contexts, 320-326. https://doi.org/10.15405/epsbs.2016.11.33
Hasan, B., Al-Quorashy, Y., Al-Mousa, S., Al-Sahhaf, Y., & El-Abd, M. (2020). V-LAB - The Virtual Electric Machines Laboratory. 2020 IEEE Global Engineering Education Conference (EDUCON), 72-77. https://doi.org/10.1109/EDUCON45650.2020.9125349
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice-Hall.
Ma, J., & Nickerson, J. V. (2006). Hands-on, simulated, and remote laboratories: A comparative literature review. ACM Computing Surveys, 38(3), 7-es. https://doi.org/10.1145/1132960.1132961
Morris, T. H. (2020). Experiential learning - a systematic review and revision of Kolb's model. Interactive Learning Environments, 28(8), 1064-1077. https://doi.org/10.1080/10494820.2019.1570279
Mosterman, P. J., Campbell, J. O., Brodersen, A. J., & Bourne, J. R. (1996). Design and implementation of an electronics laboratory simulator. IEEE Transactions on Education, 39(3), 309-313. https://doi.org/10.1109/13.538752
Odeh, S., Shanab, S. A., Anabtawi, M., & Hodrob, R. (2013). A Remote Engineering Lab based on Augmented Reality for Teaching Electronics. International Journal of Online and Biomedical Engineering (IJOE), 9(S5), 61-67.
Osipov, I. V., Prasikova, A. Y., & Volinsky, A. A. (2015). Participant behavior and content of the online foreign languages learning and teaching platform. Computers in Human Behavior, 50, 476-488. https://doi.org/10.1016/j.chb.2015.04.028
Peterson, G. D., & Feisel, L. D. (n.d.). e-Learning: The Challenge for Engineering Education.
Solak, S., Yakut, Ö., & Dogru Bolat, E. (2020). Design and Implementation of Web-Based Virtual Mobile Robot Laboratory for Engineering Education. Symmetry, 12(6), 906. https://doi.org/10.3390/sym12060906
Tang, T., Abuhmaid, A. M., Olaimat, M., Oudat, D. M., Aldhaeebi, M., & Bamanger, E. (2020). Efficiency of flipped classroom with online-based teaching under COVID-19. Interactive Learning Environments, 0(0), 1-12. https://doi.org/10.1080/10494820.2020.1817761
Valdez, M. T., Ferreira, C. M., Martins, M. J. M., & Barbosa, F. P. M. (2014). Virtual labs in electrical engineering education—The VEMA environment. 2014 Information Technology Based Higher Education and Training (ITHET), 1-5. https://doi.org/10.1109/ITHET.2014.7155714
ai_researcher
1
Refining_Information_Extraction_Rules_using_Data_Provenance.pdf
arXiv:2210.14846v1 [cs.CL] 26 Oct 2022

PROVE: A PIPELINE FOR AUTOMATED PROVENANCE VERIFICATION OF KNOWLEDGE GRAPHS AGAINST TEXTUAL SOURCES

A PREPRINT

Gabriel Amaral (ORCID 0000-0002-4482-5376), Odinaldo Rodrigues (ORCID 0000-0001-7823-1034), Elena Simperl (ORCID 0000-0003-1722-947X)

October 27, 2022

ABSTRACT

Knowledge Graphs are repositories of information that gather data from a multitude of domains and sources in the form of semantic triples, serving as a source of structured data for various crucial applications in the modern web landscape, from Wikipedia infoboxes to search engines. Such graphs mainly serve as secondary sources of information and depend on well-documented and verifiable provenance to ensure their trustworthiness and usability. However, their ability to systematically assess and assure the quality of this provenance, most crucially whether it properly supports the graph's information, relies mainly on manual processes that do not scale with size. ProVe aims at remedying this, consisting of a pipelined approach that automatically verifies whether a Knowledge Graph triple is supported by text extracted from its documented provenance. ProVe is intended to assist information curators and consists of four main steps involving rule-based methods and machine learning models: text extraction, triple verbalisation, sentence selection, and claim verification. ProVe is evaluated on a Wikidata dataset, achieving promising results overall and excellent performance on the binary classification task of detecting support from provenance, with 87.5% accuracy and 82.9% F1-macro on text-rich sources. The evaluation data and scripts used in this paper are available in GitHub and Figshare.

Keywords Fact Verification · Data Verbalisation · Knowledge Graphs

1 Introduction

A Knowledge Graph (KG) is a type of knowledge base that stores information in the form of semantic triples formed by a subject, a predicate, and an object. KGs represent both real and abstract entities internally as labelled and uniquely identifiable entities, such as The Moon or Happiness, and can amass information from a multitude of domains and sources by connecting such entities amongst themselves or to literals through relationships, coded via uniquely identified predicates. KGs serve as sources of both human- and machine-readable semantically structured data for various crucial applications in the modern web landscape, such as Wikipedia infoboxes, search engine results, voice-activated assistants, and information gathering projects [30].

Developed and maintained by ontology experts, data curators, and even anonymous volunteers, KGs have massively grown in size and adoption in the last decade, mainly as secondary sources of information. This means not storing new information, but taking it from authoritative and reliable sources which are explicitly referenced. As such, KGs depend on well-documented and verifiable provenance to ensure they are regarded as trustworthy and usable [56].

Processes to assess and assure the quality of information provenance are thus crucial to KGs, especially measuring and maintaining verifiability, i.e. the degree to which consumers of KG triples can attest these are truly supported by their sources [56]. However, such processes are currently performed mostly manually, which does not scale with size. Manually ensuring high verifiability on vital KGs such as Wikidata and DBpedia is prohibitive due to their sheer size.
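To make the object of this verification task concrete, the following is a minimal sketch of the kind of input involved: one KG triple plus its documented provenance. The Wikidata-style identifiers and the reference URL below are illustrative examples chosen here, not taken from the paper.

```python
# A minimal sketch of a KG triple and its documented provenance.
from dataclasses import dataclass

@dataclass
class Triple:
    subject: str    # KG entity identifier, e.g. "Q405" (The Moon)
    predicate: str  # KG property identifier, e.g. "P397" (parent astronomical body)
    obj: str        # entity or literal, e.g. "Q2" (Earth)

@dataclass
class Reference:
    url: str        # the external web page documented as provenance

# The verifiability question: does the text behind `reference`
# actually support `triple`?
triple = Triple("Q405", "P397", "Q2")
reference = Reference("https://example.org/moon-factsheet")
```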
ProVe (Provenance Verification) is proposed to assist data curators and editors in handling the upkeep of KG verifiability. It consists of an automated approach that leverages state-of-the-art Natural Language Processing (NLP) models, public datasets on data verbalisation and fact verification, as well as rule-based methods. ProVe consists of a pipeline that aims at automatically verifying whether a KG triple is supported by a web page that is documented as its provenance. ProVe first extracts text passages from the triple's reference. Then, it verbalises the KG triple and ranks the extracted passages according to their relevance to the triple. The most relevant passages have their stances towards the KG triple determined (i.e. supporting, refuting, neither), and finally ProVe estimates whether the whole reference supports the triple.

This task is a specific application of Automated Fact Checking (AFC), also known as AFC on KGs. AFC is a currently well explored topic of research with several published papers, surveys, and datasets [47, 57, 15, 43, 29, 27, 16, 58, 28, 59, 49, 48, 38], and is generally defined as the verification of a natural language claim by collecting and reasoning over evidence extracted from text documents or structured data sources. Both the verification verdict and the collected evidence are the main outputs. While general AFC takes a textual claim and a searchable evidence base as inputs, AFC on KGs takes a single KG triple and its documented provenance in the form of an external reference. Approaches tackling AFC on KGs are very few, with the only two works of note in a similar direction, as far as is known by the authors, being DeFacto [25, 14] and its successor FactCheck [45]. While they tackle this task mostly as defined above, they rely on a searchable document base instead of a given reference and judge triples on a true-false spectrum instead of verifiability. Like these few approaches, ProVe diverges from the general AFC framework and introduces a few different sub-tasks. Still, it makes use of the current state-of-the-art on the subtasks it has in common with them, being the first approach to tackle AFC on KGs with large pre-trained Language Models (LMs); it can be expanded to work in languages other than English and benefits from an Active Learning scenario.

ProVe is evaluated on an annotated dataset of Wikidata triples and their references, combining multiple types of properties and web domains. ProVe achieves promising results overall (75% accuracy and 68.1% F1-macro) on classifying references as either supporting their triples or not, with an excellent performance on explicit and naturally written references (87.5% accuracy, 82.9% F1-macro, 0.908 AUC). Additionally, ProVe assesses passage relevance with a strong positive correlation (0.5058 Pearson's r) to human judgements.

In summary, this paper's main contributions are:

1. A novel pipelined approach to evidence-based Automated Fact Checking on Knowledge Graphs based on large Language Models;
2. A benchmarking dataset of Wikidata triples and references for Automated Fact Checking on Knowledge Graphs, covering a variety of information domains as well as a balanced sample of diverse web domains;
3. Novel crowdsourcing task designs that facilitate repeatable, quick, and large-scale collection of human annotations on passage relevance and textual entailment at good agreement levels.

These contributions directly aid KG curators, editors, and researchers in improving KG provenance. Properly deployed, ProVe can do so in multiple ways. Firstly, by assisting the detection of verifiability issues in existing references, bringing them to the attention of humans. Secondly, given a triple and its reference, it can promote re-usability of the reference by verifying it against neighbouring triples. Finally, given a new KG triple entered by editors or suggested by KG completion processes, it can analyse and suggest references.

The remainder of this paper is structured as follows. Section 2 explores related work on KG data quality, mainly verifiability, as well as approaches to AFC on KGs. Section 3 presents ProVe's formulation and covers each of its modules in detail. Section 4 presents an evaluation dataset consisting of triple-reference pairs, including its generation and its annotation. Section 5 details the results of ProVe's evaluation. Finally, Section 6 delivers discussions around this work and final conclusions. All code and data used in this paper are available on Figshare¹ and GitHub².³

2 Related Work

ProVe attempts to solve the task of AFC on KGs, with the purpose of assisting data curators in improving the verifiability of KGs. Thus, to understand how ProVe approaches this task, it is important to first understand how the data quality dimension of verifiability is currently defined and measured in KGs, as well as how state-of-the-art approaches to general AFC and AFC on KGs tackle these tasks and how ProVe learns or differs from them.

¹ https://figshare.com/s/df0ec1c233ebd50817f4
² https://anonymous.4open.science/r/RSP-F367/
³ https://anonymous.4open.science/r/ClaimVerificationHIT-A04D
These contributions directly aid KG curators, editors, and researchers in improving KG provenance. Properly deployed, ProVe can do so in multiple ways. Firstly, by assisting the detection of verifiability issues in existing references, bringing them to the attention of humans. Secondly, given a triple and its reference, it can promote re-usability of the reference by verifying it against neighbouring triples. Finally, given a new KG triple entered by editors or suggested by KG completion processes, it can analyse and suggest references.

The remainder of this paper is structured as follows. Section 2 explores related work on KG data quality, mainly verifiability, as well as approaches to AFC on KGs. Section 3 presents ProVe's formulation and covers each of its modules in detail. Section 4 presents an evaluation dataset consisting of triple-reference pairs, including its generation and its annotation. Section 5 details the results of ProVe's evaluation. Finally, Section 6 delivers discussions around this work and final conclusions. All code and data used in this paper are available on Figshare1 and GitHub2,3.

2 Related Work

ProVe attempts to solve the task of AFC on KGs, with the purpose of assisting data curators in improving the verifiability of KGs. Thus, to understand how ProVe approaches this task, it is important to first understand how the data quality dimension of verifiability is currently defined and measured in KGs, as well as how state-of-the-art approaches to general AFC and AFC on KGs tackle these tasks and how ProVe learns or differs from them.

1 https://figshare.com/s/df0ec1c233ebd50817f4
2 https://anonymous.4open.science/r/RSP-F367/
3 https://anonymous.4open.science/r/ClaimVerificationHIT-A04D

2.1 Verifiability in KGs

In order to properly evaluate the degree to which ProVe adequately predicts verifiability, this dimension first needs to be well defined and a strategy needs to be established to measure it given an evaluation dataset.

Verifiability in the context of KGs, whose information is mainly secondary, is defined as the degree to which consumers of KG triples can attest these are truly supported by their sources [56]. It is an essential aspect of trustworthiness [56, 11, 34], yet is amongst the least explored quality dimensions [56, 34], with most measurements carried out superficially, unlike correctness or consistency [34, 40, 2, 22, 1].

For instance, Farber et al. [11] measure verifiability only by considering whether any provenance is provided at all. Flouris et al. [12] look deeper into sources' contents, but only verify specific and handcrafted irrationalities, such as a city being founded before it had citizens. Algorithmic indicators are not suited to directly measure verifiability, as sources are varied and natural language understanding is needed. As such, recent works [33, 3] measure KG verifiability through crowdsourced manual verification, giving crowdworkers direct access to triples and references. Crowdsourcing allows for more subjective and nuanced metrics to be implemented, as well as for natural text comprehension [55, 8].

Thus, this paper employs crowdsourcing in order to measure verifiability metrics of individual triple-reference pairs. By comparing a pair's metrics with ProVe's outputs given said pair as input, ProVe and its components can be evaluated.
Like similar crowdsourcing studies [33, 3], multiple quality assurance techniques are implemented to ensure collected annotations are trustworthy [10]. To the best of the authors' knowledge, this is the first work to use crowdsourcing as a tool to measure the relevance and stance of references in regards to KG triples at levels varying from whole references to individual text passages.

2.2 Automated Fact Checking on Knowledge Graphs

General AFC

Automated Fact Checking (AFC) is a topic of several works of research, datasets, and surveys [47, 57, 15, 43, 29, 27, 16, 58, 28, 59, 49, 48, 38]. AFC is commonly defined in the Natural Language Processing (NLP) domain as a broader category of tasks and subtasks [47, 57, 15] whose goal is to, given a textual claim and searchable document corpora as inputs, verify said claim's veracity or support by collecting and reasoning over evidence. Such evidence is extracted from the input document corpora and constitutes AFC's output alongside the claim's verdict. While a detailed exploration of individual AFC state-of-the-art approaches is out of this paper's scope, it is crucial to define their general framework in order to properly cover ProVe's architecture.

A general framework for AFC has been identified by recent surveys [57, 15], and can be seen in Figure 1. Zeng et al. [57] define it as a multi-step process where each step can be tackled as a subtask. Firstly, a claim detection step identifies which claims need to be verified. Based on such claims, a document retrieval step gathers documents that might contain information relevant to verifying the claim. A sentence selection step then identifies and extracts from retrieved documents a small set of individual text passages deemed the most relevant. Based on these passages, a claim verification step provides the final verdict. Guo et al. [15] add that a final justification production step is crucial for explainability. Given the framework's nature, it is no wonder pipelined approaches are extremely popular and compose the current state-of-the-art.

Fig. 1. Overview of a general AFC pipeline. White diamond blocks are documents and objects, and grey square blocks are AFC subtasks. Specific formulations and implementations might of course differ.

AFC mainly deals with text, both as claims to be verified and as evidence documents, due to recent advances in this direction being greatly facilitated by textual resources like the FEVER shared task [49] and its associated large-scale benchmark FEVER dataset [48]. Still, some tasks in AFC take semantic triples as verifiable claims, either from KGs [41, 21] or by extracting them from text. Some also utilise KGs as reasoning structures from which to draw evidence [15, 50, 46, 9, 42]. For instance, Thorne and Vlachos [46] directly map claims found in text to triples in a KG to verify numerical values. Both Ciampaglia et al. [9] and Shiralkar et al. [42] use entity paths in DBpedia to verify triples extracted from text. Other approaches based on KG embeddings associate the likelihood of a claim being true to that of it belonging as a new triple to a KG [5, 18].
These tasks, while incorporating semantic triples and KGs, cannot be exactly defined as AFC on KGs; either the verified triples do not come from full and consistent KGs, or the evidence used for reasoning is not taken from sources that could serve as provenance, but inferred from the graph itself.

AFC on KGs

AFC on KGs is a more specific task within AFC, explored by a handful of approaches, the most prominent of which are DeFacto [14] and its successor FactCheck [45]. Its main purpose is to ensure KGs are fit for use by asserting whether their information is verifiable by trustworthy evidence. Given a KG triple and either its documented external provenance or searchable external document corpora whose items could be used as provenance, AFC on KGs can be defined as the automated verification of said triple's veracity or support by collecting and reasoning over evidence extracted from such actual or potential provenance. Its outputs are the verdict and the evidence used.

KGCleaner [31] uses string matching and manual mappings to retrieve sentences relevant to a KG triple from a document, using embeddings and handcrafted features to predict the triple's credibility. Leopard [44] validates KG triples for three specific organisation properties, using specifically designed extractions from HTML content. Both approaches entail manual work overhead, cover a limited number of predicates, and do not provide human-readable evidence.

DeFacto [14] and its successor FactCheck [45] represent the current state-of-the-art on this task. They verbalise KG triples using text patterns and use these verbalisations to retrieve web pages with related content. They then score sentences based on relevance to the claim and use a supervised classifier to classify the entire web page. Despite their good performance, both approaches depend on string matching, which might miss verbalisations that are more nuanced, and also entail considerable overhead for unseen predicates. ProVe, on the other hand, covers any non-ontological predicate (i.e. excluding predicates such as subclass of and main category of) by using pre-trained LMs that leverage context and meaning to infer verbalisations.

Due to its specific application scenario, approaches tackling AFC on KGs differ from the general framework [57, 15] seen in Figure 1. A claim detection step is not deemed necessary, as triples are trivial to extract and it is commonly assumed they all need verifying. Alternatively, triples with particular predicates can be easily selected. The existence of a document retrieval step depends on whether provenance exists or needs to be searched from a repository, with the former scenario dismissing the need for the step. This is the case for ProVe, but not for DeFacto [14] and FactCheck [45], which search for web documents.

Additionally, KG triples are often not understood by the components' main labels alone. Descriptions, alternative labels, editor conventions, and discussion boards help define their proper usage and interpretation, rendering their meaning not trivial, in contrast to the natural language sentences tackled by general AFC. As such, approaches tackling AFC on KGs rely on transforming KG triples into natural sentences [25, 14, 45] through an additional claim verbalisation step. While both DeFacto [14] and FactCheck [45] rely on sentence patterns that are completed with the components' labels, ProVe relies on state-of-the-art Language Models (LMs) for data-to-text conversion.
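To make this contrast concrete, data-to-text verbalisation of a KG triple with a sequence-to-sequence LM can be sketched as follows. This is a minimal Python sketch using the Hugging Face transformers API; the stock t5-base checkpoint stands in for a fine-tuned model, and the linearised input format is an illustrative assumption rather than ProVe's exact configuration.

from transformers import T5ForConditionalGeneration, T5Tokenizer

# Illustrative: a stock T5 checkpoint stands in for a WebNLG fine-tune.
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def verbalise(subject: str, predicate: str, obj: str) -> str:
    # The linearised input format below is assumed for illustration only.
    source = f"translate from triple to text: {subject} | {predicate} | {obj}"
    input_ids = tokenizer(source, return_tensors="pt").input_ids
    # Beam search with 3 beams, matching the generation setting reported for ProVe.
    output_ids = model.generate(input_ids, num_beams=3, max_length=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(verbalise("James H. Billington", "position held", "Librarian of Congress"))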
Lastly, evidence document corpora normally used in general AFC tend to have a standard structure or come from a specific source. Both FEVER [48] and VitaminC [39] take their evidence sets from Wikipedia, with FEVER's even coming pre-segmented as individual and clean sentences. Vo and Lee [51] use web articles from snopes.com and politifact.com only. KGs, however, accept provenance from potentially any website domain. As such, unlike general AFC approaches, ProVe employs a text extraction step in order to retrieve and segment text from triples' references. While previous approaches simply remove HTML markup, ProVe employs a rule-based approach that allows for more flexibility.

Large Pre-trained Language Models on AFC

Advances towards textual evidence-based AFC, particularly the sentence selection and claim verification subtasks, have been facilitated by resources like the FEVER [49, 48] shared task and its benchmarking dataset. The FEVER dataset consists of a large set of claims annotated with one of three classes: supported, refuted, and not enough information to determine (neither). The dataset also provides pre-extracted and segmented passages from Wikipedia as evidence for each claim.

Tackling FEVER through pre-trained LMs [43, 27, 29] and graph networks [28, 59, 58] represents the current state-of-the-art. While approaches using graph networks (such as KGAT [28], GEAR [59], and DREAM [58]) for claim verification slightly outperform those based mainly on sequence-to-sequence LMs, they still massively depend on the latter for sentence selection. Additionally, explainability for task-specific graph architectures, like those of KGAT and DREAM, is harder to tackle than for generalist sequence-to-sequence LM architectures which are shared across the research community [7, 36, 23]. Slightly decreasing potential performance in favour of a simpler and more explainable pipeline, ProVe employs LMs for both sentence selection and claim verification.

On sentence selection, the common strategy is to assign relevance scores to text passages based on their contextual proximity to a verifiable claim. GEAR [59] does so with LSTMs, but uses a BERT model to acquire text encodings. Soleimani et al. [43], KGAT [28], and current state-of-the-art DREAM [58] outperform GEAR by directly using BERT for the rankings, an approach ProVe also follows. Graph networks are employed for the claim verification subtask [28, 59, 58]. Soleimani et al. [43] are among the few to achieve near state-of-the-art results using an LM and a rule-based aggregation instead of graph networks. While ProVe handles the subtask similarly, it uses a weak classifier as its final aggregation.

As a subtask of AFC on KGs, claim verbalisation is normally done through text patterns [14, 45] and by filling templates [31], both of which can either be distantly learned or manually crafted. ProVe is the first approach to utilise an LM for this subtask. Amaral et al. [4] show that a T5 model fine-tuned on WebNLG achieves very good results when verbalising triples from Wikidata across many domains. ProVe follows suit by also using a T5.

Table 1 shows a comparison of ProVe to other AFC approaches mentioned in this section grouped by specific task, showcasing the particular subtasks each targets, as well as the datasets used as a basis for their evaluation. AFC on KGs is amongst the least researched tasks within AFC.
ProVe is the first to tackle it through fine-tuned LMs that adapt to unseen KG predicates and to be evaluated on a Wikidata dataset consisting of multiple non-ontological predicates.

Task                   | Input Type     | Evidence Source | Evidence Returned | Subtasks        | Evaluation Dataset                                  | Approaches
General text-based AFC | Textual claims | Text            | Yes               | DR, SS, CV      | FEVER                                               | [43, 29, 28, 59, 58]
Graph-based AFC        | Textual claims | KG              | Yes               | SS, CV          | Freebase                                            | [50, 46]
Graph-based AFC        | Textual claims | KG paths        | Yes               | SS, CV          | DBpedia, SemMedDB, Wikipedia, Kaggle, news articles | [41, 21, 9, 42]
KG triple prediction   | KG triples     | KGE             | No                | RE, EL, CV      | DBpedia, Freebase                                   | [5, 18]
AFC on KGs             | KG triples     | Text            | Yes               | CVb, DR, TA, CV | DBpedia, FactBench                                  | [25, 14, 45]
AFC on KGs             | KG triples     | Text            | No                | SS, CV          | Wikidata (48 predicates), SWC 2017                  | [31, 44]
AFC on KGs             | KG triples     | Text            | Yes               | CVb, TR, SS, CV | Wikidata (any non-ontological predicate)            | ProVe

Table 1. Comparison between ProVe and others within AFC. KGE = KG Embeddings, DR = Document Retrieval, SS = Sentence Selection, CV = Claim Verification, RE = Relation Extraction, EL = Embedding Learning, CVb = Claim Verbalisation, TA = Trustworthiness Analysis, TR = Text Retrieval.

3 Approach

ProVe consists of a pipeline for Automated Fact Checking (AFC) on Knowledge Graphs (KGs) that, provided with a KG triple that is not ontological in nature (e.g. denoting subclasses, categories, lists, etc.) and its documented provenance in the form of a web page or text document, automatically verifies whether the page textually supports the triple, retrieving from it relevant text passages that can be used as evidence. This section presents an overview of ProVe and its task, as well as detailed descriptions of its modules and the subtasks they target.

3.1 Overview

From a KG's set of non-ontological triples T, consider a KG triple t ∈ T, where t is composed of a subject, a predicate, and an object, i.e. t = (s, p, o). Consider also a reference r to a web page or text document, acting as the documented provenance of t. ProVe assesses whether r textually supports t through a pipeline of rule-based methods and language models. Figure 2 shows a KG triple (taken from Wikidata), its reference to a web page, and ProVe's processing according to the definitions provided in this section.

ProVe extracts the text from r and divides it into a set of passages P. Each passage p_i ∈ P, with i in the closed integer interval [0..|P| − 1], receives a relevance score ρ_i ∈ [−1, 1] indicating how relevant it is to the triple t. The five highest-ranking passages from P are selected as evidence and assembled into the evidence set E. Each evidence e_i ∈ E, with i in the integer interval [0..4], receives three stance probabilities (σ_i^SUPP, σ_i^REF, σ_i^NEI), such that each probability σ_i^k ∈ [0, 1] and Σ_{k∈K} σ_i^k = 1, where K = {SUPP, REF, NEI}. These probabilities denote the individual stance of evidence e_i towards the triple t, which can either support the triple (SUPP), refute it (REF), or not have enough information to do either (NEI). Finally, ProVe uses the relevance scores ρ_i and the stance probabilities σ_i^k ∀ k ∈ K of each e_i ∈ E to define a final stance z ∈ K, as well as to calculate a support probability y ∈ [0, 1] indicating how much the triple t is supported by its reference r.

Fig. 2. An example of the inputs and outputs of ProVe when applied to a Wikidata triple and its provenance. A triple's (t) subject, predicate, and object elements (s, p, o) have their labels extracted and verbalised (v).
Reference r has its passages extracted (P). The 5 most relevant passages are compiled as the evidence set (E), with their respective relevance scores (ρ_i) and stance probabilities (σ_i^k) used to calculate a final class (z) and a support probability (y). Note that i indices between passages in P and evidence in E are different, i.e. the 27th extracted passage is the 4th evidence.

A modular view of ProVe's pipeline can be seen in Figure 3. Its inputs, as previously stated, are a KG triple (t) and a referenced web page or text document (r), while its outputs are a final stance class (z), a support probability (y), and a set of textual evidence used to calculate it (E). ProVe takes any non-ontological KG triple as long as its components are accompanied by labels in natural language. The claim verbalisation module takes the preferred labels of each of the triple's components (i.e. subject, predicate, and object) as its inputs and produces a natural language sentence that expresses the same information (v). As KG entities and predicates might contain multiple labels, multiple possible verbalisations can be generated; users might choose those labels that best portray the triple's meaning, which are here
defined as preferred labels, and even rules within the KG can determine them. The reference (r) can consist of any HTML page containing natural language text, but also plain text documents. As ProVe makes no assumptions as to the page's layout in order to optimise recall, its text retrieval module extracts all identified passages (P) from the page, even if they contain boilerplate text or have poor syntax due to layout-dependent contents, e.g. tables, headers, etc.

Fig. 3. Overview of ProVe's pipeline. The white blocks are artefacts while the green blocks are modules, further detailed in the subsections indicated in the circles.

Pairs consisting of the verbalised claim (v) alongside each extracted passage (p_i) are given to the sentence selection and the claim verification modules. At the former, extracted passages are given a relevance score from −1 to 1, indicating how contextually relevant to the claim they are, regardless of stance. The five highest scoring passages are selected as evidence (E). The claim verification module has two steps. First, a Textual Entailment Recognition (TER) step assigns each extracted passage probabilities of having the three following stances: supports the claim, refutes the claim, or does not hold enough information for either (NEI). Finally, a stance aggregation step takes the relevance scores plus stance probabilities for the five passages in the evidence set and outputs the final class (z) and support probability (y), indicating how supportive the reference is of the triple.

ProVe's workflow differs from the AFC framework seen in Figure 1. This is due to the particular task ProVe tackles, i.e. the verification of KG triples using text from documented provenance, where triples can have any non-ontological predicate and such provenance can come from varied sources. As detailed in Section 2 and evidenced in Table 1, this is currently a little studied task compared to others in AFC, posing distinct problems and requiring specific subtasks to be solved. For instance, ProVe does not need to perform either claim detection or document retrieval, as both the claims and the sources are given to it as inputs, although such modules can be easily plugged in from other pipelines. On the other hand, as ProVe handles both KG triples and unstructured text with the same model architectures and does not make use of KG paths for evidence, it needs to convert the triples into text through a claim verbalisation module, akin to most other approaches in this task [25, 14, 45, 31].
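To make the module composition above concrete, the pipeline's overall data flow can be sketched as follows. This is a minimal, self-contained Python sketch in which every module is a stub; all function names and stub outputs are illustrative placeholders, not ProVe's actual API.

from typing import List, Tuple

def verbalise(triple: Tuple[str, str, str]) -> str:
    # Stub: the real module uses a fine-tuned T5 (Section 3.2).
    return " ".join(triple)

def extract_passages(url: str) -> List[str]:
    # Stub: rule-based extraction plus sliding-window concatenation (Section 3.3).
    return ["James Hadley Billington was sworn in as the Librarian of Congress."]

def relevance(claim: str, passage: str) -> float:
    # Stub: a fine-tuned BERT score in [-1, 1] (Section 3.4).
    return 0.9

def ter_probabilities(claim: str, passage: str) -> Tuple[float, float, float]:
    # Stub: (SUPP, REF, NEI) probabilities from a fine-tuned BERT (Section 3.5).
    return (0.98, 0.01, 0.01)

def prove(triple: Tuple[str, str, str], url: str):
    v = verbalise(triple)
    scored = [(p, relevance(v, p)) for p in extract_passages(url)]
    evidence = sorted(scored, key=lambda x: x[1], reverse=True)[:5]
    stances = [ter_probabilities(v, p) for p, _ in evidence]
    # Weighted-sum aggregation (strategy 1 in Section 3.5): negative relevance
    # scores are dismissed, and the class with the highest sum wins.
    sums = [sum(max(rho, 0.0) * s[k] for (_, rho), s in zip(evidence, stances))
            for k in range(3)]
    z = ["SUPPORTS", "REFUTES", "NEI"][sums.index(max(sums))]
    return z, sums[0], evidence

print(prove(("James H. Billington", "position held", "Librarian of Congress"),
            "https://www.loc.gov/item/n80020417/"))

The sketch mirrors the interface just described: a triple and a reference URL go in, and a final stance class, a support score, and the evidence set come out.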
As its input references can lead to web pages having any HTML layout and their text does not come pre-segmented (such as with FEVER [48]), ProVe needs a non-trivial text retrieval module so that it can identify informative passages. Finally, as such KGs are often secondary sources of information and triples should not include conclusions or interpretations from editors, ProVe needs to consider pieces of evidence in isolation; this is to lower ProVe's reliance on multi-sentence reasoning, as concluding a triple from multiple text passages should not constitute support in this task. Thus, ProVe first identifies stances of retrieved evidence individually, aggregating them into a final verdict afterwards. Each of ProVe's modules is further detailed in the remainder of this section.

3.2 Claim Verbalisation

KG entities and properties have natural language labels that help clarify their meanings, with KGs like Wikidata and Yago also containing multiple aliases and alternative names. However, these entities and predicates are often not created by prioritising human understanding, but rather data organisation, and thus rely heavily on descriptions in order to set out proper usage rules. Many serve as abstract concepts that unite other related but not identical concepts, using a very broad main label and more specific aliases. One example of such is Wikidata's inception property (P571), which indicates the date on which something was founded, written, formed, painted, or created, and applies to any entity with a beginning; its description clearly points out that for dates of official opening, P1619 should be used instead. Many also depend heavily on context (e.g. subject and object types) or editor conventions to have a clear meaning. One example of such is the child property (P40), which follows the convention that the subject has the object as its child, but should not be used for stepchildren. However, the triple (John, child, Paul) alone makes it unclear which is the parent. As such, merely concatenating labels does not convey the full meaning of the triple [14, 25, 45]. Thus, ProVe
Like recent works in data verbalisa- tion [37, 4], a pre-trained transformer is used for this subtask. The function φ(·) is carried out by a T5-base [35] model fine-tuned on the WebNLG 2017 dataset [13]. The WebNLG 2017 dataset consists of DBpedia triples belonging to 15 categories and their corresponding verbalisations in English; 10 categories (the SEEN partition) were used for training, validation, and testing, and the remaining 5 (the UNSEEN partition) for testing only. Fine-tuning was carried out with a constant 3e-5 learning rate, an Adam optimiser, and a cross-entropy loss for 100 epochs with early stopping patience of 15 epochs. Its text generation is done via beam search with 3 beams. Amaral et al. [4] use this exact same model to produce the WDV dataset, which consists of verbalised Wikidata triples, and then evaluate its quality with human annotations on both fluency and adequacy. Such evaluations are covered in more detail in Section 5. In WDV, main labels are used as preferred labels for all triple components, despite aliases often representing better choices. ProVe allows editors to manually define the behaviour of l(·) to replace contextually- dependent and vague main labels with alternate labels in order to address some of the fluency and adequacy issues observed in WDV. While this entails the extra effort by ProVe’s users of choosing proper labels, it is still a much more scalable alternative to manually generating verbalisations. 3.3 Text Retrieval In KGs, provenance is documented per individual triples and is often presented as URLs to web pages or text docu- ments. Such references form the basis of KG verifiability and should point to sources of evidence that support their associated KG triples. Additionally, they can come from a huge variety of domains as long as they adhere to verifia- bility criteria, that is, humans can understand the information they contain. As humans are excellent in making sense of structured and unstructured data combined, KG editors do not need to worry much about how references express their information. Images, charts, tables, headers, infoboxes, and unstructured text, can all serve as evidence to the information contained in KG triple. However, this complicates the automated extraction of such evidence in a standard format so that LMs can understand it. Rather than only free-flowing text, referenced web pages can have multiple sections, layouts, and elements, making it non-trivial to automatically segment its textual contents into passages. Thus, ProVe employs a combination of rule- based methods and pre-trained sentence segmenters to extract passages. Figure 4 details this process. The module takes as input a reference r, which can either be a URL to a web page or a text document. If an URL, the module extracts all HTML contents via a web scrapper, assuring that all contents accessible by users are rendered and processed. The module then removes scripts and code from the HTML, leaving only markup and text. A list of rule-based cleaning steps are then applied. Whitespaces in continuous text elements are corrected by ensuring text within tags such as <p> do not have separations that could be breaking sentences. Tags that are sure to be boilerplate are removed, such as tables of contents and navigation bars. Text that is broken across sequential similar tags is joined. Lastly, spacing and punctuation (full stops) are corrected. Following this process addresses the most severe cases of improper sentence segmentation. 
Leftover HTML markup is removed and the text is fed into spaCy's sentence segmenter using the en_core_web_lg model [17], producing a set of text segments S. If r is a text document, it is fed directly into sentence segmentation. As a last step, multiple n-sized sliding window concatenations are used to create the final set of passages P. Given a positive integer n ∈ N, where N is the set of window sizes to be applied, an n-sized window slides through the sequential set S of text segments produced by the sentence segmentation step. Let S_i^j = [s_i, . . . , s_j], for j ≥ i, be the window including all segments from s_i to s_j, and ⊙(·) the function that concatenates a sequence of segments interleaving them with a blank space as a separator. Equations 2 and 3 define the set of all passages P_n produced by an n-sized sliding window and the union P of all such passages, respectively.

Fig. 4. Illustration of the text extraction module's workflow, taking a reference r as input, dividing its text into a set of segments S, and concatenating them into a set of extracted passages P.

P_n = {⊙(S_i^{i+n−1}) | 0 ≤ i ≤ |S| − n}   (2)

P = ∪_{n∈N} P_n   (3)

ProVe concatenates text segments in this fashion for two reasons. Firstly, meaning can often be spread between sequential segments, e.g. in the case of anaphora. Secondly, HTML layouts might separate text in ways ProVe's general rules were not able to join, e.g. a paragraph describing an entity in the header. For a trade-off between coverage and sentence length, ProVe defines N = {1, 2}, i.e. it concatenates all segments by themselves (P_1) and all sequential pairs (P_2).

Current best approaches to extracting textual content from web pages, often called boilerplate removal, are based on supervised classification [52, 26] and, as no classification is perfect, might miss relevant text. ProVe's text retrieval module aims at maximising recall by retrieving all reachable text from the web page and arranging it into separate passages by following a set of rules based on the HTML structure. The sentence selection module is later responsible for performing relevance-based filtering. ProVe's rule-based method can easily be updated with ad hoc cleaning techniques to help treat difficult cases, such as linearisation or summarisation of tables, automated image and chart descriptions, converting hierarchical header-paragraph structures and infoboxes into linear text, etc.

3.4 Sentence Selection

As ProVe's text extraction module extracts all the text in a web page in the form of passages P, it needs a way to filter those based on their relevance to the triple t. Sentence selection consists of, given a claim, ranking a set of sentences based on how relevant each is to the claim, where relevance is defined as contextual proximity, e.g. similar entities or overlapping information. Sentence selection is an integral part of most recent AFC approaches [15, 57], including AFC on KGs [45, 14], as discussed in Section 2.2. It is a subtask of the FEVER fact-checking shared task [49], directly supported as a supervised task by the FEVER dataset [48], and is explored by a large body of work [43, 29, 28, 59, 58] to excellent results. Following KGAT's [28] and DREAM's [58] approach to FEVER's sentence selection subtask, ProVe employs a large pre-trained BERT transformer.
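Returning briefly to the text extraction step, the sliding-window concatenation of Equations 2 and 3 admits a direct implementation; the following minimal Python sketch uses N = {1, 2} as ProVe does.

def sliding_passages(segments, window_sizes=(1, 2)):
    passages = []
    for n in window_sizes:
        # Equation 2: an n-sized window slides over S, with 0 <= i <= |S| - n.
        for i in range(len(segments) - n + 1):
            # Concatenate segments s_i .. s_{i+n-1} with a blank space separator.
            passages.append(" ".join(segments[i:i + n]))
    return passages  # Equation 3: the union of all P_n

segments = ["He was sworn in on September 14, 1987.",
            "He is the 13th person to hold the position."]
print(sliding_passages(segments))

The fine-tuned model described next is then responsible for scoring the passages this step produces.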
ProVe's sentence selection BERT is fine-tuned on the FEVER dataset by adding to it a dropout and a linear layer, as well as a final hyperbolic tangent activation, making outputted scores range from −1 to 1, and training it on a pairwise margin ranking loss with the margin set to 1. Fine-tuning is achieved by feeding the model pairs of inputs, where the first element is a concatenation of a claim and a relevant sentence, while the second element is the same but with an irrelevant sentence instead, and training it to assign higher scores to the first element, such that the difference in scores between the pair is at least 1 (the margin). FEVER is used for training and validation. For each FEVER claim in the training and validation partitions, relevant sentences are provided as annotations. Irrelevant sentences were retrieved by applying the same document retrieval process used by other works [43, 28, 59] to define relevant Wikipedia articles, which FEVER breaks into pre-segmented sentences. All sentences from such retrieved documents that were not already annotated as relevant were taken as irrelevant.

This fine-tuned module is thus used to assign scores ranging from −1 to 1 to passages from P, expressing how relevant they are to the given triple t. Taking the triple t's verbalisation v as the claim, and a passage p_i as the sentence, the BERT model takes the concatenation of v and p_i as input and outputs a relevance score ρ_i ∈ [−1, 1]. This is defined in Equation 4, where ψ(·) represents the execution of the sentence selection BERT on the concatenated input.

ProVe ranks all passages p_i ∈ P based on their relevance scores ρ_i. As passages are generated with sliding windows of different sizes by the text extraction module, they might have overlapping content. To avoid unnecessary repetition of information, a passage p_i is removed from P whenever there is another passage p_j that overlaps with it and is more relevant (i.e. ρ_j > ρ_i), yielding the set of passages P*, whose five highest scoring passages constitute the evidence set E (see Equation 5).

ρ_i = ψ(v, p_i)   (4)

E = argmax_{P′ ⊆ P*, |P′| = 5} Σ_{p_i ∈ P′} ρ_i   (5)

3.5 Claim Verification

As discussed in Section 2.2, claim verification is a crucial subtask in AFC, central to various approaches [15, 47], and consists of assigning a final verdict to a claim, be it on its veracity or support, given retrieved relevant evidence. ProVe's claim verification relies on two steps: first, a pre-trained BERT fine-tuned on data from FEVER [48] performs Textual Entailment Recognition (TER) to detect stances of individual pieces of evidence, and then an aggregation considers the stances and relevance scores from all evidence to define a final verdict. As ProVe also uses a BERT model for sentence selection, its approach is similar to that of Soleimani et al. [43], which uses two fine-tuned BERT models, one for sentence selection and another for claim verification.
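Before moving to the verification steps, the ranking and selection just formalised (Equations 4 and 5) can be sketched as follows; the fine-tuned BERT is abstracted as a scoring function, and overlap is simplified to substring containment for illustration.

from typing import Callable, List, Tuple

def select_evidence(claim: str, passages: List[str],
                    score: Callable[[str, str], float]) -> List[Tuple[str, float]]:
    # Equation 4: rho_i = psi(v, p_i) for every extracted passage.
    scored = sorted(((p, score(claim, p)) for p in passages),
                    key=lambda x: x[1], reverse=True)
    kept: List[Tuple[str, float]] = []
    for p, rho in scored:
        # Drop a passage when a more relevant, overlapping one is already kept.
        if not any(p in q or q in p for q, _ in kept):
            kept.append((p, rho))
    # Equation 5: the five highest-scoring surviving passages form E.
    return kept[:5]

# Toy usage with a length-based stand-in for the BERT scorer.
print(select_evidence("claim", ["a b c", "a b", "d"], lambda c, p: float(len(p))))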
Although task-specific graph-based approaches [28, 58] outperform Soleimani et al. [43], they do so by less than a percentage point (on FEVER score), while explainability for such generalist pre-trained LMs is increasingly researched [7, 36, 23] by the NLP community.

Textual Entailment Recognition

Like sentence selection, claim verification is a well defined subtask of AFC, supported by both the FEVER shared task and the FEVER dataset. FEVER annotates claims as belonging to one of three TER classes: those supported by their associated evidence ('SUPPORTS'), those refuted by it ('REFUTES'), and those wherein the evidence is not enough to reach a conclusion ('NOT ENOUGH INFO', abbreviated as 'NEI'). As previously mentioned, ProVe is meant to handle KGs as secondary sources of information. Thus, it assesses evidence first in isolation, aggregating assessments afterwards in a similar fashion to other works [43, 29].

ProVe's TER step is a BERT model fine-tuned on a multiclass classification TER task. It consists of identifying a piece of evidence's stance towards a claim using the three classes from FEVER ('SUPPORTS', 'REFUTES', 'NEI'). To fine-tune such a model, a labelled training dataset of claim-evidence pairs is built out of FEVER. For each claim in FEVER labeled as 'SUPPORTS', all sentences annotated as relevant to it are paired with the claim; such pairs are labelled as 'SUPPORTS'. The same is done for all claims in FEVER labeled as 'REFUTES', generating pairs classified as 'REFUTES'. For claims labeled as 'NOT ENOUGH INFO', FEVER does not annotate any sentence as relevant to them. Thus, ProVe's sentence selection module is applied to documents deemed relevant to such claims (retrieved in a similar fashion to KGAT [28]) and each claim is paired with all sentences that have relevance scores greater than 0 in regards to them. All such pairings are labelled 'NEI'. Fine-tuning was carried out for 2 epochs with an AdamW optimiser with 0.01 weight decay. Population Based Training was used to tune the learning rate, batch sizes, and warmup ratio.

Thus, for a verbalisation v obtained from a triple t and a piece of evidence e_i ∈ E, retrieved as a passage from t's reference r, ProVe's TER step returns a probability array σ_i that describes e_i's stance towards v. Given that v is fluent and adequate, σ_i also describes e_i's stance towards t. Equation 6 formulates this, where τ(·) is the function representing ProVe's TER BERT model, which takes the concatenation of v and e_i as input and outputs σ_i, an array that is normalised through a softmax layer. The array σ_i consists of the probabilities of each FEVER class k ∈ K = {SUPP, REF, NEI}. Notice that Σ_{k∈K} σ_i^k = 1.

σ_i = (σ_i^SUPP, σ_i^REF, σ_i^NEI) = τ(e_i, v)   (6)

Stance Aggregation

After classifying the stance and relevance of each individual piece of evidence e_i ∈ E towards the triple t, ProVe aggregates these scores (ρ_i and σ_i) into a final stance class z ∈ K and a probability y denoting the level of support shown by the triple-reference pair (t, r). Multiple aggregation strategies can be adopted, with ProVe proposing three:

1. A simple weighted sum σ of the TER probability arrays σ_i, using the relevance scores ρ_i as weights, where negative relevance scores are dismissed (Equation 7). A final TER class z can be defined as the class k ∈ K with the highest weighted sum (Equation 8).
The support probability y is then defined as the weighted summed probability of the 'SUPPORTS' class (Equation 9).

σ^k = Σ_{0 ≤ i < |E|} max(ρ_i, 0) · σ_i^k   (7)

z = argmax_{k∈K} σ^k   (8)

y = σ^SUPP   (9)

2. The rule-based strategy adopted by Malon et al. [29] and Soleimani et al. [43]. A triple-reference pair (t, r) is assigned a final TER class z (Equation 10) of 'SUPPORTS' if any individual evidence e_i ∈ E is most likely to support the triple. If that is not the case, z is set as 'REFUTES' if any individual evidence e_i ∈ E is most likely to refute the triple. If that is also not the case, z is set to 'NEI'. The final probability y is 1 if the triple-reference pair is classified as 'SUPPORTS' and 0 otherwise (Equation 11). This strategy does not allow editors to vary the classification threshold.

z = SUPP, if there exists e_i ∈ E s.t. argmax_{k∈K} σ_i^k = SUPP;
    REF, else if there exists e_i ∈ E s.t. argmax_{k∈K} σ_i^k = REF;
    NEI, otherwise   (10)

y = 1 if z = SUPP, 0 otherwise   (11)

3. A simple classifier, trained on an annotated set of triple-reference pairs (t, r). The classifier takes as features all relevance scores ρ_i and all TER probability arrays σ_i calculated from every piece of evidence e_i ∈ E, as well as their sizes in characters. It is trained on a multiclass classification task to predict the annotated final TER class z of the pair (t, r) by outputting a probability array θ, as defined by Equation 12, where ω(·) represents the classifier. Thus, z can be defined as the class with the highest probability (Equation 13), and y as the probability θ^SUPP assigned to the 'SUPPORTS' class (Equation 14).

θ = (θ^SUPP, θ^REF, θ^NEI) = ω({(ρ_i, σ_i, |e_i|) | e_i ∈ E})   (12)

z = argmax_{k∈K} θ^k   (13)

y = θ^SUPP   (14)

4 Reference Evaluation Dataset

This section presents and describes the dataset used to evaluate ProVe: Wikidata Textual References (WTR). WTR is mined from Wikidata, a large scale multilingual and collaborative KG, produced by voluntary anonymous editors and bots, and maintained by the Wikimedia Foundation [53]. WTR consists of a series of detailed and annotated triple-reference pairs, each consisting of a non-ontological Wikidata triple paired with a reference to a web page, documented as its provenance. Unlike other benchmarking datasets used in AFC, WTR contains no artificial data and reflects only naturally occurring information.

WTR's triples are detailed with all three main components (subject, predicate, and object), as well as their unique Wikidata identifiers, main labels, aliases (alternative labels), and textual descriptions. WTR's references are detailed with the URLs they resolve to, as well as the HTML contents within. WTR is balanced in terms of the web domains contemplated, meaning the web domains represented by its references have mostly an equal number of triple-reference pairs.

Each triple-reference pairing in WTR is annotated both at evidence level and at reference level. Evidence-level annotations are provided by crowdworkers and describe the stance that specific text passages from the reference display towards the triple. Reference-level annotations are provided by the authors and describe the stance the whole referenced web page displays towards the triple. Evaluation, described in Section 5, consists of comparing ProVe's final class (z) and support probability (y) outputs to such annotations. Section 4.1 covers WTR's construction, while Section 4.2 details its annotation process.
4.1 Dataset Construction

Wikidata has been chosen as the source for ProVe's evaluation dataset, as it contains vast amounts of triples that explicitly state their provenance, pertain to various domains, and are accompanied by aliases and descriptions that greatly aid annotators. Since many references in Wikidata can be automatically verified through API calls, as showcased by Amaral et al. [3], WTR is built focusing on those that cannot. Furthermore, to prevent biases towards frequently used web domains, such as Wikipedia and The Peerage, WTR is built to represent a variety of web domains with equal amounts of samples from each.

Selecting References

WTR is constructed from the Wikidata dumps from March 2022. First, all references are extracted. Those associated with at least one triple and which lead to a web page by either an external URL (through the reference URL (P854) property), or by an external identifier property that has formattable URLs, e.g. VIAF ID (P214), are kept. These two types of references constitute 91.52% of all references. The remaining portion consists of references to Wikidata items, those inferred from other specific Wikidata claims, those imported from Wikipedia (but with no page specified), and those without any provenance property. These types of references are avoided due to potential issues with circular or vague provenance. Close to 20M unique references are extracted.

Next, each extracted reference has its initial web URL defined. For references with direct URLs (reference URL (P854)), such URLs are used. For references with external ID properties, the property's formatter URL (P1630) is combined with the linked external ID to establish the URL. For instance, a reference might use the IMDb ID (P345) property, which has the formatter URL "https://wikidata-externalid-url.toolforge.org/?p=345&url_prefix=https://www.imdb.com/&id=$1". By replacing the '$1' with the ID linked by the property, one establishes the IMDb URL represented by the reference.

The extracted set of references and their respective initial URLs is then filtered. References that are inadequate to the scenario in which ProVe will be used are removed. These consist of three groups:

1. References with URLs to domains that have APIs or content available as structured data (e.g. JSON or XML), as these can be automatically checked through APIs, e.g. PubMed, VIAF, UniProt, etc.
2. References with URLs linking to files such as CSV, ZIP, PDF, etc., as parsing these file formats is outside of ProVe's scope.
3. References with URLs to pages that have very little to no information in textual format, such as images, document scans, slides, and those consisting only of infoboxes, e.g. nga.gov, expasy.org, Wikimedia Commons, etc.

As shown by Amaral et al. [3], the first of these groups is very substantial, with an estimated over 70% of English references being automatically verifiable through API calls. Additionally, references with URLs to websites not in English (according to FastText's language identification models [19, 20]), posing security risks, or unavailable (e.g. 404 and 502 HTTP response codes) are removed. Close to 7M references are left after these removals, wherein English Wikipedia alone represents over 40% of URLs. To avoid biasing evaluation towards populous web domains, a stratified sampling is carried out using web domains as strata.
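For concreteness, the resolution of an external-identifier reference into a URL via the property's formatter URL (P1630), described earlier in this subsection, can be sketched as follows; the IMDb identifier value is an illustrative example.

def resolve_external_id(formatter_url: str, external_id: str) -> str:
    # Replace the '$1' placeholder in the formatter URL (P1630) with the linked ID.
    return formatter_url.replace("$1", external_id)

formatter = ("https://wikidata-externalid-url.toolforge.org/"
             "?p=345&url_prefix=https://www.imdb.com/&id=$1")
print(resolve_external_id(formatter, "nm0000001"))  # illustrative IMDb ID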
The 30 most populous web domains are defined as 30 separate groups, with two additional groups formed from the remaining web domains: RARE, for web domains that appear only once, and OTHER, for all others.

Pairing References with Triples

Given the total number of references contemplated (7M), a sample size of 385 yields a 95% confidence level with a 5% margin of error. An equal amount of references from each of the 32 groups is sampled, totalling over 400 references. Samples for a group are retrieved one by one through the following method. A reference is randomly sampled without replacement and checked as to whether it is associated with at least one triple fit to be used for evaluation; if so, it is kept. A triple fit for evaluation is defined as one which carries non-ontological and reliable meaning, and which can be expressed concisely through natural language. Such triples are identified by the following criteria:

– Is not deprecated by Wikidata;
– Has an object that is not the "novalue" or "somevalue" special nodes;
– Has an object that is not of an unverbalisable type, i.e. URLs, globe coordinates, external IDs, and images;
– Has a predicate that is not ontological, e.g. instance of (P31), merged into (P7888), entry in abbreviations table (P8703), etc.

These steps produce a stratified representative sample of references, including their unique identifiers, resolved URLs, web domains, and HTTP response attributes. Finally, for each sampled reference, a triple-reference pair is formed by extracting from Wikidata a random triple associated with the reference and fit for evaluation, alongside its unique identifiers, object data types, main labels, aliases, and descriptions.

This construction process creates WTR, ensuring it is composed of triples carrying non-ontological meaning and verifiable through their associated references, and thus useful for evaluating ProVe. It also ensures that meaning and context understanding is evaluated, rather than mere string matching, e.g. in the case of URLs, globe coordinates, and IDs. As for image data, tackling such multimodal scenarios is outside ProVe's scope.

4.2 Dataset Annotation

As described in Section 3 (see Figure 2), ProVe tackles its AFC task as a sequence of text extraction, ranking, classification, and aggregation subtasks. Given a triple-reference pair, ProVe extracts text passages from the reference, ranks them according to relevance to the triple, and individually classifies them according to their stance towards the triple. Then, triple-reference pairs are classified according to the overall stance of the reference towards the triple. To allow for a fine-grained evaluation of ProVe's subtasks, WTR receives three sets of annotations: (1) on the stance of individual pieces of evidence towards the triple, (2) on the collective stance of all evidence, and (3) on the overall stance of the entire reference. The first two sets of annotations are deemed evidence-level annotations, while the last is reference-level.

Crowdsourcing is used to collect evidence-level annotations, due to the large number of annotations needed (six per triple-reference pair) in combination with the simplicity of the task, which requires workers only to read short passages of text. Reference-level annotations are less numerous (one per triple-reference pair), much more complex, and hence manually annotated by the authors.
Crowdsourcing Evidence-level Annotations

Collecting evidence-level annotations for all retrievable sentences of each triple-reference pair in WTR, in order to account for different rankings that can be outputted by ProVe, would be prohibitively expensive and inefficient. Thus, evidence-level annotations are only provided for the five most relevant passages in each reference, i.e. the collected evidence. First, ProVe's text retrieval (Section 3.3) and sentence selection (Section 3.4) modules are applied to each reference and the five pieces of evidence for each are collected. This does not severely bias the annotation towards highly relevant text passages, actually allowing for a more even collection of both relevant and irrelevant text passages, as often only a couple of passages amidst the five tend to be relevant. Then, evidence-level annotations for each individual piece of evidence, as well as for the whole evidence set, are collected through crowdsourcing, totalling 6 annotation tasks per triple-reference pair.

Task Design

Two crowdsourcing task designs have been created to carry out this evidence-level annotation process. Task design 1 (T1) asks workers to assess the stance of an individual piece of evidence towards the triple, which can also be used as an indication of relevance. Each task in T1 is a bundle of 6 subtasks, each providing the worker with a Wikidata triple, a piece of evidence extracted from its paired reference, and the reference's URL, and asking the worker for the stance of that evidence: either it supports the claim, refutes it, has not enough information for either, or the worker is not sure. This bundling is done in order to get more annotations out of a single crowdsourcing task assignment. Task design 2 (T2) asks workers to assess the collective stance of the evidence set (all five individual pieces of evidence) towards the triple. Similarly to tasks in T1, tasks in T2 are made from 6 subtasks. Each T2 task provides the worker with a triple, five text passages extracted from the reference (the evidence set), shown in a random order, and the reference's URL. It then asks for the collective stance of the five passages, with similar response options to T1. The designs for both T1 and T2 tasks can be seen in Appendix A.

Recruitment and Quality Assurance

The crowdsourcing campaign received ethical approval by King's College London on the 7th of April, 2022, with registration number MRA-21/22-29584. All tasks were carried out through Amazon Mechanical Turk (AMT). A pilot was run to collect feedback on instructions, design, and compensation. After proper adjustments, the main data annotation tasks were carried out. Several quality control techniques [10] were applied, following similar tasks by Amaral et al. [4, 3]. A number of subtasks in both T1 and T2 were manually created and annotated by the authors and used as gold-standard subtasks to reject low-quality workers. A randomised attention test was put at the start of each task to discourage spammers. Finally, detailed instructions and examples were available at all times to workers.

Execution times for each task in the pilot were measured and used to define payment in USD proportionally to double the US minimum hourly wage (USD 7.25): USD 0.50 for tasks in T1 and USD 1.00 for tasks in T2, calculated based on the higher of the mean and median execution times.
500 tasks were generated for T1 and 91 tasks for T2, assigned to about 200 and 140 unique workers, respectively. Workers needed to have finished at least 1000 tasks in AMT with at least 80% approval. Each task was resolved 5 times and annotation aggregation was done via majority voting to reduce worker bias. In the event of a tie (4% of cases for T1 and 13.4% for T2), the authors served as tie-breakers.

To assure annotation aggregation was trustworthy, inter-annotator agreement was measured through kappa values, achieving 0.56 for tasks in T1 and 0.33 for tasks in T2. According to Landis and Koch [24], these results show moderate and fair agreement, respectively. Several factors that contribute to lower inter-annotator agreement [6] are present in this crowdsourcing setting: subjectivity inherent to natural language interpretation, a high number of annotators who also lack domain expertise, and class imbalance.

On individual annotations for T1, the majority of passages (68.7%) were deemed as neither supporting nor refuting, followed by passages annotated as supporting (27.5%), and a small portion refuting (3.3%). Only 0.4% of annotations were 'not sure'. Aggregated by majority voting, these values are, respectively, 70.5%, 28.5%, 1.0%, and 0.0%. For T2, the proportions of individual (and aggregated) annotations were 65.5% (73.6%) for supporting sets of evidence, 9.3% (5.9%) for refuting, 24.6% (20.5%) for neither, and 0.6% (0.0%) for 'not sure'.

Gathering Reference-level Annotations

WTR has reference-level annotations for each triple-reference pair. They define a reference's overall stance towards its associated triple, and are manually provided by the authors. These annotations are crucial in order to provide a ground truth for an evaluation of the entire pipeline's performance when taking the whole web page into consideration. They consider a reference's full meaning and context, and not only what was captured and processed by the modules as evidence. Unlike the other annotation level, covered by crowdsourcing tasks that could be simplified to directly comparing extracted text passages, the mental load and task complexity of interacting with the page to inspect all information (e.g. in text, infoboxes, images, charts), on top of cases where the information is nonexistent, is too high for cost-effective crowdsourcing. Thankfully, with one annotation per triple-reference pair, it is feasible for the manual annotations to be created by the authors. The authors have thus annotated the over 400 references into different categories and sub-categories, which are a more detailed version of the three stance classes used for evidence-level annotations (and by FEVER):

1. Supporting References (directly maps to the 'supports' class):
   (a) Support explicitly stated as text, as natural language sentences
   (b) Support explicitly stated as text, but not as natural language sentences
   (c) Support explicitly stated, but not as text
   (d) Support implicitly stated
2. Non-supporting References:
   (a) Reference refutes claim (directly maps to the 'refutes' class)
   (b) Reference neither supports nor refutes the claim (directly maps to the 'not enough information' class)

These six subclasses allow WTR to aid in evaluating the overall performance of ProVe in both ternary (the three sentence-level stances) and binary (supporting vs. not supporting) classifications, as well as to investigate which presentations of supporting information are better captured by the pipeline.
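For illustration, the majority-voting aggregation with tie detection used for the evidence-level annotations above can be sketched as follows; ties return no decision and are deferred to manual tie-breaking, as described earlier.

from collections import Counter
from typing import List, Optional

def majority_vote(annotations: List[str]) -> Optional[str]:
    counts = Counter(annotations).most_common()
    # A tie between the two most frequent answers is deferred to the authors.
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]

print(majority_vote(["supports"] * 3 + ["NEI"] * 2))                     # supports
print(majority_vote(["supports", "supports", "NEI", "NEI", "refutes"]))  # None (tie)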
WTR contains 416 Wikidata triple-reference pairs, representing 32 groups of text-rich web domains commonly used as sources, as well as 76 distinct Wikidata properties. 43% of references were obtained through external IDs and 57% through direct URLs. Its structure is shown in Appendix B.

5 Evaluation

This section covers the evaluation of ProVe's performance by applying it to the evaluation dataset WTR, described in Section 4, and comparing ProVe's outputs with WTR's annotations. These inspections and comparisons provide insights into the pipeline's execution and results at its different stages and modules. Each module in ProVe is covered in a following subsection. The overall classification performance of ProVe is indicated by the outputs of the claim verification module's aggregation step and is covered at the end of the section.

5.1 Claim Verbalisation

Given that ProVe's verbalisation module is the exact same model used to create the Wikidata triple verbalisations found in the WDV dataset [4], this section first reports the relevant evaluation results obtained by WDV's authors. It then analyses the claim verbalisation module's execution on the WTR evaluation dataset, looking at the quality of its outputs.

Model Validation

ProVe's verbalisation module consists of a pre-trained T5 model [35] fine-tuned on the WebNLG dataset [13]. To confirm that fine-tuning was properly carried out, the authors measure the BLEU [32] scores of its verbalisations on WebNLG data. They measure 65.51, 51.71, and 59.41 on the testing portion of the SEEN partition, on the UNSEEN partition, and on their combination, respectively. These are all within a percentage point of the current state-of-the-art [37]. Amaral et al. [4] use this exact same fine-tuned model to create multiple Wikidata triple verbalisations, which compose the WDV dataset, and evaluate them with human annotators. WDV consists of a large set of Wikidata triples, alongside their verbalisations, whose subject entities come from three distinct groups (partitions) of Wikidata entity classes. The first partition consists of 10 classes that thematically map to the 10 categories in WebNLG's SEEN partition. The second, of 5 classes that map to the 5 categories in WebNLG's UNSEEN partition. The third consists of 5 new classes not covered in WebNLG but populous in Wikidata.

WDV's verbalisations were evaluated by Amaral et al. through aggregated crowdsourced human annotations of fluency and adequacy dimensions, as defined in Section 3.2. Fluency scores range from 0, i.e. very poor grammar and unintelligible sentences, to 5, i.e. perfect fluency and natural text. Adequacy scores consisted of 0/No for inadequate verbalisations and 1/Yes for adequate ones. WDV's authors observed 96% of annotated verbalisations having a median fluency score of 3 or higher, where 3 denotes "Comprehensible text with minor grammatical errors", and around 93% being voted by the majority of annotators as adequate. These results did not vary considerably between WDV partitions, indicating model stability regardless of whether classes are seen in training or have mappings to WebNLG (DBpedia) classes. The WDV paper [4] contains a more detailed evaluation.

Execution on WTR

The verbalisation module was applied to all 416 triple-reference pairs in WTR. For each triple t, its three components were retrieved from Wikidata: subject (s), predicate (p), and object (o).
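Retrieving a component's main label and aliases can be done through Wikidata's public API. The sketch below is illustrative and not the authors' code; it uses the real wbgetentities endpoint, with Q42 and P40 as example identifiers.

```python
# Illustrative sketch: fetch English labels and aliases for a triple's
# components via Wikidata's wbgetentities API.
import requests

def labels_and_aliases(entity_ids):
    r = requests.get("https://www.wikidata.org/w/api.php", params={
        "action": "wbgetentities", "ids": "|".join(entity_ids),
        "props": "labels|aliases", "languages": "en", "format": "json"})
    out = {}
    for eid, ent in r.json()["entities"].items():
        label = ent.get("labels", {}).get("en", {}).get("value")
        aliases = [a["value"] for a in ent.get("aliases", {}).get("en", [])]
        out[eid] = (label, aliases)
    return out

# e.g. the subject and predicate of a triple:
print(labels_and_aliases(["Q42", "P40"]))
```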
Subjects and predicates are all Wikidata entities and thus have associated main labels and aliases. Objects have multiple possible data types, including Wikidata entity, for which labels are retrieved as with subjects. Strings and quantities without units are used as-is as single labels; otherwise, the unit's main label and aliases are added. Date-time values are formatted into multiple string patterns based on their granularity. The process of defining preferred labels for verbalisation, represented in Section 3.2 through the function l(·), consisted of using the main labels of all three components for verbalisation, and changing the predicate's label to an alias in case the verbalisation (v) was not fluent or adequate. First, for a triple t, a concatenation of its components' main labels is fed to the verbalisation model to generate a verbalisation v. Then, these verbalisations were manually inspected by the authors in order to assess fluency and adequacy as previously defined. In case a verbalisation v scored lower than a 4 on fluency or was inadequate, it was replaced by an alternate verbalisation v′ generated by using a predicate alias rather than the predicate's main label. As mentioned in Section 3.5, contextually dependent and vague predicate main labels hinder verbalisations, and choosing proper aliases for them can be quickly carried out by KG editors and curators. One example is the predicate child (P40), whose alias 'has child' is used to remedy the main label's lack of explicit direction.

Out of the 416 verbalisations generated by ProVe through main labels, 62.6% were adequate and had good to excellent fluency. Another 29.8% followed suit after predicate alias replacement, totalling 92.4%. The remaining 7.6% either had no aliases or could not be improved by them, and had to be manually corrected before being passed down the pipeline. While manual corrections are more demanding than alias selection, they were not frequent and mainly affected specific properties such as isomeric SMILES (P2017) and designation (P1435). Corrections were also not extensive; the normalised Levenshtein distance before and after corrections was under 0.25 (a quarter of the total length) for 80% of them. Finally, 7 of the verbalised claims ended up having both identical URLs and verbalisations, and were thus dropped from the evaluation downstream, as they would have the exact same results. This results from ProVe not taking claim qualifiers into consideration, which is further discussed in Section 6.3.

5.2 Text Extraction

Due to the complexity of defining metrics that measure success in text extraction, this section instead first defines metrics that can be used for an indirect evaluation of the text extraction module. It then explores insightful descriptive metrics obtained from executing the module on WTR.

Indirect Evaluation

ProVe's text extraction module essentially performs a full segmentation of the referenced web page's textual content without excluding any text. The module cannot be directly evaluated through annotations due to the sheer quantity of ways in which one can segment all references contained in the evaluation dataset.
Annotating such text extractions would require manually analysing entire web pages to find all textual content relevant to the claim, inspecting all the text extractable from such a page, and segmenting it such that boundaries are properly placed in terms of syntax and relevant passages are kept unbroken. One would then need to compare one or multiple ideal extractions to the extraction performed by ProVe. It is neither trivial to simplify this process for crowdworkers, nor efficient for the authors to carry it out by hand. It is also not trivial to define what constitutes 'well-placed' sentence boundaries, nor how and if one can break relevant passages of text.

Thus, instead of a direct evaluation, the performances of the subsequent sentence selection and final aggregation steps are used as indirect indicators of ProVe's text extraction. A correlation between ProVe's relevance scores (ρ) and evidence-level crowd annotations, as defined in Section 5.3, can be used to measure how much ProVe captures human-perceived relevance. A high value for this metric indicates ProVe extracts text passages such that relevant and irrelevant text segments are well divided. Otherwise, there would be a dissonance between humans and models in rating relevance: humans can still judge badly divided passages, while models would struggle significantly. Likewise, a good final classification performance, measured against WTR's reference-level annotations, indicates ProVe's capacity to extract useful sentences. Classification metrics for the ternary and binary classification tasks, such as accuracy and F-scores, are shown in Section 5.4. Still, a direct evaluation of ProVe's text extraction module, encompassing sentence segmentation and meaning extraction from unstructured and semi-structured textual content, is intended as future work.

Execution on WTR

The text extraction module was applied to each of the 416 triple-reference pairs (t, r) in WTR, each yielding a set of passages P, as defined in Section 3.3. The total number of extracted passages was nearly 64K, an average of 154 passages per reference. This average number of extracted passages varied heavily according to web domain, ranging from as low as 1 to as high as 804, with relatively low intra-domain standard deviation (29.51 median standard deviation). The average size, in characters, of individual extracted passages behaved similarly, ranging from 24.20 to 4059.23 depending on web domain. Finally, the number of passages |P| extracted from a reference r and their average size $(\sum_{i=0}^{|P|-1} |p_i|)/|P|$ have a weak-to-moderate negative correlation (−0.2513 Pearson's r), with a few domains, such as bioguides.congress.gov, returning very few but very large passages.

These metrics confirm that the extracted textual content varies mainly by web domain. This indicates ProVe's extraction depends heavily on particular web layouts, e.g. having difficulty segmenting the contents of specific domains like bioguides.congress.gov, due to their textual content being contained in a single paragraph (<p> HTML tag) without periods or any other sentence breaks.
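The descriptive statistic reported above is straightforward to compute; a minimal sketch follows. The input structure (a mapping from reference IDs to lists of passage strings) is hypothetical.

```python
# Minimal sketch: Pearson's r between the number of passages per reference
# and their mean character length. `passages_per_reference` is a
# hypothetical {reference_id: [passage, ...]} dict.
from scipy.stats import pearsonr

def count_size_correlation(passages_per_reference):
    counts, mean_sizes = [], []
    for passages in passages_per_reference.values():
        counts.append(len(passages))
        mean_sizes.append(sum(len(p) for p in passages) / len(passages))
    r, _p_value = pearsonr(counts, mean_sizes)
    return r  # the paper reports r = -0.2513 on WTR
```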
5.3 Sentence Selection

ProVe's sentence selection module contains a BERT model fine-tuned on FEVER's training partition, as described in Section 3.4. This section first performs a sanity check by evaluating the model on FEVER's validation and testing partitions, measuring standard classification metrics to ensure the model has been properly fine-tuned to FEVER. Afterwards, the entire module is applied to WTR and its performance measured by relying on the crowdsourced annotations.

Model Validation

For each claim-sentence pair in FEVER's validation and testing partitions, the sentence selection module's BERT model outputs a relevance score between −1 and 1. A sentence is classified as relevant to a claim if it figures within the top 5 highest scoring sentences for that claim, and as irrelevant otherwise. According to the FEVER Scorer (https://github.com/sheffieldnlp/fever-scorer), the model scores 0.945 and 0.87 recall on validation and testing, respectively. Similarly, F1-scores were 0.421 and 0.386, which are less than a percentage point away from KGAT [28] and DREAM [58], the current state-of-the-art, on the test partition (https://competitions.codalab.org/competitions/18814#results). These results indicate the model has been properly fine-tuned to FEVER.

Execution on WTR

Inputs to the sentence selection module consist of the verbalisations v outputted by the claim verbalisation module, as described in Section 5.1, and the extracted passages P taken from the reference, as described in Section 5.2. For each passage p ∈ P, a relevance score ρ ∈ [−1, 1] is calculated.

Distribution of Relevance Scores

Relevance scores (ρ) varied heavily across web domains. For each triple-reference pair (t, r) in WTR, its passages P were scored by the sentence selection module, and the top 5 highest scores (which correspond to the evidence set E) were averaged, with Figure 5 showing the distribution of these averages across and within domains. Overall, relevance scores spanned the whole range of values (−1 to 1), with a median close to zero. Variation within web domains was not wide, denoted by a prevalence of small to medium interquartile ranges. Web domains with large variations were those that cover a large range of information domains and page layouts, such as bbc.co.uk. In contrast, there is extensive variation across web domains. This is due, firstly, to how each domain conveys its information: long and continuous textual content greatly favours reliable scores, as opposed to spread-out information, which pushes scores down. Secondly, it is due to the amount of content actually related to the triple. A website like thepeerage.com mentions the triple's subject many times, leading to many positively scored sentences, as opposed to vesseltracking.net, which provides fewer, shorter, and more direct pieces of information.

Proportion of Irrelevant Passages

Given the distributions seen in Figure 5, one can define the value zero as the threshold between likely relevant and likely irrelevant passages. By using only the passages extracted by the n = 1 sliding window (P1), this threshold leaves 24.7% of triple-reference pairs as containing only passages that are likely irrelevant. This decreases to 17.6% when combining the n = 1 and n = 2 sliding windows, clearly indicating the method described in Section 3.3 achieves its desired objective of generating more relevant passages by combining multiple text segments. This percentage of triple-reference pairs without likely relevant passages also varies heavily across web domains, and is a problem mostly affecting those same domains at the lower end of the distribution seen in Figure 5.

Fig. 5. Relevance score distributions across and within different web domains. 'ALL' stands for the combined distribution of the subsequent 32 groups. Data values here are the averages of the top 5 passages' relevance scores for each reference.
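The scoring-and-ranking step just described can be sketched as follows. This is a hedged illustration, not the authors' released model: the checkpoint name is a placeholder, and mapping the regression logit to [−1, 1] via tanh is an assumption, since the paper does not specify the scoring head here.

```python
# Hedged sketch of sentence selection: score each (verbalisation, passage)
# pair with a BERT cross-encoder and keep the top five passages as evidence.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

def top_k_evidence(verbalisation, passages, k=5):
    batch = tok([verbalisation] * len(passages), passages,
                truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        # tanh squashes the regression logit into [-1, 1]; an assumption here.
        scores = torch.tanh(model(**batch).logits.squeeze(-1))
    ranked = sorted(zip(passages, scores.tolist()), key=lambda x: -x[1])
    return ranked[:k]
```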
Correlation Between Modeled and Crowdsourced Relevance

For each triple-reference pair (t, r) in WTR, crowdworkers annotated each of its 5 most relevant passages (its evidence set E), as ranked by the sentence selection module; these are described in Section 4.2 as the evidence-level annotations obtained through task design T1. Pieces of evidence were annotated according to their individual stance towards the reference's associated triple, using one of four choices: 'supports', 'refutes', 'neither' (not enough information), and 'not sure'. In order to reduce crowd bias, multiple individual annotations were collected per piece of evidence and aggregated through majority voting. The authors served as tie-breakers.

In order to compare the model's relevance scores (ρ) with human-perceived values of relevance (T1 crowd annotations), both 'supports' and 'refutes' annotations are grouped as 'relevant', with 'neither' relabelled as 'irrelevant'. Relevance score distributions are then analysed based on this binary class annotation. Figure 6 shows annotated passages divided into groups based on the percentage of 'relevant' individual annotations (or votes) each has received, as well as each group's relevance score distribution. There is a clear pattern of association between the model's relevance scores and real humans' opinions of relevance. This conclusion is reinforced by the strong correlation between relevance scores and percentages of individual annotations as 'relevant' (0.5058 Pearson's r). Such strong correlation can also be seen with aggregated annotations, as shown in Figure 7, which contrasts the relevance score distributions of passages majority-voted as 'relevant' with those voted as 'irrelevant'. These metrics and distributions indicate that ProVe's sentence selection module produces scores that are well related to human judgements of relevance.

Fig. 6. Distributions of relevance scores given by the module, divided by the percentage of crowd annotations deeming that passage as 'relevant' (either 'supports' or 'refutes').

5.4 Claim Verification

The first step of ProVe's claim verification module consists of a TER classification task, resolved by a fine-tuned BERT model, as described in Section 3.5. This TER model is used to classify the stances of individual pieces of evidence (e ∈ E) towards a KG triple (t) by calculating three class probabilities (σ) corresponding to the three TER classes found in FEVER: 'supports', 'refutes', and 'not enough information'. The module's second step is an aggregation which uses these classification probabilities to calculate a final verdict for the whole reference (r). This section first describes a sanity check of both steps by evaluating them in conjunction on FEVER's validation and testing partitions. It then assesses their performance on WTR.
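The first step can be sketched as a standard three-way classifier over claim-evidence pairs. This is an illustration under stated assumptions: the checkpoint name is a placeholder, and only the three TER classes come from the paper.

```python
# Hedged sketch of the TER step: a fine-tuned BERT classifier producing the
# three class probabilities for one (verbalisation, evidence) pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

TER_CLASSES = ["supports", "refutes", "not enough info"]
tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
ter_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

def ter_probabilities(verbalisation, evidence):
    batch = tok(verbalisation, evidence, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(ter_model(**batch).logits, dim=-1).squeeze(0)
    return dict(zip(TER_CLASSES, probs.tolist()))
```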
Model Validation

ProVe's claim verification module has been applied to FEVER's validation and testing partitions. For the validation partition, the sentences pre-annotated as relevant to the claim being judged were used as evidence. The first TER step is performed to calculate the individual stance probabilities of each piece of evidence. The second aggregation step is then carried out to define the claim's final verdict. The same process is carried out for the testing partition; however, since its sentences do not come pre-annotated, ProVe's sentence selection module was used instead. For the final aggregation step, only methods #1 (a weighted sum) and #2 (Malon's strategy) were used, as neither requires training a new classifier; they thus have similar complexity to other approaches used to tackle FEVER [29, 43], yielding a more direct comparison.

Label accuracy and FEVER score were calculated as evaluation metrics. Label accuracy is a normal classification accuracy calculated over the three TER classes. FEVER score is an accuracy calculated by also taking the collected evidence into consideration: a prediction is correct if the label is correct and the predicted evidence set contains all correct evidence.

Aggregation method #1, the weighted sum, scores 0.6964 label accuracy and 0.6952 FEVER score on the validation set, and 0.6508 and 0.617, respectively, on the testing set. Method #2, the rule-based aggregation, scores 0.7624 label accuracy and 0.7611 FEVER score on the validation set, and 0.7037 and 0.6739, respectively, on the testing set. Method #2 puts ProVe 3.2 percentage points below the state-of-the-art (DREAM [58]) on FEVER score, which the authors consider sufficient to validate the module's fine-tuning.

Execution on WTR

ProVe's claim verification module is tested on WTR by comparing its outputs to WTR's annotations, both at evidence level and reference level. Such annotations, as described in Section 4.2, consist of: crowd annotations denoting the stances of individual pieces of evidence towards KG triples (from crowdsourcing tasks T1), crowd annotations denoting the collective stances of sets of evidence towards KG triples (from crowdsourcing tasks T2), and author annotations denoting the stance of the entire reference towards its associated KG triple. Crowdsourced annotations are collected multiple times and aggregated through majority voting, with the authors serving as tie-breakers.
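The two training-free aggregation strategies used in the model validation above can be sketched as follows. These are plausible illustrations only: the paper does not reproduce the exact formulas here, so both the relevance weighting in method #1 and the precedence rule in method #2 are assumptions, the latter written in the spirit of Malon [29].

```python
# Hedged sketches of the two training-free aggregation strategies.
# `evidence` is a list of dicts with TER probabilities and a relevance
# score `rho` in [-1, 1]; both field names are hypothetical.

def weighted_sum_verdict(evidence):
    # Method #1 (sketch): sum class probabilities, weighted by relevance.
    classes = ["supports", "refutes", "not enough info"]
    totals = {c: sum(max(e["rho"], 0.0) * e["probs"][c] for e in evidence)
              for c in classes}
    return max(totals, key=totals.get)

def malon_style_verdict(evidence):
    # Method #2 (sketch): rule-based precedence — any refuting evidence
    # dominates, then any supporting evidence, else 'not enough info'.
    labels = [max(e["probs"], key=e["probs"].get) for e in evidence]
    if "refutes" in labels:
        return "refutes"
    if "supports" in labels:
        return "supports"
    return "not enough info"
```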
Distributions of TER Class Probabilities

Similarly to the relevance scores outputted by the sentence selection module, the TER class probabilities outputted by the TER model for individual pieces of evidence varied greatly across web domains. Consider a triple-reference pair (t, r) and its i-th piece of evidence $e_i$, whose stance probabilities assigned by the model are $(\sigma_i^{SUPP}, \sigma_i^{NEI}, \sigma_i^{REF})$. Consider also the highest of the three probabilities to denote the predicted TER class for that individual piece of evidence, $z_i = \arg\max_{k \in K} \sigma_i^k$. Web domains such as indiancine.ma have close to 95% of their evidence classified as 'not enough information', while domains such as deu.archinform.net have close to 98% classified as 'supports'. As with sentence selection, such variations can be attributed to web layouts. For instance, infoboxes, implicit subjects, and large amounts of boilerplate clearly inflate the number of unrelated sentences that might get highly ranked for relevance, but are neither supportive nor refutative.

Individual TER Classification Metrics

Using this argmax approach to define the predicted stance $z_i$ of an individual piece of evidence $e_i$, and using the aggregated annotations provided by the crowd through T1 tasks as ground truth, one can measure classification metrics for ProVe's individual TER classification. Figure 8 shows the resulting confusion matrix. Accuracy was 0.56 and macro F1-score was 0.43. While predicting supporting sentences with moderate precision, ProVe's TER model has difficulty in disentangling refuting sentences from those that neither support nor refute. This is believed to be due to the differences between the refuting evidence data found in FEVER and the refuting references that occur naturally in Wikidata's sources, discussed in more detail in Section 6.4.

Fig. 7. Relevance score distributions of passages majority-voted as 'relevant' and of those voted 'irrelevant'.

Fig. 8. Stance classes predicted for single claim-reference pairs (obtained through argmax) vs. the crowd's aggregated annotations.

ProVe's pipeline focuses on the classification of entire references rather than individual sentences, and the TER class probabilities are merely features for the final aggregation. Still, TER classifications for individual pieces of evidence can be greatly improved by, rather than argmax, using a simple classifier with the three TER class probabilities ($\sigma_i$), the evidence's relevance score ($\rho_i$), and the evidence's length ($|e_i|$) as features. The best scoring classifier was a Random Forests Classifier (RFC), with cross-validated (k=5) scores of 0.77 accuracy and 0.50 F1-score. Furthermore, by grouping the 'refutes' and 'neither' (NEI) classes into one 'not supporting' class, turning this into a binary classification task, cross-validated (k=5) scores reach 0.79 accuracy and 0.76 F1-score, as well as an Area Under the ROC Curve (AUC) of 0.85, as seen in Figures 9 and 10.

Collective TER Classification Metrics

The annotations obtained through T2 tasks describe human judgements on the collective stance of the sets of evidence extracted from a reference towards its associated triple in WTR. After majority-voting aggregation, there are 301 evidence sets collectively supporting the triple, 24 refuting it, and 84 that neither support nor refute it. Taking these annotations as ground truth labels, one can measure the classification performance of ProVe. Table 2 showcases and compares these results with different aggregation methods. Due to class imbalance, both macro-averaged and weighted-averaged results are reported. Both the original ternary classification problem and a simplified 'supporting' vs. 'not supporting' binary classification problem are explored. Although the binary formulation is easier to solve, having a pipeline that provides assistance in differentiating between supporting and non-supporting references is of huge benefit to any KG's curation and editing.
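The simple classifier described above can be sketched with scikit-learn. The feature assembly and field names are illustrative; only the feature set (three TER probabilities, relevance score, evidence length) and the k=5 cross-validation come from the text.

```python
# Hedged sketch of the simple classifier: a Random Forest over each
# evidence's TER probabilities, relevance score, and length, with k=5
# cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evidence_features(e):
    p = e["probs"]
    return [p["supports"], p["refutes"], p["not enough info"],
            e["rho"], len(e["text"])]

def evaluate_rfc(evidence_items, crowd_labels):
    X = np.array([evidence_features(e) for e in evidence_items])
    y = np.array(crowd_labels)
    clf = RandomForestClassifier(random_state=0)
    return cross_val_score(clf, X, y, cv=5, scoring="f1_macro").mean()
```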
Results show how a simple classifier considerably outperforms the other two aggregation approaches, especially in the binary classification scenario.

Fig. 9. Binary stance classes of single claim-reference pairs predicted by a RFC vs. the crowd's aggregated annotations.

Fig. 10. ROC curve for the simplified binary stance classification performed by a RFC (AUC = 0.8371).

Method           Classes  Accuracy  Macro Avg (P / R / F1)   Weighted Avg (P / R / F1)
1: Weighted Sum  3        0.592     0.439 / 0.484 / 0.439    0.700 / 0.592 / 0.626
2: Malon's       3        0.641     0.430 / 0.456 / 0.436    0.691 / 0.641 / 0.659
3: RFC           3        0.726     0.433 / 0.468 / 0.446    0.709 / 0.726 / 0.714
1: Weighted Sum  2        0.626     0.609 / 0.639 / 0.596    0.716 / 0.626 / 0.648
2: Malon's       2        0.667     0.614 / 0.638 / 0.617    0.712 / 0.667 / 0.683
3: RFC           2        0.750     0.681 / 0.664 / 0.667    0.747 / 0.750 / 0.745

Table 2. Classification performance of each of the three aggregation methods on both the ternary and binary collective stance TER classification formulations. Majority-voted annotations obtained in T2 are used as true labels. Results from method 3 were cross-validated with k = 5.

Reference Representation through Retrieved Evidence

Verifying whether ProVe properly represents entire references through the evidence set it retrieves from them is possible by comparing the evidence-level crowd annotations on the collective stance of retrieved evidence (T2 tasks) with the reference-level author annotations, which represent the stance of the reference as a whole. As detailed in Section 4.2, reference-level annotations consist of six categories which directly map to the three TER labels used in crowd annotations. Figure 11 shows WTR's reference-level annotation distribution. On the ternary classification task, labels 1.A. through 1.D. map to 'supporting', label 2.A. maps to 'refuting', and label 2.B. to 'neither'. As for the binary classification task, labels 1.X. map to 'supporting' and labels 2.X. map to 'not supporting'.

There is a high class imbalance, with the authors deeming only 2 references as 'refuting'. This number is much lower than the 24 majority-voted by the crowd as 'refuting'. Figure 12 shows a complete comparison between evidence-level collective stance annotations from the crowd and reference-level author labels. Crowd annotators very successfully judge references in 1.A. (explicit textual support in natural language), and obtain moderate to high performance on all other groups, except for 1.C. (non-textual support) and 2.A. (refuting reference). It is trivial to see why ProVe, a text-based approach, fails at 1.C. The failure at 2.A. is partially due to the disparity between the refuting text seen in training and that in the evaluation dataset, as explained in Section 6.4. The low number of refuting references precludes a deeper analysis. Still, the more available, explicit, and naturally written the supportive textual information is, the better ProVe performs.
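The macro- and weighted-averaged metrics reported in Table 2 can be computed as in the minimal sketch below, given arrays of true and predicted stance labels.

```python
# Minimal sketch: compute the per-row metrics of Table 2 with scikit-learn.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def table2_row(y_true, y_pred):
    acc = accuracy_score(y_true, y_pred)
    macro = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)[:3]
    weighted = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0)[:3]
    return acc, macro, weighted  # accuracy, (P, R, F1), (P, R, F1)
```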
Full Pipeline Performance

Lastly, ProVe's performance in a 'supported' vs. 'not supported' binary classification scenario is investigated by comparing the outputs obtained on the WTR evaluation dataset to its reference-level annotations provided by the authors. The variation of this performance based on how supportive information is expressed in text is also analysed. ProVe is evaluated on the entire WTR by using the reference-level annotations as ground truth and adopting ProVe's best performing aggregation method, the simple classifier (method #3). Additionally, for each of the supporting reference-level labels 1.A. through 1.D. (except for 1.C., as it is non-textual), WTR is modified by keeping only those triple-reference pairs with either that specific supporting label or labelled as 'not supporting'. ProVe is then also evaluated on these modified datasets. Due to the very low number of 2.A. labels, only the binary 'supporting' vs. 'not supporting' classification task is evaluated.

Fig. 11. Distribution of reference-level author labels of the evaluation dataset.

Fig. 12. Comparison between reference-level author labels and sentence-level collective stance annotations from the crowd.

Table 3 showcases the results obtained. It shows that ProVe has a good result on the evaluation dataset overall, with close to 80% accuracy, and an excellent performance on identifying support from references that showcase said support through explicit and naturally written textual information (1.A.). It also shows good results on references where support is not naturally written (1.B.).

Support type  Accuracy  Macro Avg (P / R / F1)   Weighted Avg (P / R / F1)  AUC
1.A.          0.875     0.876 / 0.821 / 0.829    0.878 / 0.875 / 0.838      0.908
1.B.          0.779     0.768 / 0.668 / 0.648    0.762 / 0.779 / 0.637      0.745
1.D.          0.666     0.682 / 0.662 / 0.649    0.749 / 0.666 / 0.669      0.754
ALL           0.794     0.821 / 0.565 / 0.574    0.864 / 0.794 / 0.635      0.753

Table 3. ProVe's binary classification performance on all of WTR and per type of textual support. Reference-level annotations were used as ground truth, and a simple classifier as the aggregation method. Values obtained through cross-validation (k = 5).
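The per-support-type evaluation protocol behind Table 3 can be sketched as follows. This is an illustration: record field names and the 0.5 decision threshold are assumptions, not taken from the paper.

```python
# Illustrative sketch of the Table 3 protocol: for each textual support
# type, keep only references with that label or labelled 'not supporting',
# then score ProVe's binary verdicts. Field names are hypothetical.
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate_support_type(records, support_type):
    subset = [r for r in records
              if r["author_label"] in (support_type, "not supporting")]
    y_true = [r["author_label"] == support_type for r in subset]
    y_prob = [r["support_probability"] for r in subset]
    y_pred = [p >= 0.5 for p in y_prob]  # threshold is an assumption
    return (accuracy_score(y_true, y_pred),
            f1_score(y_true, y_pred, average="macro"),
            roc_auc_score(y_true, y_prob))
```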
6 Discussions and Conclusions

In this section, aspects and limitations of the implementation and evaluation results of ProVe are further discussed. Additionally, future directions of research are pointed out and final conclusions are drawn.

6.1 ProVe for Fact Verification

Fact checking as a tool to assist users in discerning between factual and non-factual information has a myriad of applications, formulations, and approaches, and, overall, considerably ambiguous results. The effects of fact-checking interventions, while significant, are substantially weakened by their targets' preexisting beliefs and knowledge [54]. Their effectiveness depends heavily on many variables, such as the type of scale used and whether facts can be partially checked, and they have different impacts depending on whether they go along with or against a person's ideology. This further motivates ProVe's standpoint of judging support instead of veracity. Triples are evaluated not as factual or non-factual, but based on their documented provenance, passing the onus of providing trustworthy and authoritative sources to the graph's curators. This keeps ProVe's judgements from clashing with the ideologies of its users, as the pipeline passes not factual, but linguistic judgement. Additionally, by using only two to three levels of verdict and not including graphical elements, the presence of elements that compromise fact-checking [54] is hampered. The authors' focus for future research lies in increasing ProVe's explainability in order to increase trust and understanding.

The results achieved by ProVe, especially on text-rich references, are considered by the authors as more than satisfactory, representing an excellent addition to a family of approaches that currently includes very few, e.g. DeFacto [14] and FactCheck [45]. Still, there is a need for a dataset specialised in AFC on KGs in order to tie in these approaches and make benchmarking them and future works possible. While WTR (ProVe's evaluation dataset) can serve this purpose, it can definitely be improved in size and predicate coverage. ProVe's use as a tool can greatly benefit from an active learning scenario, which would further enhance the models and techniques it employs. At the same time, users of the tool inherently introduce a bias based on their demographics, with the same being valid for the crowdsourced evaluation of ProVe's pipeline. Being aware of this bias is crucial to the proper deployment of such approaches.

6.2 Text Extraction Add-ons

ProVe's text extraction module has three essential steps: rendering web pages with a web crawler, using rule-based methods to convert content inside HTML tags into text, and sentence segmentation. While the rule-based methods presented in this paper are simple, they are quite effective, as shown in Section 5. Better and more specialised rules and methods to detect and extract text from specific HTML layouts, such as turning tabular structures or sparse infoboxes into sequential and syntactically correct sentences, can be seamlessly integrated into ProVe. Both supervised and unsupervised approaches can also be applied. In order to properly assess such added methods, as well as to provide more insight into the text extraction module in general, a direct evaluation of its performance would be extremely helpful. Although good performance on downstream tasks is a good indicator, it does not indicate where to improve text extraction. Be it through descriptive statistics or comparison against golden data, this is a focus of future research, alongside model explainability.
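A minimal rule-based extraction flow of the kind described above can be sketched with BeautifulSoup and spaCy [17]. The tag list is illustrative and not ProVe's exact rules, and the sketch assumes the en_core_web_sm model is installed.

```python
# Minimal sketch of rule-based extraction: pull text out of common HTML
# tags and segment it into sentences. Tag choices are illustrative.
import spacy
from bs4 import BeautifulSoup

nlp = spacy.load("en_core_web_sm")  # assumes the model is installed

def extract_passages(html):
    soup = BeautifulSoup(html, "html.parser")
    texts = [el.get_text(" ", strip=True)
             for el in soup.find_all(["p", "li", "td", "h1", "h2", "h3"])]
    sentences = []
    for text in texts:
        sentences.extend(s.text.strip() for s in nlp(text).sents)
    return [s for s in sentences if s]
```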
6.3 Usage of Qualifiers

Triples in KGs such as Wikidata are often accompanied by qualifiers that further detail them. A triple such as the one seen in Figure 2 (<Librarian of Congress, position holder, James H. Billington>) has several qualifiers, such as 'start time' and 'end time'. If a person were Librarian of Congress twice, these qualifiers would differentiate two triples that would otherwise be identical if expressed only with their main components. ProVe does not take qualifiers into consideration. As such, two triples with distinct IDs and meanings can have exactly the same verbalisations, which, while adequate, do not contain all the information. Ribeiro et al. [37] show transformers can verbalise multiple triples into a single sentence. Hence, adding qualifiers as secondary triples is possible, generating more detailed verbalisations. However, ProVe's sentence selection and claim verification modules contain models fine-tuned on FEVER, the vast majority of whose sentences contain only a main piece of information with little to no additional details, e.g. 'Adrienne Bailon is an accountant' and 'The Levant was ruled by the House of Lusignan'. In order to make proper use of qualifiers during verbalisation, there needs to be an assurance that downstream modules can properly handle more complex sentences, through either different or augmented training data.

6.4 Detecting Refuting Sources with FEVER

The FEVER dataset presents claims that are normally short and direct in nature, from multiple domains, with associated evidence extracted directly from Wikipedia. ProVe shows it is possible to use FEVER to train pipeline modules to detect supportive and non-supportive sources. However, as seen in Section 5.4, detecting refuting sources is hard for ProVe, and this is believed to be due to how FEVER generates refuted claims through artificial alterations. Claims labelled by FEVER as 'REFUTES' are those generated by annotators who alter claims that would otherwise be supported by their associated evidence. Alterations follow six types: paraphrasing, negation, entity/relationship substitution, and making the claim more general/specific. This leads to claims that, while meaningful and properly annotated, would never be encoded in a KG triple, such as "As the Vietnam War raged in 1969, Yoko Ono and her husband John Lennon did not have two week-long Bed-Ins for Peace" or "Ruth Negga only acts in Irish cinema". Additionally, the associated evidence often relies on common sense in order to refute these claims, such as "Kingdom Hearts III is owned by Boyz II Men", whose relevant evidence at no point elaborates on Kingdom Hearts III's ownership, only describing it as a Japanese video game. We supposedly know Boyz II Men is a music group, rendering the claim implausible.

While useful for other tasks, these refuted claims are very different from refutable triples occurring naturally in KGs, which mainly consist of triples whose objects have different values in the provenance. One such example is "Robert Brunton was born on 23/03/1796", whose reference actually mentions the "10th of February 1796". In order to properly detect KG provenance that refutes its triples, ProVe's claim verification module needs re-training on a fitting subset of FEVER, or on a new dataset containing non-artificial refuted claims.
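The natural refutation pattern just described, where the object's value simply differs in the source, can be illustrated with a toy comparison; the dates come from the Robert Brunton example above.

```python
# Toy illustration of the 'object value differs in the provenance' pattern:
# a naturally refuting source gives a different date than the triple's object.
from datetime import date

triple_object = date(1796, 3, 23)  # "born on 23/03/1796" per the KG triple
source_value = date(1796, 2, 10)   # "10th of February 1796" per the reference

print("refuting" if source_value != triple_object else "supporting")  # refuting
```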
6.5 Conclusions

Knowledge graphs are widespread secondary sources of information. Their data is extremely useful and available in a semantic format that covers a myriad of domains. Ensuring the verifiability of this data through documented provenance is a task that is crucial to the upkeep of their usability, and one that should be actively supported by automated and semi-automated tools to help data curators and editors cope with the sheer volume of information. However, as of now, there are no such tools deployed at large-scale KGs, and only a very small family of approaches tackles this task from a research standpoint.

This paper proposes, describes, and evaluates ProVe, a pipelined approach to support the upkeep of KG triple verifiability through their documented provenance. ProVe leverages large pre-trained LMs, rule-based methods, and simple classifiers to provide automated assistance to the activity of creating and maintaining references in a KG. ProVe's pipeline aims at extracting relevant textual information from references and evaluating whether or not they support an associated KG triple, providing its users with a support classification, a support probability, as well as relevance and textual entailment metrics for the evidence used. Deployed correctly, ProVe can help detect verifiability issues in existing references, as well as improve the reuse of good sources. Additionally, the approach can be expanded to work in a multilingual setting.

ProVe has been evaluated with WTR, a dataset of triple-reference pairs extracted directly from Wikidata, a large KG, and annotated by both crowdworkers and the authors. ProVe achieves 75% accuracy, 0.681 F1-macro, and 0.667 AUC on the full evaluation dataset, which includes references to many different web domains. On references where support is stated explicitly and in natural text, ProVe achieves an excellent 87.5% accuracy (0.829 F1-macro and 0.908 AUC).

Future work mainly lies in exploring techniques to improve ProVe's explainability, with a focus on its sentence selection and claim verification steps. Other directions can include expanding the size and the distinct predicate coverage of the benchmarking dataset WTR, as well as a direct evaluation of text extraction and segmentation techniques.

Acknowledgements

This research received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 812997.

References

1. Acosta, M., Zaveri, A., Simperl, E., Kontokostas, D., Auer, S., Lehmann, J.: Crowdsourcing linked data quality assessment. In: International Semantic Web Conference. pp. 260–276. Springer (2013)
2. Acosta, M., Zaveri, A., Simperl, E., Kontokostas, D., Flöck, F., Lehmann, J.: Detecting linked data quality issues via crowdsourcing: A DBpedia study. Semantic Web 9(3), 303–335 (2018)
3. Amaral, G., Piscopo, A., Kaffee, L.A., Rodrigues, O., Simperl, E.: Assessing the quality of sources in Wikidata across languages: a hybrid approach. Journal of Data and Information Quality (JDIQ) 13(4), 1–35 (2021)
4. Amaral, G., Rodrigues, O., Simperl, E.: WDV: A broad data verbalisation dataset built from Wikidata. arXiv preprint arXiv:2205.02627 (2022)
5. Ammar, A., Celebi, R.: Fact validation with knowledge graph embeddings. In: ISWC (Satellites). pp. 125–128 (2019)
6. Bayerl, P.S., Paul, K.I.: What determines inter-coder agreement in manual annotations? A meta-analytic investigation. Computational Linguistics 37(4), 699–725 (2011)
7. Brașoveanu, A.M., Andonie, R.: Visualizing and explaining language models. In: Integrating Artificial Intelligence and Visualization for Visual Knowledge Discovery, pp. 213–237. Springer (2022)
8. Cao, M., Zhang, J., Xu, S., Ying, Z.: Knowledge graphs meet crowdsourcing: A brief survey. In: International Conference on Cloud Computing. pp. 3–17. Springer (2020)
9. Ciampaglia, G.L., Shiralkar, P., Rocha, L.M., Bollen, J., Menczer, F., Flammini, A.: Computational fact checking from knowledge networks. PloS one 10(6), e0128193 (2015)
10. Daniel, F., Kucherbaev, P., Cappiello, C., Benatallah, B., Allahbakhsh, M.: Quality control in crowdsourcing: A survey of quality attributes, assessment techniques, and assurance actions. ACM Computing Surveys (CSUR) 51(1), 1–40 (2018)
11. Färber, M., Bartscherer, F., Menne, C., Rettinger, A.: Linked data quality of DBpedia, Freebase, OpenCyc, Wikidata, and YAGO. Semantic Web 9(1), 77–129 (2018)
12. Flouris, G., Roussakis, Y., Poveda-Villalon, M., Mendes, P.N., Fundulaki, I.: Using provenance for quality assessment and repair in linked open data (2012)
13. Gardent, C., Shimorina, A., Narayan, S., Perez-Beltrachini, L.: The WebNLG challenge: Generating text from RDF data. In: Proceedings of the 10th International Conference on Natural Language Generation. pp. 124–133 (2017)
14. Gerber, D., Esteves, D., Lehmann, J., Bühmann, L., Usbeck, R., Ngomo, A.C.N., Speck, R.: DeFacto—temporal and multilingual deep fact validation. Journal of Web Semantics 35, 85–101 (2015)
15. Guo, Z., Schlichtkrull, M., Vlachos, A.: A survey on automated fact-checking. Transactions of the Association for Computational Linguistics 10, 178–206 (2022)
16. Hanselowski, A., Zhang, H., Li, Z., Sorokin, D., Schiller, B., Schulz, C., Gurevych, I.: UKP-Athene: Multi-sentence textual entailment for claim verification. In: Proceedings of the First Workshop on Fact Extraction and VERification (FEVER). pp. 103–108. Association for Computational Linguistics, Brussels, Belgium (Nov 2018). https://doi.org/10.18653/v1/W18-5516, https://aclanthology.org/W18-5516
17. Honnibal, M., Montani, I., Van Landeghem, S., Boyd, A.: spaCy: Industrial-strength Natural Language Processing in Python (2020). https://doi.org/10.5281/zenodo.1212303
18. Joshi, U., Urbani, J.: Ensemble-based fact classification with knowledge graph embeddings. In: European Semantic Web Conference. pp. 147–164. Springer (2022)
19. Joulin, A., Grave, E., Bojanowski, P., Douze, M., Jégou, H., Mikolov, T.: Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651 (2016)
20. Joulin, A., Grave, E., Bojanowski, P., Mikolov, T.: Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759 (2016)
21. Kim, J., Choi, K.S.: Unsupervised fact checking by counter-weighted positive and negative evidential paths in a knowledge graph. In: Proceedings of the 28th International Conference on Computational Linguistics. pp. 1677–1686 (2020)
22. Kontokostas, D., Zaveri, A., Auer, S., Lehmann, J.: TripleCheckMate: A tool for crowdsourcing the quality assessment of linked data. In: International Conference on Knowledge Engineering and the Semantic Web. pp. 265–272. Springer (2013)
23. Kumar, P., Singh, A., Kumar, P., Kumar, C.: An explainable machine learning approach for definition extraction. In: International Conference on Machine Learning, Image Processing, Network Security and Data Sciences. pp. 145–155. Springer (2020)
24. Landis, J.R., Koch, G.G.: The measurement of observer agreement for categorical data. Biometrics pp. 159–174 (1977)
25. Lehmann, J., Gerber, D., Morsey, M., Ngonga Ngomo, A.C.: DeFacto—deep fact validation. In: International Semantic Web Conference. pp. 312–327. Springer (2012)
26. Leonhardt, J., Anand, A., Khosla, M.: Boilerplate removal using a neural sequence labeling model. In: Companion Proceedings of the Web Conference 2020. pp. 226–229 (2020)
27. Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.t., Rocktäschel, T., et al.: Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems 33, 9459–9474 (2020)
28. Liu, Z., Xiong, C., Sun, M., Liu, Z.: Fine-grained fact verification with kernel graph attention network. arXiv preprint arXiv:1910.09796 (2019)
29. Malon, C.: Team Papelo: Transformer networks at FEVER. arXiv preprint arXiv:1901.02534 (2019)
30. Malyshev, S., Krötzsch, M., González, L., Gonsior, J., Bielefeldt, A.: Getting the most out of Wikidata: semantic technology usage in Wikipedia's knowledge graph. In: International Semantic Web Conference. pp. 376–394. Springer (2018)
31. Padia, A., Ferraro, F., Finin, T.: KGCleaner: Identifying and correcting errors produced by information extraction systems. arXiv preprint arXiv:1808.04816 (2018)
32. Papineni, K., Roukos, S., Ward, T., Zhu, W.J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. pp. 311–318 (2002)
33. Piscopo, A., Kaffee, L.A., Phethean, C., Simperl, E.: Provenance information in a collaborative knowledge graph: an evaluation of Wikidata external references. In: International Semantic Web Conference. pp. 542–558. Springer (2017)
34. Piscopo, A., Simperl, E.: What we talk about when we talk about Wikidata quality: a literature survey. In: Proceedings of the 15th International Symposium on Open Collaboration. pp. 1–11 (2019)
35. Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J., et al.: Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21(140), 1–67 (2020)
36. Rastas, I., Ryan, Y.C., Tiihonen, I.L.I., Qaraei, M., Repo, L., Babbar, R., Mäkelä, E., Tolonen, M., Ginter, F.: Explainable publication year prediction of eighteenth century texts with the BERT model. In: Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change. The Association for Computational Linguistics (2022)
37. Ribeiro, L.F., Schmitt, M., Schütze, H., Gurevych, I.: Investigating pretrained language models for graph-to-text generation. arXiv preprint arXiv:2007.08426 (2020)
38. Sathe, A., Ather, S., Le, T.M., Perry, N., Park, J.: Automated fact-checking of claims from Wikipedia. In: Proceedings of the 12th Language Resources and Evaluation Conference. pp. 6874–6882 (2020)
39. Schuster, T., Fisch, A., Barzilay, R.: Get your vitamin C! Robust fact verification with contrastive evidence. arXiv preprint arXiv:2103.08541 (2021)
40. Shenoy, K., Ilievski, F., Garijo, D., Schwabe, D., Szekely, P.: A study of the quality of Wikidata. Journal of Web Semantics 72, 100679 (2022)
41. Shi, B., Weninger, T.: Discriminative predicate path mining for fact checking in knowledge graphs. Knowledge-Based Systems 104, 123–133 (2016)
42. Shiralkar, P., Flammini, A., Menczer, F., Ciampaglia, G.L.: Finding streams in knowledge graphs to support fact checking. In: 2017 IEEE International Conference on Data Mining (ICDM). pp. 859–864. IEEE (2017)
43. Soleimani, A., Monz, C., Worring, M.: BERT for evidence retrieval and claim verification. In: European Conference on Information Retrieval. pp. 359–366. Springer (2020)
44. Speck, R., Ngomo, A.C.N.: Leopard—a baseline approach to attribute prediction and validation for knowledge graph population. Journal of Web Semantics 55, 102–107 (2019)
45. Syed, Z.H., Röder, M., Ngonga Ngomo, A.C.: FactCheck: Validating RDF triples using textual evidence. In: Proceedings of the 27th ACM International Conference on Information and Knowledge Management. pp. 1599–1602 (2018)
46. Thorne, J., Vlachos, A.: An extensible framework for verification of numerical claims. In: Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics. pp. 37–40. Association for Computational Linguistics (2017)
47. Thorne, J., Vlachos, A.: Automated fact checking: Task formulations, methods and future directions. arXiv preprint arXiv:1806.07687 (2018)
48. Thorne, J., Vlachos, A., Christodoulopoulos, C., Mittal, A.: FEVER: a large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355 (2018)
49. Thorne, J., Vlachos, A., Cocarascu, O., Christodoulopoulos, C., Mittal, A.: The fact extraction and verification (FEVER) shared task. arXiv preprint arXiv:1811.10971 (2018)
50. Vlachos, A., Riedel, S.: Identification and verification of simple claims about statistical properties. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pp. 2596–2601. Association for Computational Linguistics (2015)
51. Vo, N., Lee, K.: Where are the facts? Searching for fact-checked information to alleviate the spread of fake news. arXiv preprint arXiv:2010.03159 (2020)
52. Vogels, T., Ganea, O.E., Eickhoff, C.: Web2Text: Deep structured boilerplate removal. In: European Conference on Information Retrieval. pp. 167–179. Springer (2018)
53. Vrandečić, D.: Wikidata: A new platform for collaborative data collection. WWW'12 - Proceedings of the 21st Annual Conference on World Wide Web Companion p. 1063 (2012). https://doi.org/10.1145/2187980.2188242
54. Walter, N., Cohen, J., Holbert, R.L., Morag, Y.: Fact-checking: A meta-analysis of what works and for whom. Political Communication 37(3), 350–375 (2020)
55. Xue, B., Zou, L.: Knowledge graph quality management: a comprehensive survey. IEEE Transactions on Knowledge and Data Engineering (2022)
56. Zaveri, A., Rula, A., Maurino, A., Pietrobon, R., Lehmann, J., Auer, S.: Quality assessment for linked data: A survey. Semantic Web 7(1), 63–93 (2016)
57. Zeng, X., Abumansour, A.S., Zubiaga, A.: Automated fact-checking: A survey. Language and Linguistics Compass 15(10), e12438 (2021)
58. Zhong, W., Xu, J., Tang, D., Xu, Z., Duan, N., Zhou, M., Wang, J., Yin, J.: Reasoning over semantic-level graph for fact checking. arXiv preprint arXiv:1909.03745 (2019)
59. Zhou, J., Han, X., Yang, C., Liu, Z., Wang, L., Li, C., Sun, M.: GEAR: Graph-based evidence aggregating and reasoning for fact verification. arXiv preprint arXiv:1908.01843 (2019)

A Crowdsourcing Task Designs

Fig. 13. The task design T1, which collects evidence-level annotations of the individual stances of pieces of evidence towards a Wikidata triple.
Fig. 14. The task design T2, which collects evidence-level annotations of the collective stances of evidence sets towards a Wikidata triple.
B WTR Dataset Format

WTR is available at Figshare (https://figshare.com/s/df0ec1c233ebd50817f4) and contains 409 Wikidata triple-reference pairs, representing 32 groups of text-rich web domains commonly used as sources, as well as 76 distinct Wikidata properties. Out of the 416 sampled triple-reference pairs, as described in Section 4.1, 7 were left out due to having exactly the same triple components and referenced URL as another triple-reference pair, as explained in Section 6.3. 43% of references were obtained through external IDs and 57% through direct URLs. Each entry has the following attributes:

– Reference attributes:
• Reference ID: A unique identifier issued to the reference by Wikidata;
• Reference property ID: The unique identifier of the Wikidata property used by the reference to encode the URL we retrieve for it;
• Reference datatype: Whether the reference's URL was retrieved as a direct URL or as a URL formatted with an external identifier;
• URL: The URL retrieved for this reference;
• Netloc: The actual web domain of this URL;
• Netloc group: The web domain of this URL after grouping references under the RARE and OTHER groups;
• Final URL: The URL reached after redirects and whose HTML and text were extracted by ProVe into sentences for annotation;
• HTML: The HTML code extracted from the reference's final URL.
– Claim attributes:
• Claim ID: A unique identifier issued to the claim by Wikidata;
• Rank: The claim's rank, either normal or preferred;
• Datatype: The datatype of the claim's object, e.g. quantity, string, etc.;
• Component IDs: The Wikidata IDs of the claim's subject and property;
• Component labels: Main labels for subject, property, and object;
• Component aliases: Alias lists for subject, property, and object;
• Component descriptions: Wikidata descriptions for subject, property, and object (if it is a Wikidata item).
– Annotations for evaluation:
• Evidence-level annotations (T1): The evidence-level annotations that describe the individual TER stances of each piece of evidence towards the claim, in which the evidence set is the five most relevant passages collected from the URL.
This consists of the following attributes for each piece of evidence:
* Evidence: The individual textual evidence collected and being annotated;
* MTurk IDs: A list of anonymous worker IDs and assignment IDs denoting the crowd workers who provided annotations/votes;
* TER Relation: The list of TER stances voted by the workers, where 0 = SUPP, 1 = REF, 2 = NEI, 3 = Not Sure;
* Reason for 'Not Sure': If a voter gave 'Not Sure' as their relation, this denotes the reason why, out of the list of options seen in this paper's appendix or a free-text reason;
* Times: The times in seconds taken by a worker to provide their full annotations;
* Aggregated TER Relation: The majority-voting aggregation of the individual workers' TER stance annotations;
• Evidence-level annotations (T2): The evidence-level annotations that describe the collective TER stances of the entire evidence set towards the claim, in which the evidence set is the five most relevant passages collected from the URL. This consists of the following attributes:
* Evidence: The entire textual evidence set collected and being collectively annotated;
* MTurk IDs: A list of anonymous worker IDs and assignment IDs denoting the crowd workers who provided annotations/votes;
* TER Relation: The list of TER stances voted by the workers, where 0 = SUPP, 1 = REF, 2 = NEI, 3 = Not Sure;
* Reason for 'Not Sure': If a voter gave 'Not Sure' as their relation, this denotes the reason why, out of the list of options seen in this paper's appendix or a free-text reason;
* Times: The times in seconds taken by a worker to provide their full annotations;
* Aggregated TER Relation: The majority-voting aggregation of the individual workers' TER stance annotations;
• Sentence-level author annotations: The sentence-level annotation representing the stance of the entire reference towards the triple.
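To make the annotation format concrete, here is a minimal sketch of loading WTR and reproducing the majority-voting aggregation of the T1 evidence-level annotations. The file name, JSON layout, and field keys used here are illustrative assumptions, not the dataset's documented schema; the authoritative description is the attribute list above.

```python
import json
from collections import Counter

STANCES = {0: "SUPP", 1: "REF", 2: "NEI", 3: "Not Sure"}

# Hypothetical file name and layout; the actual WTR serialization on Figshare may differ.
with open("wtr.json", encoding="utf-8") as f:
    entries = json.load(f)

for entry in entries:
    # Each T1 annotation carries the per-worker TER votes for one piece of evidence.
    for evidence in entry.get("t1_annotations", []):
        votes = evidence["ter_relation"]                 # e.g. [0, 0, 2, 0, 0]
        majority = Counter(votes).most_common(1)[0][0]   # majority-voting aggregation
        print(evidence["evidence"][:60], "->", STANCES[majority])
```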
ai_researcher
2
AI_for_Scientific_Discovery_and_a_Sustainable_Future.pdf
Artificial Intelligence for Sustainability in the Energy Industry: A Contextual Topic Modeling and Content Analysis

Tahereh Saheb1
Research Assistant Professor, Science & Technology Studies Group, Management Studies Center, Tarbiat Modares University, Tehran, Iran
[email protected]

Mohammad Dehghani
Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
[email protected]

Abstract— Parallel to the rising debates over sustainable energy and artificial intelligence solutions, the world is currently discussing the ethics of artificial intelligence and its possible negative effects on society and the environment. In these arguments, sustainable AI is proposed, which aims to advance the pathway toward sustainability, for instance sustainable energy. In this paper, we offer a novel contextual topic modeling approach combining LDA, BERT, and clustering. We then combine these computational analyses with a content analysis of related scientific publications to identify the main scholarly topics, sub-themes, and cross-topic themes within scientific research on sustainable AI in energy. Our research identified eight dominant topics: sustainable buildings, AI-based DSSs for urban water management, climate artificial intelligence, Agriculture 4.0, convergence of AI with IoT, AI-based evaluation of renewable technologies, smart campus and engineering education, and AI-based optimization. We then recommend 14 potential future research strands based on the observed theoretical gaps. Theoretically, this analysis contributes to the existing literature on sustainable AI and sustainable energy; practically, it is intended as a general guide for energy engineers and scientists, AI scientists, and social scientists to widen their knowledge of sustainability in AI and energy convergence research.

Keywords— Artificial intelligence; sustainability; energy; topic modeling; content analysis; sustainable energy

1 Corresponding Author

1. Introduction

The rise of unsustainable practices and procedures, co-occurring with increasing urbanization and civilization, has driven the emergence of AI-based solutions to assist the path toward sustainability [1–3]. Excessive consumption and unsustainable energy sources, which have increased at an unprecedented rate due to factors such as urbanization, improper building construction, transportation, environmental changes, and population growth, have pressured the energy industry to pursue clean energy sources and smart solutions [4]. The deployment of alternative energy sources and access to sustainable energy are pillars of global economic growth [5] and of the fight against environmental hazards, in particular climate change [6]. Thus, the energy sector has focused its efforts not only on developing new sources of energy, but also on inventing novel technical solutions that increase the efficiency of existing mitigation measures [7]. AI-based interventions, available as both hard solutions (such as robots) and soft solutions (such as algorithms and models), are among the solutions that have come to humanity's assistance [8]. Artificial intelligence can provide a wide range of intelligent solutions, from predictive and prescriptive energy consumption insights to intelligent energy generation and distribution. Parallel to the escalating discussions over sustainable energy and artificial intelligence solutions, the world is now debating the ethics of artificial intelligence and its potentially negative effects on society and the environment.
Ethical AI considers not just AI's moral dimensions, but also its epistemic perspectives [9]. While prior studies have urged scholars to focus on the epistemological aspects of sustainable AI and to open the black box of algorithms to develop sustainable models and algorithms [10], other studies have concentrated on AI for social good and its favorable societal and environmental outcomes [11,12], such as the development of sustainable AI. In this article, we define sustainable AI as AI that is designed to achieve sustainability, called AI for sustainability, as distinct from AI that is designed to be sustainable, called sustainability of AI [10]. In this paper, the term "sustainable AI" refers to the extent to which artificial intelligence can help society accomplish its sustainability goals [13,14].

The energy industry is one of the core industries that will benefit from sustainable AI, which will aid in the development of energy sustainability [15]. Sustainable energy strives to fulfill today's energy demand without depleting energy supplies or harming the environment. Sustainable energy systems are regarded as a requirement for achieving all the Sustainable Development Goals (SDGs) [16]. Sustainable artificial intelligence can help to expedite the development of sustainable energy [14]. To advance sustainable energy, the industry has supplied a wide variety of choices, including wind energy, fossil fuels, solar energy, and bioenergy. It is also vital to recognize how academics have dealt with the confluence of sustainability, artificial intelligence, and energy.

This research is novel from various perspectives. First, this study intends to foster discussions on sustainable AI by identifying the most important research issues in the area, highlighting intellectual gaps, and proposing potential research streams. It is obvious that the energy sector and scientific research and innovation are inextricably linked. Scientific research is seen as the cornerstone of technological advancements [17]. Identifying the intellectual frameworks of scientific research across time and the historical progression of its themes can have a huge influence on the success or failure of new technological solutions. To our knowledge, scientific research on sustainable energy lacks a coherent understanding of how artificial intelligence has been integrated into this domain and how it should be conducted in the future. It is therefore imperative to perform a mixed-method literature review to gain a deeper understanding of the deployment of AI to achieve sustainable energy, in order to identify existing research gaps and potential future research streams.

The second aspect of this research that distinguishes it from prior research is its novel methodology. Extensive literature reviews are conducted by scholars using bibliometric methodologies [18–20], topic modeling techniques such as Latent Dirichlet Allocation (LDA) [21,22], or qualitative content analysis [23]. Accordingly, we incorporated all the aforementioned review methodologies to ensure that their findings were complementary. Furthermore, because both bibliometric and LDA topic modeling are based on keyword co-occurrence analysis, we included a contextual embedding-based topic modeling analysis that uses sentences as the fundamental units of analysis.
This method, one of the latest developments in natural language processing (NLP), was introduced by Google under the name Bidirectional Encoder Representations from Transformers (BERT) [24]. BERT builds on the Transformer architecture, which uses machine learning to discover contextual relationships between words in a text. Our integrated adoption of computational and advanced topic modeling tools, together with qualitative analysis, enables us to gain an objective, coherent, and meta-analytical insight into present research on sustainable artificial intelligence in energy and to forecast its future. The final contribution of this research is that we offer a thorough list of research gaps and potential research agendas that may be used to increase the depth of research on sustainable artificial intelligence in the energy industry.

In sum, the theoretical contribution of this research is to extend the literature on sustainable AI and sustainable energy by determining the key academic themes, sub-themes, and cross-topic common themes addressed by scientists working on sustainable AI in energy, as well as how these subjects have evolved over time. Practically, this research attempts to enlighten policymakers, the energy sector, and engineers and developers of artificial intelligence about the productivity of science while emphasizing the challenges that require more AI-based responses. Additionally, it encourages policymakers to design artificial intelligence regulations that promote the development of sustainable AI in the energy sector while mitigating the unintended consequences of unsustainable energy sources and AI solutions.

The study is structured as follows: we begin with an explanation of our methodology and then move on to the findings, which include our topic modeling and the content analysis of topics. We then discuss our findings, theoretical research gaps, and potential future research directions, along with the theoretical and practical contributions of the study, before closing with a conclusion.

2. Methodology

It is a widely held belief among researchers that quantitative and qualitative research techniques each have inherent strengths and weaknesses; hence, combining both methods is advised to ensure that their results complement one another. We drew on and included four complementary sets of research methodologies in our study. Three of these (BERT, LDA topic modeling, and clustering) are text mining techniques. Additionally, we supplemented these quantitative findings with a qualitative topic-based content analysis.

Our mixed-methods approach is new in three ways. First, we employed computational approaches such as BERT, LDA, and clustering to discover the thematic content of research on sustainable AI in energy. Second, we conducted a comprehensive analysis of the retrieved topics using content analysis as a qualitative approach. Third, we integrated LDA and BERT topic modeling approaches to achieve the highest level of topic identification accuracy. Our suggested mixed-method methodology may be used by researchers from a variety of disciplines to enrich quantitative and computational analyses with topic-based content analysis.

LDA is predicated on the premise that documents are made of topics and that some words are more likely to occur in certain topics than others [25].
While LDA has been regularly used by academics to identify topics, it has limitations: it is a word co-occurrence analysis and so cannot incorporate the entire content of a sentence, and it does not perform well on short texts [26]. Moreover, the outcomes of LDA may be challenging for humans to comprehend and consume [27]. By contrast, BERT topic modeling is focused on detecting semantic similarity and integrating topics with pre-trained contextual representations [28]. It substantially enhances the coherence of neural topic models by including contextual information in the topic modeling process [29]. BERT makes use of the Transformer's encoder, which reads the text input. We combined the LDA and BERT vectors in this study to improve topic recognition and clustering. Moreover, because one of the most difficult aspects of word and sentence embedding is dealing with high dimensionality, we applied the Uniform Manifold Approximation and Projection (UMAP) approach. In comparison to other approaches, UMAP is one of the most efficient implementations of manifold learning [30].

1.2. Corpus Building

On May 29, 2021, we searched for the following keywords within the title, keywords, and abstract: "artificial intelligence" OR "AI" AND "sustainable" OR "sustainability" AND "energy". This search retrieved 981 documents. We then restricted the document type to Articles and the language to English, which left 296 articles. Next, we manually evaluated the titles and abstracts of the articles to identify the most pertinent ones examining the role of artificial intelligence in ensuring the energy sector's sustainability. This screening yielded 182 publications spanning the years 2004 to 2022. Given that abstracts of research articles are the most succinct summary of key ideas [22], we included the abstracts of the final publications in the study's corpus.

2.2. Preprocessing and Post-Processing Stages

Python 3.7.9 was utilized for pre- and post-processing, as well as for the topic modeling analysis. We preprocessed our corpus using the NLTK and Scikit-learn packages, as well as Regular Expressions (RegEx). We imported word_tokenize from NLTK to begin the tokenization process. We lowercased all characters and deleted numeric characters, punctuation, and excess whitespace. Additionally, we eliminated non-word repetitions and anything enclosed in parentheses, and removed the NLTK library's stopwords. During the post-processing step, we reviewed the first findings and created a manual exclusion list for more relevant topic identification. We added the core keywords (i.e., artificial intelligence, AI, energy, sustainable, sustainability) to the exclusion list to enhance the coherence of the findings. We initially used stemming during preprocessing; however, after observing the first results, we removed it to make the words displayed in the word clouds more understandable. We next tried lemmatization, which we likewise abandoned after inspecting the word clouds, in order to make our topic labeling approach more comprehensible. Additionally, we estimated the TF-IDF score for each word in the corpus and eliminated words with scores lower than the median of all TF-IDF values. We calculated the TF-IDF scores using the Scikit-learn package, with the maximum document-frequency threshold set to 0.8 and the minimum to 0.11, and we incorporated both unigrams and bigrams. A minimal sketch of this preprocessing pipeline is given below.
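The following sketch illustrates the preprocessing and TF-IDF steps just described. It is a simplified illustration, not the authors' exact script: the toy `abstracts` list stands in for the 182 collected abstracts, and the cleanup rules approximate the procedure above; only the TF-IDF thresholds (max 0.8, min 0.11) and the unigram/bigram setting are taken directly from the paper.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

STOP = set(stopwords.words("english"))
# Core query keywords are excluded to improve topic coherence, as in the paper.
EXCLUDE = {"artificial", "intelligence", "ai", "energy", "sustainable", "sustainability"}

def preprocess(abstract: str) -> str:
    text = re.sub(r"\(.*?\)", " ", abstract.lower())  # drop anything in parentheses
    text = re.sub(r"[^a-z\s]", " ", text)             # drop digits and punctuation
    tokens = [t for t in word_tokenize(text) if t not in STOP and t not in EXCLUDE]
    return " ".join(tokens)

abstracts = [  # placeholder; replace with the 182 abstracts in the corpus
    "Artificial intelligence improves building energy consumption forecasting.",
    "Decision support systems assist sustainable urban water management.",
    "Machine learning supports renewable energy evaluation and optimization.",
]
docs = [preprocess(a) for a in abstracts]

# Thresholds follow the paper: max document frequency 0.8, min 0.11, uni- and bigrams.
vectorizer = TfidfVectorizer(max_df=0.8, min_df=0.11, ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(docs)
```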
3.2. Topic Modeling

We applied the following libraries to conduct the topic modeling: Pandas to read the dataset, Gensim to perform LDA, Transformers to perform BERT, Keras to perform auto-encoding, and Seaborn and Matplotlib to visualize the results. We imported the TF-IDF vectorizer from the Scikit-learn feature extraction module and KMeans from the Scikit-learn cluster module. The probabilistic topic assignment vector was constructed using LDA, while the sentence embedding vector was constructed using BERT. To begin, we used TF-IDF, LDA, and BERT to model the topics (Figure 1). The LDA and BERT vectors were then concatenated in order to balance the information content of each vector. We incorporated the Keras package to build the auto-encoder, which learns a lower-dimensional latent space representation of the concatenated vector. To ensure the clusters were of good quality, we calculated the Silhouette Score, which was 0.566 for LDA+BERT+clustering; TF-IDF+clustering received a score of 0.048, while BERT+clustering received a score of 0.095 (Figure 2). The Silhouette Score measures cluster quality [31] and ranges from -1 to 1; a score close to one indicates that a cluster is dense and well separated from neighboring clusters. In comparison to the other topic modeling techniques, LDA+BERT+clustering is closest to 1, indicating clusters of good quality.

Figure 1 The concatenation and encoding of LDA and BERT vectors to extract contextual topics

[Figure 2 shows four panels: TF-IDF, Clustering, BERT, and LDA.]

Figure 2 The separate and independent results of topic modeling of research on sustainable AI in energy using the TF-IDF, BERT, and LDA algorithms

The final topic identification obtained by the LDA+BERT+clustering algorithms is depicted in Figure 3. We utilized the UMAP package to perform dimension reduction and set the topic count to eight. We also evaluated other topic counts, including 10, 4, and 6. The authors determined that with eight topics the clusters were better separated from one another and denser within each topic, demonstrating good clustering quality. As indicated by the percentage of documents contained within each topic, approximately 11% of documents belong to topic 0 and approximately 16% to topic 1. Clustering resulted in a balanced distribution of documents across topics, further confirming the clustering quality.

Figure 3 The global view of the topic model of the sustainable AI in energy research area. We integrated LDA, BERT, and clustering for topic detection.

3. Results

1.3. Descriptive Analysis

Figure 3 shows the global view of the topic model of the sustainable AI in energy research field. This visualization represents the topic modeling results, where topics are illustrated as clusters on a two-dimensional plane. The word cloud visualization of the topics, with the most frequently used terms in each topic, is shown in Figure 6. Topics 1, 2, and 3 represent the greatest research interest in the eight-topic model, covering 21.67%, 17.22%, and 15.0% of the corpus. Our research uncovered eight distinct topics. In the remainder of this section, these topics are described and a content analysis of the papers associated with each one is carried out. Articles were assigned to topics according to their relative likelihood of belonging to each topic; the clustering step that produces these assignments is sketched below.
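To make the concatenation-and-clustering step concrete, here is a minimal sketch under stated assumptions. It reuses `docs` from the preprocessing sketch above and assumes the full 182-document corpus (more documents than clusters are required); the sentence-transformers model stands in for the paper's unspecified BERT setup, and the LDA weighting factor, the autoencoder's latent dimension, and the training epochs are assumed hyperparameters not reported in the paper.

```python
import numpy as np
import umap
from gensim import corpora
from gensim.models import LdaModel
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from tensorflow import keras

tokenized = [d.split() for d in docs]             # `docs` from the sketch above
dictionary = corpora.Dictionary(tokenized)
bow = [dictionary.doc2bow(t) for t in tokenized]

# Probabilistic topic-assignment vectors from LDA (one 8-dim row per document).
lda = LdaModel(bow, num_topics=8, id2word=dictionary, random_state=0)
lda_vecs = lda.inference(bow)[0]
lda_vecs = lda_vecs / lda_vecs.sum(axis=1, keepdims=True)

# Sentence-embedding vectors; the model choice is an assumption, not from the paper.
bert_vecs = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)

# Concatenate, scaling the LDA part so neither source dominates (assumed weight).
joint = np.hstack([lda_vecs * 15, bert_vecs])

# Autoencoder learns a lower-dimensional latent representation of the joint vector.
inp = keras.Input(shape=(joint.shape[1],))
latent = keras.layers.Dense(32, activation="relu")(inp)   # assumed latent size
out = keras.layers.Dense(joint.shape[1])(latent)
autoencoder = keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(joint, joint, epochs=200, verbose=0)      # assumed epoch count
z = keras.Model(inp, latent).predict(joint, verbose=0)

# K-means with eight clusters; the silhouette checks cluster quality (paper: 0.566).
labels = KMeans(n_clusters=8, random_state=0, n_init=10).fit_predict(z)
print("silhouette:", silhouette_score(z, labels))

# UMAP reduces the latent space to 2-D for the global topic map (Figure 3).
coords = umap.UMAP(n_components=2, random_state=0).fit_transform(z)
```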
As seen in Figure 4, the three topics most covered by academia are Topic 1: Sustainable buildings (22.5%), Topic 2: AI-based DSSs for urban water management (16.5%), and Topic 3: Climate artificial intelligence (14.8%). About 54% of the articles in the corpus are concerned with these three themes. The word cloud visualization (Figure 6) shows the identified topics after labeling based on each topic's top three keywords. Figure 6 shows that the three most-used terms in each topic are as follows: Topic 1 (building, consumption, environment); Topic 2 (design, water, decision); Topic 3 (building, climate, fuel); Topic 4 (decision, agriculture, improve); Topic 5 (IoT, devices, consumption); Topic 6 (urban, technology, industrial); Topic 7 (engineering, efficiency, students); Topic 8 (optimization, efficient, building).

Figure 4 The distribution of documents across topics

2.3. The Evolution of Topics over Time

Once we had scoured the corpus for hidden topics, we determined how often they appear over time. Figure 5 depicts the ratios of all eight topics (beginning in 2004 and extending into 2021). From 2018 onward, the topics have garnered a substantial amount of academic interest. This holds in particular for the first topic, which concerns the design of sustainable buildings and minimizing energy usage via the application of artificial intelligence; this subject gained considerable attention between 2012 and 2014, but then slipped out of the spotlight between 2015 and 2018. Discussions of AI-based evaluation of renewable energy solutions peaked around 2008 but then became less prominent until 2019. Climate artificial intelligence experienced two distinct phases, the first between 2009 and 2012 and the second peaking in 2015 and 2016; the topic then reached its apex in 2019 and 2020. The topic of AI for energy efficiency has shown a reasonably steady increase since 2013, with its greatest growth occurring between 2020 and 2021. In 2020, significant academic focus was given to AI-based DSSs for urban water management.

[Figure 5 plots yearly document counts (0–14) from 2004 to 2021 for: Topic 1: Sustainable Buildings and Energy Consumption; Topic 2: AI-based DSSs for Sustainable Urban Water Management; Topic 3: Climate Artificial Intelligence; Topic 4: Agriculture 4.0 and Sustainable Sources of Energy; Topic 5: Convergence of IoT & AI for Sustainable Smart Cities; Topic 6: AI-based Evaluation of Renewable Energy Technologies; Topic 7: Smart Campus & Engineering Education; Topic 8: AI for Energy Optimization.]

Figure 5 The evolution of topics over time

3.3. Content Analysis to Detect Topics, Sub-Themes, and Cross-Topic Common Themes

In this part of the paper, we conducted a content analysis of the detected topics for three purposes: first, to detect the general topics from the articles; second, to identify the sub-themes of each topic; and third, to find the cross-topic common themes.

Topic 1: Sustainable Buildings and Energy Consumption

The primary concerns of Topic 1 relate to the design of automated and intelligent systems and the incorporation of cutting-edge technologies, particularly IoT and AI-based DSSs, in order to construct sustainable buildings. These buildings will be part of the sustainable cities initiative, which aims to promote sustainable energy consumption and smart grids.
One of the primary scholarly interests is the creation of sustainable buildings and smart grids for the purpose of reducing energy consumption. One way to accomplish this aim is to redefine the design and architecture of buildings, whether residential, public, commercial, industrial, or manufacturing. According to studies, the application of automation and intelligent systems in the construction of sustainable buildings will result in sustainable energy usage [32,33]. Several AI-based approaches are proposed to achieve more sustainable buildings, including building management systems, knowledge-based engineering (KBE), fuzzy logic, neural networks, genetic algorithms, and Monte Carlo simulation [34]. From a broad standpoint, sustainable building development falls under the umbrella of sustainable smart cities and reducing building energy consumption [35]. Additionally, scholars have drawn inspiration from nature and advocated nature-influenced regenerative design for pattern detection, prediction, optimization, and planning of buildings [36]. Scholars also discuss the potential of AI for reducing CO2 emissions in buildings, suggesting that AI may be used to construct smart multi-energy systems, such as those found in industrial districts, resulting in significant energy savings and CO2 emission reductions [37]. As a result, sustainable building design would be a way to combat climate change. Several additional studies integrate AI solutions with other cutting-edge technologies, most notably the Internet of Things and big data, to improve not only the design and optimization of sustainable buildings, but also the efficiency of their power usage [38]. For instance, one project focused on the application of IoT in public buildings in order to discover and anticipate energy usage trends [39]. A preceding study, for illustration, outlines the obstacles involved in understanding the semantics of IoT devices using machine learning models; Image Encoded Time Series has been identified as an alternative to other statistical feature-based inference [35]. Studies [40] and [41] have also advocated continual monitoring of sustainability metrics by integrating AI with DSSs or ambient intelligence. Residential buildings and plants as well as commercial buildings and offices face the same issues with regard to energy usage. Previous studies incorporated multi-objective and multi-attribute decision-making modeling as well as impact evaluation of emission outputs to help designers and manufacturers make environmentally sustainable decisions about the design and production of facilities [42]. Researchers also believe that simulation techniques could be utilized to provide bulk energy consumption forecasting, control, and management [15], for instance in public buildings, offices, and factories. Due to new modes of consumption and distributed intelligence, electrical power grids have also been influenced; as a result, smart energy grids have emerged to achieve sustainability [43].

Topic 2: AI-based DSSs for Sustainable Urban Water Management

The second topic is sustainable water management, which includes utilizing AI to create DSSs for consumption and water usage. Forecasting, real-time monitoring, and customized and adjustable pricing and tariffs are the primary strategies. AI is used with other sophisticated technologies to assist in the development of a smart city.
Previous studies have postulated several approaches, such as optimization and AI-based decision support systems, for water infrastructure management [44], better delivery of public services in smart cities such as water treatment and supply [45], AI-based water pricing and tariff options [46], and sustainable water consumption [47]. For this goal, AI is integrated with recent technological advances in urban life, including the use of open source data, deep learning algorithms, and smart street lighting systems. Decisions about the social impacts of smartphone applications or smart travel behavior are also examined [48]. AI techniques are utilized in anticipatory water resource management [49], for instance to predict water quality by adopting algorithms such as neuro-fuzzy inference systems [50]. Real-time optimization of water resources and cloud technologies are integrated with visual recognition techniques to improve the efficiency of irrigation systems [51]. A study on the implementation of ecological water governance using AI found that incorporating algorithms into the system yields higher-quality information and better prediction models for the accurate evaluation of water quality [52]. AI may be used for tracking water use and demand as well as forecasting water quality, but it can also be used for estimating water infrastructure maintenance, monitoring dam conditions, water-related diseases and disasters [53], and water reuse [54]. Critiquing conventional decision support systems, researchers offer alternatives based on artificial intelligence, such as a systematic decision process [55], a sustainability ranking framework based on Mamdani fuzzy logic inference systems to develop a sustainable desalination plant [56], or a comprehensive and flexible decision-making process fueled by social learning and engagement aimed at ensuring the urban water system's environmental and energy sustainability [57]. One study offers a unique DSS for analyzing the energy effect of each of the urban water cycle's macro-sectors, including assessing the system's energy balance and proposing potential energy-efficient solutions [58].

Topic 3: Climate Artificial Intelligence (Climate Informatics)

Climate informatics, especially climate artificial intelligence as a new field of study, is concerned with issues such as AI-based DSSs to reduce greenhouse gas emissions, optimizing grid assets, enhancing climate resiliency and reliability, increasing energy efficiency, forecasting energy consumption, and modeling earth systems. Moreover, within this topic, scholars have addressed the issue of explainable and trustworthy AI models, given the controversial nature of climate change.

Climate change has compelled societies to seek alternative energy sources and fuels [59]. Climate informatics [60], including several AI-based solutions such as novel algorithms and DSSs, has been hugely beneficial in lowering greenhouse gas emissions in the energy sector; by improving grid assets and strengthening climate adaptability, these innovations have greatly contributed to this ultimate goal [15]. Reliable and explainable artificial intelligence models, as advocated in prior studies, might help stakeholders and decision-makers achieve climate-resilient and sustainable development goals [61]. By integrating advanced machine learning techniques, AI can offer fresh insights into complex climate simulations in the field of climate modeling [62].
Energy consumption patterns might undergo considerable changes due to climatic change, which means AI forecasts can aid in estimating future energy use under various climate scenarios [63]. AI algorithms are not only used by businesses and other organizations these days; they are also being utilized to foster sustainable urban growth and mitigate climate change by examining how future urban expansion will affect material and energy flows [64]. Fossil fuels, used as the primary energy source, are the primary contributor to the human-caused greenhouse gases that influence the climate. AI is extensively utilized for decreasing carbon footprints and avoiding fossil fuel combustion [65], as prior studies show that AI can act as an automated carbon tracker [66]. Artificial intelligence-powered technologies may help investors analyze a company's climate effect when making investment choices [67]. By drawing attention to climate change through visualization techniques, they help to educate the public on the effects of climate change [68]. Ultimately, AI algorithms may provide great resources for climate change conflicts, including in the fields of modeling earth systems [69], teleconnections [70], weather forecasting [71], future climate scenarios [72], climate impacts [73], and climate extremes [74].

Topic 4: Agriculture 4.0 and Sustainable Sources of Energy

The fourth area that academics in the field of sustainable AI for energy extensively address is the development of smart agriculture and sustainable energy sources. The primary issue in this subject is how to combine advanced technologies like IoT, drones, and renewable energy with AI in order to create automated and real-time systems.

According to some researchers, the agriculture industry suffers from an insufficient application of responsible innovation [75]. As a result, researchers are calling for a system referred to as Responsible Agriculture 4.0, which incorporates drones, IoT, robotics, vertical farms, AI, and solar and wind power linked to microgrids [76–78]. When it comes to the productivity of agriculture, factors such as the cost of energy for cultivation are equally significant [79]. Given that most agricultural machinery operates on fossil fuels, it may potentially contribute to climate change; thus, new energy solutions and AI-based approaches are provided. The development of bioproduction and renewable energy is one way to positively influence sustainable agriculture and farming [80]. New AI methods to forecast agricultural energy use have also been researched [79]. Biomass may also be used to provide sustainable energy in agriculture, and care should be taken to avoid any harm [81]. Real-time alerting systems, AI-based DSSs, real-time DSS forecasting models, and alternative energy sources such as solar and wind play a vital role in sustainable agriculture [82]. Maximizing agricultural production and economic stabilization while minimizing the use of natural resources and their harmful environmental consequences may be accomplished using renewable energy and AI [82]. Artificial intelligence enables academics to provide accurate forecasts of agricultural energy use [83].
In particular, a drastic shift toward sustainability in agricultural practices has occurred because of agriculture's confluence with other cutting-edge technologies, including sensors, DSSs, greenhouse monitoring, intelligent farm equipment, and drone-based crop imaging [84].

[Figure 6 shows word clouds for the eight topics: Topic 1: Sustainable Buildings and Energy Consumption; Topic 2: AI-based DSSs for Sustainable Urban Water Management; Topic 3: Climate Artificial Intelligence (Climate Informatics); Topic 4: Agriculture 4.0 and Sustainable Sources of Energy; Topic 5: Convergence of IoT & AI for Sustainable Smart Cities; Topic 6: AI-based Evaluation of Renewable Energy Technologies; Topic 7: Engineering Education & Smart Campus; Topic 8: AI for Energy Optimization.]

Figure 6 Topics detected by the combination of the LDA+BERT+clustering algorithms for sustainable AI in the energy sector

Topic 5: Convergence of IoT & AI for Sustainable Smart Cities

A significant step in the implementation of sustainable energy solutions is to implement smart cities and services using Internet of Things technology. This topic exhibits how AI and IoT operate together to drive environmental progress. Much of this topic focuses on measures such as smart buildings, smart grid systems, green IoT, and smart campuses.

AI is used in tandem with a number of cutting-edge technologies for sustainable energy development, such as improved energy conservation [85] and intelligent energy management [86], including building management systems [35]. The Internet of Things (IoT) is one of the most promising and pervasive technologies [85], and its integration with AI has generated a revolution in the energy sector. There are many functions for creating sustainable energy in the IoT-enabled smart city dubbed City 4.0 [87], such as the simulation and optimization of power plant energy sustainability [86]. City systems such as water and electricity, as well as other infrastructures, such as data analytics, will be driven by sensing and data collection in the smart city [87]. A significant use of IoT is in the design of intelligent buildings, which, with AI included, may support goals of energy or water conservation [39,88], for instance by educating citizens on how to use energy more effectively and warning them if they are using excessive amounts of energy [89]. IoT is integral to modern grid development as well; in particular, it seeks to transform traditional, fossil-fuel-based power grids with distributed energy resources and to integrate them with cutting-edge technology such as artificial intelligence for improved grid management [90]. In the same manner, blockchain has been considered a viable option for smart cities. Fusing blockchain with AI may be leveraged for smart services, including energy load forecasting, categorizing customers, and evaluating energy load [91]. Blockchain has also been successfully employed to keep smart connected devices, such as IoT devices, safe and secure within a blockchain network [92]. The effect of IoT and AI on the agriculture and food sectors is also substantial [93,94]. Manufacturing facilities such as food factories and plants may be made more intelligent and more environmentally friendly via the use of IoT and AI, which merge with nonthermal and advanced thermal technologies [94]. Sustainable and green IoT are other topics covered in this subject.
The literature on green IoT has two main objectives: the first is to increase the recyclability and usefulness of IoT devices and to minimize their carbon footprints; the second is to incorporate more effective life cycle assessment (LCA) methods integrating artificial intelligence (AI) in order to cut costs and time [95]. Another of the many topics related to IoT is the development of smart campuses, which are carbon neutral and energy efficient, use less water, and are equipped with various high-quality green energy tools [96] and smart teaching and learning platforms [97]. Researchers have identified the positive traits of IoT devices, but they have also forewarned about the possible risks of these devices and proposed various techniques for detecting weaknesses [93], as well as challenges regarding the heterogeneity of smart devices and their associated metadata [35].

Topic 6: AI-based Evaluation of Renewable Energy Technologies

Scholarly interest has been generated by the discussion of leveraging AI for DSSs to enhance the efficiency of conventional system evaluations for renewable energy technologies.

To a great extent, a sustainable future will depend on maximizing the use of energy sources that cannot be depleted [98]. Artificial intelligence is important for the future, leveraging a wide range of renewable energy technologies such as biomass energy, wind energy, solar energy, geothermal energy, hydro energy, marine energy, bioenergy, hydrogen energy, and hybrid energy [99]. AI is used to evaluate renewable energy solutions based on their cost of energy production, carbon footprint, affordability of renewable resources, and energy conversion efficiency [100]. Artificial intelligence will ensure the most effective use of these resources while also pushing for improved management and distribution systems [14]. Distributed energy management, generation, forecasting, grid health monitoring, and fault detection are also made more efficient by using automated AI systems [101]. AI can help disperse the supply and demand of energy in real time and improve energy consumption and storage allocation [102]. To mitigate the barriers to utilizing renewable energy technology, the following measures are taken: renewable energy sustainability is evaluated [103], and the turbulent and sporadic character of renewable energy data is addressed [104]. One research group claims that standard techniques such as LCA and EIA (Environmental Impact Assessment) may be improved by developing more advanced digital intelligent decision-making systems, or DSSs. It is feasible that improved assessments of renewable energy sources may be achieved via intelligent and automated technologies [105]. With smart mechanisms in place, long-term detrimental consequences can be calculated, as well as visible and invisible factors [106]. Artificial intelligence (AI) increases the adaptability of power systems, providing DSSs for energy storage applications [107]. For instance, to ensure greater use of battery-electric buses and minimize the effect on power grids, researchers developed an AI-powered DSS [108]. Another study leveraged AI to create a DSS for forecasting future energy consumption patterns and to provide a solution for utilizing renewable energy alternatives [109].
Topic 7: Smart Campus & Engineering Education

The discussions within this topic can be broken down into two distinct types: those about engineering education and those that deal with using AI and IoT to construct intelligent campuses that help maintain sustainability objectives. The two themes represent two elements of education: one dealing with learning content, and the other with the behavioral outcomes of developing smart campuses. To build a model of smart campuses, we should focus on incorporating IoT into the infrastructure, with subsequent implementations of smart apps and services, smart educational tools and pedagogies, and smart analysis as well [97]. A smart campus is in charge of energy consumption scheduling, while its telecommunications infrastructure serves as the place where data transfers are conducted [110]. Integrating cutting-edge technology, a smart campus captures real-time data on energy usage, renewable energy power generation, air quality, and more [111]. Another point of view is that higher education should equip itself with relevant skills and competences to help realize long-term sustainability objectives [112]. Energy sustainability in this respect may be addressed via engineering education and engineering assistance for high-level strategic decision-making [113]. This objective can be achieved by using innovative instructional programs alongside cutting-edge technology such as artificial intelligence and the Internet of Things. A living-lab campus equipped with technology, as well as a deep well of talent and competency, may serve as a digital platform for education and sustainable growth [114]. For illustration, to support ongoing research, teaching, and learning on sustainable development, the University of British Columbia (UBC) implemented the Campus as a Living Laboratory project, which included AI, IoT, and other cutting-edge technologies [115]. Furthermore, several studies have been conducted to help AI integrate seamlessly with current educational institutions in order to aid learning about sustainable development [116].

Topic 8: AI for Energy Optimization

Conventional optimization methods may be a roadblock to making progress toward sustainability, and AI-based solutions can help eliminate such roadblocks.

While renewable energy sources like solar and wind have many merits, there are some downsides to consider: they are not always available and often depend on climatic conditions, which makes employing them complicated [117]. Proper energy optimization may be utilized to minimize greenhouse gas emissions and cut energy usage. Efforts to reduce the costs and side effects of energy consumption are facilitated by optimization models [118]. Computational and intelligent resources have enabled academics to make progress on optimization problems by employing advanced AI methods. Manufacturers have developed numerous energy-efficient appliances for this reason. Even if the deployment of digital technologies in buildings will likely lead to improved energy efficiency, that is not the sole solution. Studies recommend implementing energy-saving measures that do not just target environmental variables but also include building inhabitants' comfort and preferences, which is achievable via the integration of AI-augmented algorithms [119]. For illustration, AI algorithms that not only monitor current actions but also give real-time alerts and warnings to users and providers allow optimization to be significantly accelerated.
Algorithms that use energy consumption data to lower energy costs in buildings with advanced AI are only one example of how AI and advanced technology may be used to benefit society [120]. Weather has a direct effect on energy consumption, which is indisputable. To calculate the winter heating demand of non-residential buildings correctly, researchers used an optimized artificial neural network method to determine and forecast this need [121]. By utilizing AI along with smart metering and non-intrusive load monitoring, one may improve energy efficiency by evaluating the electricity use of appliances [38]. Using a new approach, researchers found that a GP model was capable of making accurate predictions, and that a multi-objective genetic algorithm, NSGA-II, was also capable of optimizing sustainable building design [32]. The use of a fuzzy-enhanced energy system model to represent a route to a sustainable energy system has also been presented in another study [122]. Other researchers in the field have contributed techniques based on artificial neural networks, evolutionary algorithms, swarm intelligence, and their hybrids, all of which rely on biological inspiration. These findings imply that sustainable energy development is computationally challenging for conventional optimization, demanding advanced techniques [123].

4. Discussion, Theoretical Gaps, and Future Strands of Research

To identify the relevant research topics in the literature on artificial intelligence for sustainability in the energy industry, we performed contextual topic modeling combined with qualitative cluster analysis. We went beyond previous approaches in developing this novel analysis by combining three topic modeling algorithms (LDA, BERT, and clustering) with content analysis. In this research, eight academic topics were discovered: sustainable buildings and energy consumption, AI-based DSSs for sustainable urban water management, climate artificial intelligence, Agriculture 4.0 and sustainable sources of energy, convergence of IoT and AI for sustainable smart cities, AI-based evaluation of renewable energy technologies, smart campus and engineering education, and AI for energy optimization.

The concerns and problems addressed in each topic are summarized in Figure 7. The figure illustrates that each topic addresses a number of specific issues, some of which overlap. For Topic 1, the key problems are the importance of sustainable buildings for smart city development and smart grid services. The issues of AI and its application in decision-making, pricing, forecasting, and sustainable consumption are all addressed in this topic. To reach sustainability, various cutting-edge technologies are tied to AI. One problem that may be especially neglected is the use of AI technology to make buildings eco-friendlier and enhance their inhabitants' sense of accountability toward sustainability. One approach might be to design real-time warning systems that deter people from excessive energy use, while also ensuring that they benefit from AI-based solutions. Convergence research may also explore how green architecture is uniquely positioned to deal with complex issues of environmental efficiency, such as using eco-lighting, natural ventilation, shading, green roofs, and artificial intelligence. Most prior research focuses on eco-design and overlooks other factors of green architecture.
Topic 2 addresses sustainable urban water management via the use of AI-based DSSs. Conventional DSSs have come under criticism from academics who suggested alternatives, and innovative approaches to DSSs were revealed, particularly with regard to water utilities in a smart city. The second discussion point, focused on sustainable consumption and real-time and predictive modeling, is also addressed in Topic 2. Mitigating urban problems, notably air pollution, waste management, and wastewater management, is applicable here to exemplify how smart energy management leveraging AI improves environmental sustainability.

Topic 3 deals with the connection between climate change and artificial intelligence, and the emergence of the climate informatics field. This topic highlights the role of trustworthy and explainable AI algorithms, an issue which is marginalized in the other topics. As a result, a potential future study direction may be the development of ethical artificial intelligence within the other topics to help with the sustainable management of energy. Another prospective future study area is the confluence of smart grids, renewable energy, and 5G technology, since these technologies have the potential to generate enormous volumes of big data. Furthermore, the use of AI in transportation seems worthy of analysis, for example with regard to traffic predictions, public transit planning, and so on.

Agriculture 4.0 and sustainable energy sources are examined in Topic 4. Many problems relevant to prosperity, sustainable consumption, forecasting, and convergence with other automated and real-time technologies are covered in this topic. There is only a limited body of studies dedicated to precision farming and digital mapping, but both developments promise to lead to better knowledge of the environment and to improved energy management. Precision farming, by assessing soil nutrients, detecting humidity in the air, and monitoring crops, allows farmers to leverage digital maps for better energy management and the fight against climate change. Other related areas of study include developing automated working environments. It is worthwhile to investigate the effect that artificial intelligence and other green technologies will have on the working conditions of farmers and farm operators, since AI may allow deeper speculation about working conditions on farms.

Figure 7 Sub-themes extracted from each topic

In Topic 5, convergent IoT and AI technologies for smart city development were addressed. The primary goal of this topic was to discuss issues around sustainable consumption, LCA analysis, and the development of intelligent energy grids. Pervasive Wi-Fi connectivity, due to its ability to save energy, is critical in this subject. Additionally, a significant problem is open data sharing in energy management. In Topic 6, which focuses on renewable energy, AI-based assessment of renewable energy technologies, DSSs, financial problems, sustainable consumption, and automated and real-time systems are all at issue. One potential study path in this topic involves the challenges that AI algorithms and models face when attempting to evaluate renewable energy solutions. Sophisticated AI systems, such as deep learning, make use of supervised learning with human-annotated data, and thus they are limited when it comes to complicated situations. The subject of smart campus and engineering education is examined in the seventh topic.
Labs that facilitate continuous innovation are discussed within this topic, as well as the ideas of sustainable consumption, AI skills, and convergence with other technologies. There is an imperative requirement for further research to clarify how AI might be leveraged for practical learning and training in relation to energy management for a range of stakeholders, including businesses, farmers, residents, and employees. AI is discussed in relation to energy optimization in Topic 8 of the study. This subject covers many elements of sustainable optimization, including forecasting, consumption, affordable pricing, and societal and financial impacts. However, there is a dearth of distributed energy resource optimization models, particularly given the emergence of blockchain.

Figure 8 Identified cross-topic common themes

As shown in Figure 8, we discovered six core problems that were prevalent throughout the majority of the topics. For example, tariff and price models based on artificial intelligence are prevalent in Topics 1 and 2, while economic issues in general are a concern in Topics 4, 6, and 8. The dilemma of sustainable consumption is prevalent in all of these topics, demonstrating the critical role of AI in attaining sustainable energy use. Forecasting is inextricably connected to sustainable consumption, since more than half of the topics cover both, demonstrating the progress of AI forecasting algorithms for sustainable consumption. Forecasting, however, is not restricted to anticipating consumption patterns. The second significant recurring theme is the development of AI-based DSSs. The majority of studies have contested traditional DSSs and devised decision-making systems based on artificial intelligence. Sustainable building, urban water management, climate change, and renewable energy evaluation have all been substantially influenced by AI-based DSSs. Automated and real-time systems enabled by artificial intelligence are also discussed in relation to buildings, agriculture, the Internet of Things, and renewable energy technologies. Scholars have combined various digital technologies to promote sustainability in the energy sector via the management of buildings, water, agriculture, IoT, and smart campuses.

Figure 9 Possible future streams of research pertaining to each topic

5. Theoretical and Practical Contribution

1.5. Theoretical Contribution

Our results supplement existing work on sustainable AI and sustainable energy as follows. The results of this study provide and highlight a thematic map of the sustainable AI research topics existing in several fields, such as energy, ethics, and management. We developed a novel mixed-method approach, contextual topic modeling with content analysis, to visualize the latent knowledge structures pertaining to AI, sustainability, and energy. This yielded a conceptual framework representing the main topics, subtopics, and common themes within each topic pertaining to sustainable AI in energy. Using LDA and BERT, eight themes related to AI in the sustainability and energy sectors were discovered. We provided the most likely terms for each topic, as well as the distribution of articles and topics over time. Finally, by using a thematic analysis method, we identified and qualitatively analyzed the hidden themes. Second, we examined and analyzed hidden sub-themes within each topic, as well as common themes between topics, using a content analysis method.
Figure 7 illustrates the sub-domain themes within each topic, whereas Figure 8 depicts the common cross-topic themes. Our content analysis of each topic reveals six recurring themes: sustainable consumption, AI-based DSSs, forecasting models, economic and pricing problems, automated and real-time systems, and convergence with digital technology. To further this knowledge, we highlighted how these themes intersect across topics in order to articulate their commonalities. These six separate but related themes demonstrate that sustainable AI solutions can be observed along a range of behavioral, decision-making, economic, operational, and technical dimensions. At the behavioral level, shifts in consumption patterns are illustrated; at the decision-making level, decision automation is outlined; at the economic level, personalized tariffing is demonstrated; at the operational level, automation and real-time operations are addressed; and at the technological level, convergence with other technologies is studied.

2.5. Practical Implications

This research provides energy engineers, social scientists, scientists, and policymakers with a variety of insights. Engineers may develop sustainable energy products and services. Energy scientists can also integrate sustainability considerations into their research and development of new energy sources such as renewable energy. In their discussions on AI and energy, social scientists may also emphasize ethical problems, including sustainability. Additionally, policymakers may create and construct new laws and policy initiatives aimed at mitigating the harmful effects of unsustainable energy on society and the environment.

6. Conclusion

To discover heavily discussed scholarly topics, our study utilized a new topic modeling technique. While this illustration depicts the trajectory of previous efforts, it also prompted us to propose a number of possible future research strands targeted at increasing energy sector sustainability via the application of artificial intelligence technology. The aim of this study is to further the conversation on sustainable AI and energy, as well as their intersection, in order to gain a deeper understanding of how AI may be incorporated to achieve sustainability in the energy sector.

References

1. Wang, H.; Liu, Y.; Zhou, B.; Li, C.; Cao, G.; Voropai, N.; Barakhtenko, E. Taxonomy research of artificial intelligence for deterministic solar power forecasting. Energy Convers. Manag. 2020, 214, 112909.
2. Subotić, V.; Eibl, M.; Hochenauer, C. Artificial intelligence for time-efficient prediction and optimization of solid oxide fuel cell performances. Energy Convers. Manag. 2021, 230, 113764, doi:10.1016/j.enconman.2020.113764.
3. Bibri, S.E. The eco-city and its core environmental dimension of sustainability: green energy technologies and their integration with data-driven smart solutions. Energy Informatics 2020, 3, 1–26, doi:10.1186/s42162-020-00107-7.
4. Hoang, A.T.; Pham, V.V.; Nguyen, X.P. Integrating renewable sources into energy system for smart city as a sagacious strategy towards clean and sustainable process. J. Clean. Prod. 2021, 305, 127161, doi:10.1016/j.jclepro.2021.127161.
5. Chu, S.; Majumdar, A. Opportunities and challenges for a sustainable energy future. Nature 2012, 488, 294–303.
6. Lin, B.; Zhu, J. The role of renewable energy technological innovation on climate change: Empirical evidence from China. Sci. Total Environ. 2019, 659, 1505–1512, doi:10.1016/j.scitotenv.2018.12.449.
ai_researcher
8
An_Interactive_Co-Pilot_for_Accelerated_Research_Ideation.pdf
Astronomy & Astrophysics manuscript no. N131, ©ESO 2019, October 17, 2019

Using CO line ratios to trace compressed areas in bubble N131

Chuan-Peng Zhang1, 2, 5, Guang-Xing Li3, Chenlin Zhou1, 4, Lixia Yuan1, 4, and Ming Zhu1, 5

1 National Astronomical Observatories, Chinese Academy of Sciences, 100101 Beijing, P.R. China
2 Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany
3 South-Western Institute for Astronomy Research, Yunnan University, Kunming, 650500 Yunnan, P.R. China
e-mail: [email protected]
e-mail: [email protected]
4 University of Chinese Academy of Sciences, 100049 Beijing, P.R. China
5 CAS Key Laboratory of FAST, National Astronomical Observatories, Chinese Academy of Sciences, 100101 Beijing, P.R. China

ABSTRACT

Aims. N131 is a typical infrared dust bubble showing an expanding ring-like shell. We study the CO line ratios that can be used to trace the interaction in the expanding bubble.

Methods. We carried out new CO (3−2) observations toward bubble N131 using the 15m JCMT, and derived line ratios by combining these observations with our previous CO (2−1) and CO (1−0) data from IRAM 30m observations. To trace the interaction between the molecular gas and the ionized gas in the HII region, we used RADEX to model the dependence of the CO line ratios on kinetic temperature and H2 volume density, and examined the abnormal line ratios based on other simulations.

Results. We present CO (3−2), CO (2−1), and CO (1−0) integrated intensity maps convolved to the same angular resolution (22.5″). The three different CO transition maps show a similar morphology. The line ratios of WCO(3−2)/WCO(2−1) mostly range from 0.2 to 1.2 with a median of 0.54 ± 0.12, while the line ratios of WCO(2−1)/WCO(1−0) range from 0.5 to 1.6 with a median of 0.84 ± 0.15. The high CO line ratios WCO(3−2)/WCO(2−1) ≳ 0.8 and WCO(2−1)/WCO(1−0) ≳ 1.2 are beyond the threshold predicted by numerical simulations based on the assumed density-temperature structure for the inner rims of the ring-like shell, where the compressed areas are located in bubble N131.

Conclusions. These high CO integrated intensity ratios, such as WCO(3−2)/WCO(2−1) ≳ 0.8 and WCO(2−1)/WCO(1−0) ≳ 1.2, can be used as a tracer of gas-compressed regions with a relatively high temperature and density. This further suggests that the non-Gaussian part of the line-ratio distribution can be used to trace the interaction between the molecular gas and the hot gas in the bubble.

Key words. infrared: ISM – stars: formation – ISM: bubbles – H II regions – ISM clouds

1. Introduction

Infrared dust bubbles are ubiquitous interstellar objects (Churchwell et al. 2006, 2007; Simpson et al. 2012; Hou & Gao 2014; Zhang et al. 2013, 2016; Jayasinghe et al. 2019). However, the details of the bubble shell formation mechanism are still unclear (e.g., Beaumont & Williams 2010; Watson et al. 2008). N131 is a quite typical bubble, which has been observed and investigated in detail by Zhang et al. (2013, 2016). Bubble N131 has an inner minor radius of 13 pc and an inner major radius of 15 pc at a kinematic distance of ∼8.6 kpc, and its center coordinates are R.A.(J2000) = 19h52m21.5s, Dec.(J2000) = +26°21′24.0″. A ring-like shell is visible at 8.0 and 24 µm and is associated with CO emission (see Figure 1).
Two giant elongated molecular clouds are located at opposite sides of the ring-like shell, and together, they exhibit a large velocity gradient. In addition, there is a huge cavity inside the bubble that is visible in the 5.8 − 500 µm emission. The column density, excitation temperature, and velocity of the CO (1−0) emission show a possibly stratified structure from the inner to outer rims of the ring-like shell. These suggest that bubble N131 has an expanding shell caused by feedback of strong stellar winds from the star formation at the center of the bubble (see also the detailed discussion in Zhang et al. 2016).

The CO (3−2), CO (2−1), and CO (1−0) transitions have different upper energy levels (Kaufman et al. 1999). The different transitions can therefore be used to trace different excitation conditions. The integrated intensity ratios, such as WCO(3−2)/WCO(2−1) and WCO(2−1)/WCO(1−0), may indicate a different temperature and density structure of the molecular cloud environments (Hasegawa et al. 1994; Wilson et al. 1997). For example, high WCO(2−1)/WCO(1−0) ratios have been observed in the Large Magellanic Cloud (LMC) by Bolatto et al. (2000). It was proposed that self-absorbed emission and optical depth effects may be possible origins for the high line ratios (Bolatto et al. 2000, 2003). Additionally, the line ratios are also quite important for us to diagnose the evolutionary stage of the molecular clouds (e.g., Sakamoto et al. 1995; Beuther et al. 2000; Yoda et al. 2010; Polychroni et al. 2012; Nishimura et al. 2015).

In this work, we carry out new CO (3−2) observations toward bubble N131 using the 15m James Clerk Maxwell Telescope (JCMT). In combination with our previous CO (2−1) and CO (1−0) line observations with the IRAM 30m telescope, we study how the CO line ratios can be used to trace the interaction in the expanding infrared dust bubble N131. In Section 2 we describe the observations and data reduction. In Section 3 we show the observational results and the RADEX modeling. In Section 4 we mainly discuss the possibility of using the CO line ratios to trace the compressed inner rims of the ring-like shell around the bubble. In Section 5 we summarize our results.

2. Observations

2.1. 12CO J = 3−2

We carried out new CO (3−2) observations (M17BP077 and M18BP069) toward bubble N131 during September 2017 – August 2018 using the Heterodyne Array Receiver Programme (HARP; Buckle et al. 2009) at the 15m JCMT. Maps were referenced against an off-source position that was free of any significant CO emission in the Dame et al. (2001) CO Galactic Plane Survey. At 345 GHz, the half-power beam width (HPBW) was ∼14.0″, and the main beam efficiency is ηmb = 0.64, taken from the JCMT efficiency archive. The main beam brightness temperature (Tmb) can be derived by Tmb = T∗A/ηmb, where T∗A is the antenna temperature. The on-the-fly mapping mode was used to scan the bubble with a sampling step of 7.0″. For further line ratio analysis, the raw data were then convolved to the same angular resolution of 22.5″, corresponding to the lowest angular resolution of CO (1−0) (see Section 2.2), with a grid of 11.0″ using the GILDAS software package (http://www.iram.fr/IRAMFR/GILDAS/).

Calibration scans, pointing, and focus were performed regularly. Calibration scans were taken at the beginning of each subscan. A pointing was made about every hour. A focus scan was taken every three hours, but more scans were taken around sunset and sunrise. The flux calibration is expected to be accurate to within 10%. The GILDAS software package was used to reduce the observational data.
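As a small illustration of the temperature conversion in Sect. 2.1 above, the following sketch applies Tmb = T∗A/ηmb with the quoted JCMT efficiency; the function name is ours, not part of any observatory software.

```python
# Minimal sketch of the antenna-to-main-beam temperature conversion above,
# T_mb = T_A* / eta_mb, using the JCMT efficiency quoted in the text.
ETA_MB = 0.64  # main beam efficiency at 345 GHz (JCMT efficiency archive)

def main_beam_temperature(t_a_star: float) -> float:
    """Convert antenna temperature T_A* (K) to main-beam brightness temperature (K)."""
    return t_a_star / ETA_MB
```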
2.2. 12CO J = 2−1 and J = 1−0

Our CO (2−1) and CO (1−0) observations were carried out simultaneously in April 2014 using the IRAM 30m telescope on Pico Veleta, Spain (IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain)). The observations are introduced in detail in our previous work (Zhang et al. 2016). In our raw data, the HPBWs of CO (2−1) and CO (1−0) are 11.3″ and 22.5″, respectively, with the same sampling step of 9.3″. For further line ratio analysis, the raw data were then convolved to the lowest angular resolution of 22.5″ with a grid of 11.0″ using the GILDAS software package.

3. Results and analysis

3.1. CO integrated intensity distributions

Figure 1 displays the integrated intensity maps of the CO (3−2), CO (2−1), and CO (1−0) lines with a velocity range from −16.0 to −5.0 km s−1 superimposed on MIPSGAL 24 µm emission (Carey et al. 2009). All the CO data were convolved to the same angular resolution of 22.5″. We also label the nine selected molecular clumps (Zhang et al. 2016) and the ring-like shell of the bubble in the maps. The morphological structures of the three integrated intensity maps are clearly similar.

Fig. 1. Integrated intensity maps of CO (3−2) (upper), CO (2−1) (middle), and CO (1−0) (lower) lines with a velocity range from −16.0 to −5.0 km s−1 superimposed on 24 µm emission. The contour levels in each CO map start at 5σ in steps of 10σ, with σCO(3−2) = 0.6 K km s−1, σCO(2−1) = 1.3 K km s−1, and σCO(1−0) = 1.6 K km s−1. The letters and the ellipse indicate the positions of nine molecular clumps (A-I) and the ring-like shell of the bubble, respectively. The angular resolution (22.5″) is indicated in the bottom left corner of each panel.

3.2. Spectra

We extracted several example spectra of CO (3−2), CO (2−1), and CO (1−0) (see upper panels in Figures 2 and 3) with high ratios (WCO(3−2)/WCO(2−1) ≳ 0.8 and WCO(2−1)/WCO(1−0) ≳ 1.2) from the inner rims near clumps A, B, G, and H. All the spectra with the highest ratios have high signal-to-noise ratios above 10σ. This indicates that the line ratios have high signal-to-noise ratios of at least 7σ. For comparison, we also extracted some spectra (see lower panels in Figures 2 and 3) with low line ratios from the corresponding clump center regions.

3.3. Integrated intensity ratio distributions

Figure 4 displays the integrated intensity ratio maps of WCO(3−2)/WCO(2−1) and WCO(2−1)/WCO(1−0). The ratios were obtained based on the integrated intensity maps that are above 5σ (see Figure 1). For the line ratios we considered pixels above 3.5σ according to the error propagation of the integrated intensity maps. It clearly shows that at clumps A, F, G, H, and I in the WCO(3−2)/WCO(2−1) map, the inner rims of the ring-like shell have a higher integrated intensity ratio (WCO(3−2)/WCO(2−1) ≳ 0.8) than the outer rims, while in the WCO(2−1)/WCO(1−0) map the highest line ratio occurs at the inner rims of the shell near clumps A, B, E, and F with WCO(2−1)/WCO(1−0) ≳ 1.2.
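The ratio-map construction just described can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the array names are placeholders, and the quotient error-propagation formula is the standard first-order expression implied by the text.

```python
# Minimal sketch of the ratio-map construction described above: mask the
# integrated-intensity maps below 5*sigma, form the ratio, and keep only
# pixels whose ratio exceeds 3.5 times its propagated uncertainty.
import numpy as np

def ratio_map(w_num, w_den, sigma_num, sigma_den, snr_cut=5.0, ratio_cut=3.5):
    """w_num, w_den: integrated-intensity maps (K km/s); sigma_*: 1-sigma noise."""
    num = np.where(w_num > snr_cut * sigma_num, w_num, np.nan)
    den = np.where(w_den > snr_cut * sigma_den, w_den, np.nan)
    ratio = num / den
    # First-order error propagation for a quotient of two noisy quantities.
    err = np.abs(ratio) * np.sqrt((sigma_num / num) ** 2 + (sigma_den / den) ** 2)
    return np.where(ratio > ratio_cut * err, ratio, np.nan)

# Example with the noise levels quoted for Figure 1 (input maps are placeholders):
# r32_21 = ratio_map(w_co32, w_co21, sigma_num=0.6, sigma_den=1.3)
```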
Fig. 2. Example spectra of the high ratios WCO(3−2)/WCO(2−1) (upper) from the inner rims of the ring-like shell near clumps G and H, and of the low ratios (lower) from the clump center regions (see also Figure 4).

Fig. 3. Example spectra of the high ratios WCO(2−1)/WCO(1−0) (upper) from the inner rims of the ring-like shell near clumps A and B, and of the low ratios (lower) from the clump center regions (see also Figure 4).

Fig. 4. Integrated intensity ratio maps of WCO(3−2)/WCO(2−1) (upper) and WCO(2−1)/WCO(1−0) (lower) derived from the integrated intensity maps that are above 5σ in Figure 1. The letters indicate the positions of nine molecular clumps (A-I) in the bubble. The angular resolution is indicated in the bottom left corner of each panel.

Figures 2 and 3 display some spectra of CO (3−2), CO (2−1), and CO (1−0) extracted from the inner rims of the ring-like shell near clumps A, B, G, and H with high ratios (WCO(3−2)/WCO(2−1) ≳ 0.8 and WCO(2−1)/WCO(1−0) ≳ 1.2).

Figure 5 displays the integrated intensity ratio histograms of WCO(3−2)/WCO(2−1) and WCO(2−1)/WCO(1−0) for all pixels in Figure 4. The line ratios of WCO(3−2)/WCO(2−1) mostly range from 0.2 to 1.2 with a median of 0.54 ± 0.12, which is slightly lower than what was found (≈0.75) at the Central Molecular Zone of the Milky Way (Kudo et al. 2011). The line ratios of WCO(2−1)/WCO(1−0) range from 0.5 to 1.6 with a median of 0.84 ± 0.15. We also derived the median value of WCO(3−2)/WCO(1−0), which is around 0.45, close to the average value of WCO(3−2)/WCO(1−0) ≈ 0.5 in star-forming galaxies (e.g., Aravena et al. 2010, 2014; Daddi et al. 2015).

Fig. 5. Integrated intensity ratio histograms of WCO(3−2)/WCO(2−1) and WCO(2−1)/WCO(1−0) for all pixels in Figure 4. The median uncertainties are derived from the standard deviation of the sample.

3.4. RADEX modeling

To study the line ratio distributions as a function of kinetic temperature and H2 volume density in bubble N131, we used the non-local thermodynamic equilibrium (non-LTE) radiative transfer code RADEX (van der Tak et al. 2007; https://home.strw.leidenuniv.nl/~moldata/radex.html) with the Leiden Atomic and Molecular Database (LAMDA; Schöier et al. 2005) to model the CO (3−2), CO (2−1), and CO (1−0) lines.
The high rotational transitions are sensitive to a high temperature based on the large velocity gra- dient (LVG) model. The high temperature and density can there- fore be probed with the high CO line ratios (van der Tak et al. 2007). 4. Discussion: Line ratios tracing the compressed areas Wilson et al. (1997) found that the WCO (3−2)/WCO (2−1) line ra- tios for the molecular clouds containing optical H II regions (0.79±0.05) are somewhat higher than those for the clouds with- out optical H II regions (0.58 ± 0.06), while the line ratio in the giant H II region is even higher (1.07±0.03). Wilson et al. (1997) also suggested that the high line ratio may be caused by heat- ing of the gas by the massive stars. Line ratio distributions such as WCO (3−2)/WCO (2−1) and WCO (2−1)/WCO (1−0) have been used to study the interaction in supernova remnant molecular cloud sys- tem (e.g, Jiang et al. 2010; Zhou et al. 2016, 2018; Arias et al. 2019). The high ratios with WCO (2−1)/WCO (1−0) ≈ 1.6 were sug- gested by Zhou et al. (2016) to trace the shocked compressed gas that is located at the shell of supernova remnant Tycho. Recently, Celis Peña et al. (2019) also found that the high integrated line ratios WCO (3−2)/WCO (2−1) at the shell of the LMC supergiant bub- ble N11 may be caused by the expansion of nebulae and the in- teraction with radiation from OB association. The question now is why and how the CO line ratios can be used to trace the inter- actions. The infrared dust bubble N131 originates from expand- ing H II regions, but the H II region inside has been ex- tinguished (Zhang et al. 2013, 2016). Figure 4 clearly shows that most parts of the inner rims of the ring-like shell have higher integrated intensity ratios (e.g., WCO (3−2)/WCO (2−1) (cid:38) 0.8, WCO (2−1)/WCO (1−0) (cid:38) 1.2) than the outer rims. Additionally, the most notable discrepancy between the two ratio distribu- tions is that at the inner rims of the ring-like shell near clumps G and H, the ratio WCO (3−2)/WCO (2−1) is much higher than in other regions (except for the complicated clump A4) but the ra- tio WCO (2−1)/WCO (1−0) is not, while at the inner rims near clump B, the ratio WCO (2−1)/WCO (1−0) is much higher than in other re- gions but the ratio WCO (3−2)/WCO (2−1) is not. This may suggest that the inner rims of the ring-like shell near clumps G and H have a relatively high kinetic temperature up to the excitation 3 https://home.strw.leidenuniv.nl/~moldata/radex.html 4 Clump A is a small expanding H II region that is deeply embedded in the ring-like shell of bubble N131 (see details in Zhang et al. 2016). Article number, page 4 of 6 /WCO (2−1)) and optical depths (τCO (2−1)) Fig. 6. Line ratios (R = WCO (3−2) = 2.2 × 1017 cm−2 and δv = 3.5 km s−1 (esti- in the conditions of NCO mated by median values in N131) as a function of kinetic temperature and volume density by RADEX modeling. The green contour indicates a region (or threshold) for a possible gas temperature-density distribution in a colliding flow at the onset of star formation from simulations in Clark et al. (2012). /WCO (1−0)) and optical depths (τCO (1−0)) Fig. 7. Line ratios (R = WCO (2−1) = 2.2 × 1017 cm−2 and δv = 3.5 km s−1 (esti- in the conditions of NCO mated by median values in N131) as a function of kinetic temperature and volume density by RADEX modeling. The green contour indicates a region (or threshold) for a possible gas temperature-density distribution in a colliding flow at the onset of star formation from simulations in Clark et al. (2012). 
This also suggests that the inner rims of the ring-like shell were compressed by strong stellar winds from inside the bubble (see also the discussion in Nishimura et al. 2015).

To trace the inner rims of the ring-like shell compressed by stellar winds from inside the bubble, we computed the expected CO line ratios at different gas temperatures and densities using the RADEX code. The results are presented in Figures 6 and 7. We then determined the CO line ratios that can be used to trace the interactions. We recall that in an ordinary molecular cloud, the cold gas is mainly heated by cosmic rays. This heating is balanced by radiative cooling (Draine 2011). As a result, we expect a limited range of temperatures and densities for the molecular gas, which leads to a limited range of observed line ratios. Line ratios that lie far beyond this upper limit could trace the interaction between the cold and hot gas that presumably occurs in the inner rims of a bubble shell, because these interactions should increase the temperature and density. Therefore, we propose to use the CO line ratios WCO(3−2)/WCO(2−1) ≳ 0.8 and WCO(2−1)/WCO(1−0) ≳ 1.2 to trace the compressed inner rims of the ring-like shell.

The thresholds were selected based on the following considerations. They correspond to the non-Gaussian tail of the line-ratio distribution presented in Figure 5, where we propose that non-interacting clouds should produce line ratios that are Gaussian distributed, and that the non-Gaussian parts of the distributions are caused by interaction. To justify our thresholds, we used RADEX to compute the line ratios as a function of gas temperature and density (see Figures 6 and 7). By overlaying the expected range of gas density and temperature found in the most recent numerical simulations (Clark et al. 2012), we derived the expected CO line ratios for non-interacting clouds. (Although the simulations in Clark et al. (2012) were carried out under a certain set of initial conditions, the predicted temperature-density relation for the molecular gas is relatively robust, i.e., largely independent of the initial conditions, and is applicable to our data. Additionally, due to the short cooling times, the density-temperature relation of the molecular gas should not depend on the initial conditions, e.g., whether the converging speed is fast or slow.) The highest ratios are located in regions with moderate or low optical depths (τ ≲ 5 for WCO(3−2)/WCO(2−1) ≳ 0.8 and τ ≲ 1 for WCO(2−1)/WCO(1−0) ≳ 1.2) in the temperature-density plane. Line ratios higher than this can be used to trace the interaction regions where the gas temperature and density are higher than normal.

5. Summary

Based on our previous multiwavelength observations (Zhang et al. 2013, 2016), the infrared dust bubble N131 is a typical bubble showing an expanding ring-like shell, which has been swept up by the energetic winds of ionizing stars inside. We here carried out new CO (3−2) observations toward the bubble N131 using the 15m JCMT, and also used our published CO (2−1) and CO (1−0) line data observed with the IRAM 30m telescope. We plotted their integrated intensity maps, which were convolved to the same angular resolution (22.5″). We find that the three different CO transition maps show a similar morphological structure.

In bubble N131, we used the RADEX code to model the kinetic temperature and H2 volume density, and we studied the relationship between them and the line ratios. The line ratios of WCO(3−2)/WCO(2−1) mostly range from 0.2 to 1.2 with a median of 0.54 ± 0.12, while the line ratios of WCO(2−1)/WCO(1−0) range from 0.5 to 1.6 with a median of 0.84 ± 0.15. The line width ratios between CO (3−2), CO (2−1), and CO (1−0) are close to unity.

To probe the interaction between the hot stellar winds and the cold molecular ring-like shell, we performed RADEX modeling to test the dependence of the line ratios on the underlying parameters such as temperature and density, and to predict the range of the CO integrated intensity ratios WCO(3−2)/WCO(2−1) and WCO(2−1)/WCO(1−0) if the gas temperatures and densities are predicted by the chemodynamics simulations. Line ratios far beyond the temperature-density threshold (Clark et al. 2012) could thus be used to trace the interactions.

From our observations, we find that the high CO integrated intensity ratios WCO(3−2)/WCO(2−1) and WCO(2−1)/WCO(1−0) are far beyond the prediction from the most recent numerical simulation without stellar feedback. As a result, these high line ratios can be used to trace the compressed areas in bubble N131. We suggest that the high CO integrated intensity ratios, such as WCO(3−2)/WCO(2−1) ≳ 0.8 and WCO(2−1)/WCO(1−0) ≳ 1.2, can be used as a tracer of gas-compressed regions with a relatively high temperature and density. We further showed that the non-Gaussian part of the line-ratio distribution can be used to trace the interaction between the molecular gas and the hot gas in the bubble.

Acknowledgements. We thank the anonymous referees for constructive comments that improved the manuscript. This work is supported by the National Natural Science Foundation of China Nos. 11703040, 11743007, and National Key Basic Research Program of China (973 Program) No. 2015CB857101. C.-P. Zhang acknowledges support by the MPG-CAS Joint Doctoral Promotion Program (DPP) and China Scholarship Council (CSC) in Germany as a postdoctoral researcher. The JCMT is operated by the EAO on behalf of NAOJ; ASIAA; KASI; CAMS as well as the National Key R&D Program of China (No. 2017YFA0402700). Additional funding support is provided by the STFC and participating universities in the UK and Canada.

References
Aravena, M., Carilli, C., Daddi, E., et al. 2010, ApJ, 718, 177
Aravena, M., Hodge, J. A., Wagg, J., et al. 2014, MNRAS, 442, 558
Arias, M., Domček, V., Zhou, P., & Vink, J. 2019, A&A, 627, A75
Beaumont, C. N. & Williams, J. P. 2010, ApJ, 709, 791
Beuther, H., Kramer, C., Deiss, B., & Stutzki, J. 2000, A&A, 362, 1109
Bolatto, A. D., Jackson, J. M., Israel, F. P., Zhang, X., & Kim, S. 2000, ApJ, 545, 234
Bolatto, A. D., Leroy, A., Israel, F. P., & Jackson, J. M. 2003, ApJ, 595, 167
Buckle, J. V., Hills, R. E., Smith, H., et al. 2009, MNRAS, 399, 1026
Carey, S. J., Noriega-Crespo, A., Mizuno, D. R., et al. 2009, PASP, 121, 76
Celis Peña, M., Paron, S., Rubio, M., Herrera, C. N., & Ortega, M. E. 2019, arXiv e-prints, arXiv:1905.08829
Churchwell, E., Povich, M. S., Allen, D., et al. 2006, ApJ, 649, 759
Churchwell, E., Watson, D. F., Povich, M. S., et al. 2007, ApJ, 670, 428
Clark, P. C., Glover, S. C. O., Klessen, R. S., & Bonnell, I. A. 2012, MNRAS, 424, 2599
Daddi, E., Dannerbauer, H., Liu, D., et al. 2015, A&A, 577, A46
Dame, T. M., Hartmann, D., & Thaddeus, P. 2001, ApJ, 547, 792
Draine, B. T. 2011, Physics of the Interstellar and Intergalactic Medium
Hasegawa, T. I., Mitchell, G. F., Matthews, H. E., & Tacconi, L. 1994, ApJ, 426, 215
Hou, L. G. & Gao, X. Y. 2014, MNRAS, 438, 426
Jayasinghe, T., Dixon, D., Povich, M. S., et al. 2019, MNRAS, 1691
Jiang, B., Chen, Y., Wang, J., et al. 2010, ApJ, 712, 1147
Kaufman, M. J., Wolfire, M. G., Hollenbach, D. J., & Luhman, M. L. 1999, ApJ, 527, 795
Kudo, N., Torii, K., Machida, M., et al. 2011, PASJ, 63, 171
Nishimura, A., Tokuda, K., Kimura, K., et al. 2015, ApJS, 216, 18
Peñaloza, C. H., Clark, P. C., Glover, S. C. O., & Klessen, R. S. 2018, MNRAS, 475, 1508
Polychroni, D., Moore, T. J. T., & Allsopp, J. 2012, MNRAS, 422, 2992
Qin, S.-L., Wang, J.-J., Zhao, G., Miller, M., & Zhao, J.-H. 2008, A&A, 484, 361
Sakamoto, S., Hasegawa, T., Hayashi, M., Handa, T., & Oka, T. 1995, ApJS, 100, 125
Schöier, F. L., van der Tak, F. F. S., van Dishoeck, E. F., & Black, J. H. 2005, A&A, 432, 369
Simpson, R. J., Povich, M. S., Kendrew, S., et al. 2012, MNRAS, 424, 2442
van der Tak, F. F. S., Black, J. H., Schöier, F. L., Jansen, D. J., & van Dishoeck, E. F. 2007, A&A, 468, 627
Watson, C., Povich, M. S., Churchwell, E. B., et al. 2008, ApJ, 681, 1341
Wilson, C. D., Walker, C. E., & Thornley, M. D. 1997, ApJ, 483, 210
Yoda, T., Handa, T., Kohno, K., et al. 2010, PASJ, 62, 1277
Zhang, C.-P., Li, G.-X., Wyrowski, F., et al. 2016, A&A, 585, A117
Zhang, C.-P., Wang, J.-J., & Xu, J.-L. 2013, A&A, 550, A117
Zhou, P., Chen, Y., Zhang, Z.-Y., et al. 2016, ApJ, 826, 34
Zhou, P., Li, J.-T., Zhang, Z.-Y., et al. 2018, ApJ, 865, 6

Appendix A: Line width ratio

Figure A.1 displays the line width ratio maps of δvCO(3−2)/δvCO(2−1) and δvCO(2−1)/δvCO(1−0). For clumps F, G, and H in the δvCO(3−2)/δvCO(2−1) map, the outer rims of the ring-like shell have a higher line width ratio than the inner rims, while this is reversed for clump I. For the other clumps, there is no visible line ratio gradient feature. In the δvCO(2−1)/δvCO(1−0) map, it seems that the higher line width ratios are located at the clump center positions, and the line ratio gradient is not evident.

Figure A.2 displays the line width ratio histograms of δvCO(3−2)/δvCO(2−1) and δvCO(2−1)/δvCO(1−0) for all pixels in Figure A.1. The line ratios of δvCO(3−2)/δvCO(2−1) mostly range from 0.6 to 1.8 with a median of 1.06 ± 0.18, while δvCO(2−1)/δvCO(1−0) ranges from 0.5 to 1.3 with a median of 0.86 ± 0.18. We can also derive that the median value of δvCO(3−2)/δvCO(1−0) is around 0.91. Comparing the line width ratios, we have δvCO(1−0) > δvCO(3−2) > δvCO(2−1) only for their median values. However, generally, the line width ratios between CO (3−2), CO (2−1), and CO (1−0) are close to unity.

Fig. A.1. Line width ratio maps of δvCO(3−2)/δvCO(2−1) (upper) and δvCO(2−1)/δvCO(1−0) (lower). The letters indicate the positions of the nine molecular clumps (A-I) in the bubble. The angular resolution is indicated in the bottom left corner.

Fig. A.2. Line width ratio histograms of δvCO(3−2)/δvCO(2−1) and δvCO(2−1)/δvCO(1−0) for all pixels in Figure A.1.
ai_researcher
5
Rethinking_the_Bounds_of_LLM_Reasoning_Are_Multi-Agent_Discussions_the_Key.pdf
Rethinking the Bounds of LLM Reasoning: Are Multi-Agent Discussions the Key? Qineng Wang1* Zihao Wang2* Ying Su2 Hanghang Tong3 Yangqiu Song2 1Zhejiang University 2HKUST 3UIUC [email protected], [email protected] {zwanggc,ysuay,yqsong}@cse.ust.hk 4 2 0 2 b e F 8 2 ] L C . s c [ 1 v 2 7 2 8 1 . 2 0 4 2 : v i X r a Abstract Recent progress in LLMs discussion suggests that multi-agent discussion improves the rea- In this work, we soning abilities of LLMs. reevaluate this claim through systematic experi- ments, where we propose a novel group discus- sion framework to enrich the set of discussion mechanisms. Interestingly, our results show that a single-agent LLM with strong prompts can achieve almost the same performance as the best existing discussion approach on a wide range of reasoning tasks and backbone LLMs. We observe that the multi-agent discussion per- forms better than a single agent only when there is no demonstration in the prompt. Further study reveals the common interaction mecha- nisms of LLMs during the discussion. 1 Introduction Large Language Models (LLMs) demonstrate strong abilities in language understanding and gen- eration (OpenAI, 2022, 2023; Touvron et al., 2023a; Zhang et al., 2022a; Chowdhery et al., 2022; Team et al., 2023). However, LLMs still fall short for reasoning tasks due to model deficiencies like hal- lucination and reasoning perspective mistakes (Xu et al., 2023a). To overcome these issues, numerous works have been proposed by simulating human reasoning. Inspired by Society of Mind (Minsky, 1988), multi-agent discussion frameworks such as Debate (Du et al., 2023), MAD (Liang et al., 2023), and ReConcile (Chen et al., 2023a) present a novel approach by involving multiple AI agents. Pow- ered by LLMs, these agents autonomously engage in discussions on given topics, improving the rea- soning abilities of LLMs by emulating the human discussion process. To further improve the perfor- mance, most of multi-agent discussion frameworks leverage task-specific examples, which are often termed as demonstrations (Min et al., 2022). This is *These authors contributed equally to this work. 1 Figure 1: Comparative performance of single-agent set- tings and multi-agent discussion frameworks on FOLIO- wiki dataset. based on the insights that LLMs can learn from the context demonstrations (Brown et al., 2020). We note that all these discussion frameworks claim that they outperform the conventional methods with a single agent, such as Chain-of-Thought (Wei et al., 2022). Figure 1 presents a comparison between single- agent settings and multi-agent discussion frame- works on FOLIO-wiki dataset (Zhang et al., 2023b; Han et al., 2022). In this figure, ‘Demo’ means that the tested single agent is provided with a demon- stration case, ‘Q-Desc.’ indicates that the single agent is provided with detailed question descrip- tion, while ‘Direct’ refers to the single agent with- out demonstrations. We observe that the single agent ‘Demo’ tends to reach a performance upper bound similar to that of all discussion frameworks, which will be further elaborated on later. This re- sult suggests that the previous claim is NOT fully established. Based on the observation from Figure 1, in this paper, we conduct systematic experiments to pro- vide in-depth analysis. As a newly emergent topic, the number of available discussion frameworks still remains limited. 
To provide a more comprehensive and detailed comparison between the single-agent settings and discussion frameworks, as demonstrated in Figure 3, we propose a new discussion framework named CMD, inspired by the human group discussion process. Our discussion pipeline considers multiple design aspects shown in Figure 2 (see Section 2.1 for more details). We further carry out a range of experiments over standard reasoning benchmarks (ECQA (Aggarwal et al., 2021), GSM8K (Cobbe et al., 2021), and FOLIO-wiki (Han et al., 2022; Zhang et al., 2023b)) using various configurations within different parts of this pipeline.

Figure 2: Our proposed design pipeline of multi-agent discussion frameworks. The pipeline operates by having agents start from a kick-start prompt; agents then discuss by obeying the rules defined in the algorithm and come to a result in the end. [Figure 2 panels: kick-start prompt templates for a debater told the proposition is [Correct] (Type 1), a debater told it is [Incorrect] (Type 2), and a judge (Type 3); agents A–D; the discussion algorithm; and example final results aggregated from the participants.]

We find that multi-agent discussion does not necessarily enhance reasoning when the prompt provided to an agent is sufficiently robust, which aligns with the observation from Figure 1. Additionally, our experimental results reveal that multi-agent discussion frameworks outperform single-agent setups when no demonstrations are provided. We also find that agents powered by weaker LLMs like Bard (Anil et al., 2023) can improve their reasoning performance with the assistance of stronger LLMs like Gemini Pro (Team et al., 2023) during interaction.

In summary, our analysis provides a new way of understanding when to use multi-agent discussion for reasoning. Our contributions are as follows: (1) We propose a new multi-agent discussion framework, CMD, which simulates the human group discussion process; (2) We observe that a single agent with a strong prompt matches the performance of a multi-agent discussion using equally robust prompts; (3) We identify two common types of discussion errors: judge mistakes and wrong answer propagation; (4) We find that without demonstrations, multi-agent discussions surpass the single agent; (5) In multi-LLM multi-agent discussions, agents with stronger LLMs help improve the performance of agents with weaker LLMs.

2 Preliminary

2.1 What is Multi-Agent Discussion?

Multi-agent discussion refers to an interactive setup where multiple agents, each powered by an LLM, engage in an autonomous dialogue. Each agent is given a prompt that outlines the necessary background knowledge and guides its behavior throughout the discussion. Once the topic is given, these agents can carry on the conversation independently.

Figure 2 illustrates a discussion design pipeline, which is structured into four main elements: the kick-start prompt, agents, algorithm, and result. The kick-start prompt supplies essential details such as background knowledge and the topic for discussion. This information is then fed to an agent, which operates on top of one of various LLMs. Next, the algorithm lays out the specific rules of discussion, including the number of rounds, the decision-making process (whether by a judge or through voting), and the structure of the discussion (such as hierarchical). Lastly, the result element describes how the final decisions are rendered and presented. We further provide a view from symmetry for prompt-related mechanisms (including discussion) in Appendix A.
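The four pipeline elements above can be pictured as plain data structures. Below is a minimal sketch of this decomposition in Python; it only illustrates the pipeline of Figure 2, and every class and field name here is our own choice rather than code from the paper.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class KickStartPrompt:
    background: str                 # background knowledge for the task
    topic: str                      # the topic/question under discussion

@dataclass
class Agent:
    name: str
    llm: Callable[[str], str]       # one LLM inference: prompt text -> response text
    history: List[str] = field(default_factory=list)

    def respond(self, message: str) -> str:
        # An agent call concatenates its conversation history with the new message.
        reply = self.llm("\n".join(self.history + [message]))
        self.history += [message, reply]
        return reply

@dataclass
class DiscussionAlgorithm:
    rounds: int = 3                 # number of discussion rounds
    decision: str = "vote"          # decision-making: "vote" or "judge"
    structure: str = "flat"         # discussion structure, e.g., "flat" or "hierarchical"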
Figure 3: Overview of the Conquer-and-Merge Discussion (CMD) framework. [Figure 3 panels: Stage 1 (group discussion) shows groups G1 and G2 exchanging answers over three rounds (e.g., "... Therefore, the proposition is True." / "... Now I agree that the answer is False."); Stage 2 (vote) shows Group 1 voting True and Group 2 voting False, with a secretary collecting "3 agents vote for True" vs. "3 agents vote for False"; Stage 3 (final decision) shows the tie resolved to True.]

2.2 Existing Discussion Frameworks

Previous works on multi-agent discussion frameworks have already covered several common discussion paradigms. In this paper, we consider the following discussion frameworks in our experiments: Debate (Du et al., 2023), MAD (Liang et al., 2023), and ReConcile (Chen et al., 2023a).

Debate replicates a simple turn-based discussion among agents. Initially, all agents are prompted to address the assigned task, and their respective responses are then incorporated into each agent's input for the subsequent round. Typically, this process consists of three rounds of discussion.

MAD introduces divergent thinking by involving two participants who discuss a task from opposing perspectives. A judge then steps in to evaluate which viewpoint seems more plausible or whether further discussion is needed. The discussion repeats until a consensus is reached or the judge favors a particular solution.

ReConcile implements a round-table discussion with agents powered by multiple LLMs. It reaches a decision through a weighted voting system: agents provide a confidence level for their answers, and these confidence levels are used as weights to cast votes and arrive at a final decision.

Despite these advancements, the number of multi-agent frameworks is still limited, and none has yet considered one of the most prevalent forms of discussion: group discussion.

3 CMD: Conquer-and-Merge Discussion

To provide a thorough comparison in the following experiments, we identify and rectify a gap in previous research, which has not suggested a group-discussion-based framework. To address this, we introduce a novel framework for multi-agent group discussion, referred to as CMD. In CMD, suppose there are $n$ agents $A = \{A_i\}_{i=1}^{n}$ discussing a question $Q$, and agent $A_i$ is powered by an LLM $L_i$. We use an array $H$ to store the history of responses. The agents discuss for $R$ rounds. During each turn $r$, every agent $A_i$ generates a response $(v_i, e_i) = \mathrm{CMD}(H \mid Q, r)$, where $v_i$ is the viewpoint and $e_i$ denotes the explanation. A detailed description is given in Appendix B.

3.1 Message-Passing Algorithm

For the synchronization of agent communications, we propose a message-passing algorithm. Previous works on multi-agent discussion frameworks focus exclusively on specific scenarios without designing a universal algorithm to synchronize agent messages across various discussions. Moreover, discussion forms vary in architecture, and agents can be powered by different LLMs, each of which usually has a different calling protocol. Therefore, we design a multi-threaded message-passing algorithm to overcome these issues. The algorithm establishes a receiving map $M$ for every agent to store the messages it should get in the next turn. For each message $m_j$ from agent $A_i$, the algorithm first confirms the receivers $A_r$, then adds $m_j$ into the receiving map $M$ for every agent $A_k \in A_r$. When the next turn begins, the algorithm automatically pushes the stored messages from $M$ to the corresponding agents. Please refer to Appendix B.4 for detailed pseudocode.
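The receiving-map idea can be sketched in a few lines. The following is our own simplification (reusing the illustrative Agent class from the Section 2.1 sketch); get_receivers stands in for the framework's routing rule, and the threading detail mirrors, but does not reproduce, the paper's multi-threaded implementation.

from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def run_turn(agents, receiving_map, get_receivers):
    """One synchronization turn: deliver stored messages, collect replies in
    parallel, and build the receiving map for the next turn."""
    next_map = defaultdict(list)

    def step(agent):
        inbox = receiving_map.get(agent.name, [])
        reply = agent.respond("\n".join(inbox))
        # Confirm the receivers of this message and store it for the next turn.
        for receiver in get_receivers(agent, reply):
            next_map[receiver.name].append(f"{agent.name}: {reply}")
        return reply

    with ThreadPoolExecutor() as pool:
        replies = list(pool.map(step, agents))
    return replies, next_map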
3.2 Three Stages of CMD

As illustrated in Figure 3, CMD consists of three stages: group discussion, voting, and the final decision stage. Typically, the final decision stage is reserved for instances of a tie and is otherwise unnecessary. Below is a breakdown of each stage of the CMD framework; a detailed introduction is given in Appendix B.3.

Stage 1: Group Discussion. During this stage, agents $A$ are divided into equally sized groups $G = \{G_k\}_{k=1}^{t}$. All agents are asked to solve task $Q$ through discussion. Each agent $A_i \in G_k$ can access all answers and explanations from the last round produced by agents $A_j \in G_k$ with $j \neq i$. In contrast, it can only see the answers, without explanations, from agents in other groups. After $R$ rounds of discussion, CMD moves to the voting stage.

Stage 2: Voting. When the discussion reaches the maximum number of rounds, all agents $A$ vote. Every vote is treated equally, so the result is determined by majority decision. In the event of a tie, CMD transitions to Stage 3; otherwise, the discussion formally concludes.

Stage 3: Final Decision. If a tie occurs, we introduce an extra agent $S$ in the role of a secretary to make the final decision. Each proposed answer is accompanied by an explanation selected from the agents holding that opinion and provided to the secretary for the final determination. To sum up, the final decision is made by either $\mathrm{Vote}(A)$ or $S(V, O \mid Q)$, where $V = \{v_i\}_{i=1}^{n}$ and $O = \{o_i\}_{i=1}^{n}$ represent the set of viewpoints and the set of explanations, respectively.
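Stages 2 and 3 amount to an unweighted majority vote with a secretary fallback. A minimal sketch follows; the secretary callable and the argument shapes are our own assumptions, not an implementation from the paper.

from collections import Counter

def cmd_final_decision(viewpoints, explanations, secretary=None):
    """viewpoints: {agent_name: v_i}; explanations: {agent_name: o_i}."""
    tally = Counter(viewpoints.values()).most_common()
    if len(tally) == 1 or tally[0][1] > tally[1][1]:
        return tally[0][0]                      # clear majority: Vote(A)
    if secretary is None:
        raise RuntimeError("tie without a secretary: escalate to representatives")
    # Stage 3: pair each tied viewpoint with one explanation from an agent holding it.
    tied = {v for v, c in tally if c == tally[0][1]}
    briefs = {v: next(explanations[a] for a, w in viewpoints.items() if w == v)
              for v in tied}
    return secretary(briefs)                    # S(V, O | Q)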
4 Experimental Setups

In the experiments, we contrast the single-agent setup with four multi-agent discussion frameworks: Debate, MAD, ReConcile, and CMD, under various prompt conditions. These methods are introduced in Sections 2.2 and 3.

4.1 Implementation Details and Metrics

Our experiments are primarily implemented with three advanced LLMs: ChatGPT-3.5 (OpenAI, 2022), Gemini Pro (Team et al., 2023), and Bard (Anil et al., 2023). In particular, we employ the gpt-35-turbo (0613) instance hosted on Azure OpenAI (https://oai.azure.com/) for ChatGPT-3.5, while the chat-bison-001 model represents Bard via the PaLM 2 architecture. The Gemini Pro and Bard interfaces operate through the Google MakerSuite API (https://ai.google.dev/). A uniform dialogue temperature of 0.25 is configured across LLMs in CMD to ensure consistency. For all multi-agent discussion frameworks, we set the maximum number of discussion rounds to 3. Our evaluations use accuracy to measure performance across all tasks.

4.2 Downstream Tasks

The frameworks are evaluated on a suite of reasoning tasks, including a commonsense reasoning task and two deductive reasoning tasks: (1) ECQA (Aggarwal et al., 2021), a QA dataset centered on commonsense knowledge; (2) GSM8K (Cobbe et al., 2021), a benchmark of math word problems; (3) FOLIO-wiki (Zhang et al., 2023b), a dataset adapted from FOLIO (Han et al., 2022) for both symbolic and natural-language deductive reasoning. In alignment with constraints imposed by computational resources, and following precedents set by earlier research (Du et al., 2023; Chen et al., 2023a; Liang et al., 2023), subsets of 100 instances are sampled from the test sets of both ECQA and GSM8K. For an in-depth analysis, we run comprehensive experiments on all 460 cases of the curated version of the FOLIO-wiki dataset, which removes flawed cases to ensure result authenticity.

5 Experiments on a Single LLM

In this section, we conduct our experiments using a single LLM, ChatGPT-3.5. To gain initial insights, we provide an in-depth analysis of the FOLIO-wiki dataset, examining both single-agent settings and multi-agent discussions, as detailed in Section 5.1. Afterward, we extend our experiments to two other datasets, GSM8K and ECQA, in Section 5.2. We also investigate common mistakes made in discussions through a case study in Section 5.3. Finally, we summarize our findings in Section 5.4.

5.1 Analysis of the FOLIO-wiki Dataset

We begin by examining whether multi-agent discussions are more effective than an agent using the strongest prompt on the FOLIO-wiki dataset. Drawing from previous research (Wei et al., 2022; Ling et al., 2023) on crafting prompts for reasoning tasks, we divide the prompt into three parts: a detailed question description, which provides in-depth background on the task; an answer format description, which instructs how an agent should reply; and a task-specific demonstration, which shows an example question-and-answer pair.

Table 1: Comparative performance (%) of single-agent settings and multi-agent discussions on FOLIO-wiki using ChatGPT-3.5, across five combinations of the prompt components: detailed question description (Q-Desc.), answer format description (A-Desc.), and demonstration (Demo.). Only the question itself is used as input when all prompt components are disabled (Setting 1); the final setting includes the demonstration. The number next to a framework is the number of agents.

Method | Setting 1 | Setting 2 | Setting 3 | Setting 4 | Setting 5
MAD (3) | 64.13 | 74.13 | 68.91 | 71.96 | 74.13
Debate (3) | 70.00 | 75.65 | 71.96 | 70.22 | 75.65
Debate (6) | 69.13 | 76.30 | 71.74 | 70.00 | 74.78
CMD (6) | 73.26 | 74.13 | 73.89 | 71.09 | 77.39
Single Agent | 70.22 | 73.26 | 71.30 | 73.91 | 76.09
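Concretely, each prompt setting in Table 1 corresponds to concatenating a subset of these components in front of the question. A minimal sketch of such assembly (the function and argument names are ours):

def build_prompt(question, q_desc=None, a_desc=None, demo=None):
    """Assemble an input prompt from the optional components of Section 5.1."""
    parts = []
    if q_desc:
        parts.append(q_desc)                  # detailed question description
    if a_desc:
        parts.append(a_desc)                  # answer format description
    if demo:
        parts.append("Example:\n" + demo)     # task-specific demonstration
    parts.append(question)                    # with all components off, this is 'Direct'
    return "\n\n".join(parts)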
For this task, we meticulously craft a demonstration for the input prompt. We start by labeling each premise. After that, we quote all the premises and relate them to every step in the reasoning process by using these labels, until the final step is reached. A labeled example is similar to the case in Table 5. We then test the performance of various combinations of these components for both single agents and discussions.

Single Agent. We conduct an evaluation of the different prompt components with a single agent; the results are outlined in Table 1. The table shows that for both single-agent settings and multi-agent discussions, the inclusion of a detailed question description or a task-specific demonstration enhances reasoning on the FOLIO-wiki dataset. The detailed question description is helpful because the possible answers for judging the correctness of a given proposition (true, false, or unknown) require clarity; without it, agents often struggle to differentiate between 'false' and 'unknown'. Most notably, the addition of a demonstration contributes the largest improvement, highlighting it as the most impactful component, in line with what prior studies have suggested (Min et al., 2022).

Multi-Agent Discussions. We assess the same prompt components within multi-agent discussion setups, and Table 1 reveals three key insights: (1) demonstrations and detailed question descriptions enhance multi-agent discussions; (2) despite similar overall performance, most multi-agent discussions do not surpass the single agent when a demonstration is introduced; (3) CMD performs better than both single-agent setups and other multi-agent discussion frameworks on the FOLIO-wiki dataset. Below is further analysis of our findings.

A Strong Single Agent is Comparable to Discussion Frameworks. Analyzing the experiments with single-agent settings and multi-agent discussions on the FOLIO-wiki dataset, we find that task-specific demonstrations significantly enhance a single agent's performance. Additionally, we establish that a well-supported agent can perform on par with discussion frameworks. Our analysis indicates that prompt engineering can boost the reasoning performance of large language models, with demonstrations pushing both single-agent and multi-agent settings toward the same performance upper bound.

5.2 Evaluation on All Tasks

In Section 5.1, we discovered that demonstrations play a pivotal role in enhancing performance on the FOLIO-wiki dataset. With this insight in mind, we simplify our evaluation to two prompt scenarios: without demonstrations (referred to as direct) and with demonstrations (referred to as demo). The results presented in Table 2 cover all tested reasoning tasks. The findings show: (1) with demonstrations, discussion frameworks and single-agent settings have comparable performance on average, consistent with our earlier observations; (2) without demonstrations, CMD tends to surpass single-agent settings, both on average and on most individual tasks.

When Does Discussion Work Better? Most multi-agent discussion frameworks, especially CMD, achieve better performance than single-agent settings when neither is supported by demonstrations. We believe this is because, during discussions, the input from other agents can introduce new perspectives, leading to a more thorough reasoning process. Therefore, this collaborative advantage makes multi-agent discussions a more effective option in scenarios lacking specific expert knowledge or detailed examples.
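Operationally, the direct/demo comparison in Table 2 is the same accuracy measurement run twice with different prompt settings. A sketch of such a harness, with method and the record format as our own placeholders:

def accuracy(method, examples, with_demo):
    """Accuracy (%) of a single-agent or discussion method on (question, gold) pairs."""
    hits = sum(int(method(q, with_demo=with_demo) == gold) for q, gold in examples)
    return 100.0 * hits / len(examples)

# e.g., accuracy(cmd_six_agents, folio_wiki_subset, with_demo=False)   # 'direct'
#       accuracy(cmd_six_agents, folio_wiki_subset, with_demo=True)    # 'demo'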
Table 2: Results (%) for all tasks under the direct (without demonstration) and demo (with demonstration) settings, using ChatGPT-3.5.

Method | ECQA (Direct/Demo) | GSM8K (Direct/Demo) | FOLIO-wiki (Direct/Demo) | Average (Direct/Demo)
Single Agent | 63.00 / 67.00 | 69.00 / 83.00 | 70.22 / 76.09 | 67.41 / 75.63
MAD (3 Agents) | 55.00 / 58.00 | 74.00 / 78.00 | 61.25 / 74.13 | 63.42 / 70.04
Debate (3 Agents) | 67.00 / 65.00 | 78.00 / 81.00 | 70.00 / 75.65 | 71.67 / 73.88
Debate (6 Agents) | 65.00 / 64.00 | 74.00 / 78.00 | 69.13 / 74.78 | 69.38 / 72.26
CMD (6 Agents) | 64.00 / 63.00 | 75.00 / 83.00 | 73.26 / 77.39 | 70.75 / 74.46

Why Do Discussion Frameworks Perform Differently on the Tested Tasks? Table 2 indicates that MAD is the least effective among the frameworks tested. We suspect this is because MAD incorporates a divergent-thinking design that asks agents to disagree with each other, which can sometimes hinder reasoning by introducing irrelevant information that complicates decision-making. We explore this further with an error analysis in Section 5.3. Additionally, we observed unexpected behaviors from other discussion frameworks under specific conditions or tasks. For example, Debate and CMD perform worse on the ECQA dataset when demonstrations are introduced. We hypothesize that because ECQA demands more commonsense knowledge than purely analytical reasoning, the single-source interaction stemming from a single LLM may cause agents to overthink: instead of clarifying misunderstandings, the increased dialogue introduces complexity without addressing the underlying knowledge gaps. In contrast to ECQA, datasets like GSM8K and FOLIO-wiki place greater emphasis on deductive reasoning. In these cases, the discussion process between agents can be beneficial, as it allows them to identify and address flaws in each other's reasoning through interaction.

5.3 Two Discussion Error Types: A Case Study

Our experiments show that multi-agent discussions can sometimes reach incorrect conclusions on questions that a single agent answers correctly. Figure 4 presents an example from the FOLIO-wiki dataset illustrating this point: a single agent provides the correct answer, but multi-agent discussions lead to an erroneous result. We identify two unique types of errors in multi-agent discussions: (1) Judge Mistake: this occurs when an agent serves as a judge to decide on the final answer. If responses vary among agents, the judge might select the incorrect option as the final verdict. Frameworks like MAD and CMD are susceptible to this error, particularly when decisions are made during a tie. (2) Wrong Answer Propagation: this type of error happens when an agent, influenced by the input from others, deviates from its initially correct answer and adopts an incorrect consensus, spreading the mistake further in the discussion. This is the most common mistake a multi-agent discussion can make, even when most of the initial answers are correct.

Figure 4: Two common types of errors that may occur in multi-agent discussions: judge mistake and wrong answer propagation. These issues can lead to circumstances where a multi-agent discussion reaches an incorrect conclusion even though a single agent arrives at the correct one. [Figure 4 example. Premises: 1. All advocates of high tariff rates are Republicans. 2. Some Republicans are not conservatives. Proposition: Some conservatives are advocates of high tariff rates. The single agent answers Unknown; in Error Type 1 a judge sides with a debater who claims the answer is false, and in Error Type 2 an agent that initially answered unknown is talked into agreeing that the answer is false.]
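Given per-round transcripts, the two error types can be labeled mechanically. The sketch below shows one way one might do so; the record format and function name are our own assumptions, not tooling from the paper.

def label_discussion_error(gold, answers_by_round, judge_verdict=None):
    """answers_by_round: {agent: [answer in round 1, round 2, ...]}."""
    finals = [ans[-1] for ans in answers_by_round.values()]
    if judge_verdict is not None and judge_verdict != gold and gold in finals:
        return "judge mistake"             # a correct final answer existed, judge chose wrong
    flipped = [a for a, ans in answers_by_round.items()
               if ans[0] == gold and ans[-1] != gold]
    if flipped:
        return "wrong answer propagation"  # agents abandoned initially correct answers
    return "other"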
5.4 Summary

In this section, we evaluated various prompt component combinations for both single-agent settings and multi-agent discussions. Our findings suggest that multi-agent discussions are on par with a single agent when both have access to demonstrations. However, in the absence of demonstrations, multi-agent discussions generally outperform a single agent, making them a better option in scenarios where expert knowledge or detailed examples are insufficient. We also highlighted unusual outcomes and presented a case study identifying two frequent errors in multi-agent discussions: judge mistakes and wrong answer propagation.

6 Experiments on Multiple LLMs

In this section, we expand our experimental scope from a single LLM to multiple LLMs for both single-agent settings and multi-agent discussion frameworks, which allows us to test the validity of our previous findings in multi-LLM scenarios. Specifically, we assess the performance of agents powered by three advanced LLMs: ChatGPT-3.5, Gemini Pro, and Bard. In Section 6.1, we compare the performance of three single-agent configurations, each using a different LLM, against multi-agent discussions leveraging all three LLMs. Our round-level analysis in Section 6.2 yields another insight: an agent powered by a stronger LLM can enhance the performance of an agent powered by a less capable LLM.

6.1 Validating the Findings in Multi-LLM Scenarios

In this section, we evaluate the performance of three single-agent settings, each supported by a different LLM, and two multi-agent discussion frameworks, ReConcile and CMD, across all tasks. We use two prompt settings for each task: one with demonstrations and one without. For the CMD framework, we organize six agents into two groups of three, with each agent in a group powered by one of the LLMs: ChatGPT-3.5, Gemini Pro, or Bard. The results presented in Table 3 support our previous findings from Section 5, with some slight modifications.

A Strong Single Agent is Comparable to Discussion Frameworks. From the experimental results in Table 3, we find that discussion frameworks perform comparably to a single agent powered by Gemini Pro when both are provided with demonstrations. This consolidates our earlier conclusion that a 'strong' single agent, supported by both a well-designed prompt and a SOTA LLM, can rival the performance of a multi-agent discussion framework.

Discussion Frameworks Outperform Single Agents with No Demonstration. Table 3 reveals that, in multi-LLM scenarios, multi-agent discussions outperform single agents when demonstrations are not provided. This outcome aligns with our previous observations in single-LLM settings. Furthermore, CMD and ReConcile demonstrate similar performance when neither has access to demonstrations and both are powered by the same LLMs. This indicates that our findings are consistent across different multi-LLM multi-agent discussion frameworks.

Figure 5: Round-level performance of each LLM in multi-agent discussions on the FOLIO-wiki dataset. (a) ReConcile (discussion); (b) CMD (group discussion).
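The round-level curves in Figure 5 can be produced by tallying each agent's answer per round, grouped by backbone LLM. A sketch, assuming transcripts stored as (llm, round, answer, gold) records of our own design:

from collections import defaultdict

def round_accuracy_by_llm(records):
    """Return {(llm, round): accuracy} from (llm, round, answer, gold) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for llm, rnd, answer, gold in records:
        totals[(llm, rnd)] += 1
        hits[(llm, rnd)] += int(answer == gold)
    return {key: hits[key] / totals[key] for key in totals}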
Table 3: Results (%) from single-agent settings and discussion frameworks across multiple LLMs on all tasks, evaluated with two types of prompts: without demonstrations (Direct) and with demonstrations (Demo).

Method | ECQA (Direct/Demo) | GSM8K (Direct/Demo) | FOLIO-wiki (Direct/Demo) | Average (Direct/Demo)
Single Agent (Bard) | 66.00 / 65.00 | 47.00 / 54.00 | 70.00 / 71.96 | 61.00 / 63.65
Single Agent (Gemini Pro) | 74.00 / 75.00 | 75.00 / 81.00 | 74.13 / 79.78 | 74.38 / 78.59
Single Agent (ChatGPT-3.5) | 63.00 / 67.00 | 69.00 / 83.00 | 70.22 / 76.09 | 67.41 / 75.63
Discussion: ReConcile (Bard, Gemini, ChatGPT) | 70.00 / 71.00 | 78.00 / 83.00 | 80.34 / 81.09 | 76.11 / 78.36
Group Discussion: CMD (Bard, Gemini, ChatGPT) | 73.00 / 72.00 | 78.00 / 82.00 | 79.78 / 81.96 | 76.93 / 78.66

6.2 Enhancing Agents with Weaker LLMs through Support from Stronger LLMs

As shown in Table 3, single agents using Bard are the least effective on reasoning tasks such as those in the FOLIO-wiki dataset. However, multi-LLM multi-agent frameworks remain competitive. To understand how a less advanced LLM like Bard behaves during multi-agent discussions, we further study the round-by-round performance of each LLM engaged in the discussions. Figure 5 demonstrates that agents with less capable LLMs like Bard and ChatGPT-3.5 gradually enhance their performance over consecutive rounds with the support of the more robust LLM, Gemini Pro. We infer that throughout the discussion, Gemini Pro helps bridge the gaps in knowledge and reasoning of the less advanced LLMs, guiding them toward a stronger line of reasoning. Notably, although there is a slight drop in the performance of Gemini Pro during the second round, it recovers swiftly and largely maintains its superior performance. Again, this finding applies to different multi-LLM discussions, specifically to both ReConcile and CMD.

7 Related Work

7.1 Prompting LLMs for Reasoning

Recent research has made great progress in building powerful LLMs (Brown et al., 2020; OpenAI, 2022, 2023) and in exploring strategies for adapting LLMs to many downstream tasks via prompt engineering.

By training on different textual knowledge sources and at different parameter sizes, various LLMs equipped with different reasoning capabilities have been constructed, such as OPT (Zhang et al., 2022a), LLaMA (Touvron et al., 2023a,b), BLOOM (Scao et al., 2022), and PaLM (Chowdhery et al., 2022; Anil et al., 2023). Recently, Gemini Pro (Team et al., 2023) extended the capabilities of LLMs to the multi-modal setting.

Numerous advancements have been made in improving the reasoning abilities of LLMs with prompt engineering. Chain of Thought (CoT) (Wei et al., 2022; Kojima et al., 2022) is a linear problem-solving approach where each step builds upon the previous one. Fu et al. (2022) propose applying CoT to multi-step reasoning tasks. To automate CoT, Auto-CoT (Zhang et al., 2022b) constructs demonstrations by sampling diverse questions and generating reasoning chains. Active-Prompt (Diao et al., 2023) aims to select the most uncertain questions for task-specific annotations.
Other prompt strategies designed to enhance reasoning in LLMs include PS prompting (Wang et al., 2023), which breaks tasks into subtasks; ToT (Yao et al., 2023a), which expands the reasoning process by considering multiple reasoning paths and self-evaluating choices; the effective GoT (Yao et al., 2023b), which frames thoughts as graphs; Natural Program (Ling et al., 2023), which helps improve deductive reasoning tasks; and re-reading prompting (Xu et al., 2023b), which revisits the question information embedded within input prompts.

7.2 Multi-Agent Discussion for Reasoning with LLMs

Multi-agent discussion utilizes multiple LLMs as agents to collectively discuss and reason about given problems in an interactive way. Abundant research has explored how to improve the reasoning ability of a single LLM, while multi-agent discussion among LLMs is still under exploration. The Multi-Agent Debate framework, introduced by Du et al. (2023), establishes a mechanism for symmetric discussions among agents. During the same period, the MAD (Multi-Agent Debate) framework (Liang et al., 2023) introduced an asymmetric mechanism design, assigning different roles (debater and judge) asymmetrically. Other similar works include Chan et al. (2023). The ReConcile framework (Chen et al., 2023a) also exemplifies an asymmetric discussion mechanism by involving different LLMs and using a weighted voting mechanism. To understand discussion more deeply, Zhang et al. (2023a) aim to explain such collaboration mechanisms from a social-psychology view.

Unlike these works, we aim to explore the potential effects of prompting contents on the discussion process via our multi-agent group discussion framework CMD.

8 Conclusion

In this paper, we re-examine the claim that multi-agent discussions are superior to a single agent on reasoning tasks by conducting systematic experiments. We introduce a novel framework, CMD, for a comprehensive and fair assessment. Through experiments on standard benchmarks, we find that: (1) a single agent with a strong prompt and powered by a strong LLM achieves performance comparable to multi-LLM multi-agent discussions; (2) in the absence of demonstrations, multi-agent discussion frameworks outperform single agents on most tasks; (3) when multiple LLMs are involved in multi-agent discussions, agents with stronger LLMs can enhance the performance of agents with weaker LLMs as the discussion progresses.

9 Ethical Considerations

Our study employs publicly available datasets and LLMs accessed via official APIs, ensuring responsible and ethical use. Specifically, our ethical considerations can be summarized as follows:

Public Datasets. The datasets we use are designed for academic research. No personal data has been processed.

Licensed API Usage. Our use of LLMs complies with the API usage policies, maintaining fair-use standards and respecting intellectual property.

Transparency. We provide detailed experimental methods to allow for result reproduction and to encourage transparent scientific practices.

10 Limitations

Our research offers comprehensive experiments studying the performance of a strong single agent versus multi-agent discussions. However, several aspects highlighted below can be further refined and explored in future work.

Enhancing Agent Complexity. Currently, all discussion frameworks, including CMD, treat an LLM session as an AI agent. This perspective simplifies the concept of an LLM-based AI agent as defined in the literature (Weng, 2023).
By integrating more sophisticated techniques such as Tree-of-Thought (Yao et al., 2023a) or Cumulative Reasoning (Zhang et al., 2023b), or by incorporating external tools or knowledge bases, we could potentially improve the overall reasoning performance of multi-agent discussions.

Expanding Task Diversity. While our study mainly focuses on reasoning tasks for assessing both single-agent settings and multi-agent discussions, the adaptive nature of discussions allows for a broader range of applications. Future research could explore the use of agent discussions in diverse scenarios such as real-world strategic planning or the integration of agents into interactive gaming environments.

Experimenting with Additional LLMs. Due to computational and financial constraints, our investigation is limited to three LLMs: Bard, Gemini Pro, and ChatGPT-3.5. Expanding our analysis to include additional LLMs could provide a more extensive understanding of the capabilities and variances across different language models, offering valuable insights into the generalizability and scalability of our findings on multi-agent discussion frameworks.

References

Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, and Dinesh Garg. 2021. Explanations for CommonsenseQA: New dataset and models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3050–3065.

Philip W Anderson. 1972. More is different: Broken symmetry and the nature of the hierarchical structure of science. Science, 177(4047):393–396.

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. PaLM 2 technical report. arXiv preprint arXiv:2305.10403.

Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al. 2023. Graph of Thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.

Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2023. ChatEval: Towards better LLM-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201.

Justin Chih-Yao Chen, Swarnadeep Saha, and Mohit Bansal. 2023a. ReConcile: Round-table conference improves reasoning via consensus among diverse LLMs. arXiv preprint arXiv:2309.13007.

Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. 2023b. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Constantinos Daskalakis and Seth Matthew Weinberg. 2012. Symmetries and optimal multi-dimensional mechanism design. In Proceedings of the 13th ACM Conference on Electronic Commerce, pages 370–387.

Shizhe Diao, Pengcheng Wang, Yong Lin, and Tong Zhang. 2023. Active prompting with chain-of-thought for large language models. arXiv preprint arXiv:2302.12246.

Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325.

Dheeru Dua, Shivanshu Gupta, Sameer Singh, and Matt Gardner. 2022. Successive prompting for decomposing complex questions. arXiv preprint arXiv:2212.04092.

Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. 2022. Complexity-based prompting for multi-step reasoning. arXiv preprint arXiv:2210.00720.

Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, et al. 2022. FOLIO: Natural language reasoning with first-order logic. arXiv preprint arXiv:2209.00840.

Shima Imani, Liang Du, and Harsh Shrivastava. 2023. MathPrompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398.

Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2022. Decomposed prompting: A modular approach for solving complex tasks. arXiv preprint arXiv:2210.02406.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213.

Jean-Jacques Laffont and David Martimort. 2000. Mechanism design with collusion and correlation. Econometrica, 68(2):309–342.

Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2022. On the advance of making language models better reasoners. arXiv preprint arXiv:2206.02336.

Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. 2023. Encouraging divergent thinking in large language models through multi-agent debate. arXiv preprint arXiv:2305.19118.

Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su. 2023. Deductive verification of chain-of-thought reasoning. arXiv preprint arXiv:2306.03872.

Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, and Andrew M Dai. 2022. Mind's Eye: Grounded language model reasoning through simulation. arXiv preprint arXiv:2210.05359.

Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842.

Aman Madaan, Niket Tandon, Peter Clark, and Yiming Yang. 2022. Memory-assisted prompt editing to improve GPT-3 after deployment. arXiv preprint arXiv:2201.06009.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2023. Self-Refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651.

Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? arXiv preprint arXiv:2202.12837.

Marvin Minsky. 1988. Society of Mind.
Simon and Schuster.

OpenAI. 2022. ChatGPT. https://openai.com/blog/chatgpt.

OpenAI. 2023. GPT-4 technical report.

Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A Smith, and Mike Lewis. 2022. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350.

Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. BLOOM: A 176B-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100.

Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik R Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning. In Thirty-seventh Conference on Neural Information Processing Systems.

Kristopher Tapp. 2021. Symmetry. Springer.

Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. 2023. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.

Lilian Weng. 2023. LLM-powered autonomous agents. lilianweng.github.io.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. 2023a. Tree of Thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601.

Yao Yao, Zuchao Li, and Hai Zhao. 2023b. Beyond chain-of-thought: Effective graph-of-thought reasoning in large language models. arXiv preprint arXiv:2305.16582.

Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. 2022. STaR: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35:15476–15488.

Jintian Zhang, Xin Xu, and Shumin Deng. 2023a. Exploring collaboration mechanisms for LLM agents: A social psychology view. arXiv preprint arXiv:2310.02124.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022a. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068.

Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew Chi-Chih Yao. 2023b.
Cumulative reasoning with large language models. arXiv preprint arXiv:2308.04371.

Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, and Jun Zhao. 2022. Large language models are reasoners with self-verification. arXiv preprint arXiv:2212.09561.

Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2022b. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493.

Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, and Xuanjing Huang. 2023. Self-Polish: Enhance reasoning in large language models via problem refinement. arXiv preprint arXiv:2305.14497.

Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, and Erik Cambria. 2023a. Are large language models really good logical reasoners? A comprehensive evaluation from deductive, inductive and abductive views. arXiv preprint arXiv:2306.09841.

Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu, Hongbo Xu, Guodong Long, and Jian-guang Lou. 2023b. Re-reading improves reasoning in language models. arXiv preprint arXiv:2309.06275.

Tianci Xue, Ziqi Wang, Zhenhailong Wang, Chi Han, Pengfei Yu, and Heng Ji. 2023. RCoT: Detecting and rectifying factual inconsistency in reasoning by reversing chain-of-thought. arXiv preprint arXiv:2305.11499.

Zhicheng Yang, Jinghui Qin, Jiaqi Chen, Liang Lin, and Xiaodan Liang. 2022. LogicSolver: Towards interpretable math word problem solving with logical prompt-enhanced learning. arXiv preprint arXiv:2205.08232.

A Discussion Engineering and Agent Symmetry

This section introduces a general framework for understanding the discussion engineering of large language models. Let $L$ be a Large Language Model (LLM) and $x$ the input text; forward inference generates a response of surprising quality, written as

$\hat{y} = L(x)$.  (1)

Given the high price of obtaining one large language model, an essential research question is how to unlock the reasoning capability of large language models so that they perform better on various tasks. One key direction is prompt engineering, where the format and content of the input texts are decorated to improve performance on downstream tasks. Specifically, for a task $\mathcal{T}$, the original input $x$ is decorated as a task-specific prompt input $p(x; \mathcal{T}, L)$. The function $p(\cdot; \mathcal{T}, L)$ is called a prompt decorator for $\mathcal{T}$ and $L$. The output $\hat{y}$ is then generated by prompt engineering:

$\hat{y} = L(p(x; \mathcal{T}, L))$.  (2)

Another way to improve the reasoning capability beyond $L(x)$ is to leverage multiple inferences of large language models, leading to a far larger space for mechanism design. The output $\hat{y}$ derived by mechanism $\mathcal{M}$ is

$\hat{y} = M\big(x; \{(L_i, p_i(\cdot; \mathcal{T}, L_i))\}_{i=1}^{n}\big)$,  (3)

where $M$ is the mechanism pipeline, $L_i$ is the $i$-th inference of the LLM, and $p_i(\cdot; \mathcal{T}, L_i)$ is the corresponding prompt decorator. A mechanism $\mathcal{M}$ is designed by jointly considering $\mathcal{M} = (M, \{(L_i, p_i)\}_{i=1}^{n})$, i.e., how to organize prompt-decorated LLM inferences into the mechanism pipeline $M$.

Discussion engineering, characterized by the mechanism $\mathcal{M} = (M, \{(L_i, p_i)\}_{i=1}^{n})$, involves careful designs at three levels: (1) the ($i$-th) inference $L_i$ of an LLM, (2) the prompt decorator $p_i$ for the inference $L_i$, and (3) the mechanism $M(\cdot, \{L_i, p_i\})$ that organizes the inferences of LLMs. We further introduce the symmetry of the multi-agent system of LLMs to demonstrate the complexity of $\mathcal{M}$. Under our framework, several examples are discussed.

A.1 Agent Symmetry in Discussion Engineering

Symmetry and its breaking is a fundamental concept widely investigated in science (Anderson, 1972).
Symmetry also plays an important role in the context of mechanism design with multiple agents (Laffont and Martimort, 2000; Daskalakis and Weinberg, 2012). Here we introduce a formal description to justify the agent symmetry of the mechanism $\mathcal{M}$.

Definition 1 (Computational graph). Given the mechanism $\mathcal{M} = (M, \{(L_i, p_i)\}_{i=1}^{n})$, let $G = (V \cup \{x, y\}, E)$ be the directed graph whose node set $V \cup \{x, y\}$ includes the inference operations of the LLM and the computational input and output nodes. $x$ is the node that emits the input text, $y$ is the node that takes the inputs from one or multiple inferences and emits the final output, and $v_i \in V$ is the inference node that describes the $i$-th LLM inference with $(L_i, p_i)$. A directed edge $(v_i, v_k) \in E$ indicates that the output of inference node $v_i$ is an input of inference node $v_k$.

We note that the graph $G$ contains all the information necessary to determine the mechanism $\mathcal{M}$. We next describe how LLM agents are associated with the computational graph. To make an LLM session conversational, the entire conversation history is always concatenated as part of the input of each inference call, resulting in many additional connections in the computational graph. The presence of agents allows us to conceptually assume that the agents "know" the information in the conversation history, so the complexity of the graph can be largely reduced. In this paper, we consider the computational graph with agents, and the connections that feed conversation history to the inference nodes are ignored for simplicity. Let $A = \{A_1, ..., A_m\}$ be the set of $m$ discussion agents. The inference call of an agent also concatenates its conversation history, denoted as $A_j(x)$.

Definition 2 (Agent assignment). Let $A = \{A_1, ..., A_m\}$ be the set of $m$ discussion agents. Each inference node $v_i \in V$ is assigned to an agent $A_j \in A$. Let $[n]$ denote the integers from 1 to $n$; $\alpha : [n] \mapsto [m]$ is the assignment map that assigns the $i$-th inference to the $\alpha(i)$-th agent. Let $P \in \{0, 1\}^{n \times m}$ be the agent-inference assignment matrix, such that

$P_{ij} = \begin{cases} 1 & j = \alpha(i), \\ 0 & j \neq \alpha(i). \end{cases}$  (4)

The assignment matrix $P$ is equivalent to the assignment map $\alpha$. A multi-agent discussion mechanism is then defined by the triple $D = (\mathcal{M}, A, \alpha) = (G, A, \alpha)$, which is the central object of discussion engineering. We can further introduce the concept of coloring of the computational graph.

Definition 3 (Agent coloring of the computational graph). Given the multi-agent discussion $D = (G, A, \alpha)$, each inference node $v_i$ is described by $(L_i, p_i) = (A_{\alpha(i)}, p_i)$. Let $c_i^D = (A_{\alpha(i)}, p_i)$ be the color of $v_i$, let $C^D(v_i) = c_i^D$ be the color mapping of nodes, and let $\mathcal{C}^D = (V, E, C^D)$ be the colored computational graph.

The agent symmetry is then established by permutation operations over the agent set $A$, and we can discuss symmetry via agent permutations, beginning with the concept of mechanism invariance.

Definition 4 (Mechanism invariance under agent permutation). Given the multi-agent discussion $D = (G, A, \alpha)$ and a permutation mapping $\pi : [m] \mapsto [m]$, the new discussion $D_\pi = (G, A, \pi \circ \alpha)$ is derived by applying $\pi$ to the agents. We say a discussion $D$ is invariant under $\pi$ if and only if there is an isomorphism $\phi$ between the two colored graphs $\mathcal{C}^D = (V, E, C^D)$ and $\mathcal{C}^{D_\pi} = (V, E, C^{D_\pi})$ such that (1) $\phi$ is a bijection, (2) $\forall (v_i, v_j) \in E$, $(\phi(v_i), \phi(v_j)) \in E$, and (3) $\forall v \in V$, $C^D(v) = C^{D_\pi}(\phi(v))$.
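Definitions 1–4 translate directly into a small data structure plus a permutation check. The sketch below is our own toy encoding (0-indexed, unlike the 1-indexed notation above) and is not code from the paper; the invariance check uses the identity map as the candidate isomorphism, which is a sufficient but not necessary condition for Definition 4.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class ColoredGraph:
    nodes: List[int]                    # inference nodes v_i
    edges: List[Tuple[int, int]]        # (v_i, v_k): output of v_i feeds v_k
    color: Dict[int, Tuple[int, str]]   # v_i -> (assigned agent alpha(i), decorator p_i)

def assignment_matrix(alpha: Dict[int, int], n: int, m: int) -> List[List[int]]:
    """The 0/1 agent-inference assignment matrix P of Definition 2."""
    return [[1 if alpha[i] == j else 0 for j in range(m)] for i in range(n)]

def invariant_under(graph: ColoredGraph, pi: Dict[int, int]) -> bool:
    """Check Definition 4 with phi = identity: permuting agents by pi
    must leave every node's color unchanged."""
    recolored = {v: (pi[agent], dec) for v, (agent, dec) in graph.color.items()}
    return recolored == graph.color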
We can also define model invariance to justify symmetry in the multi-model setting (Chen et al., 2023a).

Definition 5 (Model invariance under agent permutation). Given the agents $A = \{A_1, ..., A_m\}$ and their underlying LLMs $\{L_1, ..., L_m\}$ (i.e., ChatGPT, GPT-4, etc.), the permutation $\pi : [m] \mapsto [m]$ is invariant if $L_i = L_{\pi(i)}$ for $i = 1, ..., m$.

Following the definition of invariance, the symmetry group of agents is naturally defined following standard algebra (Tapp, 2021). The largest possible symmetry group for a discussion of $m$ agents is the group $S_m$. However, $S_m$ is not always the symmetry group of a given multi-agent discussion mechanism $D$; failure to achieve the largest symmetry group can be due to asymmetry in the mechanism or asymmetry in the models, respectively. Furthermore, for asymmetric mechanisms, there are two major ways of symmetry breaking. Referring to the three conditions for establishing the isomorphism, dissatisfaction of condition (2) implies asymmetry in the computational graph, i.e., in the mechanism pipeline $M$, and dissatisfaction of condition (3) implies asymmetry in the prompt decorators.

B CMD: Conquer-and-Merge Discussion Framework

B.1 Motivation

The Debate framework proposes that an increased number of agents and discussion rounds results in improved performance in multi-agent discussions (Du et al., 2023). The ReConcile framework further asserts that a greater number of discussion rounds leads to a higher level of consensus among agents, and that higher consensus yields more accurate discussion outcomes (Chen et al., 2023a). However, as the number of agents in a discussion increases, so does the overhead in each round of discussion: 1. Each agent has to read more viewpoints from the others, resulting in a sharp rise in input tokens. 2. The increase in input tokens puts significant pressure on language models with context-length limitations. Inspired by real-life group discussions, we propose a variant of the Debate framework called CMD. Figure 3 shows an overview of our method.

B.2 Problem Definition

Assume that $n$ agents $A = \{A_i\}_{i=1}^{n}$ are discussing a given debate task $Q$, and each agent is a session created from an LLM. Suppose that the maximum number of discussion rounds is $R$, the current round is $r$, the current discussion level is $L$, and the current set of active agents is $A'$. For each agent $A_i \in A'$, the answer it generates is $Ans_i = (v_i, e_i)$, where $v_i$ denotes the viewpoint and $e_i$ the explanation. The debate history is $H^{(r)} = \bigcup_{i=1}^{t} Ans_i$, where $t = |A'|$. $Ans_i^{(r)}$ is generated through the input prompt $A_i(Q, Ans_i^{(r-1)}, O_i^{(r-1)})$, where $O_i^{(r-1)}$ denotes the opinions generated by $A_{j \neq i}$ in the $(r-1)$-th round. The formal definition of $O_i^{(r)}$ is

$O_i^{(r)} = \bigcup_{j=1, j \neq i}^{|H^{(r-1)}|} \big(v_j,\; e_j \cdot \mathbb{1}\{\mathrm{Group}^{(L)}(A_i) = \mathrm{Group}^{(L)}(A_j)\}\big)$.  (5)

This means that $A_i$ receives all information from its group members, while it receives only viewpoints from agents that are not in the same group. Our goal is to obtain the final result $a$ through an unweighted vote. In the event of a tie, either $a$ is decided by a secretary $S$, or representatives from each group proceed to the next level of discussion until the tie is resolved; each discussion group has only one representative. Note that $Q$ can be described differently for each $A_i$; for example, different agents may be asked to hold different views at first.
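Equation (5) says that explanations survive only within a group. A direct sketch of this filtering (the group assignments and record shapes are our own illustration):

def visible_opinions(i, history, group_of):
    """O_i: full (viewpoint, explanation) pairs from group members,
    viewpoint-only records from everyone else (Eq. 5)."""
    opinions = []
    for j, (v_j, e_j) in history.items():
        if j == i:
            continue
        same_group = group_of[j] == group_of[i]
        opinions.append((v_j, e_j if same_group else None))
    return opinions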
B.3 CMD Stages

Generally, there are three stages in CMD. In stage 1, the group map and all states are initialized, and all participants generate their initial answers. In stage 2, the participants continue the discussion in groups during the remaining rounds. When the discussion reaches the maximum number of rounds, it moves to stage 3, where all participants vote to obtain the final answer.

• Stage 1: Group Discussion Initialization. In this stage, $A'$ is initialized to all participants. All participating agents are initially assigned names in uppercase letters and are grouped in sets of three. For getMaxLevel, if secretary mode is on, the maximum level is set to 1; otherwise, it is determined by the current number of agents. For genGroupMap, the algorithm automatically generates the groups for all levels; higher-level groups are formed among the current representatives. All active agents generate their first response $Ans_i$.

• Stage 2: Multi-Round Discussion. In this stage, the active agents continue to discuss in the remaining rounds. Assume the current round number is $r$; for each agent $A_i \in A'$:

1. The algorithm updates the opinion history $O_i^{(r-1)}$ from $H$. For each record $h_j$ with sender $A_j$ ($j \neq i$) stored in $H$: if $A_j$ and $A_i$ are in the same group, $O_i^{(r-1)}$ records both the viewpoint and the explanation from $h_j$; otherwise, it records only the viewpoint. After traversing all records in $H$, $O_i$ first gathers all opinions of agents from other groups, then gathers local group explanations based on the group members' opinions.

2. To save tokens, the last round's history $H$ is reset. Then the new prompt $p_i^{(r)}$ is generated based on $(Q, Ans_i^{(r-1)}, O_i^{(r-1)})$. $A_i$ makes a response $Ans_i = (v_i, e_i)$ when given $p_i^{(r)}$. Finally, $Ans_i$ is appended to the current round history $H$ for further use.

3. Repeat 1 and 2 until the maximum number of discussion rounds is reached.

• Stage 3: Vote for the Final Result. In this stage, all agents vote based on their final viewpoints stored in the history $H^{(r)}$ to obtain the result $a$. If there is no tie, $a$ is the final result and the discussion is over. If there is a tie and secretary mode is on, the final result is obtained through $S$ based on the viewpoints and explanations from all sides.

Algorithm 1 CMD: A Conquer-and-Merge Style Multi-Agent Discussion Framework
Require: debate task Q, maximum discussion rounds R, agents A = {A_i}_{i=1}^n, secretary S
Ensure: final result a

function CMD(Q, R, A, S)
    A' ← A                               ▷ Initialize current active agents
    Ng ← n/3                             ▷ Initialize group numbers, every 3 agents a group
    r ← 0, L ← 0                         ▷ Initialize current round and current discussion level
    Lmax ← getMaxLevel(A, S)
    Mg ← genGroupMap(A, Ng, Lmax)        ▷ Assign agents to groups based on discussion levels
    Ans_i ← ∅, O_i ← ∅, H ← []           ▷ Initialize answer, others' opinions, and history
    while L ≤ Lmax do
        while r ≤ R do                   ▷ Stage 2
            if r > 0 then
                update O_i from H
            end if
            H ← []
            for each A_i ∈ A' do
                Ans_i ← A_i(Q, Ans_i, O_i)   ▷ Ans_i = (v_i, e_i)
                H ← H + [Ans_i]
            end for
            r ← r + 1
        end while
        a ← AnswerVote(H)
        if a ≠ Tie then
            break
        else if S is not None then       ▷ Check if secretary mode is on
            a ← S(Q, H); break
        else                             ▷ Secretary mode is off, representative mode is on
            L ← L + 1                    ▷ The higher-level discussion commences
            A' ← currentActiveAgents(L, Mg)
        end if
        r ← 0
    end while
    return a
end function
If there is a tie and representative mode is on, the discussion moves to a higher level, deactivating agents that do not represent their groups. The representatives are assigned to new groups and return to Stage 2 for further discussion. If there is still a tie, new representatives are designated for further discussion, until either the tie is resolved or only one agent remains active.

B.4 Message-Passing Algorithm

Below is the detailed message-passing algorithm that synchronizes all agent messages during the discussion process. The algorithm supports various discussion architectures.

Algorithm 2 MesSync: A Message-Passing Algorithm for Multi-Agent Communication
Require: discussion rule R, agents A = {A_i}_{i=1}^n, agent attribute table T = {bot_i}_{i=1}^n, agent initial prompt messages M = {p_j}_{j=1}^t

function MESSYNC(R, A, T, M)
    Qmsg ← M                             ▷ Initialize message storage queue
    Qsend ← []                           ▷ Initialize queue of messages to be sent
    S ← R.S_first                        ▷ Initialize the first speaker
    d ← 0                                ▷ Initialize the discussion depth to 0
    while Qmsg ≠ ∅ or not R.isOver do
        if Qmsg is ∅ then
            Qmsg ← pushSilenceMessage(d)
        end if
        d ← peekDepth(Qmsg)              ▷ Peek the discussion depth of the first message
        Md ← popAllMessagesAtDepth(Qmsg, d)
        for each A_i ∈ A do
            m_i ← R.mergeCommonMessages(Md, A_i)
            Qsend.push(m_i)
        end for
        m' ← Qsend.pop()
        isHold ← m'.hold                 ▷ Signal to merge messages from this round with the next
        H ← {}                           ▷ Initialize the map of messages held at the current depth
        dnext ← d + 2 if isHold else d + 1
        while m' and m'.d < dnext do
            if m'.hold then
                H[m'.name] ← m'.content
            else
                S ← m'.name
                Mraw ← m'.content ∪ H[S]
                t_in ← R.modifyRawInput(Mraw)
                t_out ← sendText2Bot(t_in, S, d, T)
                t_out ← R.validateOutput(t_in, t_out, S, d)
                A' ← R.getReceivers(S, d)    ▷ Get the receivers of the new message
                if A' ≠ ∅ then
                    mnew ← genMessage(t_out, S, A', dnext)   ▷ (content, sender, receivers, depth)
                    Qmsg.push(mnew)
                end if
            end if
            m' ← Qsend.pop()
        end while
        if R.isOver then
            break
        end if
    end while
end function

C A CMD Example

In the following demonstration, we walk through the CMD framework from the viewpoint of agent A among six agents using ChatGPT-3.5.

C.1 Meta Prompt

System
You will be asked to answer a correctness verification task of a deductive reasoing proposition. Use knowledge only comes from the given premises. The suffix of your answer should be the your opinion of one of these answer type: 1. [Correct], 2. [Incorrect], 3. [Unknown]. Do not omit the brackets.

Table 4: Vanilla Prompt

C.2 Round 1 Answer

Assistant (ChatGPT-3.5) - Agent A
First let's write down all the premises with labels:
#1. Neocrepidodera Corpulentas are flea beetles or moths.
#2. The species Neocrepidodera Corpulenta is in the Chrysomelidae family.
#3. There are no moths within the Chrysomelidae family.
Next, let's answer the question step by step with reference to the question and reasoning process:
#4. (by #1, #2) Neocrepidodera Corpulenta is either a flea beetle or a moth, and it is in the Chrysomelidae family.
#5. (by #3, #4) Since there are no moths within the Chrysomelidae family, Neocrepidodera Corpulenta must be a flea beetle.
Final Step (by #5): Neocrepidodera Corpulenta is a flea beetle, so the proposition "There are no flea beetles within the Chrysomelidae family" is [Incorrect].
Table 5: The Answer of Agent A in Round 1 17 C.3 Middle System and User Prompts in Round 1 System There are 2 groups of people discussing on the same topic. I will provide you the detailed opinions and reasoning steps from your group member and opinions from other group members. Use these opinions and your previous opinion as additional advice, note that they maybe wrong. Do not copy other’s entire answer, modify the part you believe is wrong. Other group members’ opinions: Three agents think the proposition is Incorrect. Your group’s opinions: One agent thinks the proposition is Correct. Below is his answer: First let’s write down all the premises with labels: · · · Next, let’s answer the question step by step with reference to the question and reasoning process: #4. (by #1, #2) Neocrepidodera Corpulenta is either a flea beetle or a moth, and it is in the Chrysomeli- dae family. #5. (by #3) There are no moths within the Chrysomelidae family. #6. (by #4, #5) Neocrepidodera Corpulenta cannot be a moth because there are no moths in the Chrysomelidae family. Final Step (by #6): The proposition “There are no flea beetles within the Chrysomelidae family” is [Correct] because Neocrepidodera Corpulenta, which is in the Chrysomelidae family, cannot be a moth according to premise #5. One agent thinks the proposition is Incorrect. Below is his answer: · · · Next, let’s answer the question step by step with reference to the question and reasoning process: #4. (by #1, #2) Neocrepidodera Corpulenta is a species of Neocrepidodera Corpulentas, so it is either a flea beetle or a moth. #5. (by #3, #4) Since there are no moths within the Chrysomelidae family, Neocrepidodera Corpulenta must be a flea beetle. Final Step (by #5): The proposition states that there are no flea beetles within the Chrysomelidae family. However, from premise #2, we know that Neocrepidodera Corpulenta, which is a flea beetle, is in the Chrysomelidae family. Therefore, the proposition is [Incorrect]. User Use the provided opinions and your previous answer as additional advice critically, please provide an updated answer. Make sure to state your opinion over proposition at the end of the response in the given format: [Correct], [Incorrect] or [Unknown], and such format should be only used to determine the correctness of the proposition. Table 6: Middle System Prompt at the End of Round 1 18 C.4 Round 2 Answer Assistant (ChatGPT-3.5) - Agent A Based on the opinions provided by the other group members and my previous answer, I will critically evaluate the reasoning steps and provide an updated answer. The first agent from my group argues that the proposition is correct. Their reasoning is as follows: #4. (by #1, #2) Neocrepidodera Corpulenta is either a flea beetle or a moth, and it is in the Chrysomeli- dae family. #5. (by #3) There are no moths within the Chrysomelidae family. #6. (by #4, #5) Neocrepidodera Corpulenta cannot be a moth because there are no moths in the Chrysomelidae family. Final Step (by #6): The proposition “There are no flea beetles within the Chrysomelidae family” is [Correct] because Neocrepidodera Corpulenta, which is in the Chrysomelidae family, cannot be a moth according to premise #5. The second agent from my group argues that the proposition is incorrect. Their reasoning is as follows: #4. (by #1, #2) Neocrepidodera Corpulenta is a species of Neocrepidodera Corpulentas, so it is either a flea beetle or a moth. #5. 
(by #3, #4) Since there are no moths within the Chrysomelidae family, Neocrepidodera Corpulenta must be a flea beetle. Final Step (by #5): The proposition states that there are no flea beetles within the Chrysomelidae family. However, from premise #2, we know that Neocrepidodera Corpulenta, which is a flea beetle, is in the Chrysomelidae family. Therefore, the proposition is [Incorrect]. After critically evaluating the reasoning steps, I agree with the second agent from my group that the proposition is incorrect. Neocrepidodera Corpulenta is a species of Neocrepidodera Corpulentas, which are either flea beetles or moths according to premise #1. Since Neocrepidodera Corpulenta is in the Chrysomelidae family according to premise #2, it cannot be concluded that there are no flea beetles within the Chrysomelidae family. Therefore, the proposition is [Incorrect]. Table 7: The Answer of Agent A in Round 2 C.5 Middle System Prompt at the End of Round 2 System There are 2 groups of people discussing on the same topic. I will provide you the detailed opinions and reasoning steps from your group member and opinions from other group members. Use these opinions and your previous opinion as additional advice, note that they maybe wrong. Do not copy other’s entire answer, modify the part you believe is wrong. Other group members’ opinions: Three agents think the proposition is Incorrect. Your group’s opinions: Two agents think the proposition is Incorrect. Below are their answers: · · · User This part is the same user part shown in table 6. Table 8: Middle System Prompt at the End of Round 2 19 C.6 Round 3 Answer Assistant (ChatGPT-3.5) - Agent A Based on the opinions provided by the other group members and my previous answer, here is an updated response: First, let’s write down all the premises with labels: #1. Neocrepidodera Corpulentas are flea beetles or moths. #2. The species Neocrepidodera Corpulenta is in the Chrysomelidae family. #3. There are no moths within the Chrysomelidae family. Next, let’s answer the question step by step with reference to the question and reasoning process: #4. (by #1, #2) Neocrepidodera Corpulenta is either a flea beetle or a moth, and it is in the Chrysomeli- dae family. #5. (by #3) There are no moths within the Chrysomelidae family. #6. (by #4, #5) Neocrepidodera Corpulenta cannot be a moth because there are no moths in the Chrysomelidae family. Final Step (by #6): The proposition “There are no flea beetles within the Chrysomelidae family” contradicts the given premises, as Neocrepidodera Corpulenta, which is in the Chrysomelidae family, is a flea beetle. Therefore, the proposition is [Incorrect]. After critically evaluating the reasoning steps and considering the opinions provided by the other group members, I agree that the proposition is [Incorrect]. The given premises state that Neocrepidodera Corpulenta is in the Chrysomelidae family and that it is either a flea beetle or a moth. Since there are no moths within the Chrysomelidae family, it can be concluded that Neocrepidodera Corpulenta must be a flea beetle. Therefore, the proposition “There are no flea beetles within the Chrysomelidae family” contradicts the given premises. Table 9: The Answer of Agent A in Round 3 D CMD Secretary - A Tie Case Solution In the event of a tie between two groups, we suggest employing a secretary to resolve such cases. The complete chat transcript below in table 10 illustrates a scenario where the secretary provides the ultimate resolution. 
20 System 6 agents are discussing the correctness of a proposition based on the given premises. The given premises are: · · · The proposition is: “IBM has an office in London or Zurich.” However, now there is a draw: Three agents think the proposition is Correct. Below is one of their answers: · · · Three agents think the proposition is Unknown. Below is one of their answers: · · · User Your task is to carefully determine which opinion is more plausible. Answer opinion types are: [Correct], [Incorrect], [Unknown]. You should give your response in the required format. You are forbidden to copy others’ reasoning steps. You can only use the given premises and your reasoning steps to answer the question. Here are the instructions how you organize your answer format: First, let’s write down all the premises with labels. The labels look like “#{premise_number}.” Next, let’s answer the question step by step with reference to the question and reasoing pro- cess. There will be a prefix in your every reasoning step with the format “#{number} (by {list_of_premises_and_steps_used})”. In your final step, you should come to your conclusion with the format “Final Step (by {list_of_premises_and_steps_used}):”. The suffix of your answer should be the answer type: [Correct], [Incorrect] or [Unknown]. Here are several hints to help you conclude the final answer: 1. Pay attention to the logic of quantifiers and distinguish the meanings of quantifiers such as “all” and “some”. 2. When two quantifiers are connected as the premise of existence, the conclusion may not be established, that is, it is unknown. Assistant (ChatGPT-3.5) - Secretary First, let’s write down all the premises with labels: #1. Evangelos Eleftheriou is a Greek electrical engineer. #2. Evangelos Eleftheriou worked for IBM in Zurich. #3. If a company has employees working for them somewhere, then they have an office there. #4. IBM is a company. Next, let’s answer the question step by step with reference to the question and reasoning process: #5 (by #4, #3) Since IBM is a company, and according to premise #3, if a company has employees working for them somewhere, then they have an office there. Therefore, IBM has an office in Zurich. Now, let’s consider the opinions of the other group members: - One agent thinks the proposition is Correct. - Two agents think the proposition is Unknown. Considering the opinions of the other group members, there is disagreement regarding the correctness of the proposition. However, based on the premises and the reasoning process, it can be concluded that IBM has an office in Zurich. Final Step (by #5): Since IBM has an office in Zurich, the proposition “IBM has an office in London or Zurich” is [Correct]. Table 10: Secretary to solve a tie case. 21 E Extended Related Work E.1 Large language models L With the advent of large language models (LLMs) like GPT-3 (Brown et al., 2020), chatGPT (OpenAI, 2022), GPT-4 (OpenAI, 2023), Llama series (Touvron et al., 2023a,b) and PaLM 2 (Anil et al., 2023), prompt engineering is a burgeoning field that focuses on optimizing the output of language models by crafting effective input prompts. E.2 Prompt decorator p(·; T , L) for reasoning Numerous advancements have been made in the field of improving reasoning abilities of LLMs with prompt engineering. Chain of Thought (CoT) (Wei et al., 2022; Kojima et al., 2022) is a linear problem- solving approach where each step builds upon the previous one. Fu et al. (2022) propose to apply CoT to multi-step reasoning tasks. 
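As a concrete illustration of the chain-of-thought pattern surveyed here, a zero-shot CoT prompt is typically assembled by appending a reasoning trigger to the question. The arithmetic question below is a generic textbook example chosen for illustration; only the trigger phrase "Let's think step by step." comes from Kojima et al. (2022).

    question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                "more than the ball. How much does the ball cost?")
    prompt = f"Q: {question}\nA: Let's think step by step."  # zero-shot CoT trigger
    print(prompt)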
To automate the CoT, Auto-CoT (Zhang et al., 2022b) constructs demonstrations by sampling diverse questions and generating reasoning chains. Active-Prompt (Diao et al., 2023) aims to select the most uncertain questions for task-specific annotations. Other prompt strategies include the PS prompt (Wang et al., 2023), which divides a task into subtasks and then solves them; GoT (Yao et al., 2023b), which models human thought processes as a graph rather than a chain; Natural Program (Ling et al., 2023), which helps to improve deductive reasoning tasks; and the re-reading prompt (Xu et al., 2023b), which revisits the question information embedded within input prompts.

E.3 Mechanism M for reasoning

In addition to focusing on the design of prompts themselves, some works incorporate different mechanisms to assist language models in reasoning. The design directions of these mechanisms include: 1. breaking down the reasoning process into multiple stages, with possible verification at each stage; 2. optimizing the reasoning process via ensemble methods; 3. iteratively prompting the model for reflection and correction; 4. utilizing external tools to aid in reasoning. These approaches aim to enhance the overall reasoning capabilities of language models by introducing additional support and guidance throughout the process.

Break Down the Reasoning Process into Multiple Stages. Self-Polish (Xi et al., 2023) makes models progressively refine the given problems through multiple stages of prompts. Additionally, some works (Khot et al., 2022; Press et al., 2022; Dua et al., 2022; Zhang et al., 2023b) study decomposing a task into smaller subtasks and use an individual prompt to solve each subtask.

Optimize the Reasoning Process via Ensemble Methods. Tree of Thoughts (ToT) (Yao et al., 2023a) expands on the reasoning process by considering multiple paths of reasoning and self-evaluating choices. Graph of Thoughts (GoT) (Besta et al., 2023) further advances this by modeling information as an arbitrary graph, enabling complex networks of thoughts. Some works propose to first sample reasoning paths and then vote for the best one, including self-consistency (Wang et al., 2022) and the step-aware verifier (Li et al., 2022). Discussion engineering is also one category of ensemble methods; we leave it to Section 7 for further introduction.

Iteratively Prompt the Model for Reflection and Correction. These works (Zelikman et al., 2022; Weng et al., 2022; Shinn et al., 2023; Madaan et al., 2023; Chen et al., 2023b; Xue et al., 2023) are mainly based on iteratively asking the model to find the mistakes or inconsistencies among previous reasoning steps or the knowledge, and then solving them one by one.

Utilize External Tools or Knowledge to Aid in Reasoning. Mind's Eye (Liu et al., 2022) proposes to use a physical simulator to help models reason in the physical world. Other works (Lu et al., 2023; Imani et al., 2023) equip models with tools to solve problems. Furthermore, some works such as MemPrompt (Madaan et al., 2022) and LogicSolver (Yang et al., 2022) use external knowledge to assist LLMs with reasoning.
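Several of the ensemble mechanisms above, self-consistency in particular, reduce to a sample-then-vote loop. The sketch below illustrates that loop; query_llm and extract_answer are hypothetical stand-ins for an LLM call and an answer parser, not an actual API.

    from collections import Counter
    import random

    def self_consistency(query_llm, extract_answer, prompt, k=5):
        # Sample k diverse reasoning paths at nonzero temperature,
        # then return the majority-voted final answer.
        answers = [extract_answer(query_llm(prompt, temperature=0.7))
                   for _ in range(k)]
        return Counter(answers).most_common(1)[0][0]

    # Toy demonstration with a stubbed "model".
    random.seed(0)
    demo_llm = lambda p, temperature: random.choice(
        ["... therefore the answer is 4", "... therefore the answer is 5",
         "... therefore the answer is 4"])
    extract = lambda text: text.split()[-1]
    print(self_consistency(demo_llm, extract, "What is 2 + 2?"))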
ai_researcher
3
Reasoning_on_Graphs_Faithful_and_Interpretable_Large_Language_Model_Reasoning.pdf
arXiv:2310.16421v1 [cs.AI] 25 Oct 2023

GRAPH AGENT: EXPLICIT REASONING AGENT FOR GRAPHS

PREPRINT

Qinyong Wang1,2, Zhenxiang Gao1, and Rong Xu1
1Center for Artificial Intelligence in Drug Discovery, Case Western Reserve University
2Department of Computer and Data Sciences, Case Western Reserve University
{qxw225, zxg306, rxx}@case.edu

ABSTRACT

Graph embedding methods such as Graph Neural Networks (GNNs) and Graph Transformers have contributed to the development of graph reasoning algorithms for various tasks on knowledge graphs. However, the lack of interpretability and explainability of graph embedding methods has limited their applicability in scenarios requiring explicit reasoning. In this paper, we introduce the Graph Agent (GA), an intelligent agent methodology that leverages large language models (LLMs), inductive-deductive reasoning modules, and long-term memory for knowledge graph reasoning tasks. GA integrates aspects of symbolic reasoning and existing graph embedding methods to provide an innovative approach for complex graph reasoning tasks. By converting graph structures into textual data, GA enables LLMs to process, reason, and provide predictions alongside human-interpretable explanations. The effectiveness of GA was evaluated on node classification and link prediction tasks. Results showed that GA reached state-of-the-art performance, demonstrating accuracies of 90.65%, 95.48%, and 89.32% on the Cora, PubMed, and PrimeKG datasets, respectively. Compared to existing GNN and transformer models, GA offers the advantages of explicit reasoning, training-free deployment, and easy adaptation to various graph reasoning tasks.

1 Introduction

Knowledge graphs (KGs) Hogan et al. [2021] have emerged as a pivotal structure for organizing and representing vast amounts of human knowledge in explicit form Pan et al. [2023]. Graph embedding methods, predominantly Graph Neural Networks (GNNs) Zhou et al. [2020] in recent years, have been used to capture intricate graph structures and node features based on local neighborhoods, making them adept at various graph reasoning tasks. GNNs have demonstrated proficiency across various tasks, yet the explainability of their predictions remains a significant challenge Dai et al. [2022]. We term such techniques "implicit reasoning methods": they depend on entangled representations Wang et al. [2019a]. GNNs utilize message passing to integrate information from adjacent nodes. However, during this process, both the node features and the GNN kernels are numeric vectors, which are hard for humans to interpret and understand. This disadvantage poses a significant concern, especially in scientific discovery using graph data Zeng et al. [2022], Zhao et al. [2021], where mere prediction outcomes are insufficient; a comprehensive understanding of the underlying rationale is imperative. Moreover, given that information in KGs possesses an explicit form Pan et al. [2023], a rationale exists for pursuing explicit graph reasoning.

Figure 1: Overview of Graph Agent methodology

On the other hand, there are symbolic reasoning frameworks available. These might involve rule-based systems Jamian et al. [2019], Yao et al. [2019] or first-order logic Belle [2020], Jane and Ganesh [2019] to reason with intricate data. Such reasoning processes are more transparent and human-interpretable; hence, we term them "explicit reasoning methods". Yet, they come with their own set of problems.
Rules within these systems are static, making adaptability to varied tasks challenging Belle [2020]. The current rule-based and symbolic reasoning methods could only work on a limited number of data sets. The intricate nature of heterogeneous graph data often proves too complex to be encapsulated solely by a simple logical system. These explicit logical reasoning methods often under-perform when benchmarked on various data sets Belle [2020]. Large language models(LLM) demonstrate commendable reasoning capabilities Bubeck et al. [2023], with their reasoning processes in natural language or symbolic language Gao et al. [2023]. Recent inves- tigationsGuo et al. [2023] have illuminated that LLMs can understand graph structures and analyze the encompassed information. Such ability paved the way for formulating graph reasoning algorithms with high efficacy and applicability. Researchers have employed LLMs as controllers or agents across a spectrum of tasks Park et al. [2023], such as software development Hong et al. [2023] and robotic planningFan et al. [2022], Shah et al. [2023]. However, the potential of LLM agent systems on complex graph reasoning is yet to be discovered. We introduce the Graph Agent methodology that leverages the reasoning capabilities of LLMs and long-term memoryZhong et al. [2023] for KG tasks. Given a graph data set, nodes or edges are embedded and stored in long-term memory. For each input link prediction or node classification sample, similar nodes or edges from the training set are fetched from this long-term memory. The LLM then undergoes a two-phase inductive-deductive reasoning process. During induction, the LLM is provided with similar nodes or edges, their neighbors, and associated labels from the prompt and concludes the rationale behind their labeling. In the deduction phase, the LLM incorporates these concluded reasons to reason and predict the presented sample. The underlying principle is similar to human cognitive processes: recalling similar past instances to inform decisions on unfamiliar problems. Such analogical reasoning, which widely exists in human cognition Bartha [2013], has previously been validated as effective in knowledge graph contexts Yuan et al. [2023]. GA employs explicit reasoning, producing human-interpretable natural language outputs. Contrasting GA 2 PREPRINT with GNNs, while GNNs retain learned patterns within graph convolution kernels, GA preserves it in textual format after inductive reasoning. Instead of the entangled representation utilized for message passing in GNNs, GA conveys neighbor information through prompts in natural language. Essentially, GA shares some mechanics with GNNs but in an explicit manner. The overview of GA is shown in Figure 1. We evaluate GA’s performance on node classification and link prediction datasets, notably Cora, PubMed, and PrimeKG data-sets, which are challenging to GNN-based approaches. Our findings indicate that GA excels in these tasks and offers enhanced prediction explainability compared to prior methods. 2 Related Work 2.1 Graph neural networks and graph transformers A standard GNN Zhou et al. [2020] is structured with layers that aggregate information from neighboring nodes. Message Passing NetworksHamilton et al. [2017] are a foundational framework for many GNNs. The core idea is to iteratively update node representations by "passing messages" between nodes. At each iteration, each node aggregates information (messages) from its neighbors. 
The node then updates its representation based on its previous state and the aggregated messages. The final node representations can be used for various tasks, such as node classification, graph classification, or link prediction. Graph Convolutional Networks (GCN) Wu et al. [2019] are one of the most popular and effective GNN architectures. Graph Attention Network (GAT) Velickovic et al. [2017]introduces attention mechanisms Vaswani et al. [2017] to the world of GNNs. Standard GNNs, designed for homogeneous graphs, may not be optimal for Heterogeneous Graph. Heterogeneous GNNs are designed to handle multiple node and edge types. They often involve multiple relation-specific aggregation functions. An example is the Relational Graph Convolutional Network (R-GCN) Schlichtkrull et al. [2018], which uses different weight matrices for different relation types. There are also other GNNs such as Heterogeneous Graph Transformer Hu et al. [2020], Hypergraph Convolution and Hypergraph Attention Bai et al. [2021]. Transformer models were adapted to graph-structured data by covert both nodes and edges as tokens Min et al. [2022]. As previously discussed, these methods exhibit a deficiency in the explainability of predictions. 2.2 Generative language model agents Recently, there has been a growing interest in enhancing LLMs with additional tools, memory, and so- phisticated reasoning frameworks. Techniques such as the "Chain of Thoughts," Wei et al. [2022a] "Self- Consistency," Wang et al. [2022]and "Tree of Thoughts" Yao et al. [2023] have been introduced to boost the reasoning capabilities of LLMs. A common practice among LLMs is the utilization of vector databases to maintain their long-term memory Wang et al. [2023], which is crucial for retaining comprehensive graph entities. Furthermore, LLMs are now being trained to use tools, with notable developments like Toolformer Schick et al. [2023], Visual Programming Gupta and Kembhavi [2023], and GeneGPT Jin et al. [2023] leading the way. These advancements have significantly amplified the competencies of LLMs, leading to a surge in research exploring LLMs as intelligent agentsLiu et al. [2023a]. In experimental setups, these agents are immersed in virtual environments, allowing researchers to observe and analyze their behaviors Park et al. [2023]. Remarkably, LLMs have even been employed to simulate a software development Hong et al. [2023]. It would be interesting to see how LLM agents behave in complex graph reasoning tasks. 2.3 Graph reasoning with LLMs In recent advancements in graph-based reasoning, the synergistic combination of LLMs with graphs has showcased enhanced performance compared to the exclusive GNNs. Two primary methodologies have been identified in the realm of LLM-integrated graph techniques Chen et al. [2023a]: 3 PREPRINT Figure 2: Workflow of the proposed Graph Agent methodology in link prediction task with an information- centered view. Graph Agent first memorized the train set of an input graph, then inferenced on the test set without model training. Cosine similarity was used to similar edges for inductive reasoning, then predictions and explanation was generated after deductive reasoning. Text boxes with gray backgrounds indicate input text for LLM; blue background indicate output text from LLM. LLM as an Augmentor: LLMs are pivotal in augmenting graph data. They are adept at generating text contextually related to the graph nodes. 
This results in the enrichment of each node with additional textual features, thereby amplifying the depth and quality of the information associated with each node. TAPE leveraged the capabilities of ChatGPT to enhance text-attributed graphs, demonstrating state-of-the-art performance in node classification tasks He et al. [2023]. LLMs also exhibit potential as annotators, enhancing performance in graph tasks where labels are absent Chen et al. [2023b].

LLM as a Predictor: Deploying LLMs as predictors has also been a significant stride forward. By feeding the LLM information about a node and its neighboring nodes, it is feasible to predict the class of the given node or infer the likelihood of a link existing between two nodes Ye et al. [2023]. GraphText-ICL used a graph-to-text encoder and leveraged the in-context learning (ICL) ability of LLMs for node classification Zhao et al. [2023]. Additionally, integrating LLMs with graph structures for fine-tuning has been shown to refine outcomes in tasks such as substructure counting and shortest-path identification Chai et al. [2023].

Previous studies have investigated fine-tuning LLMs for graph reasoning tasks Ye et al. [2023], Zhao et al. [2023], and have explored the utilization of advanced reasoning techniques to enhance the graph reasoning capabilities of LLMs. However, fine-tuning LLMs for specific tasks can be computationally intensive and time-consuming, posing challenges when adapting them to diverse datasets. Furthermore, approaches centered on prompt engineering have yielded suboptimal results compared to training-based methods Zhao et al. [2023].

2.4 Graph-to-text encoder and node sampling

To feed structured graph data to LLMs, we need to convert sub-graphs into text sequences. The efficacy of LLMs in graph reasoning tasks is influenced by the encoding method Zhao et al. [2023]. Our study leveraged an encoding strategy optimized for computational efficiency. For a given node v with attributes A and n-hop neighbors N_h, the encoding process integrates the node's attributes {a_n ∈ A} and sampled information from its n-hop neighbors {n ∈ N} using a sampling function denoted f_sample(). The node encoder function is

    encoder(v, A, N) = ["node:", v; "attributes:", A; "n-hop-neighbours:", [f_sample(N_h)]].

For graph edges, represented by vertices x and y, the encoding encompasses the attributes of the two edge nodes (A_x, A_y) and information pertaining to the n-hop neighbors of each vertex (N_x, N_y). The edge encoder function is

    encoder((x, y), (A_x, A_y), N_x, N_y) = ["edge:", (x, y); "attributes:", (A_x, A_y); "y-n-hop-neighbours:", [f_sample(N_yh)]; "x-n-hop-neighbours:", [f_sample(N_xh)]].

This encoder intentionally omits the interconnections among the neighboring nodes, which might affect performance due to information loss. Including these connections would substantially increase the text length; it would prolong inference time and risk overwhelming the LLM with over-complicated graph structures.

Our experiments underscored the pivotal role of information sampling in optimizing GA, especially within graphs with dense connectivities. For instance, nodes in biomedical knowledge graphs or individuals in social networks often have connections exceeding hundreds of edges, posing a risk of overloading information beyond the working memory Bubeck et al. [2023] of LLMs, or of exceeding their maximum context length. To mitigate this, we leveraged a sampling technique based on node degrees.
For heterogeneous graphs, we computed the average degree D_avg for each category of nodes. A node's importance is then quantified as the ratio of its degree to the D_avg of its category. This relative importance metric guided the selective encoding of neighbor information, specifically incorporating only the top k most significant nodes, where k was tailored to the task, dataset, and LLM in use. The sampling function is formalized as

    f_sample(N) = select_top_k({ degree(n) / D_avg,type(n) : n ∈ N }, k),

where degree(n) denotes the degree of node n, and select_top_k is an operation selecting the k nodes with the highest importance. While more advanced sampling and encoding techniques could be used, our research prioritized the initial development of Graph Agent; consequently, we adopted a straightforward method to facilitate implementation.

2.5 Long-term memory

Given a graph task, the first step of GA was memorization of the graph. In contrast to methods with a training phase, the training set was not used for back-propagation training; instead, Graph Agent embedded all training samples and stored them in a vector database. During inference on test samples, we retrieved similar samples from this long-term memory. We leveraged two methods to embed samples: language model embedding and GNN embedding. In language model embedding, each sample was passed through the graph-to-text encoder discussed previously, and the output text of the encoder was fed to an embedding-optimized language model. In GNN embedding, we trained a GNN on the training set and stored the node embeddings in a vector database; for edge embedding, we simply concatenated the embeddings of the two nodes. We used the cosine similarity of embeddings to retrieve similar node or edge examples.

2.6 Inductive reasoning

During the inference phase, GA first retrieved analogous examples from its long-term memory for a target sample; each example was denoted example_n. The aggregation of these examples formed a structured prompt augmented with task instructions. The prompt was formulated as follows: "Given the provided examples and your existing knowledge, identify reasons why example nodes are categorized as labeled or why a connection exists in example edges. List the reasons concisely." Selecting these analogous examples was critical, as the LLM sought patterns or commonalities within the examples. We term this mechanism explicit learning, since the learned patterns appear in the output text.

2.7 Deductive reasoning

In the deductive reasoning phase, all selected examples, the outcomes of inductive reasoning, and the designated sample were integrated into a comprehensive prompt, followed by instructions and a question. For node classification, the LLM was queried with a crafted question: "Given the reasons and examples, determine the type of node_a from the following options: [options...], think step by step then choose one of the options". Similarly, for link prediction, the LLM was instructed: "Considering the reasons and examples, does a connection exist between node_a and node_b? think step by step, and choose either TRUE or FALSE." The LLM would respond with explicit reasoning in natural language and a definitive prediction. This output represented transparent, human-interpretable logic, demonstrating how GA leveraged patterns discerned through inductive reasoning to inform its predictions.
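Before moving to experiments, the encoding and degree-based sampling steps of Section 2.4 can be made concrete with a short sketch. The dictionary-based graph representation, the helper names, and the toy drug-gene neighborhood below are assumptions made for illustration; the paper's actual implementation is not shown.

    def sample_neighbors(neighbors, degree, avg_degree, node_type, k):
        # Importance of a neighbor n is degree(n) / D_avg,type(n);
        # keep the k most important neighbors.
        ranked = sorted(neighbors,
                        key=lambda n: degree[n] / avg_degree[node_type[n]],
                        reverse=True)
        return ranked[:k]

    def encode_node(v, attributes, neighbors, degree, avg_degree, node_type, k):
        kept = sample_neighbors(neighbors, degree, avg_degree, node_type, k)
        kept_text = ", ".join(f"{n} ({node_type[n]})" for n in kept)
        return f"node: {v}; attributes: {attributes}; n-hop-neighbours: [{kept_text}]"

    # Toy heterogeneous neighborhood: one drug node with three gene neighbors.
    node_type = {"drug_1": "drug", "gene_1": "gene", "gene_2": "gene", "gene_3": "gene"}
    degree = {"drug_1": 3, "gene_1": 12, "gene_2": 9, "gene_3": 40}
    avg_degree = {"drug": 5.0, "gene": 10.0}
    print(encode_node("drug_1", "type: drug", ["gene_1", "gene_2", "gene_3"],
                      degree, avg_degree, node_type, k=2))

Retrieval from long-term memory (Section 2.5) would then embed such strings with a language model and rank the stored examples by cosine similarity, feeding the top matches into the inductive and deductive prompts of Sections 2.6 and 2.7.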
The deductive reasoning phase can be considered a form of the chain-of-thought method Wei et al. [2022a]; the difference is that the examples are dynamically generated and tailored to the designated sample.

3 Experiments

3.1 Node Classification

Our experimentation with GA encompassed node classification tasks, juxtaposing its efficacy against previous methodologies. A direct comparison between GA and GNNs is inequitable, primarily because text-attributed-graph He et al. [2023] node classification can be transformed into document classification Minaee et al. [2021], a task at which LLMs innately excel. Consequently, this section aims at establishing a robust benchmark against preceding methods.

Datasets. Conscious of the high inference costs and long inference times of LLMs, our study utilized comparatively smaller datasets, specifically the widely recognized Cora and PubMed graph datasets Yang et al. [2016]. These selections facilitated comparisons with state-of-the-art techniques while adhering to manageable sizes, with Cora comprising 2,706 nodes and PubMed encompassing 19,711. Consistent with previous works, we implemented a 60%/20%/20% partitioning for the train/validation/test sets. Our experiments engaged versions of Cora and PubMed retaining their original textual data.

Table 1: Results on Cora and PubMed node classification

    Method           Reference                     Cora (Acc)   PubMed (Acc)
    GAT              Veličković et al. [2017]      76.70        83.28
    GraphSAGE        Hamilton et al. [2017]        86.58        86.85
    GCN              Wu et al. [2019]              87.78        88.90
    RevGAT           Li et al. [2021]              89.11        88.50
    ACM-Snowball-3   Luan et al. [2022]            89.59        90.96
    ACM-GCN+         Luan et al. [2022]            89.75        91.44
    Graphormer       Ying et al. [2021]            80.41        88.75
    GT               Dwivedi and Bresson [2020]    86.42        88.24
    CoarFormer       Kuang et al. [2021]           88.69        89.75
    InstructGLM      Ye et al. [2023]              90.77        94.62
    TAPE             He et al. [2023]              89.30        95.30
    GraphText-ICL    Zhao et al. [2023]            68.3         -
    Graph Agent                                    90.65        95.48

Baselines. Our performance assessment of GA involved juxtapositions with preceding GNN models, transformer models, and LLM-related methods. The GNN models include GCN Wu et al. [2019], GAT Veličković et al. [2017], RevGAT Li et al. [2021], etc. We also compared with Transformer-based graph learners, including CoarFormer Kuang et al. [2021], Graphormer Ying et al. [2021], and GT Dwivedi and Bresson [2020]. More importantly, we compared with graph algorithms that leverage LLMs in recently published papers, including InstructGLM Ye et al. [2023], TAPE He et al. [2023], and GraphText-ICL Zhao et al. [2023].

Implementation Details. For our experiments, we employed the gpt-4-0613 model as the LLM backend and used the embedding-ada-002 model for graph text embedding. Our methodology included sampling the top 8 neighboring nodes. The prompts for the target nodes comprised the title, abstract, authors, and keywords, while for neighboring nodes we confined the information to the title, authors, and node type label. We masked the labels of the target node within the prompts, since similar node examples were often neighbors of the target node and would reveal its labels. After memorizing the training dataset, GA directly inferred on the test dataset without training.

Results. GA outperformed GNN and transformer models, achieving state-of-the-art results on the Cora and PubMed datasets. It demonstrated superior accuracy, attaining 95.48% on PubMed (the highest in this category) and 90.65% on Cora, ranking second only to InstructGLM.
Notably, both TAPE and InstructGLM require a training phase, highlighting GA’s efficiency as it yields competitive results without the necessity of model training. Furthermore, when compared against the free-of-training method GraphText-ICL, which also utilized GPT-4, GA observed an increase of approximately 20 points in accuracy. 3.2 Link Prediction In pursuit of real-world applicability, we explored drug-gene link prediction within a heterogeneous biomedi- cal knowledge graph. The interrelations of genes, drugs, diseases, and biological processes were complex, which made it an ideal task for us to test the comprehensive graph reasoning ability of GA. Secondly, as 7 PREPRINT stated previously, the inference cost of GA was high. It would make more sense to test GA for high-value graph reasoning tasks. Dataset We adopted the Precision Medicine Oriented Knowledge Graph(PrimeKG) dataset Chandak et al. [2023], recognizing it as one of the latest and most complex biomedical graphs available. Our primary focus centered on the prediction of drug-gene edges. As delineated earlier, LLMs were bound by input length. Consequently, we confined our attention to certain node and edge types; we used a subset of PrimeKG node types, including drugs, genes, biological processes, pathways, and diseases. Only edge types that interconnected these node types were taken into consideration. The filtered version of PrimeKG utilized for our analysis comprised 2,085 drug nodes, 19,001 gene nodes, 7,161 biological process nodes, 1,625 pathway nodes, and 2,658 disease nodes. This configuration encapsulated a complex network with a total of 954,438 edges of various types. Within PrimeKG, 20,417 drug-gene edges were identified, indicating existing associations. An equivalent number of non-associated drug-gene pairs were randomly generated and reintegrated into the dataset. These newly created links were labeled as negative, contrasting with the original positive associations. The data set was partitioned into 80%, 10%, and 10% segments for training, validation, and testing. Baselines We benchmarked GA’s performance against established GNNs and prompt-engineering methods. Comparisons were drawn with GCNWu et al. [2019] and Heterogeneous graph attention net- work(HGAT)Wang et al. [2019b], both in isolation and in conjunction with text augmentation. We also compared GA with simple asking and chain-of-thoughWei et al. [2022a] prompt methods. Evaluation Metrics A notable limitation of LLMs in link prediction is their binary response format, wherein they can only output labels such as "True" or "False". Consequently, metrics like mean reciprocal rank (MRR) and the hit rate for the top k candidates (hits@K) are incompatible with our specific scenario Pan et al. [2023]. Given this constraint, our evaluation adopted precision, recall, the F1 score, and the accuracy of the positive edges. Implementation Details For comparative analysis, we trained 2-layer GCN and HGAT models on the training dataset. We also used the graph embedding from GNNs for retrieving similar edges in addition to language model embedding. Capitalizing on PrimeKG’s text-associated nodes, we also integrated text embedding—generated from text-embedding-ada-002 model —as initial node embedding in GNN training, which was a text-augmented training. We used both Gpt-4 and LLaMa2-70B-chat-hf as our LLM backend. 
Our sampling strategy prioritized the top 15 neighbor nodes for GPT-4 and the top 5 neighbor nodes for LLaMa2-70B, since LLaMa has far fewer parameters and could only process a smaller local graph. The prompt included only node names and types, excluding other node attributes. Similar to node classification, retrieved edge examples could reveal the label of the targeted edge, so we only leveraged edge examples whose nodes differed from those of the targeted edge. In our approach, we extracted three analogous positive edge instances from long-term memory and arbitrarily selected two negative edges for inclusion in the prompt. We also explored alternative methodologies for comparison: the Simple Ask approach and the 5-shot Chain-of-Thought (COT) technique Wei et al. [2022a]. The Simple Ask method presented the LLM with the text produced by graph-to-text encoding and straightforwardly inquired about the existence of a connection. The 5-shot COT, similar to the previous GraphText-ICL strategy, employed a consistent set of three positive and two negative edge examples.

Results. Table 2 illustrates that GA outperformed competing methods in link prediction, achieving an F1 score of 0.889 and an accuracy of 0.893. Compared to the 5-shot COT's accuracy of 0.803, GA enhanced GPT-4's graph reasoning capabilities with a nearly 10-point increase in both the F1 and accuracy metrics. Notably, the Simple Ask method yielded a mere 0.196 recall, suggesting GPT-4's initial inclination to deny the existence of most drug-gene associations. Minor improvements were witnessed with text augmentation for GNNs, which aligned with findings from prior studies Chen et al. [2023a]. Furthermore, the efficacy of COT methods was confirmed, with a 20-point surge in accuracy over the Simple Ask approach.

Table 2: Results on PrimeKG drug-gene link prediction

    PrimeKG                              Precision   Recall   F1      Accuracy
    GNN methods
      HGAT                               0.831       0.826    0.829   0.837
      HGAT with text augmentation        0.836       0.842    0.839   0.846
      GCN                                0.826       0.838    0.832   0.832
      GCN with text augmentation         0.830       0.842    0.836   0.837
    LLM predictors
      Simple ask + Graph2text encoder    0.964       0.196    0.325   0.599
      5-shot COT + Graph2text encoder    0.962       0.673    0.792   0.803
    Graph Agent                          0.926       0.854    0.889   0.893

Table 3: Ablation test results

                                              Precision   Recall   F1      Accuracy
    3-hop with LM embedding                   0.698       0.955    0.807   0.774
    2-hop with LM embedding                   0.730       0.932    0.821   0.820
    1-hop with LM embedding (proposed)        0.926       0.854    0.889   0.893
    1-hop with GNN embedding                  0.958       0.767    0.852   0.869
    1-hop with LM embedding and LLama2 70B    0.546       0.956    0.696   0.565

Ablation Results. Our ablation study, summarized in Table 3, evaluated the influence of n-hop information and of the specific embeddings and LLM utilized. Contrary to expectations, incorporating more n-hop information into GA diminished performance: the 1-hop, 2-hop, and 3-hop configurations yielded accuracies of 0.8902, 0.8202, and 0.774, respectively. We found that the LLM was significantly affected by the shared-neighbor pattern: with more hops, each drug and gene node has more neighbors, so the LLM found more shared neighbors and was led to believe there were associations. However, shared neighbors do not imply association in many biomedical knowledge graphs; for example, a common gene can be associated with many genes and biological processes, yet many nodes within two hops of this common gene have no association with it. Furthermore, leveraging GNN embeddings to retrieve similar edge examples proved effective, with an accuracy of 0.869, surpassing the 0.846 achieved by HGAT; this showed that GA can be used to enhance existing GNN methods.
However, GNN embedding under-performed compared with language model embedding, which could be due to our GNN model was not refined for information retrieval. Testing with LLama2-70B had a precision of only 0.5464. Hallucination of LLama2 was frequently observed, and it stated most drugs and genes had associations. Our explanation was that LLama2, with only 70B parameters, had not yet developed the emergent graph reasoning abilityWei et al. [2022b] required for a complex biomedical graph. Qualitative analyses In the link prediction experiments, Graph Agent, empowered by GPT-4, demon- strated commendable reasoning capabilities. A manual review of reasoning instances revealed GPT-4 had hallucinations with uncommon genes and diseases. We found GPT-4 was confused with genes within the same family, which often shared similar names and biological processes. Since those genes share similar functions, they usually have links with the same drugs, so even if the thinking processing was wrong, the prediction was correct. This process correctness was hard to evaluate Bubeck et al. [2023], so we could not provide a quantitative score for the reasoning quality. Despite the factual error, GA was good at identify- ing common reasons for drug-gene associations, such as shared neighbors, key biological processes, and pathways. This behavior could give researchers insights during drug discovery. 9 PREPRINT 4 Discussion and limitations Deep learning has frequently been criticized as a "black box". Now we have seen this very "black box" offer compelling reasoning and elucidations Bubeck et al. [2023]. This poses a contemplative question: can we trust explanations derived from black box models? Addressing this quandary on philosophical grounds remains elusive. Our analysis of many graph reasoning outcomes produced by the LLM, indeed affirms a commendable quality of reasoning. Future studies could investigate the reliability of LLM reasoning. The computational intensity of LLMs translates to formidable latencies and costs, rendering the current GA impractical for large-scale graph reasoning. A potential solution could lie in a hybrid system, wherein the LLM is harnessed exclusively for hard samples or high-value cases. Explorations could also go towards determining if smaller fine-tuned LLM can emulate the efficacy of its larger version of LLM. This, however, surfaces additional complications, including curating the dataset and methodology for fine-tuning and subsequently ascertaining the generalizability of the fine-tuned graph LLMs. Our current Graph Agent methodology encounters limitations regarding information coverage and flow. Unlike multi-layered GNNs, which facilitate good node coverage and employ neural networks to regulate inter-node information flow, GA is constrained by its reliance on sampled local graphs and naive sampling methods for information flow control. Future research directions could be the exploration of advanced information retrieval and control techniques to enhance GA’s efficacy. An additional limitation pertains to the issue of redundant reasoning. GA often uncovers rationales that are applicable across multiple samples, rendering the repetitive inductive reasoning for each instance. Subsequent research might explore the development of reasoning modules with higher efficiency. Our approach did not address the hallucination problem of LLMs Rawte et al. [2023]. 
While mitigating this phenomenon typically involves empowering agents with factual information retrieval tools, incorporating such tools within GA was avoided to prevent data leakage. However, for practical implementations of GA in open-world applications, the integration of tools is an indispensable consideration, underscoring an avenue for subsequent studies. While considerable scholarship has been dedicated to generative agents in robotics, software development, and human-like conversation Wu et al. [2023], a void exists in discussions centered on agents with graph reasoning capabilities. Knowledge Graphs, representing structures of human knowledge, remain important. Efficacious reasoning atop KGs has significance. Our current Graph Agent only exhibits capacities for shallow reasoning on knowledge graphs, thereby accessing the surface layers of graph information. There exists an exigency for concerted efforts to formulate graph reasoning datasets and train graph foundation model Liu et al. [2023b] for graph agents. The development of evaluative metrics that truly reflect the logical and factual correctness of graph reasoning remains a challenge. 5 Conclusion Existing implicit graph reasoning methods lack explainability. To address this issue, we proposed the Graph Agent methodology with long-term memory and an inductive-deductive reasoning module. Graph Agent has demonstrated high efficacy and transparency of predictions within Knowledge graphs. Beyond the promising results, the uniqueness of our approach lies in addressing the longstanding explainability challenge in graph reasoning. As a first-of-its-kind framework, the current Graph Agent implementation is naive and primitive. Future studies could investigate advanced Graph Agents that can truly voyage and learn in human knowledge networks. 10 PREPRINT References Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d’Amato, Gerard De Melo, Claudio Gutierrez, Sabrina Kirrane, José Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, et al. Knowledge graphs. ACM Computing Surveys (Csur), 54(4):1–37, 2021. Jeff Z Pan, Simon Razniewski, Jan-Christoph Kalo, Sneha Singhania, Jiaoyan Chen, Stefan Dietze, Hajira Jabeen, Janna Omeliyanenko, Wen Zhang, Matteo Lissandrini, et al. Large language models and knowledge graphs: Opportunities and challenges. arXiv preprint arXiv:2308.06374, 2023. Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. AI open, 1:57–81, 2020. Enyan Dai, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, and Suhang Wang. A comprehensive survey on trustworthy graph neural networks: Privacy, robustness, fairness, and explainability. arXiv preprint arXiv:2204.08570, 2022. Ke Wang, Hang Hua, and Xiaojun Wan. Controllable unsupervised text attribute transfer via editing entangled latent representation. Advances in Neural Information Processing Systems, 32, 2019a. Xiangxiang Zeng, Xinqi Tu, Yuansheng Liu, Xiangzheng Fu, and Yansen Su. Toward better drug discovery with knowledge graph. Current opinion in structural biology, 72:114–126, 2022. Xintong Zhao, Jane Greenberg, Scott McClellan, Yong-Jie Hu, Steven Lopez, Semion K Saikin, Xiaohua Hu, and Yuan An. Knowledge graph-empowered materials discovery. In 2021 IEEE International Conference on Big Data (Big Data), pages 4628–4632. IEEE, 2021. Lia Jamian, Lee Wheless, Leslie J Crofford, and April Barnado. 
Rule-based and machine learning algorithms identify patients with systemic sclerosis accurately in the electronic health record. Arthritis research & therapy, 21:1–9, 2019. Liang Yao, Chengsheng Mao, and Yuan Luo. Clinical text classification with rule-based features and knowledge-guided convolutional neural networks. BMC medical informatics and decision making, 19(3): 31–39, 2019. Vaishak Belle. Symbolic logic meets machine learning: A brief survey in infinite domains. In International conference on scalable uncertainty management, pages 3–16. Springer, 2020. J Betty Jane and EN Ganesh. A review on big data with machine learning and fuzzy logic for better decision making. Int. J. Sci. Technol. Res, 8:1121–1125, 2019. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pages 10764–10799. PMLR, 2023. Jiayan Guo, Lun Du, and Hengyu Liu. Gpt4graph: Can large language models understand graph structured data? an empirical evaluation and benchmarking. arXiv preprint arXiv:2305.15066, 2023. Joon Sung Park, Joseph C O’Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023. Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. Metagpt: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 2023. 11 PREPRINT Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. Advances in Neural Information Processing Systems, 35:18343–18362, 2022. Dhruv Shah, Bła˙zej Osi´nski, Sergey Levine, et al. Lm-nav: Robotic navigation with large pre-trained models of language, vision, and action. In Conference on Robot Learning, pages 492–504. PMLR, 2023. Wanjun Zhong, Lianghong Guo, Qiqi Gao, and Yanlin Wang. Memorybank: Enhancing large language models with long-term memory. arXiv preprint arXiv:2305.10250, 2023. Paul Bartha. Analogy and analogical reasoning. 2013. Siyu Yuan, Jiangjie Chen, Changzhi Sun, Jiaqing Liang, Yanghua Xiao, and Deqing Yang. Analogykb: Unlocking analogical reasoning of language models with a million-scale knowledge base. arXiv preprint arXiv:2305.05994, 2023. Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017. Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In International conference on machine learning, pages 6861–6871. PMLR, 2019. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio, et al. Graph attention networks. stat, 1050(20):10–48550, 2017. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 
Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In The Semantic Web: 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3–7, 2018, Proceedings 15, pages 593–607. Springer, 2018. Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. Heterogeneous graph transformer. In Proceedings of the web conference 2020, pages 2704–2710, 2020. Song Bai, Feihu Zhang, and Philip HS Torr. Hypergraph convolution and hypergraph attention. Pattern Recognition, 110:107637, 2021. Erxue Min, Runfa Chen, Yatao Bian, Tingyang Xu, Kangfei Zhao, Wenbing Huang, Peilin Zhao, Junzhou Huang, Sophia Ananiadou, and Yu Rong. Transformer for graphs: An overview from architecture perspective. arXiv preprint arXiv:2202.08455, 2022. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022a. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023. Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei. Augmenting language models with long-term memory. arXiv preprint arXiv:2306.07174, 2023. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. 12 PREPRINT Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14953–14962, 2023. Qiao Jin, Yifan Yang, Qingyu Chen, and Zhiyong Lu. Genegpt: Teaching large language models to use ncbi web apis. arXiv preprint arXiv:2304.09667, 2023. Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al. Agentbench: Evaluating llms as agents. arXiv preprint arXiv:2308.03688, 2023a. Zhikai Chen, Haitao Mao, Hang Li, Wei Jin, Hongzhi Wen, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Wenqi Fan, Hui Liu, et al. Exploring the potential of large language models (llms) in learning on graphs. arXiv preprint arXiv:2307.03393, 2023a. Xiaoxin He, Xavier Bresson, Thomas Laurent, and Bryan Hooi. Explanations as features: Llm-based features for text-attributed graphs. arXiv preprint arXiv:2305.19523, 2023. Zhikai Chen, Haitao Mao, Hongzhi Wen, Haoyu Han, Wei Jin, Haiyang Zhang, Hui Liu, and Jiliang Tang. Label-free node classification on graphs with large language models (llms). arXiv preprint arXiv:2310.04668, 2023b. Ruosong Ye, Caiqi Zhang, Runhui Wang, Shuyuan Xu, and Yongfeng Zhang. Natural language is all a graph needs. arXiv preprint arXiv:2308.07134, 2023. Jianan Zhao, Le Zhuo, Yikang Shen, Meng Qu, Kai Liu, Michael Bronstein, Zhaocheng Zhu, and Jian Tang. Graphtext: Graph reasoning in text space. arXiv preprint arXiv:2310.01089, 2023. 
Ziwei Chai, Tianjie Zhang, Liang Wu, Kaiqiao Han, Xiaohai Hu, Xuanwen Huang, and Yang Yang. Graphllm: Boosting graph reasoning ability of large language model. arXiv preprint arXiv:2310.05845, 2023. Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao. Deep learning–based text classification: a comprehensive review. ACM computing surveys (CSUR), 54(3): 1–40, 2021. Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In International conference on machine learning, pages 40–48. PMLR, 2016. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017. Guohao Li, Matthias Müller, Bernard Ghanem, and Vladlen Koltun. Training graph neural networks with 1000 layers. In International conference on machine learning, pages 6437–6449. PMLR, 2021. Weirui Kuang, WANG Zhen, Yaliang Li, Zhewei Wei, and Bolin Ding. Coarformer: Transformer for large graph via graph coarsening. 2021. Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34:28877–28888, 2021. Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699, 2020. Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, and Doina Precup. Revisiting heterophily for graph neural networks. Advances in neural information processing systems, 35:1362–1375, 2022. Payal Chandak, Kexin Huang, and Marinka Zitnik. Building a knowledge graph to enable precision medicine. Scientific Data, 10(1):67, 2023. Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S Yu. Heterogeneous graph attention network. In The world wide web conference, pages 2022–2032, 2019b. 13 PREPRINT Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022b. Vipula Rawte, Amit Sheth, and Amitava Das. A survey of hallucination in large foundation models. arXiv preprint arXiv:2309.05922, 2023. Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023. Jiawei Liu, Cheng Yang, Zhiyuan Lu, Junze Chen, Yibo Li, Mengmei Zhang, Ting Bai, Yuan Fang, Lichao Sun, Philip S Yu, et al. Towards graph foundation models: A survey and beyond. arXiv preprint arXiv:2310.11829, 2023b. 14
ai_researcher
4
One_STEP_at_a_time_Language_Agents_are_Stepwise_Planners.pdf
arXiv:2207.06012v1 [math.NA] 13 Jul 2022

NySALT: Nyström-type inference-based schemes adaptive to large time-stepping

Xingjie Li, Fei Lu, Molei Tao, Felix Ye

July 14, 2022

Abstract

Large time-stepping is important for efficient long-time simulations of deterministic and stochastic Hamiltonian dynamical systems. Conventional structure-preserving integrators, while being successful for generic systems, have limited tolerance to time step size due to stability and accuracy constraints. We propose to use data to innovate classical integrators so that they can be adaptive to large time-stepping and are tailored to each specific system. In particular, we introduce NySALT, Nyström-type inference-based schemes adaptive to large time-stepping. The NySALT has optimal parameters for each time step learnt from data by minimizing the one-step prediction error. Thus, it is tailored for each time step size and the specific system to achieve optimal performance and tolerate large time-stepping in an adaptive fashion. We prove and numerically verify the convergence of the estimators as the data size increases. Furthermore, analysis and numerical tests on the deterministic and stochastic Fermi-Pasta-Ulam (FPU) models show that NySALT enlarges the maximal admissible step size of linear stability, and quadruples the time step size of the Störmer–Verlet and the BAOAB schemes while maintaining similar levels of accuracy.

Keywords: symplectic integrator, Hamiltonian system, Langevin dynamics, inference-based scheme, model reduction, Fermi-Pasta-Ulam models

Contents
1 Introduction
2 Hamiltonian systems and parametric symplectic integrators
  2.1 Hamiltonian systems and symplectic maps
  2.2 Deterministic Symplectic Nyström scheme
  2.3 Stochastic Symplectic Nyström scheme
3 A flow map approximation framework for learning integrators
  3.1 Flow map approximation for Hamiltonian systems
  3.2 Flow map approximation for Langevin systems
  3.3 Convergence of the parameter estimator
  3.4 Statistical error at arbitrary time
4 Optimal parameters for linear systems
  4.1 Linear Hamiltonian systems
  4.2 Linear Langevin systems
5 The benchmark problems: Fermi-Pasta-Ulam (FPU) model
  5.1 The FPU system
  5.2 NySALT for the deterministic FPU
  5.3 NySALT for the stochastic FPU
6 Conclusion
A Derivative of the cost function

1 Introduction

Efficient simulations of Hamiltonian dynamical systems and their stochastic generalizations play an essential role in many applications where the goal is to capture and predict both short time and long time dynamics.
Conventional structure-preserving integrators have achieved tremendous success in preserving structures such as symplecticity, reversibility, manifold structure, other physical constraints, and even statistical properties in long-time simulations (see [6, 12, 13, 16, 23, 28–30, 41–43, 54, 59, 63, 64, 69–73] and the references therein). Due to their remarkable suitability for Hamiltonian systems, this article focuses on symplectic integrators. Meanwhile, the time step of generic (symplectic) integrators is often limited by the stiffness of these systems, making the simulation computationally costly. Moreover, conventional integrators aim at generic systems and do not take each specific Hamiltonian into account. We aim to use data to construct large time-stepping integrators that tolerate large time steps while maintaining symplecticity, stability, and accuracy, and, more importantly, that are tailored to each specific Hamiltonian in an automatic fashion.

It is a fast-growing research area to leverage information from data and combine statistical tools with traditional scientific computing methodology. This work falls under this umbrella: we utilize statistical learning tools to approximate a discrete-time flow map and then construct large time-stepping integrators.

In this paper, we propose to construct large time-stepping integrators by inferring optimal parameters of classical structure-preserving integrators in a flow map approximation framework. The inferred schemes are adaptive to the time-step size; thus they allow large time-stepping while maintaining stability and accuracy. Furthermore, the parameters are low-dimensional and can be learned from limited data consisting of short trajectories, and the estimator converges as the data size increases under suitable conditions. Consequently, the inferred integrators are robust and generalizable beyond the training set (see Section 3). Works of a similar spirit can be found in [17, 48, 80], where neural network based approximations lead to large time-stepping integrators. However, neural networks are computationally expensive to train and their parameters are often sensitive to the training data, making it difficult to systematically investigate properties such as the maximal admissible time step size of stability. Besides, most designs of neural networks are disconnected from classical numerical integrators.

As a benchmark application, we focus on parametric integrators in the Nyström family (see, e.g., the descriptions in [28]), which includes the popular Störmer–Verlet method. From observed data, we then construct the NySALT scheme: a Nyström-type inference-based scheme adaptive to large time-stepping. The NySALT attains optimal parameters for each time step, learnt from data by minimizing the one-step prediction error. Linear stability analysis is also established to verify our premise, namely that the optimal parameters should indeed differ from those of the Störmer–Verlet method, and that the resulting NySALT has a larger maximal admissible step size for linear stability (see Section 4).

We examine the performance of NySALT on widely-used benchmark stiff nonlinear systems: a deterministic Fermi-Pasta-Ulam (FPU) model, as well as its stochastic (Langevin) generalization.
Numerical results show that the inference of NySALT is robust: the estimators are independent of the fine data generators, they converge as the number of trajectories (the size of the observed data) increases, and they stabilize very fast (within a dozen short trajectories). The results also show that NySALT is accurate: it is adaptive to large time step sizes, and it improves the accuracy of trajectories over multiple time scales and of statistics over the long time scale. Lastly, NySALT is efficient: it enlarges the admissible time step size of stability of classical schemes such as the Störmer–Verlet and the BAOAB methods [28, 41, 42] (see Section 5) and significantly reduces the simulation time.

Our main contributions are threefold.
• We propose to infer large time-stepping, structure-preserving integrators from data in a flow-map approximation framework, in which we select optimal parameters in a family of classical geometric numerical integrators by minimizing the flow map approximation error. When the family is the Nyström family, the result is the NySALT scheme.
• The inference procedure of NySALT is robust, and the scheme is generalizable beyond the training set.
• Analysis and numerical tests show that the NySALT scheme is efficient and accurate with large time step sizes.

Meanwhile, many works that employ data-driven approaches have tackled parts of our goal, but not all of it. These related works are usually categorized based on their models or methods, and we summarize them here:
• Learning large time-stepping integrators. It is an emerging research direction to learn large time-stepping integrators from data. When the system is known, MDNet, based on a graph neural network [80], enables the simulation of microcanonical (i.e., Hamiltonian) molecular dynamics with large steps; a stochastic collocation method in [48] and a parametric inference approach in [45] have led to large time-stepping integrators for SDEs. This study extends the parametric inference approach in [45] to Hamiltonian systems. When the system is unknown, the generating function neural network (GFNN) in [17] learns symplectic maps and proves a significant benefit of doing so, namely a linearly growing bound on the long-time prediction error.
• Learning the Hamiltonian or the system. A very active research area is to recover the dynamics that generate observed, discrete time-series data, and then use the learned dynamics to predict further evolution. Examples of existing work for Hamiltonian systems include [10, 17, 18, 26, 32, 51, 76, 78, 79, 81]; while early seminal work learned Hamiltonian vector fields without truly preserving symplecticity, later results leveraged various tools, including symplectic integrators [18], compositions of triangular maps [32], and generating functions [17], to fix this imperfection. All of this research, however, assumes that the governing dynamics is unknown (i.e., 'latent' in machine learning terminology), which differs from our setup, as we instead seek a good numerical integrator for a given Hamiltonian.
• Integrators of multiscale Hamiltonian systems. There has been remarkable progress in generic upscaled integration of stiff and multiscale ODE systems (e.g., [1, 4, 15, 22, 33, 34, 73]), and although fewer results exist for generic multiscale symplectic integrators (e.g., [73]), multiscale symplectic integrators for specific classes of problems have also been constructed (e.g., [21, 25, 27, 38, 66, 74, 77]).
While each of these integrators is tremendously useful for a specific class of systems, a complete re-design is likely necessary when a system falls outside that class. Our integrator, in contrast, is tailored to each specific Hamiltonian automatically as the outcome of the inference procedure.
• Model reduction and time series modeling. Large time-stepping schemes can also be viewed as model reduction in time for differential equations (DEs). A more challenging task is model reduction in both space and time, i.e., reducing the spatial dimension and integrating with large time steps, for high-dimensional DEs or PDEs. This is an extremely active research area (see, e.g., [19, 22, 31, 35, 39, 40, 46, 50, 53, 68] and the references therein for a small sample of the important works). While it is impossible to review all important works, we mention the proper symplectic decomposition with Galerkin projection methods for Hamiltonian systems [3, 14, 61], the time series approaches (see, e.g., [36, 47, 49]), and the deep learning methods that solve PDEs (see, e.g., [7, 52]).

2 Hamiltonian systems and parametric symplectic integrators

We briefly review a few preliminary concepts of Hamiltonian systems and symplectic integrators. Classical symplectic integrators are designed as universal methods for all systems, and they require a small time-step size to be accurate. To tolerate large time-step sizes, we will infer adaptive symplectic integrators from data; thus we focus on symplectic schemes with parameters, which can be optimized by statistical inference from data.

2.1 Hamiltonian systems and symplectic maps

Let $H(p, q)$ be a Hamiltonian function on $\mathbb{R}^d \times \mathbb{R}^d$, where $p$ is the momentum and $q$ denotes the position. Consider the Hamiltonian ODE system
$$dq(t) = \frac{\partial H}{\partial p}\,dt, \qquad dp(t) = -\frac{\partial H}{\partial q}\,dt. \tag{2.1}$$
Let $X = (q, p)$ and let $J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}$ be a $2d \times 2d$ matrix with $0$ and $I$ being $d \times d$ block matrices. We can write the Hamiltonian system as $dX(t) = J\,\nabla_X H\,dt$.

A symplectic map is a differentiable map $\phi: \mathbb{R}^{2d} \to \mathbb{R}^{2d}$ whose Jacobian matrix $\nabla\phi$ satisfies
$$\nabla\phi(X)^{\top} J\,\nabla\phi(X) = J, \qquad \forall\, X \in \mathbb{R}^{2d}.$$
Hamiltonian systems have a characteristic property: the flow of a Hamiltonian system is a symplectic map. More precisely, let $f: U \to \mathbb{R}^{2d}$ be a continuously differentiable function on an open set $U \subset \mathbb{R}^{2d}$. Then $dy = f(y)\,dt$ is locally Hamiltonian if and only if its flow $\phi_t(y)$ is symplectic for all $y \in U$ and for all sufficiently small $t$ ([28, Theorem 2.6, page 185]). Furthermore, the flow is symplectic for any $t$ if the ODE is Hamiltonian, as long as the flow is well posed (see, e.g., [2, 5]). This property is the starting point of our data-driven construction of numerical flows: we seek flows that are symplectic and are described by parameters to be estimated from data.

There are various parametric families of symplectic integrators (see [28]). In this study we consider the Nyström family, which provides an explicit time integrator with two parameters. In particular, the widely-used Störmer–Verlet method is a member of this family.

We focus on systems with separable Hamiltonian $H = K(p) + V(q)$, with $K(p) = \frac{1}{2}\|p\|^2$ and $V(q)$ satisfying $g(q) = -\nabla V(q)$, i.e., systems of the form
$$dq(t) = p\,dt, \qquad dp(t) = g(q)\,dt. \tag{2.2}$$
We will also consider damped stochastic Hamiltonian systems, also called Langevin dynamics:
$$dq(t) = p\,dt, \qquad dp(t) = \big(g(q) - \gamma p\big)\,dt + \sigma\,dW_t, \tag{2.3}$$
where $\gamma$ represents the friction coefficient and $W_t$ is a standard (multi-dimensional) Wiener process.
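As a minimal numerical illustration (not taken from the paper), the following Python sketch checks the symplecticity condition $\nabla\phi(X)^\top J\,\nabla\phi(X) = J$ for the exact flow of a one-dimensional harmonic oscillator; the test values `Omega`, `t`, and the finite-difference Jacobian are illustrative assumptions.

```python
import numpy as np

Omega, t = 4.0, 0.7          # assumed toy values, not from the paper
w = np.sqrt(Omega)

def flow(X):
    """Exact flow (q, p) -> (q(t), p(t)) of dq = p dt, dp = -Omega q dt."""
    q, p = X
    return np.array([np.cos(w * t) * q + np.sin(w * t) / w * p,
                     -w * np.sin(w * t) * q + np.cos(w * t) * p])

def jacobian(f, X, eps=1e-6):
    """Central finite-difference Jacobian of a map f at X."""
    n = len(X)
    Jac = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n); e[k] = eps
        Jac[:, k] = (f(X + e) - f(X - e)) / (2 * eps)
    return Jac

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
D = jacobian(flow, np.array([0.3, -1.2]))
print(np.allclose(D.T @ J @ D, J, atol=1e-8))   # True: the flow is symplectic
```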
2.2 Deterministic Symplectic Nyström scheme

First, let us recall the $s$-stage Nyström method for the second order differential equation (2.2), following the notation of [28]. The Nyström method advances the dynamics from $t_0$ to $t_1$ with step size $h$ as
$$\ell_i = g\Big(q_n + c_i h p_n + h^2\sum_{j=1}^{s} a_{ij}\ell_j\Big),\ i = 1,\dots,s, \qquad q_{n+1} = q_n + h p_n + h^2\sum_{i=1}^{s}\beta_i\ell_i, \qquad p_{n+1} = p_n + h\sum_{i=1}^{s} b_i\ell_i, \tag{2.4}$$
where $\{a_{ij}\}_{i,j=1}^{s}$, $\{b_i\}_{i=1}^{s}$, $\{\beta_i\}_{i=1}^{s}$, and $\{c_i\}_{i=1}^{s}$ are parameters to be specified. To have an explicit scheme, it is required that $\ell_i$ depends only on $\ell_j$ with $j < i$; hence $a_{ij} = 0$ when $j \ge i$.

We focus on the explicit two-stage Nyström methods, which include the widely-used Störmer–Verlet method [28]. We denote this family by $S^h_{b_1,\beta_1}$:
$$\begin{bmatrix} q_{n+1} \\ p_{n+1} \end{bmatrix} = S^h_{b_1,\beta_1}\Big(\begin{bmatrix} q_n \\ p_n \end{bmatrix}\Big) = \begin{bmatrix} q_n + h p_n + h^2(\beta_1\ell_1 + \beta_2\ell_2) \\ p_n + h(b_1\ell_1 + b_2\ell_2) \end{bmatrix}, \tag{2.5}$$
where
$$\ell_1 = g(q_n + h c_1 p_n) \quad\text{and}\quad \ell_2 = g(q_n + h c_2 p_n + h^2 a_{21}\ell_1). \tag{2.6}$$
Meanwhile, applying Taylor expansion to $q_n$ and $p_n$ in the second and third equations of (2.4) at $t_0$, we get consistency constraints on the parameters $\{\beta_i\}$ and $\{b_i\}$:
$$\sum_{i=1}^{2}\beta_i = \frac{1}{2} \quad\text{and}\quad \sum_{i=1}^{2} b_i = 1. \tag{2.7}$$
Notice that the general Nyström scheme is not necessarily structure-preserving, so we need additional constraints on the parameters for the symplectic property. From [28, Chapter IV, Theorem 2.5], a sufficient condition is
$$\beta_i = b_i(1 - c_i)\ \text{for } i = 1, 2, \qquad b_i(\beta_j - a_{ij}) = b_j(\beta_i - a_{ji})\ \text{for } i, j = 1, 2. \tag{2.8}$$
Combining the constraint $a_{ij} = 0$ for $j \ge i$ with (2.7)–(2.8), we obtain the following conditions for an explicit and symplectic two-stage Nyström scheme:
$$\text{free parameters: } 0 < b_1 < 1,\ 0 \le \beta_1 \le \tfrac{1}{2}; \qquad b_1 + b_2 = 1,\ \beta_1 + \beta_2 = \tfrac{1}{2},\ c_i = 1 - \frac{\beta_i}{b_i}\ (i = 1, 2),\ a_{11} = a_{12} = a_{22} = 0,\ a_{21} = b_1(c_2 - c_1). \tag{2.9}$$
Notice that the well-known Störmer–Verlet method belongs to this category, with $b_1 = 1/2$ and $\beta_1 = 1/2$. The above explicit and symplectic two-stage Nyström integrator is of second order accuracy $O(h^2)$. Our NySALT scheme is this integrator with the optimal parameters $b_1^*$ and $\beta_1^*$, which are learnt from data by minimizing the one-step prediction error (for details, see Section 3.1).

Limited time step size of a classical numerical integrator. A major efficiency bottleneck of the majority of explicit numerical integrators is the limited time step size when the system is stiff. As an illustration, consider the Störmer–Verlet method and our benchmark example, the FPU model (5.1). By considering only the quadratic terms (i.e., the stiff harmonic oscillators) in the Hamiltonian, we obtain a linear stability condition on $h$ for this integrator: $|h\omega/2| < 1$, or equivalently $h < 2/\omega$. Importantly, the accuracy of the integrator deteriorates quickly as $h$ increases, even when it is just half of the stability constraint. Figure 1 demonstrates this issue for the Störmer–Verlet method, using the FPU model with $m = 3$ and $\omega = 50$ on the time interval $[0, 500]$. It shows the trajectories of the stiff energies ($I_j$, $j = 1, \dots, 3$) and the total stiff energy $I$ defined in (5.4) from the Störmer–Verlet integrator with two time step sizes: the fine step $h = 10^{-4}$ and the coarse step $\delta = 0.02$, which is 200 times the fine step. Clearly, the method becomes inaccurate at $\delta = 0.02$, which is still within the linear stability region $\delta < 2/\omega = 0.04$. In comparison, our NySALT scheme with the coarse time step performs as accurately as the fine-step integration; in particular, the total stiff energy $I$ is well conserved over the long time interval. A more detailed analysis is given in Section 5.2. A minimal code sketch of one step of the scheme (2.5)–(2.9) follows.
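The following Python sketch implements one step of the explicit symplectic two-stage Nyström scheme (2.5)–(2.6), deriving the dependent coefficients from the free parameters via (2.9). It is an illustrative sketch, not the authors' code; the toy harmonic force `g` is an assumption. The default $(b_1, \beta_1) = (1/2, 1/2)$ recovers Störmer–Verlet.

```python
import numpy as np

def nystrom_params(b1, beta1):
    """Derive the dependent coefficients from (b1, beta1) via (2.9)."""
    b2, beta2 = 1.0 - b1, 0.5 - beta1
    c1, c2 = 1.0 - beta1 / b1, 1.0 - beta2 / b2
    a21 = b1 * (c2 - c1)
    return b2, beta2, c1, c2, a21

def nystrom_step(q, p, h, g, b1=0.5, beta1=0.5):
    """One step of the explicit symplectic two-stage Nystrom scheme
    (2.5)-(2.6); the defaults recover Stormer-Verlet."""
    b2, beta2, c1, c2, a21 = nystrom_params(b1, beta1)
    l1 = g(q + h * c1 * p)
    l2 = g(q + h * c2 * p + h**2 * a21 * l1)
    q_new = q + h * p + h**2 * (beta1 * l1 + beta2 * l2)
    p_new = p + h * (b1 * l1 + b2 * l2)
    return q_new, p_new

# usage on a toy harmonic oscillator g(q) = -Omega q (assumed example)
g = lambda q: -4.0 * q
q, p = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    q, p = nystrom_step(q, p, 1e-2, g)
```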
Figure 1: Trajectories of each energy $I_j$ as well as the total stiff energy $I$ (5.4) of the deterministic FPU (5.1) with $m = 3$ and $\omega = 50$, computed by the Störmer–Verlet scheme with time step sizes $h = 10^{-4}$ in (a) and $\delta = 0.02$ in (b), and by the NySALT scheme with $\delta = 0.02$ in (c).

2.3 Stochastic Symplectic Nyström scheme

To obtain a parametric scheme for the Langevin dynamics, we introduce a new splitting scheme that combines the Nyström integrator with an Ornstein–Uhlenbeck process; we call it a stochastic Symplectic Nyström scheme. This is a splitting method (e.g., [13, 56]), similar to the BAOAB and ABOBA schemes (e.g., [42, 67]). We break the Langevin dynamics into two pieces, the Hamiltonian part and the Ornstein–Uhlenbeck (OU) process part:
$$\begin{bmatrix} dq(t) \\ dp(t) \end{bmatrix} = \begin{bmatrix} p \\ g(q) \end{bmatrix} dt + \begin{bmatrix} 0 \\ -\gamma p\,dt + \sigma\,dW_t \end{bmatrix}. \tag{2.10}$$
Each piece is solved separately: the scheme combines the symplectic Nyström approximation of the Hamiltonian contribution with an Ornstein–Uhlenbeck (OU) integrator approximation of the friction and thermal diffusion of the system. Given a time step $h$, the scheme reads
$$\text{Deterministic Symplectic Nyström step: } \begin{bmatrix} q_{n+1} \\ \tilde p_{n+1} \end{bmatrix} = S^h_{b_1,\beta_1}\Big(\begin{bmatrix} q_n \\ p_n \end{bmatrix}\Big); \qquad \text{Ornstein–Uhlenbeck step: } p_{n+1} = e^{-\gamma h}\,\tilde p_{n+1} + \xi_n, \tag{2.11}$$
where $\{\xi_n\}$ is a sequence of independent identically distributed Gaussian vectors with distribution $N\big(0, \frac{\sigma^2}{2\gamma}(1 - e^{-2\gamma h})\,I_d\big)$. The symplectic integrator $S^h_{b_1,\beta_1}$ is the two-stage Nyström integrator in (2.5); thus it preserves the Hamiltonian contribution of the stochastic system and enhances the numerical stability. The second part, for the stochastic force, is based on the exact solution of the OU process, and it leads to a local error of order $O(h^{1.5})$ in $q$. Similar to the deterministic case, this stochastic Symplectic Nyström scheme is a family of numerical schemes, and our NySALT scheme is the member with the optimal parameters $b_1^*$ and $\beta_1^*$, learnt from data by minimizing the one-step prediction error (for details, see Section 3.2). Thus, the NySALT scheme is of local strong order $h^{1.5}$ (see Remark 4.5 for a derivation for the linear system; we refer to, for instance, [75] for a thorough study of the strong order of splitting schemes for Langevin dynamics). A code sketch of one step of this splitting appears at the end of this subsection.

Limited time step size of a classical numerical integrator. As in the deterministic case, numerical integrators for stochastic systems tolerate only a limited time step size. To demonstrate this, we consider Langevin dynamics with the FPU potential, friction coefficient $\gamma = 0.01$, and diffusion coefficient $\sigma = 0.05$. The total energy $I$ in this example is stochastic, so we calculate its time auto-covariance function (ACF). Figure 2 shows the ACF computed by the BAOAB scheme [42], one of the state-of-the-art symplectic integrators, in comparison with our NySALT scheme, both using the coarse step size $\delta = 0.02$. The reference is computed by BAOAB with the fine step size $h = 10^{-4}$. As can be seen, the BAOAB scheme produces an inaccurate ACF, while the NySALT scheme remains very reliable. A more detailed analysis is given in Section 5.3.

Figure 2: Comparison between the NySALT scheme and the BAOAB scheme when the coarse step size is $\delta = 0.02$, with the reference being the BAOAB scheme with the fine step size $h = 10^{-4}$. Left: Trajectories of the total stiff energy $I$ by both schemes. Right: Time auto-covariance function (ACF) (5.10) of the total stiff energy $I$ by both schemes.
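Here is a self-contained Python sketch of one step of the splitting (2.11): a symplectic two-stage Nyström substep followed by an exact OU substep. It is illustrative only; the function name, the parameter defaults, and the toy force in the usage line are assumptions, not the paper's implementation.

```python
import numpy as np

def stochastic_nystrom_step(q, p, h, g, gamma, sigma, rng, b1=0.5, beta1=0.4):
    """One step of the splitting scheme (2.11)."""
    # Nystrom substep, coefficients derived from (2.9)
    b2, beta2 = 1.0 - b1, 0.5 - beta1
    c1, c2 = 1.0 - beta1 / b1, 1.0 - beta2 / b2
    a21 = b1 * (c2 - c1)
    l1 = g(q + h * c1 * p)
    l2 = g(q + h * c2 * p + h**2 * a21 * l1)
    q_new = q + h * p + h**2 * (beta1 * l1 + beta2 * l2)
    p_tilde = p + h * (b1 * l1 + b2 * l2)
    # OU substep: exact in distribution for dp = -gamma p dt + sigma dW
    std = sigma * np.sqrt((1.0 - np.exp(-2.0 * gamma * h)) / (2.0 * gamma))
    p_new = np.exp(-gamma * h) * p_tilde + std * rng.standard_normal(p.shape)
    return q_new, p_new

rng = np.random.default_rng(0)
q, p = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    q, p = stochastic_nystrom_step(q, p, 1e-2, lambda q: -4.0 * q,
                                   gamma=0.01, sigma=0.05, rng=rng)
```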
3 A flow map approximation framework for learning integrators

Many classical numerical integrators are derived from (Itô–)Taylor expansions for small time-stepping. Thus, they are universal and accurate when the time step size is small. However, they are not designed for integration with large time-stepping.

We introduce a flow map approximation framework to learn, from data, numerical integrators that are adaptive to large time step sizes. The fundamental idea is to approximate the discrete-time flow map by a function with parameters inferred from data. This approach takes advantage of the information in the data, which consists of multiple trajectories generated by an accurate numerical integrator. It consists of three steps: data generation, parametric form derivation, and parameter estimation. The framework applies to both deterministic and stochastic dynamical systems, in which we treat the stochastic forcing as an input. In the following, we first introduce this approach; then we analyze the convergence of the parametric estimator and the error bounds of the learnt integrator.

3.1 Flow map approximation for Hamiltonian systems

Let $X_t = \begin{bmatrix} q_t \\ p_t \end{bmatrix}$ denote the state of a Hamiltonian system satisfying
$$\frac{dX_t}{dt} = b(X_t) = J\,\nabla_X H, \qquad\text{where } J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}.$$
The exact discrete-time flow map of $X_t$ on the coarse grid $\{t_i = i\delta\}_{i=0}^{N_t}$ satisfies $X_{t_{i+1}} - X_{t_i} = \delta\,\mathcal{F}(X_{t_i}, \delta)$, where $\mathcal{F}$ is a function preserving the symplectic structure (i.e., the phase-space volume of a closed surface is preserved). Since the coarse step $\delta$ is relatively large, a classical numerical integrator becomes inaccurate (see Figure 1).

To obtain a numerical integrator with coarse step size $\delta$, we infer from multiple-trajectory data a symplectic function $F_\theta(X_{t_i}, \delta)$ that approximates the flow map $\mathcal{F}(X_{t_i}, \delta)$:
$$X_{t_{i+1}} - X_{t_i} \approx \delta\,F_\theta(X_{t_i}, \delta).$$
Here $\{F_\theta(X_{t_i}, \delta),\ \theta \in \Theta\}$ is a family of parametric functions whose parametric form comes from classical numerical integrators (see the discussion below). The inference procedure consists of three steps: data generation, parametric form derivation, and parameter estimation.

Data generation. We generate data consisting of multiple trajectories with random initial conditions, utilizing an accurate classical numerical scheme with a fine step size $h$, which is much smaller than $\delta$, i.e., $\delta = \mathrm{Gap}\cdot h$. The initial conditions $\{X^{(m)}_{t_0}\}_{m=1}^{M}$ are sampled from a given distribution $\mu$ on $\mathbb{R}^{2d}$, so that the trajectories explore the flow map sufficiently. Then, we down-sample these trajectories to obtain training data on the coarse grid $\{t_i = i\delta\}_{i=0}^{N_t}$, which we denote as
$$\text{Data: } \Big\{X^{(m)}_{t_i},\ i = 0, \dots, N_t\Big\}_{m=1}^{M} = \Big\{\begin{bmatrix} q^{(m)}_{t_i} \\ p^{(m)}_{t_i} \end{bmatrix},\ i = 0, \dots, N_t\Big\}_{m=1}^{M}.$$
(A minimal data-generation sketch is given below.)

Parametric form from the Nyström family. The major difficulty in our inference-based approach is the derivation of parametric symplectic maps. The symplectic structure is crucial for the flow maps of Hamiltonian systems. We propose to utilize families of classical numerical integrators, particularly those that come with parameters. As discussed in the introduction, there is a rich class of structure-preserving numerical integrators dedicated to the long-time simulation of Hamiltonian systems. For simplicity as well as flexibility, we consider the family of explicit two-stage Nyström methods for the parametric function $F_\theta(X_{t_i}, \delta)$. Specifically, for the system (2.2), we consider the parametric function $F_\theta$ from the two-stage Nyström method (2.5):
$$F_\theta(X_{t_i}, \delta) = \frac{1}{\delta}\Big(S^{\delta}_{b_1,\beta_1}(X_{t_i}) - X_{t_i}\Big) = \begin{pmatrix} p_{t_i} + \delta(\beta_1\ell_1 + \beta_2\ell_2) \\ b_1\ell_1 + b_2\ell_2 \end{pmatrix}, \tag{3.1}$$
where the terms $\ell_1$ and $\ell_2$ are defined as
$$\ell_1 = g(q_{t_i} + c_1\delta p_{t_i}) \quad\text{and}\quad \ell_2 = g(q_{t_i} + c_2\delta p_{t_i} + \delta^2 a_{21}\ell_1), \tag{3.2}$$
with the parameters $\{\beta_1, \beta_2, b_1, b_2, c_1, c_2, a_{21}\}$ satisfying (2.9). The free parameters to be estimated from data are
$$\theta = (b_1, \beta_1) \in \Theta = (0, 1) \times [0, \tfrac{1}{2}]. \tag{3.3}$$
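The following Python sketch illustrates the data-generation step above: fine integration with a user-supplied one-step map (e.g., the `nystrom_step` from the earlier sketch), keeping every `gap`-th state so that $\delta = \mathrm{Gap}\cdot h$. The function and argument names are hypothetical, and the one-degree-of-freedom layout is an assumption made for brevity.

```python
import numpy as np

def generate_training_data(step, g, sample_x0, M, Nt, h, gap, rng):
    """Generate M coarse-grid trajectories {X_{t_i}} by fine integration
    with one-step map `step(q, p, h, g)` and down-sampling by `gap`."""
    data = np.zeros((M, Nt + 1, 2))            # (q, p) pairs for a 1-DoF system
    for m in range(M):
        q, p = sample_x0(rng)                  # draw X_0^{(m)} ~ mu
        data[m, 0] = q, p
        for i in range(1, Nt + 1):
            for _ in range(gap):               # gap fine steps per coarse step
                q, p = step(q, p, h, g)
            data[m, i] = q, p
    return data                                # shape (M, Nt + 1, 2)
```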
As discussed in the introduction, there is a rich class of structure-preserving numerical integrators that are dedicated to long-time simulation of Hamiltonian systems. For simplicity as well as flexibility, we consider the family of the explicit 2-step Nystöm method for the parametric function FθpXti, δq. Specifically, for the system (2.2), we consider the parametric function Fθ from the 2-step Nystöm method in (2.5): FθpXti, δq “ ˘ Sδ b1,β1pXti ` 1 δ ´ Xtiq “ ˆ pti ` δ pβ1(cid:96)1 ` β2(cid:96)2q pb1(cid:96)1 ` b2(cid:96)2q ˙ , (3.1) where the terms (cid:96)1 and (cid:96)2 are defined as (cid:96)1 “ gpqti ` c1δptiq and (cid:96)2 “ gpqti ` c2δpti ` δ2a21(cid:96)1q, with parameters tβ1, β2, b1, b2, c1, c2, a21u satisfying (2.9). The free parameters to be estimated from data are (3.2) s. (3.3) θ “ pb1, β1q P Θ “ p0, 1q ˆ r0, 7 1 2 Parameter estimation. We estimate θ by minimizing the 1-step prediction error: where the loss function EM pθq is the 1-step prediction error and is computed from data: θ˚ M “ arg min θPΘ EM pθq, EM pθq “ 1 M Nt “ 1 M Nt Mÿ Nt´1ÿ m“1 Mÿ i“0 Nt´1ÿ m“1 i“0 › › ›FθpX pmq ti › › ›FθpX pmq ti , δq ´ FpX pmq ti , δq › › 2 › Σ´1 , δq ´ pXpmq ti`1 ´ Xpmq ti › › 2 › Σ´1 , q{δ (3.4) (3.5) Σ´1 as the notation of trace norm of Y T Σ´1Y , i.e, }Y }2 Σ´1 “ }Y T Σ´1Y }˚ “ TrpY T Σ´1Y q. Here with }Y }2 Σ is a diagonal weight matrix aiming to normalize the contributions of the entries. We set Σ to be a diagonal matrix with diagonal entries being the the mean of entrywise square of pXpmq Notice that the optimization problem is nonlinear because of the nonlinear function g in (2.2). Since the parameter is in a 2D rectangle and the loss function is smooth, we solved it by constrained nonlinear optimization with the interior point algorithm. ti`1 ´ Xpmq q{δ. ti As to be shown in Section 3.3, the estimator θ˚ M converges almost surely regarding M under suitable conditions on the uniqueness of the minimizer. We select a stabilized estimator (when the sample size is sufficiently large) as θ˚ for our NySALT scheme 3.2 Flow map approximation for Langevin systems The governing equation (2.3) to Langevin dynamics is written as Xti`1 ´ Xti “ δFθ˚pXti, δq. In integral form, we can write exact solution Xt on each coarse grid ttiu as dXt “ bpXtqdt ` σpXtiqdWt. Xti`1 ´ Xti “ ż ti`1 ti bpXsqds ` σpXtiqpWti`1 ´ Wtiq “ δFpXti, Wrti,ti`1s, δq. (3.6) (3.7) (3.8) Here the discrete-time flow map FpXti, Wrti,ti`1s, δq is an infinite-dimensional functional that depends on the path of the Brownian motion Wrti,ti`1s . In general, a numerical scheme approximates the discrete-time flow map by a function depending on Xti and a low-dimensional approximation of the Brownian path Wrti,ti`1s (either in distribution in the weak sense or trajectory-wisely in the strong sense). For example, the Euler- Maruyama scheme gives the function F pXti, ξi, δq “ bpXtiq ` σpXti qξi{δ with ξi “ Wti`1 ´ Wti „ N p0, δq. Due to their reliance on the Ito-Taylor expansion, these classical schemes require a small time step for accuracy. In order to allow a large time-stepping δ, from data we infer a parametric function FθpXti, ξi, δq to approximate the flow map FpXti, Wrti,ti`1s, δq, where ξi depends on the path W pmq . Similar to the deterministic case, the inference consists of three steps: data generation, parametric form derivation, and parameter estimation. Data generation. The data consists of both the process Xt and the stochastic force ξt. rti,ti`1q Data: tXpmq ti , ξpmq ti , i “ 0, . . . 
3.2 Flow map approximation for Langevin systems

The governing equation (2.3) of the Langevin dynamics is written as
$$dX_t = b(X_t)\,dt + \sigma\,dW_t. \tag{3.7}$$
In integral form, the exact solution $X_t$ on the coarse grid $\{t_i\}$ satisfies
$$X_{t_{i+1}} - X_{t_i} = \int_{t_i}^{t_{i+1}} b(X_s)\,ds + \sigma\big(W_{t_{i+1}} - W_{t_i}\big) = \delta\,\mathcal{F}\big(X_{t_i}, W_{[t_i,t_{i+1}]}, \delta\big). \tag{3.8}$$
Here the discrete-time flow map $\mathcal{F}(X_{t_i}, W_{[t_i,t_{i+1}]}, \delta)$ is an infinite-dimensional functional that depends on the path of the Brownian motion $W_{[t_i,t_{i+1}]}$. In general, a numerical scheme approximates the discrete-time flow map by a function depending on $X_{t_i}$ and a low-dimensional approximation of the Brownian path $W_{[t_i,t_{i+1}]}$ (either in distribution in the weak sense, or trajectory-wise in the strong sense). For example, the Euler–Maruyama scheme gives the function $F(X_{t_i}, \xi_i, \delta) = b(X_{t_i}) + \sigma\,\xi_i/\delta$ with $\xi_i = W_{t_{i+1}} - W_{t_i} \sim N(0, \delta)$. Due to their reliance on the Itô–Taylor expansion, these classical schemes require a small time step for accuracy.

In order to allow large time-stepping $\delta$, we infer from data a parametric function $F_\theta(X_{t_i}, \xi_i, \delta)$ that approximates the flow map $\mathcal{F}(X_{t_i}, W_{[t_i,t_{i+1}]}, \delta)$, where $\xi_i$ depends on the path $W^{(m)}_{[t_i,t_{i+1})}$. As in the deterministic case, the inference consists of three steps: data generation, parametric form derivation, and parameter estimation.

Data generation. The data consist of both the process $X_t$ and the stochastic force $\xi_t$:
$$\text{Data: } \Big\{X^{(m)}_{t_i}, \xi^{(m)}_{t_i},\ i = 0, \dots, N_t\Big\}_{m=1}^{M} = \Big\{\begin{bmatrix} q^{(m)}_{t_i} \\ p^{(m)}_{t_i} \end{bmatrix}, \xi^{(m)}_{t_i},\ i = 0, \dots, N_t\Big\}_{m=1}^{M}.$$
The initial conditions $\{X^{(m)}_{t_0}\}_{m=1}^{M}$ are sampled from a distribution $\mu$ on $\mathbb{R}^{2d}$, so that the short trajectories can explore the flow map sufficiently. Suppose the system is resolved accurately by an integrator with a fine time step size $h$. Then, as for the deterministic systems, a data trajectory $X_{t_i}$ is obtained by down-sampling the fine solution on the coarse grid $\{t_i = i\delta\}_{i=1}^{N_t}$.

However, the stochastic force cannot be down-sampled directly, since it has to follow the distribution required in (2.11). Here we use the one-step increment of the OU process to approximate $\xi_{t_i}$, which takes both the friction and the noise into account. Consider the OU process $dY_t = -\gamma Y_t\,dt + \sigma\,dW_t$; its solution over a coarse step $\delta$ can be expressed in terms of the noise at the fine time step $h$:
$$Y_\delta = e^{-\gamma\delta}Y_0 + \sigma\int_0^\delta e^{-\gamma(\delta-s)}\,dW_s = e^{-\gamma\delta}Y_0 + \sigma\sum_{j=1}^{\mathrm{Gap}}\int_{jh-h}^{jh} e^{-\gamma(\delta-s)}\,dW_s \approx e^{-\gamma\delta}Y_0 + \sigma\sqrt{\frac{1 - e^{-2\gamma h}}{2\gamma}}\sum_{j=1}^{\mathrm{Gap}} e^{-\gamma(\mathrm{Gap}-j)h}\,\frac{W_{jh} - W_{jh-h}}{\sqrt{h}},$$
where in the last step we used the fact that $\int_a^b e^{-\gamma(t-s)}\,dW_s \sim N\big(0, \frac{1}{2\gamma}\big(e^{-2\gamma(t-b)} - e^{-2\gamma(t-a)}\big)\big)$. Then the one-step increment at the time instants $t_i$ can be approximated by
$$\xi_{t_i} = \sigma\sqrt{\frac{1 - e^{-2\gamma h}}{2\gamma}}\sum_{j=1}^{\mathrm{Gap}} e^{-\gamma(\mathrm{Gap}-j)h} R_{i,j}, \tag{3.9}$$
where $R_{i,j} = \big(W_{((i-1)\mathrm{Gap}+j)h} - W_{((i-1)\mathrm{Gap}+j-1)h}\big)/\sqrt{h}$. Consequently, one can show that $\xi_{t_i} \sim N\big(0, \frac{\sigma^2}{2\gamma}(1 - e^{-2\gamma\delta})\big)$. (A numerical sketch of this coarsening is given at the end of this subsection.)

Parametric form from the Nyström family. We approximate the flow map by the parametric function $F_\theta(X_{t_i}, \xi_{t_i}, \delta)$ of the stochastic Symplectic Nyström scheme introduced in (2.11). It consists of the symplectic integrator $S^{\delta}_{b_1,\beta_1}$ and the Ornstein–Uhlenbeck integrator, where $\xi_{t_i}$ is the coarsened increment of the Brownian motion:
$$F_\theta(X_{t_i}, \xi_{t_i}, \delta) = \begin{pmatrix} p_{t_i} + \delta(\beta_1\ell_1 + \beta_2\ell_2) \\ \dfrac{e^{-\gamma\delta} - 1}{\delta}\,p_{t_i} + e^{-\gamma\delta}(b_1\ell_1 + b_2\ell_2) + \dfrac{\xi_{t_i}}{\delta} \end{pmatrix}, \tag{3.10}$$
where $\ell_1$ and $\ell_2$ are defined in (3.2). Thus, it has the same parametric form as the symplectic integrator $S^{\delta}_{b_1,\beta_1}$.

Parameter estimation. The parameter $\theta \in \Theta$ is estimated by minimizing the 1-step prediction error, as in the deterministic case, with the same parameter range as in (3.3):
$$\theta^*_M = \arg\min_{\theta\in\Theta}\mathcal{E}_M(\theta), \tag{3.11}$$
with
$$\mathcal{E}_M(\theta) = \frac{1}{MN_t}\sum_{m=1}^{M}\sum_{i=0}^{N_t-1}\Big\|X^{(m)}_{t_i} + \delta\,F_\theta\big(X^{(m)}_{t_i}, \xi^{(m)}_i, \delta\big) - X^{(m)}_{t_{i+1}}\Big\|^2_{\Sigma^{-1}}, \tag{3.12}$$
where $\{X^{(m)}_{t_i}, \xi^{(m)}_i\}_{m=1}^{M}$ are the down-sampled fine-scale data consisting of $M$ trajectories of the state and the coarsened increments of the stochastic force. $\Sigma$ is the diagonal weight matrix that normalizes the contributions of $p$ and $q$; we set $\Sigma$ to be a diagonal matrix whose diagonal entries are the means of the entrywise squares of $(X^{(m)}_{t_{i+1}} - X^{(m)}_{t_i})$. We note that the loss function is not the log-likelihood of the data, which is unavailable because the transition density of the stochastic Symplectic Nyström scheme is nonlinear and non-Gaussian, without an explicit form.

With the gradient of the 1-step prediction error explicitly calculated in Appendix A, we solve this constrained nonlinear optimization problem with the interior point algorithm. Since the estimator $\theta_M$ converges almost surely as $M$ increases (see Section 3.3), we select a stabilized estimator as $\theta^*$ for our inferred scheme
$$X_{t_{i+1}} - X_{t_i} = \delta\,F_{\theta^*}(X_{t_i}, \xi_{t_i}, \delta), \tag{3.13}$$
where $\xi_{t_i}$ is sampled from $N\big(0, \frac{\sigma^2}{2\gamma}(1 - e^{-2\gamma\delta})\big)$, the same distribution as the coarsened increments of the OU process.
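The following Python sketch implements the noise coarsening (3.9) and checks that the coarsened increments have the stated variance $\frac{\sigma^2}{2\gamma}(1 - e^{-2\gamma\delta})$. It is an illustrative sketch under the assumption that the fine Brownian increments are stored row-wise per coarse interval; the function name and array layout are hypothetical.

```python
import numpy as np

def coarsen_noise(dW, h, gamma, sigma=1.0):
    """Coarsen fine Brownian increments into the OU increments xi of (3.9).
    `dW` has shape (Nt, Gap): the increments W_{jh} - W_{(j-1)h} within each
    coarse interval; weights follow the exact OU solution."""
    _, gap = dW.shape
    j = np.arange(1, gap + 1)
    w = np.exp(-gamma * (gap - j) * h)                # e^{-gamma (Gap - j) h}
    scale = np.sqrt((1.0 - np.exp(-2.0 * gamma * h)) / (2.0 * gamma * h))
    return sigma * scale * (dW @ w)                   # shape (Nt,)

# sanity check against N(0, (1 - e^{-2 gamma delta}) / (2 gamma)), sigma = 1
rng = np.random.default_rng(1)
h, gap, gamma = 1e-4, 200, 0.01
dW = np.sqrt(h) * rng.standard_normal((100000, gap))
xi = coarsen_noise(dW, h, gamma)
print(xi.var(), (1.0 - np.exp(-2.0 * gamma * gap * h)) / (2.0 * gamma))
```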
3.3 Convergence of the parameter estimator

We show that the parameter estimator converges as the number of independent data trajectories increases, under suitable conditions on the loss function. These conditions require the parametric function $F_\theta$ to be continuously differentiable in $\theta$, along with integrability conditions that generally hold true for Hamiltonian systems and symplectic integrators.

For simplicity of notation, we denote the loss for each data trajectory by
$$L(\theta) = \begin{cases} \dfrac{1}{N_t}\displaystyle\sum_{i=0}^{N_t-1}\big\|F_\theta(X_{t_i},\delta) - \mathcal{F}(X_{t_i},\delta)\big\|^2_{\Sigma^{-1}}, & \text{for the deterministic case (3.5)},\\[6pt] \dfrac{1}{N_t}\displaystyle\sum_{i=0}^{N_t-1}\big\|\delta\big(F_\theta(X_{t_i},\xi_{t_i},\delta) - \mathcal{F}(X_{t_i},W_{[t_i,t_{i+1}]},\delta)\big)\big\|^2_{\Sigma^{-1}}, & \text{for the stochastic case (3.12)}. \end{cases} \tag{3.14}$$
Then $\mathcal{E}_M(\theta) = \frac{1}{M}\sum_{m=1}^{M} L^{(m)}(\theta)$, where $L^{(m)}(\theta)$ is the loss of the $m$-th data trajectory $X^{(m)}$. Hereafter, we denote by $\mathbb{P}$ the probability measure that characterizes the randomness coming from the initial conditions and the stochastic driving force, and by $\mathbb{E}$ the corresponding expectation.

Assumption 3.1 We make the following assumptions:
(a) $\mathbb{E}[L(\theta)] \in C^2(\Theta)$, $\mathbb{E}[|\nabla L(\theta)|^2] < \infty$ and $\mathbb{E}[|\nabla^2 L(\theta)|] < \infty$ for any $\theta \in \Theta^o$, the interior of $\Theta$.
(b) $\theta^* \in \Theta^o$ is the unique minimizer of $\mathbb{E}[L(\theta)]$ in $\Theta$.
(c) There exist $C > 0$, $p \ge 1$ and $q > 1$ such that $\mathbb{E}[|L(\theta_1) - L(\theta_2)|^{2p}] \le C|\theta_1 - \theta_2|^q$ for any $\theta_1, \theta_2 \in \Theta$.

Theorem 3.2 Under Assumption 3.1, the estimator $\theta_M$ in either (3.4) or (3.11) converges to $\theta^*$ in probability, and $\sqrt{M}(\theta_M - \theta^*)$ is asymptotically normal as $M \to \infty$.

Proof. First, we show that $\theta_M$ converges to $\theta^*$ in probability, i.e., for any $\nu > 0$, $\lim_{M\to\infty}\mathbb{P}(|\theta_M - \theta^*| > \nu) = 0$, where $\mathbb{P}(A)$ is the probability of an event $A$. Note that for any $(\theta_1, \dots, \theta_k) \subset \Theta$, as $M \to \infty$, we have convergence in probability of the vectors $(\mathcal{E}_M(\theta_1), \dots, \mathcal{E}_M(\theta_k)) \to (\mathbb{E}[L(\theta_1)], \dots, \mathbb{E}[L(\theta_k)])$ by the law of large numbers. Together with Assumption 3.1(c), this implies that the measure induced by $\mathcal{E}_M(\cdot)$ on $(C(\Theta), \mathcal{B})$, the space of continuous functions on $\Theta$ with the uniform metric and with $\mathcal{B}$ the $\sigma$-algebra of Borel subsets, converges to the measure induced by $\mathbb{E}[L(\cdot)]$ (see [37, Lemma 1.33, page 61] and [11, Theorem 13.2]). Then any continuous functional of the process $\mathcal{E}_M(\cdot)$ converges in probability as $M \to \infty$. In particular, for any $\nu > 0$,
$$\mathbb{P}\Big(\sup_{|\theta-\theta^*|>\nu}\mathcal{E}_M > \sup_{|\theta-\theta^*|<\nu}\mathcal{E}_M\Big) \to \mathbb{P}\Big(\sup_{|\theta-\theta^*|>\nu}\mathbb{E}[L(\theta)] > \sup_{|\theta-\theta^*|<\nu}\mathbb{E}[L(\theta)]\Big) = 0,$$
where the equality follows from Assumption 3.1(b). Meanwhile, by the definition of $\theta_M$ in (3.11), we have
$$\mathbb{P}(|\theta_M - \theta^*| > \nu) = \mathbb{P}\Big(\sup_{|\theta-\theta^*|>\nu}\mathcal{E}_M > \sup_{|\theta-\theta^*|<\nu}\mathcal{E}_M\Big).$$
Combining the above two equations, we obtain the convergence in probability of $\theta_M$ to $\theta^*$.

Next, we show that $\sqrt{M}(\theta_M - \theta^*)$ is asymptotically normal. Since $\theta_M$ is a minimizer of $\mathcal{E}_M$, we have
$$0 = \nabla\mathcal{E}_M(\theta_M) = \nabla\mathcal{E}_M(\theta^*) + \nabla^2\mathcal{E}_M(\tilde\theta_M)(\theta_M - \theta^*),$$
where $\tilde\theta_M = \theta^* + s(\theta_M - \theta^*)$ for some $s \in [0, 1]$. Note first that $\nabla^2\mathcal{E}_M(\tilde\theta_M)$ converges in probability to $\mathbb{E}[\nabla^2 L(\theta^*)]$. This follows from the law of large numbers, Assumption 3.1(a), and the consistency of $\theta_M$, which implies that $\tilde\theta_M$ converges to $\theta^*$. Thus, the inverse of the matrix $\nabla^2\mathcal{E}_M(\tilde\theta_M)$ exists when $M$ is large, because $\mathbb{E}[\nabla^2 L(\theta^*)]$ is strictly positive definite. Thus,
$$\theta_M - \theta^* = \nabla^2\mathcal{E}_M(\tilde\theta_M)^{-1}\,\nabla\mathcal{E}_M(\theta^*).$$
Note also that $\nabla\mathcal{E}_M(\theta^*) = \frac{1}{M}\sum_{m=1}^{M}\nabla L^{(m)}(\theta^*)$ and $\mathbb{E}[\nabla\mathcal{E}_M(\theta^*)] = \mathbb{E}[\nabla L(\theta^*)] = 0$, because $\theta^*$ is the unique minimizer by Assumption 3.1(b). Thus, by the central limit theorem, we have the convergence in distribution
$$\sqrt{M}\,\nabla\mathcal{E}_M(\theta^*) \to N(0, \Sigma_L),$$
where $\Sigma_L$ is the covariance of $\nabla L(\theta^*)$. Combining the above, we obtain the asymptotic normality of $\sqrt{M}(\theta_M - \theta^*)$.
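As a generic Monte Carlo illustration of the $\sqrt{M}$ asymptotics in Theorem 3.2 (not an experiment from the paper), consider a scalar parameter estimated by minimizing an empirical quadratic loss over $M$ independent samples; the loss, noise level, and closed-form minimizer below are assumptions made for transparency.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 0.4
for M in [16, 64, 256, 1024, 4096]:
    errs = []
    for _ in range(200):
        # per-trajectory losses L^(m)(theta) = (theta - theta_true - eps_m)^2
        eps = 0.1 * rng.standard_normal(M)
        theta_M = theta_true + eps.mean()   # closed-form minimizer of E_M
        errs.append(abs(theta_M - theta_true))
    print(M, np.mean(errs))                 # decays roughly like M^{-1/2}
```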
3.4 Statistical error at arbitrary time

Hamiltonian systems. Since the learned integrator is a symplectic partitioned Runge–Kutta method, by [28, Theorem IX.3.3], any trajectory it generates corresponds exactly to time-discretized stroboscopic samples of a continuous solution of some fixed modified Hamiltonian $\tilde H$, at least formally. If the learned integrator has local truncation error of order $p+1$, then $\tilde H = H + O(h^p)$, at least formally speaking. Assume the original Hamiltonian system is integrable and analytic, and the initial condition corresponds to a frequency vector in a sufficiently small neighborhood of some Diophantine frequency vector; then by [28, Theorem X.3.1], the learned integrator has a linearly growing long-time error bound. More precisely,
$$\|X_{t_i} - X(t_i)\| \le C h^{p+1} i, \quad\text{for at least } i \le \hat C h^{-p-1},$$
where $X_{t_i}$ is the numerical solution given by the learned integrator and $X(t_i)$ is the exact solution of the original Hamiltonian system. Moreover, any action variable $I(X)$ is nearly conserved over long time, i.e., $|I(X_{t_i}) - I(X_{t_0})| \le C h^p$. In Section 5.2, we test the numerical accuracy with respect to the step size $\delta$ over different time periods, namely $T_{test} = 0.5$ and $T_{test} = 100$. Note it is difficult to quantitatively place these values in the context of the above discussion, because the validity timespan $i \le \hat C h^{-p-1}$ may not be the longest possible (see, e.g., [9] for possible exponential results), and constants such as $\hat C$ may not be explicit.

Langevin dynamics. If the Langevin dynamics (3.7) is contractive, in the sense that there exist a constant matrix $A$ and constants $t_0 > 0$ and $\beta > 0$ such that for any two solutions $X(t)$, $Y(t)$ driven by the same stochastic forcing (i.e., synchronous coupling),
$$\big(\mathbb{E}\|A(X(t) - Y(t))\|^2\big)^{1/2} \le \big(\mathbb{E}\|A(X(0) - Y(0))\|^2\big)^{1/2} e^{-\beta t}, \quad \forall\, 0 \le t < t_0,$$
then the framework of mean-square analysis for sampling described in [44] can help obtain a bound on the statistical error of the learned integrator at any $t \in [0, t_0)$ (which also means for any number of steps $k$, as $t = kh$). In particular, the kinetic Langevin equation (2.3) is known to be contractive when $\gamma$ is large enough (e.g., [20]) and when the potential $V$ is strongly convex with a Lipschitz gradient.

In addition, because our NySALT scheme is a Lie–Trotter composition of a consistent Hamiltonian integrator (due to being Nyström) and an exact OU process, the local weak error is at least of order 1 and the local strong error is at least of order 1/2 (see, e.g., [58]). Therefore, the conditions of [44, Theorem 3.3] are satisfied with $p_1 = 1$ and $p_2 = 1/2$. Consequently, [44, Theorem 3.4] gives
$$W_2\big(\mathrm{Law}(X_{t_k}), \mu\big) \le e^{-\beta k h}\,W_2\big(\mathrm{Law}(X_{t_0}), \mu\big) + C h^{1/2}, \quad \forall\, 0 < h \le h_1,$$
for some explicitly obtainable constants $C$ and $h_1$, where $\mu$ is the ergodic measure associated with the original SDE (2.3), $X_{t_k}$ is the numerical solution produced by NySALT, and $W_2(\cdot,\cdot)$ is the 2-Wasserstein distance $W_2(\mu_1, \mu_2) := \inf_{(X,Y)\sim\Pi(\mu_1,\mu_2)}\big(\mathbb{E}\|X - Y\|^2\big)^{1/2}$.

4 Optimal parameters for linear systems

To demonstrate the discrete-time flow map, we first consider linear systems and show the estimation of the optimal parameters. For simplicity of notation, we consider only 1D systems; the extension to higher-dimensional systems is straightforward.
Then, the loss function (3.5) becomes m“1 be M independent solution trajectories to (4.1) with time step EM pb1, β1q “ 1 M Ntδ2 Mÿ Ntÿ m“1 i“1 }peAδ ´ Bδ b1,β1qXpmq ti }2 Σ´1, where Σ “ ˆ ˙ 1 0 0 Ω2 is the mean of square of ∆X{δ, A “ „  0 1 ´Ω, 0 and „ Bδ b1,β1 “ 1 ´ 1 2 δ2Ω ` δ4Ω2β2a21, ´δΩ ` δ3Ω2b2a21, δ ´ δ3Ωpβ1c1 ` β2c2q ` δ5Ω2β2a21c1 1 ´ 1 2 δ2Ω ` δ4Ω2b2a21c1 (4.2) (4.3)  with a21 “ pβ1 ´ b1 b2 the spantXpmq }peAδ ´ Bδ ti b1,β1q}2 In particular, when , i “ 1, . . . , Nt, m “ 1, . . . , M u “ R2, then the cost function has the same minimizer as Σ´1 does. , b2 “ 1 ´ b1 and β2 “ 1 ´ β1. β2q, c1 “ 1 ´ β1 b1 , c2 “ 1 ´ β2 b2 Proof. Denote the discrete Nyström solution by pq, pq. At ti`1, the Nyström method gives ˘ ` ˘ ` (cid:96)1 “ ´Ω qti`1 “ qti ` δpti ` δ2 qti ` c1δpti ` β1(cid:96)1 ` β2(cid:96)2 ˘ , , (cid:96)2 “ ´Ω pti`1 “ pti ` δ qti ` c2δpti ` δ2a21(cid:96)1 b1(cid:96)1 ` b2(cid:96)2 ` ˘ , , and and ( where the parameters and pti`1 sions of qti`1 (cid:32) tβku2 by the constraints, we get k“1, tcku2 k“1, tbku2 k“1, a21 satisfy the constraints (2.9). Simplifying the expres- qti`1 “qti ` δpti ´ 1 2 pti`1 “pti ´ δΩqti ´ δ2Ωqti ´ δ3Ωpβ1c1 ` β2c2qpti ` δ4Ω2β2a21qti ` δ5Ω2β2a21c1pti, 1 2 δ2Ωpti ` δ3Ω2b2a21qti ` δ4Ω2b2a21c1pti. With the notation Xti “ pqti, pti q, we can write the above Nyström algorithm as XN ti`1 “ Bδ b1,β1 Xti. (4.4) (4.5) Comparing with the exact solution: Xti`1 “ eAδXti . We can write the 1-step prediction error as Xti`1 ´ XN ti`1 “ peAδ ´ Bδ b1,β1qXti. Then, with the data, we obtain the cost function (4.2). Remark 4.2 (Optimal parameter for the linear Hamiltonian system) The minimizers of E δpb1, β1q in (4.2) are close to b˚ 1 « 0.40, as shown in Figure 3. They appear to be independent of δ because the loss function depends on δ are in high-orders, which can be seen from a Taylor expansion of δ3 6 ` Opδ4q Σ´1 up to third order as follows. Note that eAδ “ I2 ` Aδ ` A2 δ2 1 “ 0.5 and β˚ b1,β1 ´ eAδ}2 2 ` }Bδ „  0 ´Ω Ω2 0 and b1,β1 “ I2 ` Aδ ` A2 δ2 Bδ 2 » – ` 0 ´ ¯ 6b2 β1 ´ b1 b2 β2 Ω2 ´ ´6 ¯ Ω ´ β2 2 b2 fi fl δ3 6 1 2 ´ β2 1 b1 0 ` Opδ4q. 12 Thus, the trace norm of the discrepancy matrix is }Bδ b1,β1 ´ eAδ}2 Σ´1 “ “ › » › › › – › › δ6 36 0 ´ β1 ´ b1 b2 ´Ω2 ` 6b2 ˜ˆ Ω2 6β2 1 b1 ` 6β2 2 b2 ´ Ω ´ 6 1 2 ´ β2 1 b1 0 ¯ Ω ´ β2 2 b2 ¸ fi fl δ3 6 › › 2 › › ` Opδ4q › › Σ´1 ¯ β2 ˙2 Ω2 ´ 2 ` p6b2β1 ´ 6b1β2 ´ 1q2 ` Opδ8q. The minimum of the function f pb1, β1q “ and β˚ ` 6β2 2 b2 1 « 0.40. Note also that this estimator is independent of Ω, because of the weight matrix Σ. ` p6b2β1 ´ 6b1β2 ´ 1q2 is reached at b˚ 6β2 1 b1 ´ 2 1 “ 0.5 ´ ¯ 2 b1,β1 pb˚ 1 , β˚ Remark 4.3 (Maximal admissible step size of linear stability) The largest time step size of linear stability for the Nyström integrator (4.5) is determined by Bδ . It is the largest δ such that the real parts of the eigenvalues of Bδ are less than or equal 1. For the Nyström integrator with optimal parameters δp1 ´ 0.16z ` 0.006z2q 1 ´ 0.5z ` 0.03z2 1 q “ p0.5, 0.40q estimated in Remark 4.2, we have Bδ 1 ,β˚ b˚ 1 with z “ δ2Ω. Thus, it can be verified directly that detpBδ a2 ´ 1 1 ,β˚ b˚ 1 with a “ 1 ´ 0.5z ` 0.03z2. Thus, to have Realpλ1,2q ď 1, we need |a| ď 1, which implies either 0 ď z ď 20 3 or 10 ď z ď 50 3 . Therefore, to ensure the linear stability as well as consistency, the largest time step of ω . Therefore, the linear 3 times the Verlet method’s linear stability 2 1 ´ 0.5z ` 0.03z2 z δ p´1 ` 0.15zq q “ 1, and its eigenvalues are λ1,2 “ a˘ , which is 20{3 ω b1,β1 b ? ? 
“ ˆ 5 ˙ linear stability is δ˚ ď stability of NySALT scheme is improved. 4.2 Linear Langevin systems We can estimate the optimal parameters from the analytical solutions of one-dimensional linear Langevin systems. Recall that for the governing equations dXt “ AγXt ` σ ˆ ˙ 0 dWt with Aγ “ „  0 1 ´Ω, ´γ , the exact solution is Xti`1 “ eAγ δXti ` W δ ti , W δ ti “ σ ż ti`δ ti eAγ pti`δ´sq ˆ ˙ 0 dWs . The Stochastic Symplectic Nyström scheme for this linear system gives, XN t`δ “ Bδ b1,β1,γXt ` ˆ ˙ , 0 ξδ t (4.6) (4.7) where Bδ t “ σ ments of the Brownian motion. Then, the 1-step prediction error gives us the cost function e´γpt`δ´sqdWs comes from (3.9) that uses the incre- b1,β1,γ “ and ξδ b1,β1 Bδ „  1 0 0 e´γδ ş t`δ t EM pb1, β1q “ 1 M Nt Mÿ Ntÿ m i“1 }peAγ δ ´ Bδ b1,β1,γqXpmq ti ` ξδ ti ´ W δ ti}2 Σ´1 . (4.8) Remark 4.4 (Optimal parameter for the linear Langevin system) The minimizer of this cost func- tion depends on γ and the data, unlike the case of deterministic linear system. Fortunately, the noise term ξδ , thus, the minimizer is still mainly deter- . The following computation shows that the minimizer of mined by the discrepancy matrix is centered Gaussian and is independent of Xpmq eAγ δ ´ Bδ ti ´ W δ ti ` ˘ ti b1,β1,γ 13 › › ›Bδ is about b˚ parameters depend on γ and δ. b1,β1,γ ´ eAγ δ › › 2 › Σ´1 1 “ 0.5 and β˚ 1 « 0.40 ´ 0.43γ Ωδ , when γ is sufficiently small. Thus, the optimal The computation is based on the Taylor expansion of eAγ δ and Bδ b1,β1,γ. Expand Bδ b1,β1,γ up to the order of δ3, „ ` Bδ b1,β1,γ “ 1 ´ 1 2 δ2Ω ˘ ´δΩ ` δ3Ω2b2a21 „ “ I2 ` Aγδ ` A2 γ δ2 2 ` ` ˘ δ ´ δ3Ωpβ1c1 ` β2c2q p1 ´ δγ ` 1  p1 ´ δγ ` 1 2 δ2γ2q „ δ2 0 γ γΩ 0 2 ` 1 ´ 1 2 δ2Ω 0 6Ω2b2a21 ´ 3r2Ω 6 δ3γ3q  2 δ2γ2 ´ 1 ´6Ωpβ1c1 ` β2c2q 3γΩ ´ γ3 Similarly, expand exppAγδq up to the order of δ3, exppAγδq “ I2 ` Aγδ ` A2 γ „ δ2 2 ` γΩ ´Ω ` γ2 Ω2 ´ γ2Ω 2γΩ ´ γ3  δ3 6 ` Opδ4q. Then the discrepancy matrix is approximately, › ›Bδ b1,β1,γ ´ exppAγδq ˙2 › „ ˆ › › › «ˆˆ 0 γ γΩ 0 δ2 2 « › ›2 Σ´1  ` δ 3 „ ´γΩ Ω2p6b2a21 ´ 1q ´ 2γ2Ω ˙ ˆ ˙2 Ωp1 ´ 6pβ1c1 ` β2c2qq ´ γ2 γΩ ˙ ff 2 “ “ δ6 36 Ω2 δ6 36 Ω2 «ˆ 6β2 1 b1 ` 6β2 2 b2 3γ Ωδ ´ 2 ` ` p6b2β1 ´ 6b1β2 ´ 1q ` 6pβ1 ´ 1 2 b1q2 b1p1 ´ b1q ´ 1 2 ` 3γ Ωδ ˙2 ˆ ` 6pβ1 ´ 1 2 b1q ´ 1 ` 3γ Ωδ 3γ Ωδ ff 2 ˙ .  δ3 6 ` Opδ4q ` Opδ4q. › › 2 › › Σ´1 Assuming that 3γ small, we can find that the optimal parameter is b˚ Ωδ ! 1, which holds for the underdamping Langevin dynamics when the damping term is 1 “ 0.5 and β˚ 1 « 0.40 ´ 0.43γ Ωδ . Remark 4.5 (Order of NySALT for the linear Langevin system) The local strong order of the NySALT in (4.7) is Opδ1.5q. In fact, letting XN t “ Xt in (4.6)–(4.7) and set t “ 0, we have ˆ ˜ Er|Xδ ´ XN › ›eAγ δ ´ Bδ δ |2s1{2 ď Er b1,β1,γ › › s Er}X0}2s1{2 ` ˇ ˇ ˇ Er ˇWδ 0 ´ 2 ˙ˇ ˇ ˇ ˇ ¸ 1{2 s . 0 ξδ 0 The first term is of order Opδ2q, which follows from the above expansions. The second term is of order „  „ Opδ1.5q because with the notation Γ “ 0 0 0 ´γ W δ 0 ´ ˙ ˆ 0 ξδ 0 ż δ 0 “ σ ” eAγ pδ´sq ´ eΓpδ´sq and A “ ı ˆ ˙ 0 dWs 0 1 ´Ω 0 ż δ “ σ 0 ” eApδ´sq ´ I ı eΓpδ´sq ˆ ˙ 0 dWs ,  , whose dominating component is σ NySALT scheme for the linear system is Opδ1.5q. δ ş 0 AseΓsdWs, a term with order Opδ1.5q. Therefore, the local order of the 5 The benchmark problems: Fermi-Pasta-Ulam (FPU) model In this section, we examine the performance of NySALT scheme on two benchmark nonlinear systems: Hamiltonian systems with the FPU potential (the deterministic FPU) and Langevin dynamics with the FPU potential (the stochastic FPU). 
5 The benchmark problems: Fermi-Pasta-Ulam (FPU) model

In this section, we examine the performance of the NySALT scheme on two benchmark nonlinear systems: Hamiltonian systems with the FPU potential (the deterministic FPU) and Langevin dynamics with the FPU potential (the stochastic FPU). Numerical results show that the inference is robust: the estimators are independent of the fine data generators, they converge as the number of trajectories increases, and they stabilize quickly (within a dozen trajectories). The NySALT scheme is efficient and accurate: it provides integrators adaptive to large time step sizes, improving the accuracy of the solutions and enlarging the admissible time step size of stability, often quadrupling those of the classical schemes, at minimal training cost.

5.1 The FPU system

The FPU (Fermi-Pasta-Ulam) system [24] exhibits highly oscillatory nonlinear dynamics. It consists of a chain of $2(m+1)$ mass points, connected by alternating soft nonlinear and stiff linear springs, and fixed at the end points [28]. The variables $q_1, \dots, q_{2m}$ (with $q_0 = q_{2m+1} = 0$) denote the displacements of the moving mass points, and $p_i$ denote their velocities. The motion is described by a Hamiltonian system with Hamiltonian
$$H(p, q) = K(p) + V(q) = \frac{1}{2}\sum_{i=1}^{m}\big(p_{2i-1}^2 + p_{2i}^2\big) + \frac{\omega^2}{4}\sum_{i=1}^{m}\big(q_{2i} - q_{2i-1}\big)^2 + \sum_{i=0}^{m}\big(q_{2i+1} - q_{2i}\big)^4. \tag{5.1}$$
Here $\omega$ represents the stiffness of the system. We consider the system with $m = 3$ and $\omega = 50$. This nonlinear system is a benchmark problem for symplectic or quasi-symplectic integrators, which aim to produce stable and qualitatively correct simulations [28]. As discussed in Section 2.2 and shown in Figure 1, the popular Störmer–Verlet method tolerates only a limited time step size when the system is stiff; otherwise it leads to qualitatively incorrect energies.

The quantities of interest here are the energy of each stiff spring and the total stiff energy. More specifically, with the change of variables, for $i = 1, \dots, m$,
$$x_{0,i} := \frac{q_{2i} + q_{2i-1}}{\sqrt{2}}, \qquad x_{1,i} := \frac{q_{2i} - q_{2i-1}}{\sqrt{2}}, \tag{5.2}$$
$$y_{0,i} := \frac{p_{2i} + p_{2i-1}}{\sqrt{2}}, \qquad y_{1,i} := \frac{p_{2i} - p_{2i-1}}{\sqrt{2}}, \tag{5.3}$$
where $x_{0,i}$ represents a scaled displacement of the $i$th stiff spring, $x_{1,i}$ a scaled expansion (or compression) of the $i$th stiff spring, and $y_{0,i}$, $y_{1,i}$ their velocities, the total stiff energy and the energy of the $j$th stiff spring are
$$I := \sum_{j=1}^{m} I_j, \qquad\text{where } I_j(x_{1,j}, y_{1,j}) := \frac{1}{2}\big(y_{1,j}^2 + \omega^2 x_{1,j}^2\big), \quad j = 1, \dots, m. \tag{5.4}$$
(A code sketch of the FPU force and of these stiff energies is given after the time-scale list below.)

Properties of the deterministic FPU. For large $\omega$, the deterministic FPU model analytically exhibits behaviour that depends on the initial data and on the time scale [28]. Depending on the initial condition, the system can present either close-to-linear or highly nonlinear dynamics. It behaves close to a linear system when the initial state is dominated by the stiff springs, that is, when the total energy of the stiff springs is of order $O(1)$ and the total energy of the soft springs is smaller or of the same order. The system behaves nonlinearly when the initial state mixes both stiff and soft springs, which happens when the total energy of the stiff springs is of order $O(1)$ and the total energy of the soft springs is of order $O(\omega^2)$ or larger. Numerical tests show that even when trained only on one type of these initial conditions, the NySALT scheme can predict the dynamics for the other type.

The FPU system also shows dynamics varying with the time scale. When the system starts from a nearly harmonic state (i.e., the first case of initial conditions), it behaves differently as time evolves [28]:
• short time scale $\omega^{-1}$: the vibration of the stiff linear springs is nearly harmonic;
• median time scale $\omega^{0}$: this is the time scale of the motion of the soft nonlinear springs;
• long time scale $\omega^{1}$: slow energy exchange among the stiff springs takes place on this time scale.
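The following Python sketch implements $g(q) = -\nabla V(q)$ for the FPU Hamiltonian (5.1) with fixed ends, together with the stiff energies (5.4). It is an illustrative transcription of the formulas above, not the authors' code; the function names are hypothetical.

```python
import numpy as np

def fpu_force(q, omega):
    """g(q) = -grad V(q) for the FPU chain (5.1); q = (q_1, ..., q_{2m}),
    with fixed ends q_0 = q_{2m+1} = 0."""
    m = q.size // 2
    ext = np.concatenate([[0.0], q, [0.0]])     # pad ends; indices 0..2m+1
    grad = np.zeros_like(ext)
    # stiff springs: (omega^2/4) * sum_i (q_{2i} - q_{2i-1})^2, i = 1..m
    ds = ext[2:2*m + 1:2] - ext[1:2*m:2]        # q_{2i} - q_{2i-1}
    grad[2:2*m + 1:2] += 0.5 * omega**2 * ds
    grad[1:2*m:2]     -= 0.5 * omega**2 * ds
    # soft springs: sum_i (q_{2i+1} - q_{2i})^4, i = 0..m
    dq = ext[1::2] - ext[0:-1:2]                # q_{2i+1} - q_{2i}
    grad[1::2]   += 4.0 * dq**3
    grad[0:-1:2] -= 4.0 * dq**3
    return -grad[1:-1]

def stiff_energies(q, p, omega):
    """I_j = (y_{1,j}^2 + omega^2 x_{1,j}^2) / 2, from (5.2)-(5.4)."""
    x1 = (q[1::2] - q[0::2]) / np.sqrt(2.0)
    y1 = (p[1::2] - p[0::2]) / np.sqrt(2.0)
    return 0.5 * (y1**2 + omega**2 * x1**2)     # total stiff energy: .sum()
```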
We will test the NySALT scheme (3.1) on these three time scales.

Properties of the stochastic FPU. Stochastic perturbations can help simulate qualitatively the long-time chaotic effects of the deterministic nonlinear model. Thus, stochastic FPU models have been used to study thermal conductivity and transport [8, 62, 82], asymptotic properties [65], and stochastic resonance [57]. We consider a stochastic FPU with an additive white noise on the velocity and with a friction. The noise injects energy while the friction dissipates energy, introducing random fluctuations to the energies. When these are relatively small compared to the Hamiltonian, the stochastic FPU has dynamical properties similar to those of the deterministic system in terms of the dependence on the initial data and on the time scales. However, the total energy can fluctuate significantly more than the total energy of the deterministic system, as shown in Figure 2. The stochastic FPU model is ergodic (see, e.g., [55] and [60, Proposition 6.1]). Thus, we will examine the NySALT and BAOAB schemes on reproducing the statistics of the energies, such as the time auto-covariance functions (ACF) and the empirical distributions (PDF).

5.2 NySALT for the deterministic FPU

We examine two aspects of NySALT: the robustness of the inference and its numerical performance as an integrator for large time-stepping.

Numerical settings. Unless otherwise specified, the numerical settings are as follows. We estimate the parameters $(b_1^*, \beta_1^*)$ from $M = 100$ short trajectories on the training time interval $[0, T_{tr}]$ with $T_{tr} = 1/2$, as described in Section 3.1. Therefore, $T_{tr}$ is on the time scale $\omega^0$. The initial conditions are sampled according to
$$\text{soft springs: } x_{0,i}(0) = 1,\ y_{0,i}(0) = 1; \qquad \text{stiff springs: } x_{1,i}(0) = 1/\omega + \zeta_i,\ y_{1,i}(0) = 1 + \eta_i, \tag{5.5}$$
where $\zeta_i$ and $\eta_i$ are independent Gaussian random variables with distribution $\frac{1}{\omega}N(0, 1)$. This initial distribution covers the regime in which the entire system is nearly harmonic at the beginning of the evolution. The data trajectories, recorded at the time instants $t_n = n\delta$, are generated by the Störmer–Verlet method with the fine time step $h = 10^{-4}$, except when testing the dependence on the symplectic integrator. The step size $\delta = \mathrm{Gap}\times h$ is much larger than $h$, and we test Gap over several ranges. The optimal parameter is computed by constrained optimization with the interior point method, using the loss function (3.5).

Robustness of the inference. NySALT depends on the data by design. Thus, it depends on the system generating the data, and its parameters converge as the data size increases. Importantly, it does not depend on the numerical integrator that generates the accurate fine data for training. We examine these properties numerically below.
• Robustness to the data generator. We first show that NySALT is robust to the data generator. That is, the inferred parameters do not depend on the integrator generating the training data, as long as that integrator is accurate, which is realized by using a sufficiently small time step $h$ and only short trajectories, so that the accumulated numerical error is small. Table 1 shows that the estimated parameters are the same for three integrators, indicating the robustness of NySALT to the data generator. The three integrators are from the two-stage Nyström family, and one of them is the Störmer–Verlet method. To ensure that the numerical error in the data is negligible, we use $h = 10^{-6}$.
Since these integrators are second order $O(h^2)$ methods, their numerical error on the training interval $[0, T_{tr}]$ is of order $O(10^{-12})$. The NySALT has time step $\delta = \mathrm{Gap}\times h$ with $\mathrm{Gap} \in \{1000, 5000, 10000\}$, that is, $\delta \in \{0.001, 0.005, 0.01\}$.

Data generator $(b_1^F, \beta_1^F)$    Gap        Opt $b_1^*$    Opt $\beta_1^*$
$(2/3,\ 1/3)$                            1,000      0.499          0.403
$(2/3,\ 1/3)$                            5,000      0.500          0.403
$(2/3,\ 1/3)$                            10,000     0.499          0.402
$(1/3,\ 1/3)$                            1,000      0.499          0.403
$(1/3,\ 1/3)$                            5,000      0.500          0.403
$(1/3,\ 1/3)$                            10,000     0.499          0.402
$(1/2,\ 1/2)$                            1,000      0.499          0.403
$(1/2,\ 1/2)$                            5,000      0.500          0.403
$(1/2,\ 1/2)$                            10,000     0.499          0.402

Table 1: Inferred parameters from data sets generated by three Nyström integrators with parameters $(b_1^F, \beta_1^F)$. The fine step size is $h = 10^{-6}$ and the training time is $T_{tr} = 1/2$. The coarse step size is $\delta \in \{0.001, 0.005, 0.01\}$, corresponding to $\mathrm{Gap} \in \{1000, 5000, 10000\}$.

• Optimal parameters versus the stiffness parameter $\omega$. We next examine the dependence of the parameters on $\omega$, which determines the stiffness of the system. Here we test $\omega \in \{2, 4, 8, 10, 20, \dots, 100\}$ with $M = 100$. Since the linear stability of the Störmer–Verlet method requires $\Delta t < 2/\omega$, the coarse step is set to $\delta = 1/\omega$, half of the critical step size of linear stability. For comparison, we estimate the parameters of the linear Hamiltonian system (4.1) with the same $\omega$ by minimizing $\|e^{A\delta} - B^{\delta}_{b_1,\beta_1}\|^2_{\Sigma^{-1}}$ as in Proposition 4.1 (a numerical sketch of this minimization follows below). Figure 3 (left) shows that the inferred parameters for the FPU are close to those of the linear Hamiltonian system when $\omega \ge 30$.
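As a hedged sketch of the linear-system comparison just described, the following Python code builds the one-step matrix (4.3) and minimizes the discrepancy $\|e^{A\delta} - B^{\delta}_{b_1,\beta_1}\|^2_{\Sigma^{-1}}$ numerically. The default L-BFGS-B bounded solver of SciPy is an assumed stand-in for the paper's optimizer, and the printed values are only expected to be approximate.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def B_matrix(b1, beta1, delta, Omega):
    """One-step matrix (4.3) of the explicit symplectic two-stage Nystrom
    scheme applied to q' = p, p' = -Omega q."""
    b2, beta2 = 1.0 - b1, 0.5 - beta1
    c1, c2 = 1.0 - beta1 / b1, 1.0 - beta2 / b2
    a21 = b1 * (c2 - c1)
    d2, d3, d4, d5 = delta**2, delta**3, delta**4, delta**5
    return np.array([
        [1 - d2*Omega/2 + d4*Omega**2*beta2*a21,
         delta - d3*Omega*(beta1*c1 + beta2*c2) + d5*Omega**2*beta2*a21*c1],
        [-delta*Omega + d3*Omega**2*b2*a21,
         1 - d2*Omega/2 + d4*Omega**2*b2*a21*c1]])

def discrepancy(theta, delta, Omega):
    A = np.array([[0.0, 1.0], [-Omega, 0.0]])
    D = expm(A * delta) - B_matrix(theta[0], theta[1], delta, Omega)
    Sinv = np.diag([1.0, 1.0 / Omega**2])       # Sigma = diag(1, Omega^2)
    return np.trace(D.T @ Sinv @ D)

omega = 50.0
res = minimize(discrepancy, x0=[0.45, 0.45], args=(1.0 / omega, omega**2),
               bounds=[(1e-3, 1 - 1e-3), (1e-6, 0.5)])
print(res.x)   # approximately (0.5, 0.40), cf. Remark 4.2
```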
At Ttest “ 0.5, as shown in top left of Figure 4, both integrators have errors increasing linearly in δ “ Gap ˆ h. The relative error by NySALT is two magnitudes smaller than that by Verlet until around Gap “ 300. The linear dependence with slope 2 comes from the order 17 0204060801000.490.50.510.52b1*FPULinear020406080100!0.3950.40.4050.41-1*FPULinear24816326412825610!710!510!3Error in b1*Gap=10Gap=20Gap=100refline=M!0:45248163264128256Number of traj, M10!710!510!3Error in -1*Gap=10Gap=20Gap=100refline=M!0:45 Opδ2q of the Nyström methods. At Ttest “ 100, as shown in bottom left of Figure 4, the NySALT keeps the linear dependence of the relative error on δ “ Gap ˆ h up to Gap “ 50, doubling the reach of the Verlet method. Furthermore, up to the Gap “ 390, the relative error of NySALT scheme is consistently smaller than that of the Verlet scheme. As a result and as we show next, NySALT can tolerate a larger time step size beyond the limitation of Verlet. We further validate the accuracy of NySALT with a large time step by examining the transitions of energies in the long time scale Opω1q. We consider sample M “ 400 trajectories with the time interval r0, 300s. The right of Figure 4 shows that NySALT preserves the energy transition well, with errors significantly smaller than the suboptimal Nyström integrator with parameters b1 “ 0.45 and β1 “ 0.43. The Störmer–Verlet is not presented here because its errors are too large. Here we use the L1 errors of the energies and phase angles to quantify the accuracy. The L1 errors of the energies at time ti is computed as ErrL1 ptiq :“ 1 I Fptiq 3ÿ j“1 ˇ ˇI C j ptiq ´ I F ˇ ˇ j ptiq ¨ δ and the L1 error of phase angles at time ti AngErr L1ptiq :“ ˇ ˇϑCptiq ´ ϑFptiq ˇ ˇ ˇ ˇϕCptiq ´ ϕFptiq ˇ ˇ ¨ δ, ¨ δ ` (5.8) (5.9) where the phase angles ϑCptiq and ϕCptiq (and similarly ϑF and ϕF ) are defined by ¸ ¸ ϑCptiq :“ arccos , ϕCptiqq :“ arctan ˜ a a I C 3 ptiq I Cptiq ˜ a a I C 2 ptiq I C 1 ptiq . The fine data for reference is generated by the Störmer–Verlet with h “ 1e´4. The coarse data are generated with δ “ 1{ω (i.e., with Gap “ 200) by using the optimal and suboptimal parameters of Nyström methods. Figure 4: Improving the accuracy. Left: Averaged relative Root-Mean-Square-Error (Avg rel RMSE (5.6) and (5.7)) over total length Ttest between NySALT and Störmer–Verlet schemes. Right: L1 errors of the energies (5.8) and phase angles (5.9) between NySALT and suboptimal Nyström schems. • Enlarging the maximal admissible time step size. In Figure 4 (top left) with the timescale of Opω0q, if we take threshold of 1% average relative RMSE for I, the maximum gap in Störmer Verlet scheme allowed is 70, however, the maximum gap in NySALT scheme can reach at Gap “ 300. Similarly in Figure 4 (bottom left) with the timescale of Opω1q, the maximum gaps allowed with 1% average relative RMSE for both methods are 50 and 200. So NySALT scheme can enlarge at least four times of the maximal admissible step size of the Störmer–Verlet scheme without lossing any accuracy. We demonstrate next that when δ “ 2{ω (i.e., with Gap “ 400), the linear stability limit of Störmer–Verlet, NySLAT can remain stable while Verlet blows up. 
We demonstrate next that at δ = 2/ω (i.e., with Gap = 400), the linear stability limit of the Störmer–Verlet, NySALT remains stable while Verlet blows up. Figure 5 shows that the Störmer–Verlet with coarse step size δ blows up almost immediately (within a total time of 1), while the NySALT scheme remains stable and accurate and captures the main patterns of the energy transfer up to a total time of 150. Notice that the maximal admissible time step size of stability of the Störmer–Verlet method is less than 2/ω, whereas NySALT can reach beyond it, reaching (in additional tests) close to \sqrt{20/3}/ω, which agrees with the maximal admissible step size of linear stability in Remark 4.3.

Figure 5: Large time-stepping near the linear stability limit. Left, Middle and Right show the trajectories of the scaled expansion of the stiff springs x_{1,i} (5.2) and the stiff energies I_i (5.3) generated by the Störmer–Verlet scheme with the fine step size h, the NySALT scheme with the coarse step size δ = 400h, and the Störmer–Verlet scheme with the coarse step size δ = 400h.

5.3 NySALT for the stochastic FPU

Similar to the deterministic example, we examine the stochastic NySALT scheme in terms of the robustness of its inference and its numerical performance as an integrator.

Numerical settings. We consider the Langevin dynamics with the same FPU potential; the friction coefficient is γ = 0.01, which is the underdamped case, and the diffusion coefficient is σ = 0.05. The optimal parameters (b_1^*, β_1^*) are estimated by minimizing the loss function (3.12) from M = 512 short trajectories on the training time interval [0, T_tr] with T_tr = 1, as described in Section 3.2. In particular, the data trajectories consist of both the state X_t and the stochastic force W_t, and they are generated by the BAOAB scheme with the fine time step size h = 1e-4. We downsample the state trajectories at the time instants t_n = nδ, and approximate the one-step stochastic increment at t_n by (3.9). The coarse time step size δ = Gap × h is much larger than h, with Gap ranging from 10 to 450. The initial conditions are uniformly sampled from a single long simulated trajectory of total time T = 25000.

Robustness of the inference. The optimal estimators of the NySALT scheme again stabilize very fast, with small variations between different datasets. Figure 6 (left) presents the absolute errors of the estimators with M = 2^{2:9}, where the reference estimator is computed from M = 1024 trajectories. Both estimators with M = 512 are close to the reference values, with absolute errors less than 10^{-2}, and the error decays at a rate of about M^{-0.44}, again close to the theoretical rate in Theorem 3.2. Figure 6 (right) further shows the mean and error bars of the estimators at different time gaps, both estimated with M = 512 trajectories. A runtime analysis shows that it takes about 854 seconds on average to learn the estimators at each gap. We repeat the inference procedure independently 10 times to assess the variability over the randomly generated data. The means of both estimators are close to the optimal parameters of the linear Langevin system in Section 4.2. Due to the small variance, the error bars are barely noticeable.
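Since eq. (3.9) is defined earlier in the paper and not reproduced here, the following sketch only illustrates the data-preparation step of the numerical settings above, under a common assumption: the coarse one-step stochastic increment is taken as the accumulation of the fine Brownian increments inside each coarse step.

```python
# Sketch (ours) of downsampling a fine trajectory to the coarse grid and
# forming coarse stochastic increments; this stands in for eq. (3.9).
import numpy as np

def downsample_trajectory(X_fine, dW_fine, gap):
    """X_fine: (N+1, d) fine states; dW_fine: (N, d) fine Brownian increments."""
    n_coarse = dW_fine.shape[0] // gap
    X_coarse = X_fine[: n_coarse * gap + 1 : gap]      # states at t_n = n * delta
    xi = dW_fine[: n_coarse * gap].reshape(n_coarse, gap, -1).sum(axis=1)
    return X_coarse, xi                                # pairs (X_{t_n}, xi_{t_n})
```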
Numerical performance as an integrator. The NySALT scheme has parameters adaptive to the time step size. Thus, like the deterministic FPU, it can tolerate a relatively large time step when compared with a classical integrator, as verified by Figure 7. Here we compare the NySALT scheme with the BAOAB scheme, a state-of-the-art symplectic integrator, in two respects: the average relative RMSE on the short time scale, and statistics on the long time scale. In the current setting, we only compare the results in terms of the total stiff energy I. All the parameters at the various coarse time steps are estimated with M = 512 sample trajectories.

• Average relative RMSE in the short time scale. We consider the time interval [0, T_test] with T_test = 1 and M = 10000 sample trajectories. Both the NySALT and the BAOAB schemes integrate at the coarse time step δ = Gap × h, for Gap ranging from 10 to 450, with the same coarse-grained stochastic force ξ_{t_i} generated by (3.9) from W_t. Their solutions are compared with the reference solution generated by the BAOAB scheme with fine time step h = 10^{-4} and the same stochastic force W_t.

Figure 6: Robustness of the estimators. Left: Convergence of the parameters as the number of trajectories M increases. Right: Mean and error bars of the estimators at different gaps in 10 independent simulations.

Figure 7: Performance of the NySALT scheme, in comparison with the BAOAB scheme. Left: Average relative RMSE (as in (5.6) and (5.7)) of the total stiff energy I over total length T_test = 1. Middle: The empirical distributions (PDF) and their total variation distances for both schemes at various coarse time steps. Right: The time auto-covariance functions (ACF) and their RMSE for both schemes at various coarse time steps.

Figure 7 (left) shows the average relative RMSE of the total stiff energy for both schemes, with the average relative RMSE defined in (5.6) and (5.7). In log scale, the error of the BAOAB scheme keeps a linear dependence with slope 2 until Gap = 200, and thereafter grows superlinearly. The NySALT scheme, however, stretches the linear dependence to Gap = 450 with slope 1. The error of the NySALT scheme is consistently smaller than that of BAOAB after Gap = 70, which corroborates our goal of large time-stepping. If we take a threshold of 10% average relative RMSE, the maximum gap allowed for BAOAB is 70, while NySALT can reach Gap = 190.

• Statistics in the long time scale. Since the system is stochastic, we focus on statistics of long-time trajectories, such as empirical distributions (PDF) and auto-covariance functions (ACF). We consider the time interval [0, T_test] with T_test = 40 and M = 10000 sample trajectories.
Similar to the previous simulation, we integrate both schemes with coarse time steps whose gap ranges from 10 to 450, but the stochastic forces in the two schemes are not the same. We estimate the PDF and the ACF of the total stiff energy I for the different gaps. The empirical distribution is sampled with 100 equal-width bins in [0, 1], and the ACF at time τ is defined as

    ACF(τ) = E[I_t Ī_{t+τ}] − E[I_t] E[Ī_{t+τ}],  with τ ∈ [0, 1].    (5.10)

These PDFs and ACFs are compared with the reference solutions generated by the BAOAB scheme with the fine step size. We use the total variation distance (TVD) as the metric to quantify the deviation from the reference empirical measure; the TVD between the empirical measure P at the coarse step size and the empirical measure Q at the fine step size is defined as

    TVD(P, Q) = \frac{1}{2} \|P − Q\|_1.    (5.11)

On the other hand, we use the RMSE as the metric for the error of the ACF.

Figure 7 (middle top) shows that, at Gap = 330, the NySALT scheme accurately reproduces the empirical distribution, whereas the BAOAB scheme deviates largely from the reference due to the large time step. Figure 7 (middle bottom) shows that the NySALT scheme has a consistently smaller TVD than the BAOAB scheme for Gap > 200, remaining almost unchanged (around 10^{-2}) even at Gap = 450. In particular, the TVD of the BAOAB scheme at Gap = 330 is 10^{-1}, one order of magnitude larger than that of NySALT. Figure 7 (right) shows the comparison of the ACFs: the top figure shows the ACFs at Gap = 190, and the bottom figure shows the RMSEs of the ACFs for both schemes over a range of gaps. The top right figure shows that, at the medium gap Gap = 190, the NySALT scheme produces an ACF almost exactly matching the reference generated by BAOAB with the fine time step, whereas the BAOAB scheme with the same step size produces an ACF with significantly larger oscillations. Furthermore, the bottom right figure shows that the error of the BAOAB scheme grows exponentially when Gap > 100, while the NySALT scheme remains accurate until about Gap = 400. So the maximum admissible time step size of the NySALT scheme almost quadruples that of the BAOAB scheme.

In addition, the NySALT scheme significantly reduces the computational cost. For example, to compute the ACF from M = 10000 sample trajectories, the runtime of the BAOAB scheme with the fine step size is about 2078 seconds, whereas NySALT with the medium step size Gap = 190 takes only 18 seconds, almost 115 times faster. Even taking the training time (about 854 seconds) into account, it is still significantly better to use the NySALT scheme.
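The two long-time diagnostics are straightforward to estimate from samples; a sketch (ours, not the authors' code) of the TVD of (5.11) over 100 equal-width bins on [0, 1] and of the auto-covariance of (5.10), with the M trajectories of I stored row-wise:

```python
# Sketch of the long-time statistics: histogram-based TVD and ACF estimator.
import numpy as np

def tvd(samples_p, samples_q, bins=100):
    P, _ = np.histogram(samples_p, bins=bins, range=(0.0, 1.0))
    Q, _ = np.histogram(samples_q, bins=bins, range=(0.0, 1.0))
    P, Q = P / P.sum(), Q / Q.sum()
    return 0.5 * np.abs(P - Q).sum()                    # (5.11)

def acf(I, lag):
    # Covariance between I_t and I_{t+lag}, averaged over trajectories/time.
    a, b = I[:, : I.shape[1] - lag], I[:, lag:]
    return np.mean(a * b) - np.mean(a) * np.mean(b)     # (5.10)
```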
6 Conclusion

We have proposed and investigated a parametric inference approach that innovates classical numerical integrators to obtain a new integrator tailored to each time step size and to the specific system. In particular, we introduce NySALT, a Nyström-type inference-based scheme adaptive to large time-stepping. The framework of constructing inference-based schemes from data has the following major advantages:

• Compared to generic classical numerical integrators, the inferred scheme with optimal parameters enlarges the maximal admissible time step size while maintaining similar levels of accuracy.

• The parametric inference is robust with respect to the data generation and is immune to the curse of dimensionality and to overfitting. Moreover, the scheme is generalizable beyond the training set for autonomous systems.

• The convergence of the estimators can be rigorously proved as the data size increases.

We demonstrate the performance of the NySALT on both Hamiltonian and Langevin systems via the Fermi-Pasta-Ulam (FPU) potential. Numerical results verify the convergence of the estimators. Furthermore, they show that, to maintain the average relative RMSE within a given level, NySALT quadruples the admissible time step size for the Hamiltonian system and likewise for the Langevin system, when compared with the Störmer–Verlet and the BAOAB schemes. Meanwhile, the NySALT scheme still has a limited maximal time step size, which is inherited from the classical integrator. The idea of NySALT can easily be extended to other families of integrators. In future work, we will investigate improved approximations of the flow map, using new parametric forms or non-parametric learning, to further extend the time step size.

Acknowledgements

X. Li is grateful for partial support by the National Science Foundation Award DMS-1847770. F. Lu is grateful for partial support by the NSF Award DMS-1913243. M. Tao is grateful for partial support by NSF DMS-1847802, NSF ECCS-1936776, and the Cullen-Peck Scholar Award. F. Ye is grateful for partial support by the AMS-Simons travel grants.

A Derivative of the cost function

We provide here the detailed computation of the derivative of the cost function in (3.12). Recall that, with a given time step h, the stochastic symplectic Nyström scheme consists of two components: a symplectic 2-stage Nyström step that integrates the Hamiltonian part,

    X̃_{n+1} = S^h_{b_1,β_1}(X_n),

and an exact integration of the Ornstein–Uhlenbeck process,

    X^N_{n+1} = O_h X̃_{n+1} + (0, ξ_n)^T,  where  O_h = [ I, 0 ; 0, exp(−γh) I ].

The cost function is rewritten trajectory-wise as

    E_M(θ) = \frac{1}{M} \sum_{m=1}^{M} E_m(θ),

where each summand (the superscript m is omitted) is

    E_m(θ) = \frac{1}{N_t} \sum_{i=0}^{N_t−1} \| δ ( F_θ(X_{t_i}, ξ_{t_i}, δ) − F(X_{t_i}, W_{[t_i,t_{i+1}]}, δ) ) \|^2_{Σ^{-1}} = \frac{1}{N_t} \sum_{i=0}^{N_t−1} \| X^N_{t_{i+1}} − X_{t_{i+1}} \|^2_{Σ^{-1}}.

Then, to compute the derivative of the cost function, it suffices to compute the derivative of each summand, which is

    ∇_θ E_m(θ) = \frac{1}{N_t} \sum_{i=0}^{N_t−1} 2 [∇_θ X^N_{t_{i+1}}]^T Σ^{-1} ( X^N_{t_{i+1}} − X_{t_{i+1}} ) = \frac{1}{N_t} \sum_{i=0}^{N_t−1} 2 ( O_h · J(b_1, β_1) )^T Σ^{-1} ( X^N_{t_{i+1}} − X_{t_{i+1}} ).

Here J(b_1, β_1) is the Jacobian of the symplectic integrator with respect to the parameters. Recalling the definitions of S^h_{b_1,β_1} in (2.6) and of the stage values ℓ_i in (2.5), it is computed directly as

    J(b_1, β_1) = [ ∂q̃_{n+1}/∂b_1, ∂q̃_{n+1}/∂β_1 ; ∂p̃_{n+1}/∂b_1, ∂p̃_{n+1}/∂β_1 ]
                = [ h^2 ( β_1 ∂ℓ_1/∂b_1 + β_2 ∂ℓ_2/∂b_1 ),  h^2 ( ℓ_1 − ℓ_2 + β_1 ∂ℓ_1/∂β_1 + β_2 ∂ℓ_2/∂β_1 ) ;
                    h ( ℓ_1 − ℓ_2 + b_1 ∂ℓ_1/∂b_1 + b_2 ∂ℓ_2/∂b_1 ),  h ( b_1 ∂ℓ_1/∂β_1 + b_2 ∂ℓ_2/∂β_1 ) ],

where

    ∂ℓ_1/∂b_1 = ∇ℓ_1 · (β_1 h p_n / b_1^2),    ∂ℓ_1/∂β_1 = −∇ℓ_1 · (h p_n / b_1),

and, by the chain rule through the second stage,

    ∂ℓ_2/∂b_1 = ∇ℓ_2 · ( h p_n (∂c_2/∂b_1) + h^2 (∂a_{21}/∂b_1) ℓ_1 + h^2 a_{21} (∂ℓ_1/∂b_1) ),
    ∂ℓ_2/∂β_1 = ∇ℓ_2 · ( h p_n (∂c_2/∂β_1) + h^2 (∂a_{21}/∂β_1) ℓ_1 + h^2 a_{21} (∂ℓ_1/∂β_1) ).

Here ∇ℓ_1 and ∇ℓ_2 are

    ∇ℓ_1 = ∇g(q_n + h c_1 p_n),    ∇ℓ_2 = ∇g(q_n + h c_2 p_n + h^2 a_{21} ℓ_1).
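A derivative this involved is easy to get wrong in code. A small, scheme-agnostic sanity check (our own sketch, independent of the specific integrator) compares an implementation of an analytic gradient, such as the one above, against central finite differences of the loss E(θ) with θ = (b_1, β_1):

```python
# Sketch of a finite-difference check of an analytic gradient grad_E of a
# scalar loss E; both E and grad_E are supplied by the user.
import numpy as np

def max_gradient_error(E, grad_E, theta, eps=1e-6):
    g_fd = np.zeros_like(theta)
    for k in range(theta.size):
        d = np.zeros_like(theta)
        d[k] = eps
        g_fd[k] = (E(theta + d) - E(theta - d)) / (2.0 * eps)
    return float(np.max(np.abs(g_fd - grad_E(theta))))
```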
ai_researcher
2
A_Knowledge_Graph_Approach_towards_Re-structuring_of_Scientific_Articles.pdf
arXiv:2104.06892v1 [cs.CL] 14 Apr 2021

Knowledge-driven Answer Generation for Conversational Search*

Mariana Leite, Rafael Ferreira, David Semedo, João Magalhães
Universidade NOVA de Lisboa, Portugal

Abstract

The conversational search paradigm introduces a step change over the traditional search paradigm by allowing users to interact with search agents in a multi-turn and natural fashion. The conversation flows naturally and is usually centered around a target field of knowledge. In this work, we propose a knowledge-driven answer generation approach for open-domain conversational search, where a conversation-wide entity knowledge graph is used to bias search-answer generation. First, a conversation-specific knowledge graph is extracted from the top passages retrieved with a Transformer-based re-ranker. The entity knowledge graph is then used to bias a search-answer generator Transformer towards information-rich and concise answers. This conversation-specific bias is computed by identifying the most relevant passages according to the most salient entities of that particular conversation. Experiments show that the proposed approach successfully exploits entity knowledge along the conversation, and outperforms a set of baselines on the search-answer generation task.

1. Introduction

In conversational search systems, users can interact in a natural manner with search systems. These go beyond the traditional search task where, in a multi-turn session, users query the system until the information need is met, thus resembling the way humans interact with each other. Supporting this paradigm shift are the observations made in Vtyurina et al. (2017), which revealed that users are receptive to conversational systems, provided that they meet users' expectations with respect to information seeking. To this end, two main challenges must be addressed Huang et al. (2020); Vtyurina et al. (2017): a) keeping track of the dialog context, and b) generating informative yet concise search-answers. To address a), conversational search systems adopt a query rewriting-based approach Lin et al. (2020); Voskarides et al. (2020), which rewrites a conversational query in order to make it context-independent. Then, retrieval and re-ranking are performed to retrieve a set of relevant passages. Addressing b) requires going beyond passage retrieval and generating a short search-answer, similar to what is accomplished in chit-chat dialogue systems Song et al. (2018); Wang et al. (2020); Zhuang et al. (2017). We argue that knowledge from current and previous turns is still crucial to provide the user with the most informative answer, and should be used seamlessly in addressing challenges a) and b). Knowledge about entities has proved to be important for search tasks Dalton et al. (2014); Kato et al. (2020); Xiong et al. (2018). Over the conversation, the interactions between different entities are expected to implicitly encode the conversational context. Therefore, whether the query re-writer perfectly manages to do coreference resolution or not, the entities that are not mentioned in the current query but appear in previous turns also shape the conversation context.
In this paper we propose a knowledge-driven answer generation system for conversational search that is aware of the context of the conversation and generates abstractive, knowledge-driven responses. Specifically, the knowledge about entities' interactions across a conversation is modelled and used to condition the generation of a single, short search-answer, based on the information comprised in the top-retrieved passages. Hence, the core research hypothesis of this paper is that a conversational agent's most informative answer can be generated by considering the intersection of the rank of passages and its graph of entities. In particular, we take a knowledge-driven approach to guide the generation of the answer. First, the framework is built on top of a solid conversational response-retrieval method that is on par with state-of-the-art results on conversational search Dalton et al. (2020a). Second, the answer generation is leveraged by a conversation-wide entity knowledge graph, and biased according to the relations with the entities present in the query and the top passages. Therefore, combining an entity knowledge-base with a strong conversational passage ranking baseline allows scoring already highly relevant individual passages according to their entity relations. These can then be fed to an answer-generation Transformer Vaswani et al. (2017), which produces a knowledge-enriched agent response, biased by the knowledge-base. This enables the creation of richer answers covering a wider range of information, i.e., more comprehensive answers.

Next, we discuss the related work. Section 3 details the state-of-the-art conversational response retrieval method. Section 4 proposes a knowledge-driven answer-generator. The evaluation and discussion of results are presented in Sections 5, 6 and 7, and concluding remarks in Section 8.

2. Related Work

Open-domain conversational assistants. Research on interactive search systems started a long time ago, with the goal of developing artificially intelligent conversational search agents to aid users in a variety of search tasks in a natural manner Belkin (1980); Croft and Thompson (1987); Oddy (1977). With recent developments in machine learning and deep neural networks, together with improvements in computational infrastructure, the field is once again highly active. Namely, very recently, the TREC CAsT (Conversational Assistant Track) Dalton et al. (2020a) task introduced a multi-turn passage retrieval dataset, supporting research on conversational search systems. Current state-of-the-art approaches Dinan et al. (2018); Lin et al. (2020); Qu et al. (2020); Voskarides et al. (2020) overcome the need for abundant labelled data by training self-supervised neural models on large collections with wide topic coverage, such as Wikipedia Devlin et al. (2018); Liu et al. (2019); Yang et al. (2019). This results in rich language models that can be applied to the several components of a conversational search agent pipeline, including addressing the conversational context and passage re-ranking in each turn. Transformer models, pre-trained on large collections, have been used lately for both the task of query rewriting Lin et al. (2020); Voskarides et al. (2020) and passage re-ranking Han et al. (2020); Nogueira and Cho (2019); Nogueira et al. (2019).
In the former, the current query and previous utterances are provided as input to generate the rewritten query; in the latter, Transformer-based models are fine-tuned on a relevance classification task to then score candidate passages.

Knowledge-guided conversational response generation. The dialogue context can be captured by tracking the conversation knowledge over the different turns. Then, given the previous utterances' context and the current query, a natural language search-answer needs to be generated. In chit-chat dialogue generation agents, most approaches use encoder-decoder neural architectures that first encode the utterances, after which the decoder generates a response Li et al. (2016, 2017); Song et al. (2018); Wang et al. (2020); Zhuang et al. (2017). For knowledge-guided generation, it is necessary to bias the generator such that it attends to knowledge-specific aspects, such as entities, that convey the conversation context. In a standard setting, models are trained end-to-end and the type of answers generated is entirely dependent on the training data. An interesting approach to bias answer generation is retrieval-based dialogue generation, in which the generator takes retrieved candidate documents as input to improve the comprehensiveness of the generated answer Song et al. (2018); Zhuang et al. (2017). In Ishigaki et al. (2020), a different approach is used to bias the generator: to obtain query-biased responses from a recurrent neural network, a copy mechanism is used to pay special attention to overlapping terms between the document and the query. In end-to-end dialog systems that incorporate external knowledge to generate answers, a common approach is to fuse information from a knowledge-base into encoder-decoder models Qin et al. (2019); Wang et al. (2020). All these end-to-end approaches require a large dataset with annotated dialogues. In an open-domain conversational search setting, this is not feasible, as collections can comprise millions of passages. An alternative is to leverage transfer learning and use Transformer-based models pre-trained on large corpora, which have proved to be effective at abstractive summarization Raffel et al. (2019). We depart from previous work by leveraging pre-trained Transformer models for knowledge-guided answer generation. Given the knowledge encoded in entity relations over queries and passages, with demonstrated usefulness in search settings Dalton et al. (2014); Kato et al. (2020); Xiong et al. (2018), we propose to use the conversation entity graph to select a set of top passages that are fed to the generator, towards enforcing an entity knowledge-graph bias in the generated answers.

3. Conversation-aware Passage Retrieval

In Dalton et al. (2020a), the conversational search task is defined as follows: given a sequence of natural language conversational query turns T = q_1, ..., q_i, ..., q_n, the task aims to find the relevant passages that fit the current conversational context.

We implemented a three-stage conversation-aware passage retrieval pipeline composed of a context tracker, a first-stage retrieval step, and a re-ranker. Because of the conversational characteristics of this task, the current query may not include all of the information needed to answer it. To solve this, we use as the context-tracking component a query rewriting method based on the T5 model Raffel et al. (2019). This model requires an input sequence and a target sequence given as strings.
Following Lin et al. (2020), we fine-tune a T5-BASE model by providing as input the sequence of conversational queries and passages, and as target the rewritten query. In particular, the input is defined as

    "q_i [CTX] q_1 p_1 [TURN] q_2 p_2 [TURN] ... [TURN] q_{i-1} p_{i-1}",    (1)

where i is the current turn, q is a query, p is a passage retrieved from the index by the retrieval model, and [CTX] and [TURN] are special tokens. [CTX] is used to separate the current query from the context (previous queries and passages) and [TURN] is used to separate the historical turns (query-answer pairs).

The first-stage retrieval component uses a query-likelihood retrieval model Zhai and Lafferty (2001) to recover a small set of passages. After the first-stage retrieval step, we re-rank the top-n retrieved passages to obtain a better rank using a BERT model Devlin et al. (2018). This model generates contextual embeddings for a sentence and each of its tokens. We used a model fine-tuned on the passage ranking task Nogueira and Cho (2019) through a binary relevance classification task, where positive examples are relevant passages and negative examples are non-relevant passages. To obtain the embeddings for a passage p and a query q, BERT is fed with the following sequence of N tokens:

    emb = BERT("[CLS] q [SEP] p"),    (2)

where emb ∈ R^{N×H} (H is BERT's embedding size) represents the embeddings of all tokens, and [CLS] and [SEP] are special tokens in BERT's vocabulary, representing the classification and separation tokens, respectively. We then extract from emb the embedding of the first token, which corresponds to the embedding of the [CLS] token, emb_{[CLS]} ∈ R^H. This embedding is then used as input to a single-layer feed-forward neural network (FFNN), followed by a softmax, to obtain the probability of the passage being relevant to the query:

    P(p|q) = softmax(FFNN(emb_{[CLS]})).    (3)

With P(p|q) calculated for each passage p given a query q, the final rank is obtained by re-ranking according to the probability of being relevant.

4. Knowledge-aware Answer Generation

In this section we address the key research hypothesis of this paper and propose a method to generate search-answers while considering the intersection between the entities in the top retrieved passages and the entities in the conversation turns.

The graph of entities is built from the top retrieved passages and the queries from previous turns. Figure 1 illustrates the rationale of the proposed approach. We extract the entities from passages and queries, and then propose two methods to explore this information: one based on the relations between the entities in the query and in the passage, and one that adapts the PageRank algorithm to a graph of entities. The answer is then generated from the passages that exhibit a stronger relation with the most salient entities of the conversation up to a given turn.

Figure 1: Overview of the knowledge-driven conversational system and answer generation architecture.

4.1. Entity Linking

To build the conversation-specific knowledge graph, we start by performing Entity Linking (EL) over both conversation queries and passages. Entity linking tackles the two main existing classes of entities Balog (2018): named entities and concepts. The named entities class includes specific locations, people and organisations. Concepts are abstract objects that include, but are not limited to, mathematical, physical and social concepts such as "distance", "gravity" and "authority". We examined several entity linkers, some focused only on named entities (e.g., AIDA Hoffart et al. (2011) and FOX Speck and Ngonga Ngomo (2014)) and others covering both named entities and concepts (e.g., WAT Piccinno and Ferragina (2014) and DBpedia Spotlight Daiber et al. (2013)).

4.2. Selection of the best Passages

Each passage is scored according to the entity graph of the conversation. The relation between a query q_i, on turn i, and a candidate passage p_k is computed as

    PassageScore(p_k | q_i) = \sum_{e_j ∈ E_{p_k}} EntityRank(e_j) / #|e_j|,    (4)

where the target passage p_k is scored by summing the EntityRank(·) scores of all entities e_j present in that passage. When E_q = ∅, this is equivalent to γ = 0 in equation 4.

4.3. Entities with Strong Pairwise-Relations

Given that EL provides us with meaningful DBpedia identifiers for the mentions detected in the text, we can obtain the relationship between two given entities by exploiting their connections on DBpedia. This knowledge can be used to rearrange the order of the top passages provided by the previous step of our conversational system. We can obtain a measure of entity relatedness between e_1 and e_2, two entities of interest, following the measure proposed by Milne and Witten (2008):

    EntRel(e_1, e_2 | KB) = \frac{ \log(\max(|E_1|, |E_2|)) − \log(|E_1 ∩ E_2|) }{ \log(|D|) − \log(\min(|E_1|, |E_2|)) },    (5)

where E_1 and E_2 are the sets of all entities that link to e_1 and e_2 in the KB, respectively, and D is the set of all the entities in the KB. We use DBpedia Bizer et al. (2009) as our KB.
Moreover, to allow soft context shifts within the conversation, we con- sider the entities present in both the current and previous queries. The graph of entities of a given conversation is com- puted as T . GraphE = MapE · MapE This results in the covariance matrix between different en- tities. We control the sparsity of the graph by cancelling entity relations below a given threshold. This allows us to compute the centrality of each entity ei in the conversation (8) by applying the PageRank algorithm to the conversation entity-graph: EntityRank(ei) = 1−α N +α · (cid:80) EntityRank(e j) #|e j| e j∈neighbors(ei,GraphE ) , (9) where ei is the target entity, e j correspond to a neighbor- ing entity of ei, GraphE is the conversation’s entity graph, and the damping factor α was set to 0.99. The rationale for using PageRank, is that in the top passages there will be a stronger focus on the entities that are central to the conversation, while the entities that lie outside the conver- sation topic will be sparsely connected to the other entities in the graph. 4.5. Answer Generation with Entity Relatedness Having identified a set of candidate passages according to the retrieval model (eq. 3) and the entities knowledge, the goal is to generate a natural language response that combines the information comprised in each of the pas- sages. To address this problem, we follow an abstractive summarisation approach, which unlike extractive sum- marisation that just selects existing sentences, can portray both reading comprehension and writing abilities, thus al- lowing the generation of a concise and comprehensive di- gest of multiple input passages. Therefore, we select the passages that maximise the expression: argmax pk PassageS core(pk, qi) (10) and generate the agent response with the sequence of the top N = 3 passages, “p1 p2 . . . pN”. With this strategy, we implicitly bias the answer generation by asking the model to summarise the passages that are not only deemed as more relevant according to the retrieval system, but also that maximise the relatedness measure from eq. 6 or eq. 4. This task has been commonly addressed by seq2seq models that learn to map input sequences to output se- quences, but the Transformer architecture Vaswani et al. (2017) has led to groundbreaking results, due to its high effectiveness at modelling large dependency windows of textual sequences. Thus, in this work we consider the Text-to-Text Transfer Transformer (T5) Raffel et al. (2019) based on the encoder-decoder Transformer archi- tecture. This model is pre-trained on the large C4 cor- 5 pus, which was derived from Common Crawl1. A masked language modelling objective is used, where the model is trained to predict corrupted randomly sampled tokens of varying sizes. 5. Evaluation 5.1. Datasets and Protocol 5.1.1. TREC CAsT Dataset The TREC CAsT dataset, Dalton et al. (2020b), was used to evaluate both the conversational retrieval and the knowledge-aware answer generation components. There are 20 labelled conversational topics each with about 10 turns. The evaluation process uses a graded rel- evance that ranges from 0 (not relevant) to 4 (highly relevant). The passage collection is composed by MS MARCO Nguyen et al. (2016), TREC CAR Dietz et al. (2018), and WaPo NIST (2019) datasets, which creates a complete pool of close to 47 million passages. 5.1.2. 
5.1.2. Experimental Protocols

To evaluate the passage retrieval component we used the TREC CAsT setup and the official metrics: nDCG@3 (normalised Discounted Cumulative Gain at 3), MAP (Mean Average Precision), and MRR (Mean Reciprocal Rank). In the answer generation experiment, we used METEOR and the ROUGE variant ROUGE-L. The reference passages correspond to all the passages with a relevance judgement of 3 or 4. Hence, the goal is to generate answers that cover, as much as possible, the information contained in all the relevant passages, in one concise and summarised answer.

5.2. Implementation

5.2.1. Passage Retrieval

To index and search we used Pyserini², and specifically the Language Model Dirichlet (LMD) Zhai and Lafferty (2001) retrieval model with the KStem³ stemming algorithm.

² https://github.com/castorini/pyserini
³ http://lexicalresearch.com/kstem-doc.txt

To perform re-ranking, we used a BERT-LARGE
In order to properly compare the different annotators we then fix the number of words of our liking and extract the min length value that each different model requires to allow the creation of a summary with that attribute. It is obvious that, in order to obtain the maximum pos- sible value in METEOR, for instance, a big value for the min length parameter has to be chosen, and to obtain a better value in ROUGE and BLEU, the min length should be set to around 80 tokens, as easily seen in the Figure. However, by doing so, we argue that the goal of studying the generation of short and informative answers, essential for a conversational search setting, is completely missed. We believe that answers which possess the fewer words possible without losing information are the most desirable. With a quick analysis on the results yet to be presented we observed that 1) PEGASUS is the model which can present the shorter summaries out of the three models in all settings and that 2) the least number of words that all models can generate collectively applying all the proposed methods is on average 50. Because of this, we will fix the number of the summaries generated to have on average 50 words in order to better compare the different summarizers in this a setting where the least number of words possible is used to answer to a query. 6. Results and Discussion 6.1. Quantitative Results For all experiments we report the F1 scores for ROUGE-1, ROUGE-2 and ROUGE-L, because both pre- cision and recall are important for the setting at hands. Precision shows to be important regarding the concise na- ture of the created summaries and Recall captures how much of the reference summary is captured in the cre- ated summaries. We find important to report ROUGE-2 scores in conjunction with ROUGE-1 to show the fluency of the created summaries, with the intuition that the more closely the words ordering of the reference summary is followed, the more fluent the summary can be considered. 6.1.1. Answer Generation Baselines In order to assess the performance of the proposed models, we firstly measured the various summaries pro- 7 020406080100120140160Summary Minimum Length (Tokens)406080100120Metric AverageNumber of Words in SummaryT5-BASEBARTPEGASUS020406080100120140160Summary Minimum Length (Tokens)3132333435363738Metric AverageROUGE-1T5-BASEBARTPEGASUS020406080100120140160Summary Minimum Length (Tokens)2425262728Metric AverageROUGE-LT5-BASEBARTPEGASUS020406080100120140160Summary Minimum Length (Tokens)20.022.525.027.530.032.535.0Metric AverageMETEORT5-BASEBARTPEGASUS020406080100120140160Summary Minimum Length (Tokens)182022242628Metric AverageBLEU-1T5-BASEBARTPEGASUS020406080100120140160Summary Minimum Length (Tokens)810121416Metric AverageBLEU-4T5-BASEBARTPEGASUS duced by varying the min length parameter from 0 to 160 tokens. In order to study the performance of the summariz- ers, we established different baselines, targeting both the original rank of documents and the proposed methods to complement and aid a comparison between the results achieved by the different models. From this point for- ward, we will refer the proposed methods Entity Related- ness and Entity Graph Passage Scoring by their initials: ER and EG, respectively. We will refer from now on the ranked passages given by the model described in Section 3 as “O”, which stands for Original passages rank. The baselines present in Table 1 are composed by the top-3 text passages. 
We can clearly see that the baseline which shows better metric values has neither the biggest or smallest number of words. It is also relevant to point out that these baselines have at least the double of words than their Top-1 counterparts but not always present better metric values. Before showing our main experiments results, we reaf- firm that the usage of full text passages as references brings an unfair comparison between the created sum- maries and the above baselines. The baselines shown bring forward the evidence that having the biggest number of words does not led to better metrics and, ignoring the different number of input words, the baselines with best performance are achieved by following the proposed EG method. A more complete evaluation would use ground truth, however, this is the only feasible way regarding the dataset at hands. To compensate for this we also elabo- rated a human evaluation experiment whose results will be explored further ahead . Figure 3: Answer generation versus retrieval performance per conversa- tion turn. The average summary size is 50 and 70 in the top and bottom graphs respectively. answer generation performance is overall stable. When asked to perform summaries with 70 words as average, the METEOR values displayed become less dense. We also observed that the decreases in performance are linked to sub-topic shifts within the same conversation topic and BART is the model which tends to follow more closely the trend established by the retrieval performance. We suspect this is because of his extractive behavior. Finally, in Table 2 we illustrate the answer generation with all the three Transformers. This Table further con- firms the abstractive versus extractive summarization be- haviors of the different Transformers. In this example we see that T5-BASE tries to generate new sentences by combining different sentences and PEGASUS makes use of verb synonyms not seen in text in order to convey the same message but with fewer words. 6.1.2. Answer Generation with Original Rank 6.1.3. Answer Generation with Entity Graph Rank We will now report the results of our first experiment, following the original ranked passages, feeding the top-3 to the different models and fixing the min length param- eter as needed to create summaries with 50 and 70 words on average. In Figure 3 we analyze the retrieval and the answer gen- eration performance over conversation turns by making usage of two axis showcasing different metrics. We see that peak retrieval performance is achieved on the first turn, which was expected given that the first turn is the one that establishes the topic. As the conversation pro- gresses, retrieval performance decreases, but surprisingly, In the third experiment, we focus on the proposed En- tity Graph Passage Scoring (Section 4.4), i.e. EG, and investigate the impact of γ on the summary quality. Fig- ure 4 shows the summary generation quality results. On a quick glance it may appear as the scores are directly re- lated to the number of words in the input, but with a more attentive inspection we can see that it is not quite right, as γ = 0 and γ = 0.75 have approximately the same number of words but induce different metrics performance. 
We observe that the best results are obtained with γ = 0.25, meaning that more weight was given to entities from the top-10 retrieved passages, while entities from the current and previous queries still received some weight. This supports our initial intuition that all entities contribute to the context of the conversation. While the conversation-aware query rewriting resolves coreferences in the current query, it is still important to consider entity relations between passages and the current and all previous queries. This is further evidenced by the results with γ = 1.0 (passage entities are ignored), which was the second best. When γ = 0.0 (query entities are ignored), performance drops, as only knowledge from passage entities is used.

Figure 4: Input size and answer generation performance results under different γ values for summaries of length 50.

For entity graph passage scoring, we selected γ = 0.25 based on the previous experiment. We observe that BART-EG achieves the best overall results, although PEGASUS-EG shows better BLEU and METEOR scores with summaries of 50 words. We also notice that, with this approach, adding the query to the top-3 documents does not lead to better performance.

Table 3: Averaged metric values for summaries created with the top-3 documents following the Entity Relatedness (top) and Entity Graph (bottom) methods as input.

Model          # W  ROUGE-1  ROUGE-2  ROUGE-L  BLEU-1  BLEU-4  METEOR
T5-BASE-ER     50   31.97    15.07    24.43    19.97    8.23   21.13
BART-ER        50   33.83    17.75    26.53    22.95   12.22   22.93
PEGASUS-ER     50   32.15    15.86    25.14    21.18   10.41   21.54
T5-BASE-ER     70   33.35    16.09    24.89    23.50   10.13   24.64
BART-ER        70   36.57    19.95    27.97    27.39   14.98   28.11
PEGASUS-ER     70   35.09    18.43    26.76    25.93   13.63   26.76
T5-BASE-EG     50   35.90    20.08    28.50    22.19   11.34   24.37
BART-EG        50   36.84    21.58    29.66    24.16   14.19   24.99
PEGASUS-EG     50   36.89    21.36    29.18    24.63   14.64   25.52
T5-BASE-EG     70   37.08    20.47    28.19    25.77   12.76   27.88
BART-EG        70   39.86    23.80    31.11    30.17   18.46   30.85
PEGASUS-EG     70   39.50    23.36    30.76    29.52   17.90   30.89

Looking at the Top-1 O Trim baseline results (appended at the end of Table 3), we can easily notice that, for summaries with 50 words, the EG approach leads to better ROUGE-1 and ROUGE-2 scores for all models. BART and PEGASUS also show better ROUGE-L scores than this baseline. Regarding the original Top-1 O baseline, looking at the results achieved with #W = 70, we can also notice that PEGASUS and BART far surpass this baseline in ROUGE-1 and ROUGE-2; on the other metrics, these models show very close results.

Figure 5: Answer generation versus retrieval performance per conversation turn applying the EG method. The minimum length is 50 and 70 in the top and bottom graphs, respectively.

To show how the performance evolves through the different turns of the conversation, we can see in Figure 5 that the EG approach maintains a stable performance (and it is noticeable that the peaks achieved by EG bring an overall better performance). As observed with the "O" method, the differences between the models become more evident when creating summaries with more words. Interestingly, EG follows the trend established by the retrieval performance more closely, especially with summaries of size 70.
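The exact scoring formula is defined in Section 4.4 and is not reproduced in this excerpt. The sketch below is therefore only a plausible reading of the γ behaviour described above (γ = 1 ignores passage entities, γ = 0 ignores query entities), with a hypothetical set-based coverage standing in for the real graph weights.

```python
# Illustrative sketch of the gamma interpolation in Entity Graph passage
# scoring. NOT the paper's formula (see Section 4.4); set overlap is used
# here as a stand-in for the actual entity-graph weights.
def eg_score(passage_entities, query_entities, graph_entities, gamma=0.25):
    """Score a candidate passage by its coverage of entity sources.

    passage_entities: entities linked in the candidate passage
    query_entities:   entities from the current and all previous queries
    graph_entities:   entities contributed by the top-10 retrieved passages
    """
    passage_entities = set(passage_entities)
    if not passage_entities:
        return 0.0
    query_cov = len(passage_entities & set(query_entities)) / len(passage_entities)
    graph_cov = len(passage_entities & set(graph_entities)) / len(passage_entities)
    # gamma = 1.0 -> passage-derived entities ignored;
    # gamma = 0.0 -> query-derived entities ignored.
    return gamma * query_cov + (1.0 - gamma) * graph_cov
```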
6.2. Human Evaluation

To better assess how the different proposed methods and the baseline impact the information quality, conciseness, and naturalness of the answers given in the conversations, we conducted a human evaluation experiment on Amazon Mechanical Turk. In this experiment we asked each Worker to evaluate a conversation by rating each conversation turn on two 1-5 Likert scales, with higher being better, each targeting, for each turn:

• Information Quality (IQ) - which aims to evaluate how well an answer addressed the query of the present turn, taking into account the context of the conversation.

• Naturalness and Conciseness (NC) - which aims to evaluate whether the answers could be thought of as created by human beings and do not include too much extraneous information.

Each task comprised one random conversation created with a combination of model (baseline, T5-BASE, BART, PEGASUS), method (O, ER, EG), and length (50, 70). For the baseline, the method is fixed to O and the length is not controlled. The user, of course, did not know which combination was being evaluated in a given task. We chose to study only these combinations, since encompassing all of them would greatly increase the experiment's complexity. Each task was independently done by 4 different Workers who, to be able to take part in the task, had to have a minimum approval rate of 95% and had to have already completed at least 100 Human Intelligence Tasks (HITs).

Table 2: Answer generation example for the turn "What was the first artificial satellite?". The summaries have on average 50 words. Green sentences illustrate abstractive and blue sentences illustrate extractive summaries.

Retrieval Passage 1: The first artificial Earth satellite was Sputnik 1. Put into orbit by the Soviet Union on October 4, 1957, it was equipped with an on-board radio-transmitter that worked on two frequencies: 20.005 and 40.002 MHz. Sputnik 1 was launched as a step in the exploration of space and rocket development. While incredibly important, it was not placed in orbit for the purpose of sending data from one point on earth to another. And it was the first artificial satellite in the steps leading to today's satellite communications.

Retrieval Passage 2: The first artificial satellite was Sputnik 1. It was the size of a basketball and was made by the USSR (Union of Soviet Socialist Republics) or Russia. It was launched on October 4, 1957.

Retrieval Passage 3: The first artificial satellite was Sputnik 1, launched by the Soviet Union on October 4, 1957, and initiating the Soviet Sputnik program, with Sergei Korolev as chief designer (there is a crater on the lunar far side which bears his name). This in turn triggered the Space Race between the Soviet Union and the United States.

T5-BASE: the first artificial satellite was launched by the ussr or Russia. it was the size of a basketball and launched on October 4, 1957. it was equipped with an on-board radio-transmitter that worked on two frequencies. it was not placed in orbit for the purpose of sending data from one point on earth to another.

BART: The first artificial satellite was Sputnik 1, launched by the Soviet Union on October 4, 1957. It was equipped with an on-board radio-transmitter that worked on two frequencies: 20.005 and 40.002 MHz. This in turn triggered the Space Race between the Soviet Union and the United States.

PEGASUS: Sputnik 1 was launched by the Soviet Union on October 4, 1957. It was the first artificial satellite in the steps leading to today's satellite communications. It was not used to send data from one point on earth to another. Sputnik 1 triggered the Space Race between the Soviet Union and the United States.
Additionally, all HITs were inspected to the best of our ability. When a single user submitted HITs in one continuous session, the times of the first and last submissions and the number of HITs performed were used to calculate the average time spent per HIT. Users with an average of less than 20 seconds had their submissions rejected, and those HITs were re-submitted to other users. In total, this experiment was performed by 136 people.

Ideally, the number of HITs per Worker would have been roughly the same for all Workers, to account for extreme Workers who consistently rate all received HITs high or low. However, this feature is not accessible via the Amazon Mechanical Turk platform, and we did verify that the results did not change drastically when the top-performing users were excluded from the experiment results.

In Figure 6 we can see the obtained evaluations of Information Quality and of Naturalness and Conciseness, averaged per row. Each row comprises 20 different conversations, each evaluated by 4 different Workers, totaling 80 evaluations per row.

First, we can observe that overall the Information Quality of the answers was rated higher than the Naturalness and Conciseness, as the ranges used on the two axes show. Note that the range of each axis is chosen to portray the differences between scores; if the axes started at 0, the differences would be less striking, but they would still be there. We can easily see that the Baseline shows the worst value in terms of IQ, and that the best result, both in terms of IQ and NC, is obtained by combining the BART model with answers of 70 words generated with the EG method. The EG method also seems to synergize well with T5-BASE at Length 70 in terms of IQ. In terms of NC, the results suggest that the EG method coupled with PEGASUS and Length 70 also achieves better results than the other combinations. On the other hand, the summaries of 50 words created using PEGASUS and the ER method show low results on both evaluation types, suggesting that this combination should not be used in this conversational search setting.

To better understand the final conclusions we can draw from this experiment, and to check whether the results reveal statistical significance, we performed ANOVA and paired t-tests with α = 0.05. With all the information gathered, the experiment results suggest the following:

• Summaries created with 70 words on average have better Information Quality.

To determine whether there is an interaction effect between our three independent variables (Model, Length, and Method) on our continuous dependent variables (Information Quality, and Naturalness and Conciseness), we performed a three-way ANOVA test. The results show statistical significance only when the dependent variable studied was Information Quality: summaries created with more words (70) were indeed preferred over summaries created with fewer words (50). This goes against our initial hypothesis that a conversational search answer should be short and informative, but it can make sense if our initial setting of 50 words is too short to convey the information required to meet the conversation goal. It can also be connected to the type of questions asked, since some questions can be answered more directly than others. Another explanation is that users of these systems want to learn as much as possible about the discussed topic, without being overwhelmed, of course. This test also suggests a statistically significant three-way interaction between Model, Length, and Method, F(4, 342) = 2.437, p = .047.

• The best combinations to improve our conversational search system's Information Quality are:
– EG method with summaries of length 70 created by BART.
– EG method with summaries of length 70 created by T5-BASE.
– ER method with summaries of length 70 created by PEGASUS.

We compared each combination of Model/Length/Method with the Baseline to better understand whether the combinations indeed lead to better system performance, rather than results achieved by mere chance. A paired-samples t-test was conducted to compare IQ and NC (averaged per topic) separately, between the conditions where a given combination of Model/Length/Method was and was not applied.

Focusing on the dependent variable IQ, there was a significant difference in the scores for the BART/70/EG (M = 4.05, SD = .33) and the Baseline (M = 3.74, SD = .31) conditions, t(19) = −3.25, p = 0.004. There was also a significant difference in the scores for the T5-BASE/70/EG (M = 4, SD = .32) and the Baseline conditions, t(19) = −2.68, p = 0.015. Additionally, a significant difference was found in the scores for the PEGASUS/70/ER (M = 3.93, SD = .3) and the Baseline conditions, t(19) = −2.22, p = 0.039. These results suggest that these combinations really do have an impact on the IQ perceived in the conversations: when they are applied, system performance in terms of Information Quality increases.
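The statistical procedure just described can be sketched as follows, assuming a pandas DataFrame with one row per evaluation and columns named model, length, method, and IQ; the column names and layout are assumptions, not the authors' code.

```python
# Sketch of the analysis described above: a three-way ANOVA on the human
# ratings, and paired-samples t-tests of each combination against the
# Baseline. Column names (model, length, method, IQ) are assumed.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

def three_way_anova(df: pd.DataFrame):
    """Three-way ANOVA for Model x Length x Method on Information Quality."""
    fitted = ols("IQ ~ C(model) * C(length) * C(method)", data=df).fit()
    return sm.stats.anova_lm(fitted, typ=2)

def compare_to_baseline(combo_scores, baseline_scores):
    """Paired-samples t-test of one combination versus the Baseline.

    Both inputs are per-topic mean IQ (or NC) scores, aligned by topic.
    """
    return stats.ttest_rel(combo_scores, baseline_scores)
```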
In order to better understand whether any of these combinations was superior to the other two, we performed a second paired-samples t-test, but found no statistically significant results in terms of either IQ or NC. It is interesting to notice that the best-performing combinations make use of the proposed methods. In Table 4 we show the results of the human evaluation on a 1-100 scale, to better visualize the differences between each combination. Regarding the BART/70/EG combination, we can see an improvement of over 6%.

Figure 6: Averaged values of the human evaluation in terms of Information Quality, and Naturalness and Conciseness, per combination of Length, Model, and Method. Each table pane shows a vertical gray line representing the average of each Model over the different Lengths.

Table 4: Human evaluation side-by-side results on a 1-100 scale. The Baseline obtains a mean value of 74.85 in terms of IQ and 70.97 in terms of NC. The difference from the Baseline is shown in parentheses, in percentage points; in the original, bold values are statistically significant and underlined values show the best score achieved by each model.

IQ
Model     Length 50: O / ER / EG                      Length 70: O / ER / EG
T5-BASE   77.5 (+2.6) / 78.9 (+4.0) / 76.0 (+1.1)     78.9 (+4.0) / 76.5 (+1.6) / 80.2 (+5.3)
BART      78.0 (+3.1) / 78.3 (+3.4) / 75.5 (+0.7)     77.9 (+3.0) / 78.7 (+3.8) / 81.2 (+6.3)
PEGASUS   76.3 (+1.5) / 75.0 (+0.1) / 78.7 (+3.9)     77.8 (+2.9) / 78.7 (+3.8) / 77.8 (+2.9)

NC
Model     Length 50: O / ER / EG                      Length 70: O / ER / EG
T5-BASE   73.6 (+2.6) / 72.0 (+1.0) / 72.15 (+1.2)    74.1 (+3.1) / 70.5 (-0.5) / 72.6 (+1.6)
BART      73.2 (+2.2) / 72.1 (+1.1) / 71.2 (+0.2)     72.9 (+1.9) / 70.85 (-0.1) / 75.7 (+4.8)
PEGASUS   70.5 (+0.5) / 70.7 (+0.3) / 72.3 (+1.4)     71.81 (+0.8) / 72.2 (+1.2) / 75.0 (+4.9)

7. Discussion

We deem it useful to use automatic metrics (ROUGE, BLEU, METEOR) as proxies for quantitatively measuring the results achieved by the different models under the different proposed methods. However, these metrics provide only limited information and say nothing about fluency or whether information needs were met. In this regard, options that did not improve metric scores can still be considered for further analysis, since they can contribute to more natural and informative answers.

7.1. Analysis of the Conversational Answers created with Added Query

When using the query jointly with the top-3 passages, interesting cases arose in which the query was "woven" into the summary, producing a much more natural and desirable answer. We can see an example of this phenomenon in Table 5. There are also cases in which the addition of the query 1) does not lead to any difference in the created summary, 2) makes the summary use the same words but in a different order (usually starting with the words seen in the query), or 3) completely changes the created summary, with no similarity between the original and added-query approaches.

Table 5: Different answers given by the different model approaches for the question "What is the largest shark ever to have lived on Earth?"

PEGASUS-O: The megalodon is an extinct species of shark that roamed the waters of Earth over 1.5 million years ago. Although now extinct, it is still listed in the Guinness World Records as the largest shark (...)

PEGASUS-O-wQ: The largest shark to have ever lived on Earth is thought to have been the megalodon. Although now extinct, it is still listed in the Guinness World Records as the largest shark (...)
7.2. Analysis of the Conversational Answers with the Entity Density filter

As with the other methods, applying the Entity Density filter does not automatically imply a different answer generation. Looking at the filtered passages overall, there are noticeable cases in which the filter acts as expected, as demonstrated in Table 6. However, we also observed cases in which the filter removed phrases that could have brought relevant information forward. It was also noticeable that some top-3 texts gather a lot of irrelevant information, yet the ED method could not be applied, since the confidence parameter set for the Entity Linkers could lead to the identification of relevant concepts amidst the text. We believe that for texts with a larger number of words this confidence parameter should be set higher, so as to better curate information by being more critical about the contents to be selected. We invite the reader to visit our Conversation Interactive Explorer [5] and check the differences in quality that each available parameter can make in the showcased conversations.

Table 6: Different processed text passages used to answer a question about blood.

BART-EG: Confidence votes 133. Red blood cells are produced in the bone marrow. Red blood cells are also known as erythrocytes (...)

BART-EG-ED: Red blood cells are produced in the bone marrow. Red blood cells are also known as erythrocytes (...)

7.3. Analysis of a Conversation Knowledge Graph

Figure 7 depicts the entity graph of the top-10 passages, from a conversation turn focused on the entity "The Avengers". The top-3 passages and the corresponding answer summary made by T5-BASE can be seen in Table 7. From the graph, we can identify top entities (dark blue), the most salient entities for the current turn of the present topic, and bottom entities (light blue), which are connected to top entities but are not as central. As such, passages with a better coverage of those entities (according to eq. 4) are expected to be ranked higher. We can see that the three passages in Table 7 gather both top and bottom entities. Using these passages as input to the answer generation component, we can see that the produced answer ends up accounting for all this information and successfully answers the given query.

[5] https://knowledge-answer-generation.herokuapp.com
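Equation 4 and the modified PageRank used to rank entities are defined earlier in the paper and are not shown in this excerpt; the sketch below is an illustrative reading of the two steps described above (entity salience via a personalization-biased PageRank, then passage scoring by coverage), using networkx. The personalization scheme and the coverage function are assumptions, not the paper's exact formulas.

```python
# Illustrative sketch: rank entities of a conversation entity graph with a
# query-biased PageRank, then score passages by coverage of salient entities.
import networkx as nx

def salient_entities(entity_graph: nx.Graph, query_entities, top_k=10):
    """Rank graph entities with PageRank, biased toward query entities."""
    personalization = {n: (1.0 if n in query_entities else 0.1)
                       for n in entity_graph.nodes}
    pr = nx.pagerank(entity_graph, personalization=personalization)
    return sorted(pr, key=pr.get, reverse=True)[:top_k]

def coverage_score(passage_entities, salient):
    """Fraction of salient entities mentioned by the passage (stand-in
    for the coverage defined in eq. 4)."""
    return len(set(passage_entities) & set(salient)) / max(len(salient), 1)
```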
Figure 7: Conversation entity graph for the topic "The Avengers". The graph considers the most salient entities of the top-10 passages.

Table 7: Answer generation example. The summary minimum length is set to 90. The top and bottom entities are highlighted in blue and light blue, respectively.

Query Turn: Who are The Avengers?

Passage 1: The Avengers (2012 film) Marvel's The Avengers (classified under the name Marvel Avengers Assemble in the United Kingdom and Ireland), or simply The Avengers, is a 2012 American superhero film based on the Marvel Comics superhero team of the same name, produced by Marvel Studios and distributed by Walt Disney Studios Motion Pictures.

Passage 2: Marvel's The Avengers (Marvel Avengers Assemble in the UK and Ireland), more commonly known as The Avengers, is a 2012 American superhero film, scripted and directed by Joss Whedon, based on the Marvel Comics superhero team of the same name. The film stars an ensemble cast consisting of Robert Downey, Jr., Chris Evans, Mark Ruffalo, Chris Hemsworth, Scarlett Johansson, Jeremy Renner, Tom Hiddleston, Clark Gregg, Cobie Smulders, Stellan Skarsgård and Samuel L. Jackson.
In The Avengers, Nick Fury (Jackson), director of the peacekeeping organization S.H.I.E.L.D., recruits Iron Man (Downey), Captain America (Evans), the Hulk (Ruffalo), and Thor (Hemsworth) to form a team that must stop Thor's adoptive brother Loki (Hiddleston) from subjugating Earth.

Passage 3: The Avengers (also known as Marvel's The Avengers and classified in the UK and Ireland under the title Marvel Avengers Assemble) is a 2012 American superhero film produced by Marvel Studios and distributed by Walt Disney Studios Motion Pictures, based on the Marvel Comics superhero team of the same name.

T5-BASE Answer: The Avengers is a 2012 american superhero film based on the Marvel comics superhero team of the same name. the film stars an ensemble cast consisting of Robert Downey, Jr., Chris Evans, Mark Ruffalo, Chris Hemsworth, Scarlett Johansson and Jeremy Renner. In the film, Nick Fury recruits Iron man, Captain America, the Hulk and Thor to form a team that must stop Loki from subjugating earth.

8. Conclusions

In this paper we proposed a knowledge-aware answer generation method that considers the conversation-specific graph of entities. The key findings of this paper are as follows:

1. Knowledge-aware Search-Answer Generation. The proposed method was able to abstract the information contained in multiple passages to generate a single, yet informative, search-answer. This reduces the burden on the user, who now only needs to read a snippet-long answer containing links to multiple passages.

2. Conversation-specific Knowledge-Graph. The quality of the knowledge-graph creation process is directly influenced by the quality of the retrieved passages. A fundamental step in the creation of conversation knowledge-graphs was the state-of-the-art conversational search baseline that we used to select the seeds for the graph creation process.

3. Conversation-specific Rank of Entities. The final critical element of the proposed method concerns the ranking of entities by their importance during the conversation. We applied a modified PageRank algorithm to detect the salient entities along the conversation, to focus the answer generation process on the corresponding passages.

The results presented in this paper support the initial hypothesis and opened other questions that we plan to investigate in the future. The first is related to the quality of entity linkers, which may be further improved. The second concerns research on models that can seamlessly combine the advantages of the Transformer architecture with the conversation knowledge-graphs.
IdeaBench: Benchmarking Large Language Models for Research Idea Generation

Sikun Guo*, Amir Hassan Shariatmadari*, Guangzhi Xiong, Albert Huang, Eric Xie, Stefan Bekiranov, Aidong Zhang
University of Virginia
{qkm6sq, ahs5ce, hhu4zu, kfa7fg, jrg4wx, sb3de, aidong}@virginia.edu

arXiv:2411.02429v1 [cs.CL] 31 Oct 2024

Abstract

Large Language Models (LLMs) have transformed how people interact with artificial intelligence (AI) systems, achieving state-of-the-art results in various tasks, including scientific discovery and hypothesis generation. However, the lack of a comprehensive and systematic evaluation framework for generating research ideas using LLMs poses a significant obstacle to understanding and assessing their generative capabilities in scientific discovery. To address this gap, we propose IdeaBench, a benchmark system that includes a comprehensive dataset and an evaluation framework for standardizing the assessment of research idea generation using LLMs. Our dataset comprises titles and abstracts from a diverse range of influential papers, along with their referenced works. To emulate the human process of generating research ideas, we profile LLMs as domain-specific researchers and ground them in the same context considered by human researchers. This maximizes the utilization of the LLMs' parametric knowledge to dynamically generate new research ideas. We also introduce an evaluation framework for assessing the quality of generated research ideas. Our evaluation framework is a two-stage process: first, using GPT-4o to rank ideas based on user-specified quality indicators such as novelty and feasibility, enabling scalable personalization; and second, calculating a relative-ranking-based "Insight Score" to quantify the chosen quality indicator. The proposed benchmark system will be a valuable asset for the community to measure and compare different LLMs, ultimately advancing the automation of the scientific discovery process. Our code and dataset are available at: https://anonymous.4open.science/r/IdeaBench-2747/.

Introduction

Recent years have witnessed the rapid development of Large Language Models (LLMs). LLMs like GPT-4 (OpenAI 2023) and the LLama series (Touvron et al. 2023) introduced advanced capabilities that set them apart from previous generations of machine learning models. Among these capabilities, in-context learning allows LLMs to understand and respond to user prompts in a nuanced manner without requiring additional training for each specific task, enabling LLMs to generalize across a wide range of tasks and provide robust state-of-the-art performance even with limited data (Brown et al. 2020). As a result, LLMs have revolutionized the way humans interact with AI systems, making it possible to generate coherent text, translate languages, answer questions, and even compose creative content with unprecedented accuracy and fluency (Bubeck et al. 2023). The impact of these advancements extends beyond consumer applications, influencing various sophisticated domains such as education (Moore et al. 2023), healthcare (Yang et al. 2023a), and scientific research (Wysocki et al. 2024).

Recently, the impressive performance of LLMs in everyday applications has sparked significant interest in academia, particularly for their potential use in scientific discovery or hypothesis generation (AI4Science and Quantum 2023).
Several studies have explored leveraging LLMs to generate hypotheses or research ideas (Yang et al. 2023b; Wang et al. 2023b; Zhou et al. 2024; Baek et al. 2024; Qiu et al. 2023). However, despite numerous results, a unified and comprehensive framework for evaluating generated research ideas is still lacking, making it difficult for the community to clearly understand the performance spectrum of different techniques for generating research ideas.

To address this limitation, we introduce a standardized evaluation framework designed to emulate how human researchers generate research ideas. This framework, termed IdeaBench, comprises three main components: dataset construction, research idea generation, and a novel metric to evaluate the quality of the generated research ideas. The intuition behind this framework is grounded in the typical process by which researchers generate new scientific research ideas, described below:

1. Targeting a specific topic.
2. Reviewing related literature, focusing on recent findings and methodologies.
3. Identifying gaps in knowledge or methods within these recent findings.
4. Proposing research ideas to address these gaps.

We first construct a benchmark dataset that includes 2,374 meticulously filtered target papers' abstracts from biomedical research fields. These target papers serve as the ground-truth sources of research ideas. Additionally, the dataset contains the abstracts of the papers referenced by the target papers, providing the context necessary for LLMs to generate relevant research ideas. This comprehensive dataset aims to capture the complexity and specificity of scientific research, particularly in the biomedical domain, thus offering a solid foundation for evaluating LLMs' capability to generate research ideas.

Based on the benchmark dataset, we design a prompt template that leverages LLMs to generate research ideas. In addition to grounding the context for idea generation using reference papers from our dataset, we also profile the LLMs as domain-specific researchers in the prompt. This approach aims to dynamically maximize the utilization of the LLMs' parametric knowledge, enabling the generation of more in-depth and insightful research ideas. It can also serve as a baseline for future comparisons.

To accurately assess the quality of generated research ideas, we design an evaluation framework that incorporates two critical components: personalized quality ranking and relative quality scoring. This dual approach allows for a nuanced assessment that takes into account user-defined quality indicators such as novelty and feasibility. Our design ensures a versatile and comprehensive evaluation framework, capable of adapting to different research contexts and providing meaningful insights into the quality of LLM-generated ideas. Our results show that recent high-capacity LLMs are capable of generating research ideas using the IdeaBench dataset, and that our metric is able to assess the quality of generated research ideas along different dimensions. We hope this work inspires academia to further unleash the potential of LLMs in supporting research ideation, ultimately accelerating scientific discovery in the future.

To summarize, our contributions are as follows:

• We construct the IdeaBench dataset, which consists of 2,374 influential biomedical target papers along with their 29,408 reference papers, to evaluate LLMs' capabilities in generating research ideas.
• We propose an evaluation framework which offers a scalable and versatile metric called the "Insight Score", which can quantify novelty, feasibility, or any other quality indicator defined by human researchers.

• We conduct extensive experiments to demonstrate several LLMs' abilities to generate research ideas based on our dataset and evaluation metric.

Related work

Machine Learning for Hypothesis Generation. Most existing research on hypothesis generation has concentrated on literature-based discovery (LBD), aiming to predict pairwise relationships between discrete concepts (Wang et al. 2023a). This approach involves uncovering new scientific knowledge by mining the literature to identify meaningful implicit associations between unrelated biomedical concepts. The majority of prior studies have focused on identifying these implicit connections from snapshots of the corpus. While these LBD-based approaches are accurate and verifiable, they assume that all concepts are known beforehand and need only to be connected, without considering the contextual factors that human scientists incorporate during ideation. Moreover, these methods do not address the inductive and generative nature of scientific inquiry. Recently, several new studies have explored the use of large language models (LLMs) for hypothesis generation. For instance, in (Wang et al. 2023b), the authors presented a framework called SciMON that leverages past scientific literature as context for fine-tuning LLMs for hypothesis generation. MOOSE (Yang et al. 2023b) utilized multi-level LLM self-feedback to boost scientific hypothesis discovery in social science. ResearchAgent (Baek et al. 2024) employed LLMs to automatically generate and refine problems, methods, and experiment designs, starting with a core paper and entity-centric knowledge graphs. (Zhou et al. 2024) proposed a prompting approach to iteratively generate hypotheses using LLMs based on training examples.

Evaluation for Open-ended Text Generation. Although human judgment is still considered the gold standard for evaluating open-ended text generation, the Natural Language Processing community has tried to develop different approaches to approximate human evaluation in a scalable way. Traditional metrics like BLEU (Papineni et al. 2002) and ROUGE (Lin 2004) measure the lexical overlap between model-generated content and a ground-truth reference. Later, several efforts used pre-trained language models to measure distributional similarity (Zhang et al. 2019; Zhao et al. 2019) or token probabilities (Yuan, Neubig, and Liu 2021; Thompson and Post 2020). With the increasing popularity and impressive performance of Large Language Models, recent endeavors employ LLMs as autoraters for open-ended text generation (Chiang and Lee 2023; Liu et al. 2023; Bubeck et al. 2023; Bai et al. 2024; Fu et al. 2024; Vu et al. 2024); the effectiveness of LLM autoraters is often reflected in their correlation with human ratings, making autoraters a promising alternative to human evaluators for large-scale evaluation.

Methodology

In this section, we introduce the details of the three components of our framework, namely dataset construction, research idea generation, and evaluation of the generated ideas. The first component collects a set of valuable target papers and reference papers, so that the reference papers can be used to generate new research ideas that are compared with those in the target papers.
The second component designs an LLM prompt tailored for generating research ideas, and the last component formulates an evaluation metric to measure the quality of the generated ideas.

Dataset Construction

The dataset construction consists of two components: curating a set of valuable papers to be used as the target papers, and accumulating the reference papers that were used to generate the ideas in the target papers.

Data Collection. To create a benchmark dataset for evaluating the research idea generation capabilities of LLMs, we meticulously curated a set of high-quality biomedical primary research papers published in 2024. Our goal is to construct a dataset that accurately reflects the state of the art in the field and provides a robust foundation for evaluating LLMs' capabilities for generating research ideas. Motivated by our desire to include only high-quality, peer-reviewed research recognized by the scientific community through citations, we retrieve papers either from top venues or from other venues provided they have accumulated a significant number of citations. We use the Semantic Scholar API (Kinney et al. 2023) to retrieve all biomedical papers published in 2024 in top biomedical conferences, according to the Google Scholar venue rankings (Google Scholar 2024), with at least one citation. We also retrieve papers published in 2024 in other biomedical venues that have at least 20 citations. Any duplicate papers are removed. We refer to these selected papers as target papers, in which the ground-truth research ideas lie.

To further enrich our dataset and provide context, we also extracted the reference papers cited by these target papers, again using the Semantic Scholar API. These reference papers contain the foundational ideas that motivated the research in the target papers, offering valuable insights into the background and rationale behind each study. By mapping each target paper to its corresponding set of reference papers, we create a comprehensive contextual framework that can aid LLMs in generating coherent and relevant research ideas. To ensure the completeness and usability of our dataset, we disregard papers with critical missing information, such as abstracts. This is crucial for maintaining the integrity of our evaluation, as missing information or poor contextualization could hinder LLMs in understanding the main ideas and prevent fair comparisons with generated research ideas.
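A minimal sketch of this collection step is shown below, using the public Semantic Scholar Graph API. The endpoint paths and field names follow the public API documentation; the query construction, venue handling, and pagination are simplified illustrations rather than the paper's released code.

```python
# Sketch of the data collection step via the public Semantic Scholar Graph API.
import requests

API = "https://api.semanticscholar.org/graph/v1"
FIELDS = "title,abstract,venue,year,citationCount,publicationTypes"

def search_papers(query: str, year: str = "2024", min_citations: int = 1):
    """Search for papers from a given year with a citation floor."""
    resp = requests.get(f"{API}/paper/search", params={
        "query": query, "year": year,
        "minCitationCount": min_citations, "fields": FIELDS,
    })
    resp.raise_for_status()
    return resp.json().get("data", [])

def get_references(paper_id: str):
    """Fetch the reference papers of a target paper."""
    resp = requests.get(f"{API}/paper/{paper_id}/references",
                        params={"fields": FIELDS, "limit": 1000})
    resp.raise_for_status()
    return [r["citedPaper"] for r in resp.json().get("data", [])]
```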
Relevance and Significance Based Reference Filtering

We believe that the reference papers provide the most significant information for generating the new research ideas in the target papers. However, not all references cited in a paper are equally relevant to its central theme. Especially when computing resources are limited, it is vital to focus on the most pertinent and significant references in the target papers. Our motivation for implementing a significance- and relevancy-based filtering process is to ensure that the reference papers align closely with the target paper's primary research ideas, thus maximizing the relevance and utility of the information provided to the LLMs. To enhance the relevance of the reference papers, we propose a filtering process that prioritizes references directly contributing to the main research idea of the target paper. This approach excludes irrelevant or overly specific references that do not align with the overarching research theme, thereby optimizing the dataset for the generation of new research ideas under constrained resources. The filtering process is guided by three conditions:

1. Citation Count Threshold. We exclude reference papers with fewer than five citations, to ensure the inclusion of high-quality, widely recognized references.

2. Non-Primary Research Exclusion. We remove non-primary research references, such as reviews, editorials, letters, or books, as labeled by Semantic Scholar. These sources often contain diverse ideas not directly relevant to the target paper's core research.

3. Background Section Relevance. We also exclude reference papers that are not cited in the background section of the target paper, as they are less likely to contribute directly to the target paper's research idea.

This filtering process ensures that the LLMs are provided with highly relevant and focused information, facilitating the generation of new and meaningful research ideas. We use random filtering as a baseline, and the effectiveness of our filtering method is further discussed in the ablation study section. Our approach aims to strike a balance between resource efficiency and richness of information, thereby advancing the quality of research idea generation.
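Conditions 1 and 2 can be checked directly against Semantic Scholar metadata, while condition 3 requires the target paper's full text and is only stubbed here. A minimal sketch follows; the set of excluded publication-type labels is an assumption based on the description above.

```python
# Sketch of the three-condition reference filter described above.
NON_PRIMARY = {"Review", "Editorial", "LettersAndComments", "Book"}

def keep_reference(ref: dict, background_citation_ids=None) -> bool:
    if not ref or ref.get("abstract") is None:
        return False                        # usability: abstract required
    if (ref.get("citationCount") or 0) < 5:
        return False                        # condition 1: citation threshold
    if NON_PRIMARY & set(ref.get("publicationTypes") or []):
        return False                        # condition 2: primary research only
    if background_citation_ids is not None:
        # condition 3: keep only references cited in the background section
        # (requires parsing the target paper's full text, omitted here)
        return ref.get("paperId") in background_citation_ids
    return True

def filter_references(refs, background_citation_ids=None):
    return [r for r in refs if keep_reference(r, background_citation_ids)]
```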
This facilitates the model’s ac- cess to biomedicine-related and research-specific parametric knowledge learned from the pre-training corpus. By profil- ing the LLMs as biomedical researchers, we aim to maxi- mize the utilization of the model’s parametric knowledge in the biomedical domain, thereby enhancing the relevance and depth of the generated research ideas. The research ideas generated by the LLMs are then compared to the research idea presented in the target paper’s abstract for evaluation. Evaluation of the Generated Ideas A straightforward approach to evaluate the quality of gen- erated ideas is to measure the semantic similarity between the generated ideas and the idea from the target paper. How- ever, a similarity-only metric may fail to capture the nuanced qualities of ideas generated by LLMs, such as novelty and feasibility. To address this, we develop a metric called the “Insight Score”, which goes beyond a similarity-only ap- proach to assess the quality of generated ideas in a scal- able and rigorous manner. The core of our metric is a per- sonalized quality ranking which allows the users to specify any quality indicators, such as novelty, feasibility, etc. By combining personalized quality rankings with the number of generated ideas, our metric provides a nuanced measure- ment for various quality indicators, effectively highlighting the strengths and areas for improvement in LLMs’ ability to generate research ideas. The components of our evaluation framework are detailed in the following subsections. Personalized Quality Ranking for Generated Research Ideas. The first step in our evaluation framework involves a personalized quality ranking. For a given target paper and reference papers pair, we first create an idea set that includes both the generated ideas and the original idea from the tar- get paper. Details on how the original idea is extracted from the target paper are provided in the Appendix. Then we use GPT-4o to rank the quality of these ideas based on user- specified quality indicators, without revealing which idea is the original from the target paper. The motivation behind this approach is to provide a flexible and tailored assessment that aligns with the specific interests of human researchers. The prompt template used to achieve this is shown in Figure 2. In the template, placeholders, denoted by curly brankets {} allow the system to adapt to differ- ent scenarios. For instance, if a user wishes to rank re- search ideas based on their novelty, the system replaces {quality_indicator} with “novelty” in the prompt. Similarly, {target_paper_idea} is replaced with the target paper’s research idea, and {generated_idea_1}, ... , {generated_idea_n} are replaced with generated research ideas. The flexibility of this approach allows other quality indicators, such as feasibility, clarity, ethics, etc., to be used to rank research ideas. Furthermore, fueled by the impressive in-context-learning ability (Kojima et al. 2022) of LLMs, the system is able to accommodate a more nuanced understanding of quality in- dicators held in {quality_indicator}. For example, Bob may define “novelty” as “developing new methodolo- gies, techniques, or instruments that allow researchers to explore questions in ways that were not possible before,” Prompt template used to rank research ideas based on user specified quality indicators You are a reviewer tasked with ranking the quality of a set of research ideas based on their {quality_indicator}. 
We profile the LLMs as biomedical researchers at the beginning of the prompt. This facilitates the model's access to biomedicine-related and research-specific parametric knowledge learned from the pre-training corpus. By profiling the LLMs as biomedical researchers, we aim to maximize the utilization of the model's parametric knowledge in the biomedical domain, thereby enhancing the relevance and depth of the generated research ideas. The research ideas generated by the LLMs are then compared to the research idea presented in the target paper's abstract for evaluation.

Evaluation of the Generated Ideas

A straightforward approach to evaluating the quality of generated ideas is to measure the semantic similarity between the generated ideas and the idea from the target paper. However, a similarity-only metric may fail to capture the nuanced qualities of ideas generated by LLMs, such as novelty and feasibility. To address this, we develop a metric called the "Insight Score", which goes beyond a similarity-only approach to assess the quality of generated ideas in a scalable and rigorous manner. The core of our metric is a personalized quality ranking, which allows users to specify any quality indicators, such as novelty or feasibility. By combining personalized quality rankings with the number of generated ideas, our metric provides a nuanced measurement for various quality indicators, effectively highlighting the strengths and areas for improvement in LLMs' ability to generate research ideas. The components of our evaluation framework are detailed in the following subsections.

Personalized Quality Ranking for Generated Research Ideas. The first step in our evaluation framework is a personalized quality ranking. For a given target paper and reference papers pair, we first create an idea set that includes both the generated ideas and the original idea from the target paper. Details on how the original idea is extracted from the target paper are provided in the Appendix. We then use GPT-4o to rank the quality of these ideas based on user-specified quality indicators, without revealing which idea is the original from the target paper. The motivation behind this approach is to provide a flexible and tailored assessment that aligns with the specific interests of human researchers. The prompt template used to achieve this is shown in Figure 2.

Prompt template used to rank research ideas based on user-specified quality indicators

You are a reviewer tasked with ranking the quality of a set of research ideas based on their {quality_indicator}. The idea with the highest {quality_indicator} should be ranked first. Please rank the following hypotheses in the format:
1. Hypothesis (insert number): (insert brief rationale)
2. Hypothesis (insert number): (insert brief rationale)
3. Hypothesis (insert number): (insert brief rationale)
......
n. Hypothesis (insert number): (insert brief rationale)
Please rank the following hypotheses:
Hypothesis 1: {target_paper_idea}
Hypothesis 2: {generated_idea_1}
Hypothesis 3: {generated_idea_2}
......
Hypothesis n: {generated_idea_n}

Figure 2: Prompt template used to rank research ideas based on user-specified quality indicators.

In the template, placeholders, denoted by curly brackets {}, allow the system to adapt to different scenarios. For instance, if a user wishes to rank research ideas based on their novelty, the system replaces {quality_indicator} with "novelty" in the prompt. Similarly, {target_paper_idea} is replaced with the target paper's research idea, and {generated_idea_1}, ..., {generated_idea_n} are replaced with the generated research ideas. The flexibility of this approach allows other quality indicators, such as feasibility, clarity, or ethics, to be used to rank research ideas.

Furthermore, fueled by the impressive in-context learning ability of LLMs (Kojima et al. 2022), the system can accommodate a more nuanced understanding of the quality indicator held in {quality_indicator}. For example, Bob may define "novelty" as "developing new methodologies, techniques, or instruments that allow researchers to explore questions in ways that were not possible before," while Alice might consider "novelty" as "applying existing knowledge or technologies to address new problems or in new contexts." The system allows them to instantiate {quality_indicator} with their respective definitions, ensuring that the ranking reflects their specific interpretations. Personalized quality ranking ensures that the evaluation is aligned with the user's perspective, providing a more accurate and meaningful assessment of the generated research ideas. Additionally, by not disclosing which idea is from the target paper, the system ensures a fair and unbiased ranking of all ideas.

Relative Quality Scoring for Generated Research Ideas. The second step in our evaluation framework is relative quality scoring, which builds upon the personalized quality ranking. The position of the target paper's idea within the ranked list of research ideas indicates the quality of the generated ideas with respect to the specified quality indicators. Intuitively, if the target paper's idea ranks high on the list, the generated ideas are of lower quality compared to the target paper's idea. Conversely, if the generated ideas rank higher than the target paper's idea, the LLM is capable of producing ideas that may be of better quality than those in the target papers. To quantify different quality indicators, we introduce the following notation:

• m: the number of target papers in our dataset.
• n: the number of research ideas an LLM generates per query.
• r_{target_i}|q: the rank of the i-th target paper's idea within the corresponding ranked list of ideas, given quality indicator q. When n ideas are generated, r_{target_i}|q ∈ {1, ..., n + 1}.
Our experiments show that, for a given set of target papers, the ranking of a target paper's research idea can vary depending on the number of generated ideas in the list. This shifting of ranking positions can affect the Insight Scores of the LLMs. We further discuss the effect that n has on the Insight Score in the Appendix.

Experiments

Experimental Setup

Dataset. We curated 2,374 target papers and their corresponding 29,408 reference papers. The total number of filtered reference papers is 23,460. We present the descriptive statistics of the number of references a target paper has, with and without our filtering process, in the Appendix.

Models. To evaluate LLMs' capability of generating research ideas, we test the latest versions of several of the most popular commercial and open-source LLM series at different sizes: the Meta Llama series (Touvron et al. 2023), the Google Gemini series (Reid et al. 2024), and the OpenAI GPT series (OpenAI 2023). All of these models were trained on data with cutoff dates before January 1, 2024, so the target papers published after January 1, 2024, guarantee a fair comparison by avoiding data leakage.

Baseline Comparison Metrics. To demonstrate the advantage of the Insight Score, we compare it with two similarity metrics: semantic similarity and idea overlap. BERTScore (F1 score) (Zhang et al. 2019) is used to measure semantic similarity. The practical upper limit of BERTScore is task dependent. To find this upper limit, we compute the BERTScore of the target papers' abstracts and their LLM-summarized research ideas and obtain an average score of 0.718. Although BERTScore ranges from 0 to 1, 0.718 is our practical upper limit. The LLM similarity rating, which uses GPT-4o, measures the overlap in ideas between a generated research idea and the abstract of its target paper. It outputs a rating from 1 to 10 for the overlap in ideas, along with an explanation of the rating. The prompt template for the LLM similarity rating is shown in Figure 3. Of the n generated research ideas, the one with the highest semantic similarity is considered when measuring idea overlap.

    You are an expert in understanding and analyzing scientific content.
    Your task is to evaluate the degree of overlap between the ideas
    presented in a hypothesis and the abstract of a scientific paper.
    Please read both the hypothesis and the abstract carefully. Then, rate
    the overlap on a scale of 1 to 10, where 1 indicates minimal or no
    overlap, and 10 indicates a perfect or nearly perfect overlap. Provide
    a brief explanation for your rating.
    Hypothesis: {generated_research_idea}
    Abstract: {target_paper_abstract}
    Rating: On a scale of 1-10, rate the overlap between the ideas in the
    hypothesis and the abstract.
    Explanation: In one sentence, provide a brief explanation for your
    rating, mentioning the key points of overlap and any significant
    differences you observed.

Figure 3: Prompt template used to obtain the LLM similarity rating, measuring the overlap of ideas between the generated research idea and the target paper.
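For illustration, the sketch below computes both baseline metrics: BERTScore F1 via the bert-score package (Zhang et al. 2019), and the LLM similarity rating by filling an abridged version of the Figure 3 template and querying GPT-4o through the openai SDK. The selection of the highest-similarity idea follows the procedure described above; the model string, decoding defaults, and response handling are our assumptions rather than the paper's exact implementation.

```python
# Sketch of the two baseline comparison metrics; assumes
# `pip install bert-score openai` and an OPENAI_API_KEY in the environment.
from bert_score import score as bert_score
from openai import OpenAI

# Abridged from the Figure 3 template; see the figure for the full wording.
RATING_TEMPLATE = (
    "You are an expert in understanding and analyzing scientific content. "
    "Evaluate the degree of overlap between the ideas presented in a "
    "hypothesis and the abstract of a scientific paper. Rate the overlap "
    "on a scale of 1 to 10 and provide a brief explanation.\n\n"
    "Hypothesis: {generated_research_idea}\n"
    "Abstract: {target_paper_abstract}\n"
    "Rating:"
)

def best_idea_by_similarity(generated_ideas, target_abstract):
    """Score every generated idea against the target abstract with
    BERTScore and return the idea with the highest F1."""
    refs = [target_abstract] * len(generated_ideas)
    _, _, f1 = bert_score(generated_ideas, refs, lang="en")
    best = int(f1.argmax())
    return generated_ideas[best], float(f1[best])

def llm_similarity_rating(client: OpenAI, idea: str, abstract: str) -> str:
    """Fill the rating template and ask GPT-4o for a 1-10 overlap rating."""
    prompt = RATING_TEMPLATE.format(
        generated_research_idea=idea, target_paper_abstract=abstract
    )
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content  # rating plus explanation
```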
Low and High Resource Scenarios. To account for the high cost of inputting numerous reference paper abstracts into an LLM, we consider low and high resource scenarios, assessing the capabilities of LLMs both when researchers face computational constraints and when they do not. In the low resource scenario, an LLM receives five references selected by the filtering method introduced earlier. In the high resource scenario, an LLM receives all unfiltered references, with the exception of GPT-3.5 Turbo, for which references that cannot fit into its context window are truncated. We discuss how reference filtering and the number of references affect the generated research ideas in the ablation study.

Different q Scenarios. We measure two types of quality indicators: feasibility and novelty. Feasibility of the ideas may be limited by the target paper, because the target paper's idea has been verified by human researchers, whereas the generated ideas have not. The novelty of the generated ideas may exceed that of the ideas in the target papers: given a set of reference papers, there is no guarantee that the target paper exhibits the highest level of novelty, and it is possible that better ideas can be generated from the same set of references.

Main Results

We benchmark LLMs in low and high resource scenarios to assess their ability to generate research ideas. We use semantic similarity and idea overlap to measure their similarity to target papers. We also evaluate research idea generation based on two quality indicators, novelty and feasibility, using the Insight Score. In the implementation, we generate n = 3 research ideas per query. The results for semantic similarity, idea overlap, and the novelty and feasibility Insight Scores are in Table 1. Below we answer specific questions through an analysis of the results.

Model                    Resource   Semantic       Idea       Novelty          Feasibility
                         Scenario   Similarity ↑   Overlap ↑  Insight Score ↑  Insight Score ↑
Llama 3.1 70B-Instruct   low        0.587          7          0.624            0.150
Llama 3.1 70B-Instruct   high       0.597          8          0.602            0.148
Llama 3.1 405B-Instruct  low        0.565          8          0.647            0.130
Llama 3.1 405B-Instruct  high       0.585          8          0.677            0.132
Gemini 1.5 Flash         low        0.585          7          0.430            0.242
Gemini 1.5 Flash         high       0.593          8          0.568            0.303
Gemini 1.5 Pro           low        0.594          6          0.509            0.305
Gemini 1.5 Pro           high       0.604          7          0.647            0.305
GPT-3.5 Turbo            low        0.612          8          0.401            0.190
GPT-3.5 Turbo            high       0.619          8          0.201            0.305
GPT-4o Mini              low        0.610          7          0.446            0.159
GPT-4o Mini              high       0.620          8          0.528            0.207
GPT-4o                   low        0.599          7          0.614            0.143
GPT-4o                   high       0.608          8          0.766            0.166

Table 1: Main benchmark results. The table shows the semantic similarity (80th percentile BERTScore (F1 score)) and the idea overlap (80th percentile LLM similarity rating) between the generated research idea and the target paper abstract, along with the novelty and feasibility Insight Scores, for various LLMs in high and low resource settings. Bold scores represent the highest score of a given metric.

Can LLMs generate research ideas? Most LLMs can generate research ideas that align well with their target papers. Table 1 shows high semantic similarity and idea overlap with target papers for most models, with GPT-4o Mini (high resource), followed by GPT-3.5 Turbo (high resource), exhibiting the highest scores. Generally, we observe that the high resource scenario generates ideas with higher similarity scores than the low resource scenario. These similarity scores demonstrate alignment with target paper ideas, indicating that LLMs, although they cannot see the target papers, can comprehend the background information well enough to generate research ideas similar to those produced by human researchers.

How well can LLMs generate novel research ideas? Most LLMs are capable of generating research ideas that are just as novel as, if not more novel than, their target papers' research ideas.
Any Insight Score greater than 0.5 indicates that most generated research ideas are ranked above their target papers' research ideas with respect to a given quality indicator. Most of the LLMs yield novelty Insight Scores of over 0.6, with GPT-4o (high resource) having the highest score of 0.766. This means that for most LLMs, most of their generated research ideas are potentially more novel than the research idea of their target paper. This is significant, as it demonstrates the potential of LLMs to drive scientific discovery forward with new and innovative research ideas.

How well can LLMs generate feasible research ideas? Most LLMs generate research ideas with lower feasibility than their target papers. As shown in Table 1, these ideas generally have low feasibility Insight Scores. GPT-3.5 Turbo (high resource) and Gemini 1.5 Pro (low and high resource) achieve the highest scores, yet all LLMs score below 0.5, indicating that most of their ideas rank lower in feasibility compared to their target papers. Although LLMs can produce novel ideas, their feasibility often remains inferior to that of human-generated research ideas.

What is the relationship between generating novel and feasible research ideas? For all LLMs, there is a gap between the novelty and the feasibility of their research ideas. Table 1 shows that, with the exception of GPT-3.5 Turbo (high resource), all models yield higher novelty Insight Scores than feasibility Insight Scores. The size of this gap varies across models: GPT-4o and the Llama 3.1 models exhibit the largest gaps, while GPT-3.5 Turbo, GPT-4o Mini, and the Gemini series have smaller gaps. This indicates a general trend toward a trade-off between generating research ideas that are more novel and ideas that are more feasible, with the degree of the gap varying across models. The trade-off is intuitive, as research ideas that propose pursuing more novel, unexplored approaches may be less feasible to implement than ideas suggesting more incremental contributions.

Can reference filtering help lower-capacity models produce more novel research ideas? Reference filtering plays a crucial role in enabling lower-capacity models to generate more novel research ideas. As shown in Table 1, GPT-3.5 Turbo and Llama 3.1 70B-Instruct, both smaller models in their respective families, yield higher novelty Insight Scores in the low resource scenario than in the high resource scenario. Due to their lower capacity, these models are likely distracted by irrelevant references, even for target papers with fewer total references, since most target papers have fewer than 16 references. Thus, reference filtering becomes essential to help smaller models focus on the most relevant ideas, boosting their ability to generate more novel research ideas.

Ablation Study

Figure 4: Novelty and feasibility Insight Scores as the number of filtered and unfiltered references used to generate research ideas increases.

Num Ref   Similarity   Similarity     Idea Overlap   Idea Overlap
          (Filtered)   (Unfiltered)   (Filtered)     (Unfiltered)
1         0.570        0.567          2.946          2.640
3         0.582        0.580          4.636          4.410
5         0.586        0.585          5.089          4.913
7         0.589        0.587          5.349          5.304
10        0.589        0.590          5.720          5.508
13        0.590        0.590          5.694          5.685
15        0.592        0.590          6.002          5.748
All       0.592        0.594          5.915          6.302

Table 2: Comparison of semantic similarity and idea overlap scores for research ideas generated by GPT-4o Mini with filtered and unfiltered references. Underlined scores are higher than their filtered/unfiltered counterpart. Bold values are the highest for each measure.
For the ablation study, we gathered a subset of target papers that have at least 15 references and generated research ideas using GPT-4o Mini (OpenAI 2023) with a varying number of filtered and unfiltered references. We evaluated the generated ideas using semantic similarity and idea overlap. We also calculated the Insight Scores for novelty and feasibility, which indicate the quality of the research ideas with respect to those two indicators. The results of these evaluations are presented in Figure 4 and Table 2.

Effectiveness of the Insight Score. We explore the effectiveness of our Insight Score by applying it to the quality indicators novelty and feasibility. Figure 4 shows how the Insight Scores for novelty and feasibility evolve as we incorporate more references to generate research ideas. As the number of references increases, whether filtered or unfiltered, the Insight Score for novelty also increases. This shows that LLM-generated ideas tend to display more novelty when they are generated with more references.

However, the feasibility of these LLM-generated ideas does not follow the same pattern. As shown in Figure 4, increasing the number of references leads the feasibility Insight Score to plateau at a low level, regardless of whether the references are filtered. Notably, the feasibility Insight Scores remain consistently lower than the novelty Insight Scores, except when the research ideas are generated with only a single reference; in that instance, the novelty Insight Score is low.

These findings demonstrate the utility of our Insight Score in capturing complex patterns that similarity metrics may overlook. Our results show that once a certain threshold of novelty is surpassed, the feasibility of generated ideas tends to decline and stabilize at a lower level. This observation supports the trade-off between novelty and feasibility identified in our main results, further highlighting the importance of our Insight Score in assessing the dynamics between these two quality indicators.

Effect of reference filtering on generated research ideas. The alignment of the LLM-generated research ideas with the target papers improves as the number of references increases. Specifically, filtering plays a critical role in enhancing the similarity of the generated ideas to the target paper when not all references are provided. Table 2 shows that when not all references are available, filtered references lead to more alignment than unfiltered ones. Irrelevant information can cause the LLM's output to diverge when it is given limited context. By filtering out less relevant references, the LLM is guided to produce ideas that are more closely aligned with the target paper.

However, when all references are available, the benefits of filtering are lost. Table 2 shows that with all references available, unfiltered references produce the most aligned research ideas. This indicates that with sufficient references, the LLM is better equipped to ignore irrelevant information and leverage the comprehensive knowledge provided by all unfiltered references, resulting in research ideas that are most similar to the target papers.

Overall, using unfiltered references tends to produce the most aligned research ideas when all references are available. However, in scenarios with limited references, reference filtering is beneficial.
This is especially relevant given the resource-intensive nature of generating ideas with LLMs, as well as the input constraints of some models.

Conclusion

In this work, we introduced IdeaBench, a benchmark system for evaluating LLMs' ability to generate research ideas based on user-defined quality indicators. The dataset is constructed by emulating human researchers' literature review process, providing grounded contextualization for LLMs to generate research ideas. For evaluation, we proposed the "Insight Score", a metric that surpasses similarity-based measures by capturing nuanced, user-specified quality indicators through personalized quality ranking and relative quality scoring. This work can serve as a cornerstone for academia to build confidence in leveraging LLMs to accelerate ideation in scientific discovery.

References

AI4Science, M. R.; and Quantum, M. A. 2023. The impact of large language models on scientific discovery: a preliminary study using GPT-4. arXiv preprint arXiv:2311.07361.

Baek, J.; Jauhar, S. K.; Cucerzan, S.; and Hwang, S. J. 2024. ResearchAgent: Iterative research idea generation over scientific literature with large language models. arXiv preprint arXiv:2404.07738.

Bai, Y.; Ying, J.; Cao, Y.; Lv, X.; He, Y.; Wang, X.; Yu, J.; Zeng, K.; Xiao, Y.; Lyu, H.; et al. 2024. Benchmarking foundation models with language-model-as-an-examiner. Advances in Neural Information Processing Systems, 36.

Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33: 1877–1901.

Bubeck, S.; Chandrasekaran, V.; Eldan, R.; Gehrke, J.; Horvitz, E.; Kamar, E.; Lee, P.; Lee, Y. T.; Li, Y.; Lundberg, S.; et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.

Chiang, C.-H.; and Lee, H.-y. 2023. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937.

Fu, J.; Ng, S.-K.; Jiang, Z.; and Liu, P. 2024. GPTScore: Evaluate as You Desire. In Duh, K.; Gomez, H.; and Bethard, S., eds., Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 6556–6576. Mexico City, Mexico: Association for Computational Linguistics.

Google Scholar. 2024. Google Scholar Top Publications. https://scholar.google.com/citations?view_op=top_venues.

Kinney, R.; Anastasiades, C.; Authur, R.; Beltagy, I.; Bragg, J.; Buraczynski, A.; Cachola, I.; Candra, S.; Chandrasekhar, Y.; Cohan, A.; et al. 2023. The Semantic Scholar open data platform. arXiv preprint arXiv:2301.10140.

Kojima, T.; Gu, S. S.; Reid, M.; Matsuo, Y.; and Iwasawa, Y. 2022. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35: 22199–22213.

Lin, C.-Y. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Annual Meeting of the Association for Computational Linguistics.

Liu, Y.; Iter, D.; Xu, Y.; Wang, S.; Xu, R.; and Zhu, C. 2023. G-Eval: NLG evaluation using GPT-4 with better human alignment. arXiv preprint arXiv:2303.16634.

Moore, S.; Tong, R.; Singh, A.; Liu, Z.; Hu, X.; Lu, Y.; Liang, J.; Cao, C.; Khosravi, H.; Denny, P.; et al. 2023. Empowering education with LLMs: the next-gen interface and content generation. In International Conference on Artificial Intelligence in Education, 32–37. Springer.
OpenAI. 2023. GPT-4 Technical Report. ArXiv, abs/2303.08774.

Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318.

Qiu, L.; Jiang, L.; Lu, X.; Sclar, M.; Pyatkin, V.; Bhagavatula, C.; Wang, B.; Kim, Y.; Choi, Y.; Dziri, N.; et al. 2023. Phenomenal yet puzzling: Testing inductive reasoning capabilities of language models with hypothesis refinement. arXiv preprint arXiv:2310.08559.

Reid, M.; Savinov, N.; Teplyashin, D.; Lepikhin, D.; Lillicrap, T.; Alayrac, J.-b.; Soricut, R.; Lazaridou, A.; Firat, O.; Schrittwieser, J.; et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530.

Thompson, B.; and Post, M. 2020. Automatic machine translation evaluation in many languages via zero-shot paraphrasing. arXiv preprint arXiv:2004.14564.

Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Vu, T.; Krishna, K.; Alzubi, S.; Tar, C.; Faruqui, M.; and Sung, Y.-H. 2024. Foundational Autoraters: Taming Large Language Models for Better Automatic Evaluation. arXiv preprint arXiv:2407.10817.

Wang, Q.; Downey, D.; Ji, H.; and Hope, T. 2023a. Learning to generate novel scientific directions with contextualized literature-based discovery. arXiv preprint arXiv:2305.14259.

Wang, Q.; Downey, D.; Ji, H.; and Hope, T. 2023b. SciMON: Scientific inspiration machines optimized for novelty. arXiv preprint arXiv:2305.14259.

Wysocki, O.; Wysocka, M.; Carvalho, D.; Bogatu, A. T.; Gusicuma, D. M.; Delmas, M.; Unsworth, H.; and Freitas, A. 2024. An LLM-based Knowledge Synthesis and Scientific Reasoning Framework for Biomedical Discovery. arXiv preprint arXiv:2406.18626.

Yang, R.; Tan, T. F.; Lu, W.; Thirunavukarasu, A. J.; Ting, D. S. W.; and Liu, N. 2023a. Large language models in health care: Development, applications, and challenges. Health Care Science, 2(4): 255–263.

Yang, Z.; Du, X.; Li, J.; Zheng, J.; Poria, S.; and Cambria, E. 2023b. Large language models for automated open-domain scientific hypotheses discovery. arXiv preprint arXiv:2309.02726.

Yuan, W.; Neubig, G.; and Liu, P. 2021. BARTScore: Evaluating generated text as text generation. Advances in Neural Information Processing Systems, 34: 27263–27277.

Zhang, T.; Kishore, V.; Wu, F.; Weinberger, K. Q.; and Artzi, Y. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.

Zhao, W.; Peyrard, M.; Liu, F.; Gao, Y.; Meyer, C. M.; and Eger, S. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. arXiv preprint arXiv:1909.02622.

Zhou, Y.; Liu, H.; Srivastava, T.; Mei, H.; and Tan, C. 2024. Hypothesis Generation with Large Language Models. arXiv preprint arXiv:2404.04326.

Appendix

Code and Dataset Availability

Due to the complexity of our dataset, we combined our dataset with all the code needed to generate our results and made them available at: https://anonymous.4open.science/r/IdeaBench-2747/

Implementation Details

We describe the resources used to generate and evaluate research ideas. We used various API services to generate research ideas and to evaluate them.
Additionally, we employed accelerated hardware to compute semantic similarity scores between generated research ideas and their corresponding target paper abstracts.

The OpenAI API service [1] was employed to generate research ideas that used the OpenAI suite of models. The service was also used for extracting research ideas from target paper abstracts and evaluating research ideas with the LLM similarity rating and the Insight Score. To generate research ideas with the Gemini family of LLMs, Google AI's API service [2] was used. To generate research ideas with the Llama 3.1 family of LLMs, DeepInfra's API service [3] was used.

To evaluate the semantic similarity between research ideas and their corresponding target paper abstracts, we computed BERTScores using one NVIDIA A6000 48GB GPU. This hardware allowed for the efficient computation of BERTScores.

[1] More details about the OpenAI API service can be found here: https://platform.openai.com/docs/overview
[2] More details about the Google AI API service can be found here: https://ai.google.dev/gemini-api/docs/api-key
[3] More details about DeepInfra's API service can be found here: https://deepinfra.com/

Dataset Statistics

Description                                          Count
Total number of target papers                        2,374
Total number of reference papers (with filtering)    23,460
Total number of reference papers (w/o filtering)     29,408

Table 3: Total counts of the dataset's target papers and references.

Statistic                  With Filtering   w/o Filtering
Mean                       9.882            12.388
Standard Deviation         6.521            7.946
Minimum                    3                3
25% Percentile             5                6
50% Percentile (Median)    8                10
75% Percentile             13               16
Maximum                    51               62

Table 4: Descriptive statistics of the number of references per target paper.

Table 3 shows the total number of target papers in our dataset along with the number of filtered and unfiltered references. Table 4 shows the descriptive statistics of the filtered and unfiltered reference papers.

Extracting Research Idea from a Target Paper

    Write a concise paragraph summarizing the following biomedical paper
    abstract as if you are proposing your own research idea or hypothesis.
    Focus on describing the main research idea and provide a high-level
    summary of the findings without detailed results or specific numerical
    data. Please begin the paragraph with "Hypothesis: " or "Given that ".
    Abstract: {target_paper_abstract}
    Summary:

Figure 5: Prompt template used to extract the research idea from a given target paper's abstract.

To ensure a fair comparison between the research ideas generated by LLMs and those in the target paper when ranking the ideas, we extract the core research idea from the target paper's abstract using GPT-4o with a specifically designed prompt. Abstracts often contain distracting information, such as detailed results, which may not directly reflect the central research idea and may bias the Insight Score when ranking research ideas. Therefore, we designed a prompt that focuses on summarizing the main research idea in a way that aligns with how our LLM generates ideas. Figure 5 shows the prompt template. This process enables a fair ranking of the target paper's idea alongside the LLM-generated ideas.
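For concreteness, here is how the extraction step might look in code. The template text is taken from Figure 5; the client setup, model choice, and function shape are our assumptions, not the paper's released implementation.

```python
# Sketch of extracting a research idea from a target paper's abstract
# using the Figure 5 template; assumes `pip install openai` and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

EXTRACTION_TEMPLATE = (
    "Write a concise paragraph summarizing the following biomedical paper "
    "abstract as if you are proposing your own research idea or hypothesis. "
    "Focus on describing the main research idea and provide a high-level "
    "summary of the findings without detailed results or specific numerical "
    'data. Please begin the paragraph with "Hypothesis: " or "Given that ".\n\n'
    "Abstract: {target_paper_abstract}\n\nSummary:"
)

def extract_target_idea(client: OpenAI, abstract: str) -> str:
    """Ask GPT-4o to restate the abstract as a hypothesis-style research
    idea, stripped of detailed results that could bias the ranking."""
    prompt = EXTRACTION_TEMPLATE.format(target_paper_abstract=abstract)
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
```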
Effect of the Number of Generated Research Ideas on Insight Score

We assess how the number of generated research ideas n affects the target paper's absolute rank using GPT-4o Mini, in the same ablation study setting. Figure 6 shows that as n increases, the target paper is ranked farther down the list for both novelty and feasibility, regardless of reference filtering. Changes in the absolute rank of the target paper will affect the Insight Score.

To illustrate the effect that varying n has on the Insight Score, suppose we compute the Insight Score for an LLM that generates 3 research ideas for one target paper. If the target paper ranks 3rd, then its Insight Score would be 0.667. Now, if we have the same LLM generate 10 research ideas and the target paper ranks 5th, as suggested by Figure 6, then its Insight Score would be 0.4. Despite using the same LLM, the variation in n causes different Insight Scores.

Another effect that n has on the Insight Score is the granularity effect, which arises from the discrete nature of the Insight Score. A larger n allows for a more granular measurement of an LLM's capability in generating research ideas, meaning that a single shift in ranking position results in a smaller change in the Insight Score compared to a smaller n. For example, consider Case n = 6 and Case n = 10. If the target paper ranks around the top 50%, that is, $r_{\text{target}_i|q} = 4$ for n = 6 and $r_{\text{target}_i|q} = 6$ for n = 10, both will have an Insight Score of 0.5. However, if the target paper's ranking position drops by one, then for n = 6, $r_{\text{target}_i|q} = 5$, the Insight Score will increase to 0.667; whereas for n = 10, $r_{\text{target}_i|q} = 7$, the Insight Score will be 0.6. This difference does not necessarily indicate that the LLM with an Insight Score of 0.667 is better than the one with 0.6. Instead, it reflects that the first model generated fewer research ideas, resulting in a less granular Insight Score. Thus, using the same n when comparing different LLMs' Insight Scores is recommended to avoid unfair comparison.

Figure 6: Effect that the number of generated research ideas has on the target paper's average absolute rank.
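The worked numbers above can be checked directly against Equation 1; the throwaway snippet below is ours, not the paper's, and simply evaluates the per-paper contribution (rank - 1) / n for each case.

```python
# Per-paper Insight Score contribution from Equation 1: (rank - 1) / n.
def single_paper_score(rank, n):
    return (rank - 1) / n

print(single_paper_score(3, 3))                             # 0.667
print(single_paper_score(5, 10))                            # 0.4
print(single_paper_score(4, 6), single_paper_score(6, 10))  # 0.5 0.5
print(single_paper_score(5, 6), single_paper_score(7, 10))  # 0.667 0.6
```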
Case Studies

In this section, we include a series of case studies that illustrate the capabilities of LLMs in generating research ideas, supporting our main benchmark's findings. Each case study includes 10 examples showcasing how LLMs generate research ideas that are similar to, more novel than, and comparable in feasibility to their target papers. Additionally, two examples highlight how a smaller LLM produces incoherent and irrelevant text. Through these examples, we show the strengths of LLMs in aligning with target papers and sometimes surpassing them in novelty while maintaining comparable feasibility. We also highlight that smaller models may be incapable of generating coherent and relevant research ideas.

Generated research ideas similar to their target paper. We find that LLMs can emulate human researchers by generating research ideas similar to those in their target papers when provided with the same background information. We present examples where LLM-generated research ideas exhibit impressive overlap with their target papers, despite these papers not being seen during the LLMs' training. These examples are shown in Figures 7 through 16. Each example includes the target paper's abstract, the LLM-generated idea, and an explanation of the idea overlap rating, highlighting why the two are very similar. Key points of overlap between the target paper's abstract and the generated research idea are spotlighted in green by human researchers. Through these examples, we notice that LLMs are capable of identifying and leveraging the most relevant ideas from all the unfiltered references of the target papers, allowing them to generate research ideas that address issues similar to those in the target paper. Additionally, these models can produce ideas that predict findings that are related or remarkably close to those presented in the original work. This suggests that LLMs can not only identify key research questions but also anticipate the outcomes, aligning closely with the conclusions of the target papers.

Novelty of the generated research ideas. Our main results demonstrate that LLMs can generate research ideas that are as novel, if not more so, than those in their target papers. We provide examples where LLM-generated research ideas outrank their target papers in terms of novelty in Figures 17 through 26. These figures include the target paper's research idea, the generated research idea, and the Insight Score's rationale for ranking the generated idea higher in novelty. Human researchers highlight the aspects contributing to the generated idea's novelty in green and those that illustrate why the target paper's idea is less novel in red. These examples showcase the capability of LLMs to produce novel research ideas. From these examples, we observe that LLMs generate novel research ideas by creatively bridging connections between different concepts or findings. When the generated research idea is ranked higher in novelty than the target paper's idea, it is often because the target paper's idea incrementally builds on existing research, while the LLM-generated ideas propose new connections between different concepts or scientific findings from reference papers. This suggests that, given the proper background information, LLMs can generate bold and novel research ideas.

Feasibility of the generated research ideas. Although our results show that LLMs often do not generate research ideas that are more feasible than those in their target papers, there are still some instances where LLMs' research ideas are comparable in feasibility to their target papers' ideas. We provide such examples in Figures 27 through 36. These figures include the target paper's research idea, the generated research idea, and the Insight Score's rationale for the generated idea's feasibility. Human researchers highlight the elements that contribute to the generated idea's feasibility in green. These examples showcase the LLMs' ability to produce feasible research ideas. By analyzing these examples, we observe that when LLMs generate feasible research ideas, the ideas are generally straightforward and rely on established technologies and approaches.

Llama 3.1 8B-Instruct results. In order to evaluate the effectiveness of small LLMs in generating research ideas, we tested Llama 3.1 8B-Instruct in both low-resource and high-resource conditions. The results are presented in Table 5. Llama 3.1 8B-Instruct reports a high idea overlap and novelty Insight Score but a low feasibility Insight Score, due to generating large amounts of irrelevant or incoherent text. Our LLM-based evaluations (idea overlap and Insight Score) are not equipped to manage this, because LLMs are not trained on poor-quality text and our prompt templates do not consider incoherent text. As a result, the irrelevant and incoherent text introduces bias into our evaluations.
Examples of irrelevant and incoherent text generated by Llama 3.1 8B-Instruct can be seen in Figures 37 and 38, where sections highlighted in red by human researchers indicate problematic text produced by the LLM. Since Llama 3.1 8B-Instruct generates a significant amount of irrelevant and nonsensical text, and our LLM evaluations are not suitable for these outputs, we opted not to include its results in the main findings. We present these outcomes to inform the community that smaller LLMs are insufficient for producing coherent research ideas.

Similar Research Idea Example 1.

Target Paper Abstract

Large-scale sequence modeling has sparked rapid advances that now extend into biology and genomics. However, modeling genomic sequences introduces challenges such as the need to model long-range token interactions, the effects of upstream and downstream regions of the genome, and the reverse complementarity (RC) of DNA. Here, we propose an architecture motivated by these challenges that builds off the long-range Mamba block, and extends it to a BiMamba component that supports bi-directionality, and to a MambaDNA block that additionally supports RC equivariance. We use MambaDNA as the basis of Caduceus, the first family of RC equivariant bi-directional long-range DNA language models, and we introduce pre-training and fine-tuning strategies that yield Caduceus DNA foundation models. Caduceus outperforms previous long-range models on downstream benchmarks; on a challenging long-range variant effect prediction task, Caduceus exceeds the performance of 10x larger models that do not leverage bi-directionality or equivariance.

LLM Generated Research Idea

LLM: GPT-4o Mini
Resource Scenario: High
Generated Research Idea: **Hypothesis:** We propose that leveraging the inherent reverse complement (RC) symmetry of DNA sequences can enhance the predictive accuracy of genomic models by integrating RC-equivariant architectures with state-of-the-art foundation models. By developing a novel hybrid model that incorporates both the bidirectional capabilities of advanced state space models (like Mamba and S4) and the parameter-sharing techniques utilized in RC-equivariant networks, we hypothesize that this approach will yield a more efficient and robust representation of genomic sequences. This model could effectively capture complex regulatory interactions and long-range dependencies in DNA, ultimately improving the prediction of functional variants and enhancing our understanding of gene regulation mechanisms across diverse species.

Idea Overlap Explanation

The hypothesis and the abstract share a high degree of overlap, both focusing on leveraging the reverse complement (RC) symmetry of DNA sequences and integrating this with advanced state space models to enhance genomic predictions. Key points of overlap include the use of RC-equivariant architectures, bidirectional capabilities, and the development of a novel hybrid model to improve the prediction of functional variants. The main difference is the hypothesis's emphasis on parameter-sharing techniques, which is less explicitly mentioned in the abstract.

Figure 7: Example 1 of a generated research idea that is similar to its target paper. Key points of overlap between the target paper's abstract and the generated research idea are spotlighted in green by human researchers.
Model                   Resource   Semantic       Idea       Novelty          Feasibility
                        Scenario   Similarity ↑   Overlap ↑  Insight Score ↑  Insight Score ↑
Llama 3.1 8B-Instruct   low        0.563          8          0.709            0.048
Llama 3.1 8B-Instruct   high       0.582          8          0.795            0.069

Table 5: Llama 3.1 8B-Instruct benchmark results. The table shows the semantic similarity (80th percentile BERTScore (F1 score)) and the idea overlap (80th percentile LLM similarity rating) between the generated research idea and the target paper abstract, along with the novelty and feasibility Insight Scores, in high and low resource settings. Bold scores represent the highest score of a given metric.

Similar Research Idea Example 2.

Target Paper Abstract

Traumatic brain injuries represent a leading cause of death and disability in the paediatric and adult populations. Moderate-to-severe injuries are associated with blood–brain barrier dysfunction, the development of cerebral oedema, and neuroinflammation. Antagonists of the tachykinin NK1 receptor have been proposed as potential agents for the post-injury treatment of TBI. We report on the identification of EUC-001 as a potential clinical candidate for development as a novel TBI therapy. EUC-001 is a selective NK1 antagonist with a high affinity for the human NK1 receptor (Ki 5.75 × 10⁻¹⁰ M). It has sufficient aqueous solubility to enable intravenous administration, whilst still retaining good CNS penetration as evidenced by its ability to inhibit the gerbil foot-tapping response. Using an animal model of TBI, the post-injury administration of EUC-001 was shown to restore BBB function in a dose-dependent manner. EUC-001 was also able to ameliorate cerebral oedema. These effects were associated with a significant reduction in post-TBI mortality. In addition, EUC-001 was able to significantly reduce functional deficits, both motor and cognitive, that normally follow a severe injury. EUC-001 is proposed as an ideal candidate for clinical development for TBI.

LLM Generated Research Idea

LLM: GPT-4o Mini
Resource Scenario: High
Generated Research Idea: Based on the insights gathered from the abstracts, a compelling hypothesis emerges regarding the role of neurokinin-1 receptor (NK1R) antagonists in mitigating secondary injuries associated with traumatic brain injury (TBI). Given the established involvement of substance P (SP) in neurogenic inflammation and blood-brain barrier (BBB) disruption after CNS insults, it is hypothesized that early intervention with NK1R antagonists not only reduces neurogenic inflammation but also enhances BBB integrity, thereby improving outcomes in TBI patients. This dual mechanism could significantly lower the incidence of cerebral edema and excitotoxicity, leading to better functional recovery. Future research should focus on clinical trials evaluating the timing and dosage of NK1R antagonists following TBI to establish their efficacy and safety in a therapeutic context.

Idea Overlap Explanation

The hypothesis and the abstract both focus on the potential benefits of NK1R antagonists in treating TBI by reducing neuroinflammation and enhancing BBB integrity. They both mention the reduction of cerebral edema and the improvement of functional outcomes. The primary difference is that the abstract specifies EUC-001 as the NK1R antagonist being studied, while the hypothesis discusses NK1R antagonists in general.

Figure 8: Example 2 of a generated research idea that is similar to its target paper.
Key points of overlap between the target paper's abstract and the generated research idea are spotlighted in green by human researchers.

Similar Research Idea Example 3.

Target Paper Abstract

Patients with coronavirus disease-2019 (COVID-19) have an increased risk of thrombosis and acute respiratory distress syndrome (ARDS). Thrombosis is often attributed to increases in plasminogen activator inhibitor-1 (PAI-1) and a shut-down of fibrinolysis (blood clot dissolution). Decreased urokinase-type plasminogen activator (uPA), a protease necessary for cell-associated plasmin generation, and increased tissue-type plasminogen activator (tPA) and PAI-1 levels have been reported in COVID-19 patients. Because these factors can occur in free and complexed forms with differences in their biological functions, we examined the predictive impact of uPA, tPA, and PAI-1 in their free forms and complexes as a biomarker for COVID-19 severity and the development of ARDS. In this retrospective study of 69 Japanese adults hospitalized with COVID-19 and 20 healthy donors, we found elevated free, non-complexed PAI-1 antigen, low circulating uPA, and uPA/PAI-1 but not tPA/PAI-1 complex levels to be associated with COVID-19 severity and ARDS development. This biomarker profile was typical for patients in the complicated phase. Lack of PAI-1 activity in circulation despite free, non-complexed PAI-1 protein and plasmin/α2-antiplasmin complex correlated with suPAR and sVCAM levels, markers indicating endothelial dysfunction. Furthermore, uPA/PAI-1 complex levels positively correlated with TNFα, a cytokine reported to trigger inflammatory cell death and tissue damage. Those levels also positively correlated with lymphopenia and the pro-inflammatory factors interleukin-1β (IL-1β), IL-6, and C-reactive protein, markers associated with the anti-viral inflammatory response. These findings argue for using uPA and uPA/PAI-1 as novel biomarkers to detect patients at risk of developing severe COVID-19, including ARDS.

LLM Generated Research Idea

LLM: GPT-4o Mini
Resource Scenario: High
Generated Research Idea:
Hypothesis: The dysregulation of the plasminogen/plasmin system, particularly involving elevated levels of plasminogen activator inhibitor-1 (PAI-1) and urokinase plasminogen activator (uPA), plays a crucial role in the exacerbation of COVID-19-related coagulopathy and inflammation, particularly in diverse ethnic groups. This dysregulation may lead to a hypercoagulable state that not only contributes to thrombotic events but also impairs immune responses, creating a vicious cycle of inflammation and tissue damage. Therefore, therapeutic strategies aimed at modulating PAI-1 and uPA levels could improve clinical outcomes in COVID-19 patients by restoring fibrinolytic balance and enhancing the resolution of inflammation, particularly in populations at higher risk for severe disease manifestations.

Idea Overlap Explanation

Both the hypothesis and the abstract discuss the dysregulation of the plasminogen/plasmin system, particularly involving elevated PAI-1 and altered uPA levels, and their role in COVID-19-related complications such as coagulopathy, inflammation, and severe disease outcomes. The primary difference is that the hypothesis emphasizes therapeutic strategies and diverse ethnic groups, which are not addressed in the abstract.

Figure 9: Example 3 of a generated research idea that is similar to its target paper.
Key points of overlap between the target paper's abstract and the generated research idea are spotlighted in green by human researchers.

Similar Research Idea Example 4.

Target Paper Abstract

Increased inflammation is associated with the pathogenesis of heart failure (HF). Increased circulating levels of cytokines have been previously reported and generally associated with worse clinical outcomes. In this context, the modulation of inflammation-related parameters seems to be a reasonable therapeutic option for improving the clinical course of the disease. Based on this, we aimed to compare changes in circulating cytokines when the Mediterranean diet alone, or in combination with hypercaloric, hyperproteic oral nutritional supplements (ONS) enriched with omega-3 (n-3) polyunsaturated fatty acids, was administered to patients with HF. Briefly, patients were randomly assigned to receive the Mediterranean Diet (control group) vs. the Mediterranean Diet plus ONS (intervention group). We observed increased circulating levels of IL-6, IL-8, MCP-1 and IP-10. MCP-1 and IL-6 were associated with overweight and obesity (p = 0.01, 0.01, 0.04, respectively); IL-6 and IL-8 were positively correlated with fat mass and CRP serum levels (p = 0.02, 0.04, respectively). Circulating levels of IL-8 significantly decreased in all patients treated with the Mediterranean diet, while IL-6 and IP-10 only significantly decreased in patients that additionally received ONS. In the univariate analysis, MCP-1 and its combination with IL-6 were associated with increased mortality (p = 0.02), while the multivariate analysis confirmed that MCP-1 was an independent factor for mortality (OR 1.01, 95% CI 1.01-1.02). In conclusion, nutritional support using hypercaloric, hyperproteic, n-3 enriched ONS in combination with the Mediterranean Diet was associated with decreased circulating levels of some cytokines and could represent an interesting step for improving heart functionality of patients with HF.

LLM Generated Research Idea

LLM: GPT-4o Mini
Resource Scenario: High
Generated Research Idea: Given the strong interconnections between malnutrition, sarcopenia, inflammation, and cardiovascular health highlighted across several studies, a novel hypothesis could be formulated around the role of targeted nutritional interventions in modulating inflammatory responses and improving muscle mass and functionality in patients with heart failure (HF) and concurrent sarcopenia. Specifically, we propose that a combined supplementation of omega-3 polyunsaturated fatty acids (PUFAs) and hypercaloric, hyperproteic oral supplements enriched with leucine will significantly reduce inflammatory markers (such as IL-6 and MCP-1) while enhancing muscle mass and strength in individuals with HF and sarcopenia. This effect may subsequently improve overall cardiovascular outcomes, including left ventricular function and quality of life, by addressing both the nutritional deficits and the inflammatory milieu that contribute to the progression of heart failure in this population. Future studies could test this hypothesis to explore the synergistic effects of these interventions on both muscle health and inflammation.

Idea Overlap Explanation

The hypothesis and abstract both focus on the role of targeted nutritional interventions, specifically hypercaloric, hyperproteic supplements enriched with omega-3 PUFAs, in reducing inflammatory markers and improving outcomes in patients with heart failure.
Both documents discuss the impact on inflammatory cytokines such as IL-6 and MCP-1, and the potential improvement in heart functionality. The primary difference is that the hypothesis also emphasizes improvements in muscle mass and strength, which is not directly addressed in the abstract.

Figure 10: Example 4 of a generated research idea that is similar to its target paper. Key points of overlap between the target paper's abstract and the generated research idea are spotlighted in green by human researchers.

Similar Research Idea Example 5.

Target Paper Abstract

The Anthropocene signifies the start of a no-analogue trajectory of the Earth system that is fundamentally different from the Holocene. This new trajectory is characterized by rising risks of triggering irreversible and unmanageable shifts in Earth system functioning. We urgently need a new global approach to safeguard critical Earth system regulating functions more effectively and comprehensively. The global commons framework is the closest example of an existing approach with the aim of governing biophysical systems on Earth upon which the world collectively depends. Derived during stable Holocene conditions, the global commons framework must now evolve in the light of new Anthropocene dynamics. This requires a fundamental shift from a focus only on governing shared resources beyond national jurisdiction, to one that secures critical functions of the Earth system irrespective of national boundaries. We propose a new framework—the planetary commons—which differs from the global commons framework by including not only globally shared geographic regions but also critical biophysical systems that regulate the resilience and state, and therefore livability, on Earth. The new planetary commons should articulate and create comprehensive stewardship obligations through Earth system governance aimed at restoring and strengthening planetary resilience and justice.

LLM Generated Research Idea

LLM: GPT-3.5 Turbo
Resource Scenario: High
Generated Research Idea: The increasing recognition of the Anthropocene and the urgent need for Earth system governance in the face of climate tipping points and environmental degradation suggests a shift towards a new form of international environmental law focused on active planetary stewardship. This new paradigm of law would move beyond traditional concepts of protecting the environment to actively regulating and shaping the Earth System to ensure human and ecological well-being. By incorporating principles of resource equity, global justice, and planetary boundaries, this new form of international environmental law could address the challenges of the ruptured Anthropocene and promote sustainable practices on a global scale.

Idea Overlap Explanation

Both the hypothesis and the abstract emphasize the need for a new paradigm in international environmental law or governance to address the challenges posed by the Anthropocene. They both highlight the importance of active stewardship, the incorporation of justice principles, and the necessity of managing Earth system functions to ensure planetary resilience. The primary difference is the hypothesis's explicit focus on "resource equity" and "global justice," which are implicit but not as explicitly stated in the abstract.

Figure 11: Example 5 of a generated research idea that is similar to its target paper. Key points of overlap between the target paper's abstract and the generated research idea are spotlighted in green by human researchers.
Similar Research Idea Example 6.

Target Paper Abstract

The role of noncoding transcription in chromatin regulation is still controversial, extending to the role of transcription of antisense transcripts called COOLAIR in the Polycomb-mediated epigenetic silencing of Arabidopsis FLC (FLOWERING LOCUS C), a key step during vernalization. Here, we show that COOLAIR transcription and PRC2 (Polycomb Repressive Complex 2) silence FLC in parallel pathways: an antisense-mediated transcriptional repression capable of fast response and a slow PRC2 epigenetic silencing, both of which are affected by growth dynamics and temperature fluctuations. These features explain the varied importance of COOLAIR transcription in cold-induced FLC epigenetic silencing seen in various studies using different conditions. The parallel repressive inputs and extensive feedbacks make the mechanism counterintuitive but provide great flexibility to the plant.

LLM Generated Research Idea

LLM: GPT-3.5 Turbo
Resource Scenario: High
Generated Research Idea: Based on the insights from the abstracts provided, a novel hypothesis could be formulated to investigate the potential role of dynamic antisense transcription in regulating gene expression in response to environmental cues in plants. Specifically, the hypothesis could propose that the balance between sense and antisense transcription at specific gene loci, such as FLOWERING LOCUS C (FLC), is essential for coordinating the epigenetic regulation of gene expression in response to temperature fluctuations. By manipulating the levels of antisense transcripts, such as COOLAIR, and studying their impact on histone modification patterns and gene expression dynamics, this research could provide valuable insights into the molecular mechanisms underlying environmental adaptation in plants.

Idea Overlap Explanation

The hypothesis and the abstract exhibit a high degree of overlap. Both discuss the role of antisense transcription (specifically COOLAIR) in regulating gene expression at the FLOWERING LOCUS C (FLC) in response to environmental cues, such as temperature fluctuations. They also touch on the epigenetic regulation involving histone modifications and the potential dual pathways of transcriptional repression and epigenetic silencing. The only minor difference is that the hypothesis explicitly suggests manipulating antisense transcript levels to study their impact, which is not directly mentioned in the abstract.

Figure 12: Example 6 of a generated research idea that is similar to its target paper. Key points of overlap between the target paper's abstract and the generated research idea are spotlighted in green by human researchers.

Similar Research Idea Example 7.

Target Paper Abstract

Salt stress profoundly affects plant growth, prompting intricate molecular responses, such as alternative splicing (AS), for environmental adaptation. However, the response of AS events to salt stress in Opisthopappus (Opisthopappus taihangensis and Opisthopappus longilobus), a Taihang Mountain cliff-dwelling genus, remains unclear. Using RNA-seq data, differentially expressed genes (DEGs) were identified under time and concentration gradients of salt stress. Two types of AS, skipped exon (SE) and mutually exclusive exons (MXE), were found. Differentially alternative splicing (DAS) genes in both species were significantly enriched in "protein phosphorylation", "starch and sucrose metabolism", and "plant hormone signal transduction" pathways.
Meanwhile, distinct GO terms and KEGG pathways of DAS occurred between the two species. Only a small subset of DAS genes overlapped with DEGs under salt stress. Although both species likely adopted protein phosphorylation to enhance salt stress tolerance, they exhibited distinct responses. The results indicated that the salt stress mechanisms of both Opisthopappus species exhibited similarities and differences in response to salt stress, which suggested that adaptive divergence might have occurred between them. This study initially provides a comprehensive description of salt-responsive AS events in Opisthopappus and conveys some insights into the molecular mechanisms behind species tolerance on the Taihang Mountains.

LLM Generated Research Idea

LLM: GPT-3.5 Turbo
Resource Scenario: High
Generated Research Idea: Based on the information provided in the abstracts, a novel hypothesis could be proposed regarding the role of alternative splicing (AS) in plant response to salt stress. Previous research has shown that AS is a widespread mechanism in plants that increases transcriptome complexity and proteome diversity, and that it can be modulated by different abiotic stresses, including heat stress and temperature fluctuations. It is known that salt stress impacts multiple aspects of plant metabolism and physiology, yet little is known about how AS may be involved in the plant's response to salt stress. Therefore, a hypothesis could be proposed that under salt stress conditions, plants undergo specific AS events that lead to the production of isoforms with altered functions, potentially playing a role in the plant's adaptation to saline environments. By investigating the AS patterns in response to salt stress and identifying the genes involved in these AS events, a better understanding of the molecular mechanisms underlying salt tolerance in plants could be achieved.

Idea Overlap Explanation

The hypothesis and the abstract both focus on the role of alternative splicing (AS) in plant response to salt stress, specifically mentioning the investigation of AS patterns and identifying genes involved in these events. The key points of overlap include the emphasis on AS as a mechanism for adaptation to salt stress and the goal of understanding the molecular mechanisms underlying salt tolerance. The main difference is the specific focus on the species Opisthopappus in the abstract, which is not mentioned in the hypothesis.

Figure 13: Example 7 of a generated research idea that is similar to its target paper. Key points of overlap between the target paper's abstract and the generated research idea are spotlighted in green by human researchers.

Similar Research Idea Example 8.

Target Paper Abstract

A role for vitamin D in immune modulation and in cancer has been suggested. In this work, we report that mice with increased availability of vitamin D display greater immune-dependent resistance to transplantable cancers and augmented responses to checkpoint blockade immunotherapies. Similarly, in humans, vitamin D-induced genes correlate with improved responses to immune checkpoint inhibitor treatment as well as with immunity to cancer and increased overall survival. In mice, resistance is attributable to the activity of vitamin D on intestinal epithelial cells, which alters microbiome composition in favor of Bacteroides fragilis, which positively regulates cancer immunity.
Our findings indicate a previously unappreciated connection between vitamin D, microbial commensal communities, and immune responses to cancer. Collectively, they highlight vitamin D levels as a potential determinant of cancer immunity and immunotherapy success.

Editor's summary: The gut microbiome has been shown to modulate the response of cancer patients to therapy, but precisely how microbiota affect anticancer immunity is still being elucidated. Giampazolias et al. report that vitamin D bioavailability in mice influences the composition of the gut microbiome (see the Perspective by Franco and McCoy). After dietary manipulation, vitamin D levels were observed to affect gut bacteria, which in turn improved cancer immunotherapy and antitumor immunity. In humans, low vitamin D levels were correlated with tumor development, and gene signatures of vitamin D activity were associated with improved patient responses to immunotherapy. These findings highlight the connection between vitamin D and the immune system through gut bacteria and may have applications for improving cancer therapies. —Priscilla N. Kelly

LLM Generated Research Idea

LLM: GPT-3.5 Turbo
Resource Scenario: High
Generated Research Idea: Based on the insights from the abstracts, a novel hypothesis could be that vitamin D status, particularly the levels of 25-hydroxyvitamin D, influences the composition of gut microbiota, which in turn affects the response to cancer immunotherapy. It is known that vitamin D plays a role in modulating the immune system and that gut microbiota can impact the efficacy of cancer immunotherapy. Therefore, it is possible that individuals with optimal vitamin D levels may have a more favorable gut microbiome composition, leading to enhanced anti-tumor immune responses and improved outcomes with cancer immunotherapy. This hypothesis could be tested by analyzing the vitamin D levels, gut microbiota composition, and response to immunotherapy in cancer patients, to determine if there is a correlation between vitamin D status, gut microbiota diversity, and treatment outcomes.

Idea Overlap Explanation

The hypothesis and the abstract both emphasize the influence of vitamin D on gut microbiota composition and its subsequent impact on the efficacy of cancer immunotherapy. They both underscore the role of vitamin D in modulating the immune system and suggest a correlation between vitamin D levels and improved cancer treatment outcomes. The minor difference is that the abstract provides specific details about the role of Bacteroides fragilis and gene signatures of vitamin D activity, which are not mentioned in the hypothesis.

Figure 14: Example 8 of a generated research idea that is similar to its target paper. Key points of overlap between the target paper's abstract and the generated research idea are spotlighted in green by human researchers.

Similar Research Idea Example 9.

Target Paper Abstract

Messenger RNA (mRNA) vaccines were highly effective against the ancestral SARS-CoV-2 strain, but the efficacy of bivalent mRNA boosters against XBB variants was substantially lower. Here, we show limited durability of neutralizing antibody (NAb) responses against XBB variants and isotype switching to immunoglobulin G4 (IgG4) responses following bivalent mRNA boosting. Bivalent mRNA boosting elicited modest XBB.1-, XBB.1.5-, and XBB.1.16-specific NAbs that waned rapidly within 3 months.
In contrast, bivalent mRNA boosting induced more robust and sustained NAbs against the ancestral WA1/2020 strain, suggesting immune imprinting. Following bivalent mRNA boosting, serum antibody responses were primarily IgG2 and IgG4 responses with poor Fc functional activity. In contrast, a third monovalent mRNA immunization boosted all isotypes including IgG1 and IgG3 with robust Fc functional activity. These data show substantial immune imprinting for the ancestral spike and isotype switching to IgG4 responses following bivalent mRNA boosting, with important implications for future booster designs and boosting strategies.

LLM Generated Research Idea
LLM: GPT-3.5 Turbo
Resource Scenario: High
Generated Research idea: Based on the insights from the abstracts provided, a novel hypothesis could be formulated as follows: "Repeated exposure to SARS-CoV-2 spike protein through mRNA vaccines leads to a shift in antibody response towards the IgG4 subclass with distinct effector functions and Fc glycosylation patterns. This shift may impact the long-term efficacy of the vaccines by altering the balance of antibody-mediated immune responses, potentially influencing protection against infection and disease severity. Further investigation into the role of IgG4 antibodies in vaccine-induced immunity and their interplay with other IgG subclasses is warranted to optimize vaccination strategies and enhance immune protection against SARS-CoV-2 variants."

Idea Overlap Explanation
The hypothesis and the abstract have a high degree of overlap, with both discussing the shift towards IgG4 subclass responses following repeated exposure to SARS-CoV-2 spike protein via mRNA vaccines, and the implications of this shift for vaccine efficacy and immune response. Both focus on the changes in antibody subclasses, specifically IgG4, and the impact on immune protection, highlighting the need for further investigation into the role of IgG4 and other IgG subclasses in vaccine-induced immunity. The main difference is the abstract's additional focus on the comparison between bivalent and monovalent mRNA boosters, which is not explicitly addressed in the hypothesis.

Figure 15: Example 9 of a generated research idea that is similar to its target paper. Key points of overlap between the target paper's abstract and the generated research idea are spotlighted in green by human researchers.

Similar Research Idea Example 10.

Target Paper Abstract
Our research has uncovered a role of cytosolic C3 in safeguarding pancreatic β-cell function during inflammation, which is crucial for diabetes. We developed an animal model with C3 selectively absent in pancreatic β-cells to demonstrate its critical importance in preserving β-cell function and protecting against inflammation-induced β-cell death. This protective mechanism, mediated by cytosolic C3, counters the harmful effects of the proinflammatory cytokine IL-1β. These findings reveal an aspect of cytosolic C3 beyond its traditional association with the immune (complement) system, offering exciting possibilities for therapeutic interventions to preserve β-cell health and improve diabetes treatment.
LLM Generated Research Idea
LLM: GPT-3.5 Turbo
Resource Scenario: High
Generated Research idea: Based on the information provided in the abstracts, a novel hypothesis could be proposed: Given the evidence of the crucial role of complement C3 in promoting beta-cell survival and protecting against apoptosis in the presence of proinflammatory cytokines, it is hypothesized that manipulating the complement system, specifically targeting C3, could be a potential therapeutic strategy for enhancing beta-cell survival and function in diabetes. By modulating the activity of C3 to enhance its prosurvival effects via AKT activation and c-Jun N-terminal kinase inhibition, it may be possible to protect beta cells from the detrimental effects of chronic inflammation and improve overall pancreatic health in diabetic individuals. Further research into the specific mechanisms by which C3 exerts its protective effects and the development of targeted interventions could lead to novel treatments for diabetes.

Idea Overlap Explanation
The hypothesis and the abstract exhibit a high degree of overlap, both emphasizing the critical role of complement C3 in protecting pancreatic β-cells against inflammation-induced damage and suggesting its potential as a therapeutic target for diabetes. The key points of overlap include the focus on β-cell survival, the protective role of C3 against proinflammatory cytokines, and the implications for diabetes treatment. The main difference is that the hypothesis explicitly mentions the mechanisms involving AKT activation and c-Jun N-terminal kinase inhibition, which are not detailed in the abstract.

Figure 16: Example 10 of a generated research idea that is similar to its target paper. Key points of overlap between the target paper's abstract and the generated research idea are spotlighted in green by human researchers.

Example 1 of a Novel Generated Research Idea

Target Paper Research Idea
Given that machine learning-based scoring and classification tools like CADD are crucial for prioritizing genetic variants in various studies, the recent release of CADD v1.7 introduces advanced annotation features, including protein language model scores, regulatory variant effect predictions, and sequence conservation scores. This latest version has been evaluated using diverse data sets and has shown enhanced performance in predicting both coding and regulatory effects. The availability of these improved scores and tools continues to support the research community in genetic variant analysis and clinical assessment.

LLM Generated Research Idea
LLM: GPT-4o
Resource Scenario: High
Generated Research idea: Given the insights from the provided abstracts, a novel hypothesis could focus on the interplay between conserved non-coding regulatory elements and their role in human disease susceptibility, particularly within the context of evolutionary constraints and population-specific variants. **Hypothesis**: "Highly conserved non-coding cis-regulatory elements (cCREs) that exhibit evolutionary constraints across mammalian species are enriched for variants associated with complex human diseases. These variants influence disease susceptibility by modulating gene expression in a tissue-specific manner.
Leveraging deep learning models trained on multi-species genomic alignments and functional assays of these conserved elements can improve the prediction of pathogenic non-coding variants and provide novel insights into the regulatory architecture underlying complex diseases." This hypothesis integrates the following insights:
1. Evolutionary conservation and constraint (Abstracts 3, 4).
2. The role of non-coding regulatory elements in gene expression and disease (Abstracts 5, 6, 26).
3. Use of deep learning and comparative genomics for predicting variant effects (Abstracts 2, 13, 29).
By investigating these conserved non-coding elements and their variants, we can potentially uncover critical regulatory mechanisms contributing to disease phenotypes, thereby enhancing precision medicine approaches and therapeutic advancements.

Insight Score Ranking Rationale
1. Hypothesis B: This hypothesis stands out for its novel approach of integrating deep learning models with multi-species genomic alignments to predict pathogenic non-coding variants. The focus on highly conserved non-coding cis-regulatory elements (cCREs) and their role in complex human diseases, along with leveraging evolutionary constraints, presents a unique and comprehensive method to uncover critical regulatory mechanisms ...
4. Hypothesis A: Although the advancements in CADD v1.7 are significant for genetic variant analysis, this hypothesis is more of an incremental improvement to existing tools rather than introducing a fundamentally new idea. It focuses on the application and evaluation of an updated version of a tool, which, while useful, does not present the same level of novelty as the other hypotheses.

Figure 17: Example 1 of a novel generated research idea along with its target paper's research idea and the rationale for the ranking given by the Insight Score. Aspects contributing to the generated idea's novelty are highlighted in green by human researchers; the reasons why the target paper's idea is less novel are highlighted in red by human researchers.

Example 2 of a Novel Generated Research Idea

Target Paper Research Idea
Given that the increasing aging population significantly strains public medical resources, exploring efficient home health care (HHC) solutions becomes critical. This research proposes addressing the service resource routing and scheduling problem by optimizing the sharing strategy among multiple HHC centers. The study aims to minimize both total operation costs and service delays. A mixed integer programming model is used to describe the problem, and a multi-objective artificial bee colony algorithm with problem-specific knowledge (MABC-PK) is developed to find solutions. The algorithm incorporates problem-specific heuristics, a crossover operation, a self-learning neighborhood selection method, and local search techniques to refine solutions. Comparative experiments demonstrate that MABC-PK outperforms several established multi-objective optimization methods, highlighting its effectiveness for this application.

LLM Generated Research Idea
LLM: GPT-4o
Resource Scenario: High
Generated Research idea: Based on the insights gleaned from the abstracts, I propose the following novel hypothesis: "Integrating home healthcare logistics with smart city transportation systems using autonomous vehicles can significantly enhance the efficiency of caregiver routing and scheduling.
By leveraging real-time traffic data and predictive analytics, an intelligent system can dynamically optimize routes, reduce travel times, and balance workloads among caregivers. This approach aims to minimize operational costs while maximizing patient satisfaction and care continuity. The hypothesis can be tested by developing a hybrid optimization algorithm combining elements of the vehicle routing problem (VRP), multi-objective optimization, and machine learning, and applying it to real-world home healthcare scenarios in urban environments." This hypothesis builds on the concepts of vehicle routing, scheduling optimization, autonomous vehicle coordination, and home healthcare logistics, while proposing a novel integration of these domains to address the unique challenges faced in urban home healthcare services.

Insight Score Ranking Rationale
1. Hypothesis B: This hypothesis stands out due to its novel integration of home healthcare logistics with smart city transportation systems using autonomous vehicles. The combination of real-time traffic data, predictive analytics, and autonomous vehicle coordination presents a unique and forward-thinking approach to optimizing caregiver routing and scheduling. ...
4. Hypothesis A: Although this hypothesis addresses a critical problem in HHC logistics and proposes a sophisticated multi-objective artificial bee colony algorithm, it is less novel given that it essentially builds on existing optimization techniques and applies them to a specific problem without integrating new, distinct domains like the other hypotheses.

Figure 18: Example 2 of a novel generated research idea along with its target paper's research idea and the rationale for the ranking given by the Insight Score. Aspects contributing to the generated idea's novelty are highlighted in green by human researchers; the reasons why the target paper's idea is less novel are highlighted in red by human researchers.

Example 3 of a Novel Generated Research Idea

Target Paper Research Idea
Given that addressing sensor node and route failures in smart wireless livestock sensor networks is critical for maintaining network performance, the proposed research will investigate the time-aware UAV-based energy-efficient reconfigurable routing (TUBER) scheme. TUBER aims to mitigate the time complexity and processing cost issues noted in previous schemes by implementing a synchronized clustering-with-backup strategy, a minimum-hop neighborhood recovery mechanism, and a redundancy minimization technique. Comparative evaluations indicate that TUBER outperforms earlier schemes such as RUBER and UBER across several network performance metrics, demonstrating significant improvements in areas including cluster survival ratio, network stability, energy dissipation, network coverage, packet delivery, fault tolerance, load balancing, routing overhead, routing delay, and failure recovery. These findings suggest that TUBER could provide a more robust and efficient solution for managing sensor node and route failures in these networks.
LLM Generated Research Idea
LLM: GPT-4o
Resource Scenario: High
Generated Research idea: Given the convergence of UAV technology, wireless sensor networks (WSNs), and advanced computational methods such as deep learning and optimization algorithms, a novel hypothesis can be formulated as follows: **Hypothesis:** Integrating UAV-based adaptive clustering with deep learning-enhanced fault-tolerant routing can significantly improve the efficiency and reliability of real-time livestock monitoring systems in large-scale, remote, and harsh environments. By employing UAVs equipped with high-performance computing units and multi-channel communication gateways, combined with hierarchical WSNs that utilize deep learning for predictive maintenance and fault detection, the system can dynamically optimize flight paths, enhance data collection accuracy, and extend the operational lifespan of the sensor networks. This approach is expected to provide actionable insights into livestock health, behavior, and location, thereby improving resource allocation and management while minimizing energy consumption and system downtime. This hypothesis leverages UAV adaptability, WSN hierarchical structuring, and the robustness of deep learning models to address the challenges of extensive and unsupervised livestock farming.

Insight Score Ranking Rationale
1. Hypothesis D: Integrating UAV-based adaptive clustering with deep learning-enhanced fault-tolerant routing is highly novel due to the combination of advanced computational methods like deep learning with UAV technology and WSNs for real-time livestock monitoring in harsh environments. This approach offers a unique synergy that has the potential to significantly optimize system efficiency and reliability. ...
4. Hypothesis A: While investigating the TUBER scheme for addressing sensor node and route failures in smart wireless livestock sensor networks is innovative, it primarily builds on existing schemes (RUBER and UBER). Although it proposes significant improvements across various network performance metrics, it is less novel compared to the integration of deep learning, edge computing, or AI-driven predictive analytics in Hypotheses B, C, and D.

Figure 19: Example 3 of a novel generated research idea along with its target paper's research idea and the rationale for the ranking given by the Insight Score. Aspects contributing to the generated idea's novelty are highlighted in green by human researchers; the reasons why the target paper's idea is less novel are highlighted in red by human researchers.

Example 4 of a Novel Generated Research Idea

Target Paper Research Idea
Given that anesthetics impact brain regions differently depending on sex, our research aims to investigate the role of sex hormones in modulating anesthetic sensitivity. This study reveals that females, both in mice and humans, exhibit resistance to the hypnotic effects of volatile anesthetics primarily due to the acute influence of sex hormones. Although these behavioral differences do not manifest in standard cortical electroencephalographic recordings used clinically, they are evident in subcortical activity patterns. This hidden resistance might contribute to the higher incidence of awareness during anesthesia in females, suggesting a need for sex-specific anesthetic protocols.
LLM Generated Research Idea
LLM: GPT-4o
Resource Scenario: High
Generated Research idea: Based on the interplay between neural mechanisms underlying anesthesia, arousal states, and cognitive function detailed across these abstracts, I propose a novel hypothesis: "The differential activation and inhibition of sleep-promoting and wake-active neuronal populations within the preoptic area (POA) and their interaction with the orexinergic system are critical determinants of individual variability in cognitive recovery trajectories and the incidence of postoperative cognitive dysfunction (POCD) following general anesthesia. Specifically, dysregulation of these pathways may result in protracted neurocognitive recovery and increased susceptibility to POCD, particularly in older adults or those with preexisting neurocognitive vulnerabilities." This hypothesis integrates the molecular and cellular insights about POA and orexinergic neurons with the variability in anesthesia recovery patterns, aiming to elucidate the mechanisms behind cognitive resilience and dysfunction post-anesthesia.

Insight Score Ranking Rationale
1. Hypothesis B: This hypothesis stands out due to its integration of multiple novel elements: neural inertia, GABAergic neurons in the POA, and the neuromodulatory effects of oxytocin. The proposed interaction between these elements to explain cognitive recovery post-anesthesia is highly innovative and could lead to new therapeutic strategies. ...
3. Hypothesis A: The hypothesis explores the role of sex hormones in modulating anesthetic sensitivity, highlighting a potential cause for higher awareness during anesthesia in females. While it is novel, the interplay between sex hormones and anesthetic effects has been previously explored to some extent, making it slightly less novel than the first two hypotheses.

Figure 20: Example 4 of a novel generated research idea along with its target paper's research idea and the rationale for the ranking given by the Insight Score. Aspects contributing to the generated idea's novelty are highlighted in green by human researchers; the reasons why the target paper's idea is less novel are highlighted in red by human researchers.

Example 5 of a Novel Generated Research Idea

Target Paper Research Idea
Given that detailed knowledge of synaptic circuitry in the human brain is essential for understanding brain function, this study by Shapson-Coe et al. leverages human brain tissue obtained from neurosurgical procedures to achieve a high-resolution reconstruction of a cubic millimeter of the human temporal cortex. Utilizing advanced electron microscopy and computational techniques, the study generated an extensive dataset encompassing thousands of neurons, millions of synapses, and various other brain tissue elements. Key findings include the identification of a new class of directionally oriented neurons in deep cortical layers and the discovery of rare, powerful multisynaptic connections. The freely available dataset and analysis tools provided by the authors offer a valuable resource for future research into both normal and disordered human neural circuits.

LLM Generated Research Idea
LLM: Llama 3.1 405B-Instruct
Resource Scenario: High
Generated Research idea: The following is a novel hypothesis inspired by the abstracts provided: "The human brain's ability to reorganize and adapt throughout life may be influenced by the dynamic interplay between astrocytes and neurons.
Specifically, the territorial organization of astrocytes and their interactions with neurons may play a crucial role in modulating synaptic plasticity and neural circuitry. I propose that astrocytes act as 'synaptic editors,' selectively pruning or strengthening synapses based on neural activity patterns, thereby shaping the brain's neural networks and influencing cognitive function. Furthermore, disruptions in this astrocyte-neuron interplay may contribute to neurological disorders such as schizophrenia, where abnormalities in neural circuitry and synaptic function have been observed. Investigating the mechanisms underlying astrocyte-mediated synaptic editing could provide new insights into brain development, plasticity, and disease." This hypothesis draws inspiration from various abstracts, including those discussing astrocyte-neuron interactions (Abstracts 8, 29), synaptic plasticity and neural circuitry (Abstracts 6, 10, 11), and neurological disorders such as schizophrenia (Abstract 33). The idea of astrocytes as 'synaptic editors' is a novel concept that integrates these different lines of research and provides a new perspective on the complex interactions between astrocytes and neurons in the brain.

Insight Score Ranking Rationale
1. **Hypothesis C**: The idea of astrocytes acting as 'synaptic editors' is highly novel. This hypothesis integrates multiple lines of research to propose a new role for astrocytes in modulating synaptic plasticity and neural circuitry, as well as their potential involvement in neurological disorders. This concept of astrocytes selectively pruning or strengthening synapses based on neural activity is a fresh perspective that could lead to significant advancements in understanding brain development and function. ...
4. **Hypothesis A**: Although this study by Shapson-Coe et al. offers a high-resolution reconstruction of the human temporal cortex and identifies new classes of neurons and synaptic connections, it is more about providing a detailed dataset and findings rather than proposing a novel hypothesis. The discovery of new neuronal structures and connections is valuable, but it does not introduce a new conceptual framework like the other hypotheses.

Figure 21: Example 5 of a novel generated research idea along with its target paper's research idea and the rationale for the ranking given by the Insight Score. Aspects contributing to the generated idea's novelty are highlighted in green by human researchers; the reasons why the target paper's idea is less novel are highlighted in red by human researchers.

Example 6 of a Novel Generated Research Idea

Target Paper Research Idea
Given that the dual public health emergencies of overdose and COVID-19 have exacerbated risks for individuals with opioid or stimulant use disorders, this study investigates the impact of Risk Mitigation Guidance (RMG) dispensations on mortality and acute care visits. Conducted as a population-based retrospective cohort study in British Columbia, Canada, the research analyzed outcomes for 5882 participants who received RMG prescriptions between March 2020 and August 2021. The findings suggest that RMG opioid dispensations are significantly associated with reduced all-cause and overdose-related mortality, particularly with increased duration of dispensation. However, RMG stimulant dispensations did not show a significant impact on mortality, although they were linked to reduced acute care visits for any cause.
This indicates that providing pharmaceutical alternatives to the illegal drug supply might be an effective intervention to lower mortality rates among individuals with opioid use disorder.

LLM Generated Research Idea
LLM: Llama 3.1 405B-Instruct
Resource Scenario: High
Generated Research idea: The abstracts discuss various aspects of substance use disorder, including treatment approaches, harm reduction strategies, and the impact of the COVID-19 pandemic on overdose rates. They also touch on the importance of addressing social determinants and using robust research methods to inform public health policy. A novel hypothesis that emerges from these insights is that the implementation of a community-based, peer-led harm reduction program that incorporates prescription alternatives to illicit substances, along with social support and housing assistance, may lead to a significant reduction in overdose rates and improved health outcomes among individuals with substance use disorder. This program would prioritize the needs of marginalized populations and address the root causes of addiction, such as poverty, trauma, and lack of access to healthcare. By leveraging the expertise of people with lived experience and fostering a sense of community and connection, this approach may offer a more effective and sustainable solution to the overdose crisis than traditional treatment models.

Insight Score Ranking Rationale
1. Hypothesis C: This hypothesis proposes a novel, integrated approach to harm reduction that combines prescription alternatives to illicit substances with social support and housing assistance, led by peers with lived experience. It addresses multiple root causes of addiction, such as poverty and trauma, which is a novel and holistic angle not traditionally emphasized in substance use disorder treatment models. ...
3. Hypothesis A: The investigation of Risk Mitigation Guidance (RMG) dispensations during the COVID-19 pandemic is timely and situates the study within a unique context. However, it primarily extends existing research on opioid and stimulant use disorders rather than introducing a fundamentally new concept. ...

Figure 22: Example 6 of a novel generated research idea along with its target paper's research idea and the rationale for the ranking given by the Insight Score. Aspects contributing to the generated idea's novelty are highlighted in green by human researchers; the reasons why the target paper's idea is less novel are highlighted in red by human researchers.

Example 7 of a Novel Generated Research Idea

Target Paper Research Idea
Given that nucleus pulposus cells respond differently to temporary compression (TC) and sustained compression (SC) in the context of intervertebral disc degeneration, our research aims to elucidate the mechanisms governing autophagy and apoptosis under varying compression durations. This study shows that SC leads to severe central NP cell degeneration and heightened apoptotic activity, whereas TC results in milder degeneration with a distinct peak in autophagy markers at 6 weeks. We propose to further explore TC as a model for early autophagy-mediated degeneration and SC as a model for late-stage, apoptosis-driven degeneration. This dual approach could help identify targeted interventions for different stages of disc degeneration.
LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: High
Generated Research idea: Hypothesis: Given the established roles of autophagy and apoptosis in intervertebral disc degeneration (IVDD) and the emerging link between them under oxidative stress, we hypothesize that specific microRNAs (miRNAs) could regulate IVDD progression by modulating the crosstalk between these two pathways. Specifically, we propose that certain miRNAs, potentially upregulated in degenerative discs, could simultaneously suppress autophagy and promote apoptosis in nucleus pulposus cells, thereby exacerbating IVDD. This hypothesis is based on the ability of miRNAs to target multiple genes within interconnected pathways and the evidence suggesting a protective role of autophagy in early IVDD stages, shifting towards a deleterious effect as apoptosis becomes dominant.

Insight Score Ranking Rationale
1. Hypothesis D: Given the established roles of autophagy and apoptosis in intervertebral disc degeneration (IVDD) and the emerging link between them under oxidative stress, we hypothesize that specific microRNAs (miRNAs) could regulate IVDD progression by modulating the crosstalk between these two pathways. This hypothesis is highly novel due to its focus on miRNAs as regulators of the interplay between autophagy and apoptosis, which introduces a new layer of genetic regulation and potential therapeutic targets in IVDD. ...
4. Hypothesis A: Given that nucleus pulposus cells respond differently to temporary compression (TC) and sustained compression (SC) in the context of intervertebral disc degeneration, our research aims to elucidate the mechanisms governing autophagy and apoptosis under varying compression durations. While this hypothesis is based on a dual approach to understanding compression effects, it is ranked lower in novelty because it builds on already established knowledge about the differential impacts of TC and SC, focusing on further exploration rather than introducing a fundamentally new concept.

Figure 23: Example 7 of a novel generated research idea along with its target paper's research idea and the rationale for the ranking given by the Insight Score. Aspects contributing to the generated idea's novelty are highlighted in green by human researchers; the reasons why the target paper's idea is less novel are highlighted in red by human researchers.

Example 8 of a Novel Generated Research Idea

Target Paper Research Idea
Given that insects utilize gustatory receptors (GRs) to detect various taste modalities, this study elucidates the structural mechanisms underlying sugar recognition and activation of two Drosophila sweet taste receptors, GR43a and GR64a. By using cryo-electron microscopy, the researchers revealed that both receptors form tetrameric sugar-gated cation channels, with sugar binding to ligand-binding domains inducing conformational changes that lead to channel opening. GR43a is specifically activated by the monosaccharide fructose, whereas GR64a responds to disaccharides like sucrose and maltose. These findings provide structural insights into how different sugars are recognized by insect taste receptors, offering potential targets for designing insect attractants or repellents for pest control.
LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: High
Generated Research idea: Given the prevalence of CH-π interactions in carbohydrate recognition (Abstract 3) and the importance of taste perception in mosquito behavior (Abstract 4), we hypothesize that disrupting CH-π interactions between mosquito gustatory receptors and sugary food sources could lead to novel mosquito control strategies.

Insight Score Ranking Rationale
1. Hypothesis B: Given the potential role of gustatory receptors like Gr43a in sensing internal nutrient levels, this hypothesis explores a novel and unexpected connection between taste receptors and pancreatic beta cell function, suggesting a role in type 2 diabetes. This represents a unique cross-disciplinary insight with significant implications for understanding diabetes. ...
4. Hypothesis A: While the study of the structural mechanisms underlying sugar recognition and activation of Drosophila sweet taste receptors is valuable, it is more of an extension of existing knowledge rather than a groundbreaking new idea. The use of cryo-electron microscopy to reveal these mechanisms is an important but expected application of this technology in structural biology.

Figure 24: Example 8 of a novel generated research idea along with its target paper's research idea and the rationale for the ranking given by the Insight Score. Aspects contributing to the generated idea's novelty are highlighted in green by human researchers; the reasons why the target paper's idea is less novel are highlighted in red by human researchers.

Example 9 of a Novel Generated Research Idea

Target Paper Research Idea
Given that effectively mitigating climate change necessitates significant global behavior change, our research aims to identify the most effective strategies to motivate shifts in climate beliefs and actions. A recent study tested 11 different interventions across a large international sample and found that the interventions varied in effectiveness depending on the targeted outcome and the participants' initial climate beliefs. Specifically, interventions were moderately successful in enhancing climate beliefs and policy support among non-skeptics, but had limited impact on more effortful behaviors like tree planting. These findings highlight the need for tailored interventions that consider the audience's predispositions and the specific behavioral goals.

LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: High
Generated Research idea: Hypothesis: Framing climate change mitigation behaviors as collective actions that reduce psychological distance (e.g., emphasizing local community efforts with immediate and tangible benefits) will increase individual participation in pro-environmental behaviors, particularly among individuals who score high on measures of climate change helplessness.

Insight Score Ranking Rationale
1. Hypothesis D: Framing climate change mitigation behaviors as collective actions that reduce psychological distance is a novel approach, particularly targeting individuals who feel helpless about climate change. This combines psychological concepts with community-focused interventions, offering a fresh perspective on motivating pro-environmental behaviors. ...
4. Hypothesis A: Identifying effective strategies to motivate shifts in climate beliefs and actions based on audience predispositions and specific behavioral goals is important but more incremental.
The rationale and findings suggest extensions of existing research rather than a fundamentally new approach.

Figure 25: Example 9 of a novel generated research idea along with its target paper's research idea and the rationale for the ranking given by the Insight Score. Aspects contributing to the generated idea's novelty are highlighted in green by human researchers; the reasons why the target paper's idea is less novel are highlighted in red by human researchers.

Example 10 of a Novel Generated Research Idea

Target Paper Research Idea
Given that erythroid cells rely heavily on antioxidant enzymes such as superoxide dismutase (SOD), catalase (CAT), glutathione peroxidase (GPx), and peroxiredoxin 2 (Prx2), this study explores how their mRNA levels differ in reticulocytes from healthy individuals versus those with hereditary spherocytosis (HS), sickle cell disease (SCD), and β-thalassemia (β-thal). The investigation reveals that while reticulocyte maturity indices generally correlate with mRNA levels of these enzymes in healthy controls, this correlation is disrupted in patients with these hematologic disorders. Specifically, patients with HS, SCD, and β-thal exhibit younger reticulocytes with elevated enzyme transcript levels, albeit with different expression patterns. These findings suggest that the expression of antioxidant enzyme transcripts in reticulocytes is influenced not only by cell maturity but also by adaptive responses to abnormal erythropoiesis and altered bone marrow environments, resulting in distinct antioxidant capacities tailored to each type of anemia.

LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: High
Generated Research idea: Given the evidence that oxidative stress plays a significant role in sickle cell disease (SCD) pathophysiology, and that thiol modification, rather than oxidant stress alone, is crucial for phosphatidylserine (PS) exposure, we hypothesize that specific inhibitors of thiol oxidoreductases, enzymes responsible for thiol-disulfide exchange in proteins, could serve as a novel therapeutic target for SCD. By selectively inhibiting these enzymes in SCD erythrocytes, we aim to reduce PS exposure, thereby mitigating hemolysis and potentially vaso-occlusive crises, without broadly affecting the cellular redox balance.

Insight Score Ranking Rationale
1. Hypothesis B: Given the specificity of targeting thiol oxidoreductases, this approach offers a novel therapeutic target focusing on thiol modification rather than broad oxidative stress, which is a new direction in SCD treatment. ...
4. Hypothesis A: While exploring mRNA levels of antioxidant enzymes in reticulocytes from different hematologic disorders provides valuable insights, it is less novel compared to the other hypotheses, as it extends existing knowledge about erythroid cell oxidative stress response rather than introducing a new concept.

Figure 26: Example 10 of a novel generated research idea along with its target paper's research idea and the rationale for the ranking given by the Insight Score. Aspects contributing to the generated idea's novelty are highlighted in green by human researchers; the reasons why the target paper's idea is less novel are highlighted in red by human researchers.
Example 1 of a Feasible Generated Research Idea

Target Paper Research Idea
Given that Long Covid remains a poorly understood condition with no current diagnostic tests or therapeutic solutions, our proposed research aims to investigate the dysregulation of the complement system and its role in Long Covid. Recent multimodal proteomics analyses of blood serum from COVID-19 patients have revealed that Long Covid is characterized by persistent activation and dysregulation of the complement system, alongside elevated markers of hemolysis, tissue injury, platelet activation, and monocyte–platelet aggregates. These findings highlight the involvement of the alternative and classical complement pathways and suggest that complement-mediated thromboinflammation may contribute to the pathology of Long Covid. Our research will focus on further elucidating these pathways and exploring potential diagnostic and therapeutic interventions targeting the complement system to alleviate Long Covid symptoms.

LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: High
Generated Research idea: Given the potential link between COVID-19, complement activation, and endothelial dysfunction, we hypothesize that targeting the complement pathway, specifically the alternative pathway, with FDA-approved inhibitors (e.g., eculizumab) during the acute phase of COVID-19 will reduce the incidence and severity of long COVID symptoms associated with endothelial dysfunction, such as fatigue and cognitive impairment.

Insight Score Ranking Rationale
1. Hypothesis D: (Rationale: This hypothesis is highly feasible because it proposes using an already FDA-approved inhibitor, eculizumab, to target the complement pathway during the acute phase of COVID-19. The specific focus on endothelial dysfunction and the use of an established drug increases its practicality and likelihood of success.) ...
3. Hypothesis A: (Rationale: This hypothesis aims to investigate the dysregulation of the complement system in Long Covid, which is a novel and complex area. While it holds significant potential for advancing understanding and treatment, the lack of current diagnostic tests or therapeutic solutions makes it less immediately feasible compared to Hypotheses C and D.) ...

Figure 27: Example 1 of a feasible generated research idea along with its target paper's research idea and the rationale for the Idea Score's ranking. Elements that contribute to the generated idea's feasibility are highlighted in green by human researchers.

Example 2 of a Feasible Generated Research Idea

Target Paper Research Idea
Given that neutrophils are integral to the tumor immune response and associated with poor clinical outcomes, our research proposes to investigate the reprogramming mechanisms that drive neutrophils into a pro-angiogenic state within the tumor microenvironment. Recent findings demonstrate that both immature and mature neutrophils infiltrating tumors undergo irreversible modifications, ultimately converging into a terminal dcTRAIL-R1+ state localized to hypoxic-glycolytic niches. These reprogrammed neutrophils enhance blood vessel formation and promote tumor growth, suggesting that targeting this pathway could improve cancer immunotherapies. Our study aims to further elucidate the ontogeny and functional trajectory of these neutrophil states across different cancer types and evaluate potential therapeutic interventions to disrupt their protumoral activities.
LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: High
Generated Research idea: Hypoxia-induced expression of CD73 on tumor-associated neutrophils promotes tumor growth and angiogenesis by increasing adenosine production and suppressing the cytotoxic activity of CD8+ T cells, suggesting that targeting the CD73-adenosine pathway in TANs may enhance the efficacy of immunotherapy.

Insight Score Ranking Rationale
1. Hypothesis C: Hypoxia-induced expression of CD73 on tumor-associated neutrophils promotes tumor growth and angiogenesis by increasing adenosine production and suppressing the cytotoxic activity of CD8+ T cells, suggesting that targeting the CD73-adenosine pathway in TANs may enhance the efficacy of immunotherapy. (Rationale: This hypothesis is focused and has a clear, specific target for potential therapeutic intervention. The CD73-adenosine pathway is well-documented in the literature, making this hypothesis highly feasible for investigation and clinical application.) ...
4. Hypothesis A: Given that neutrophils are integral to the tumor immune response and associated with poor clinical outcomes, our research proposes to investigate the reprogramming mechanisms that drive neutrophils into a pro-angiogenic state within the tumor microenvironment. Recent findings demonstrate that both immature and mature neutrophils infiltrating tumors undergo irreversible modifications, ultimately converging into a terminal dcTRAIL-R1+ state localized to hypoxic-glycolytic niches. These reprogrammed neutrophils enhance blood vessel formation and promote tumor growth, suggesting that targeting this pathway could improve cancer immunotherapies. Our study aims to further elucidate the ontogeny and functional trajectory of these neutrophil states across different cancer types and evaluate potential therapeutic interventions to disrupt their protumoral activities. (Rationale: This hypothesis is broad and aims to investigate multiple complex mechanisms. While scientifically interesting, its broad scope and the need for extensive preliminary work make it less feasible compared to the others.)

Figure 28: Example 2 of a feasible generated research idea along with its target paper's research idea and the rationale for the Idea Score's ranking. Elements that contribute to the generated idea's feasibility are highlighted in green by human researchers.

Example 3 of a Feasible Generated Research Idea

Target Paper Research Idea
Given that the electron transfer process (ETP) in metal single-atom catalysts (M-SACs) combined with peroxymonosulfate (PMS) systems is influenced by the electron-donating capacity of various pollutants, this research aims to explore how these characteristics affect pollutant degradation pathways. The study demonstrates that the Fenton-like activities within these systems are modulated by the electrophilic index of the pollutants, thereby altering the contributions of ETP and radical pathways. This insight could significantly enhance the understanding and optimization of SACs/PMS systems for the targeted oxidation of diverse micropollutants.
LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: High
Generated Research idea: Hypothesis: Introducing specific defects into the carbon matrix of single-atom catalysts can enhance the formation of high-valent metal-oxo species during peroxymonosulfate activation by influencing electron transfer pathways and promoting the adsorption of PMS, leading to more efficient and selective degradation of organic pollutants in water treatment.

Insight Score Ranking Rationale
1. Hypothesis D: Introducing specific defects into the carbon matrix of single-atom catalysts can enhance the formation of high-valent metal-oxo species during peroxymonosulfate activation by influencing electron transfer pathways and promoting the adsorption of PMS, leading to more efficient and selective degradation of organic pollutants in water treatment. **Rationale:** This hypothesis is highly feasible as it focuses on a well-understood mechanism (defect engineering) to enhance catalytic activity. The relationship between defects and electronic structure is well-documented, and the goal of improved pollutant degradation and selectivity is well-aligned with current research trends. ...
4. Hypothesis A: Given that the electron transfer process (ETP) in metal single-atom catalysts (M-SACs) combined with peroxymonosulfate (PMS) systems is influenced by the electron-donating capacity of various pollutants, this research aims to explore how these characteristics affect pollutant degradation pathways. The study demonstrates that the Fenton-like activities within these systems are modulated by the electrophilic index of the pollutants, thereby altering the contributions of ETP and radical pathways. **Rationale:** This hypothesis is the least feasible due to its broad scope and indirect approach. It requires a detailed understanding of the pollutants' electron-donating capacities and their electrophilic indices, which may introduce variability and complexity. The direct manipulation of the catalyst's properties (as in other hypotheses) is a more straightforward approach.

Figure 29: Example 3 of a feasible generated research idea along with its target paper's research idea and the rationale for the Idea Score's ranking. Elements that contribute to the generated idea's feasibility are highlighted in green by human researchers.

Example 4 of a Feasible Generated Research Idea

Target Paper Research Idea
Given that the ability to control interlayer charge transport in conductive metal-organic frameworks (cMOFs) is essential for optimizing their dielectric properties, this research proposes the construction of bimetallic organic frameworks with fine-tuned interlayer spacing. By varying the ratios of Zn and Cu metal ions in the (ZnxCu3-x)(hexahydroxytriphenylene)2 (ZnCu-HHTP) system, the study achieves precise control over charge transport, band structure, and dielectric properties. The findings demonstrate that the optimized Zn3Cu1-HHTP framework exhibits superior microwave absorption, highlighting the potential of this nanotechnology-based approach for developing cMOFs with tailored interlayer characteristics for specific applications.
LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: Low
Generated Research idea: Given the insights from these abstracts, we hypothesize that incorporating conductive metal-organic frameworks (cMOFs) with tunable S···S interactions, as seen in M2(TTFTB) structures, into a flexible polymer matrix will create a material with superior electromagnetic wave absorption properties, exceeding those of cMOFs alone. This enhancement will stem from the synergistic effects of improved impedance matching due to the flexible matrix, increased conductivity from optimized S···S distances within the cMOF, and potential for "non-crosstalk" multiple perceptions due to the heterogeneous material structure. This novel composite material could pave the way for next-generation flexible electronics with integrated EMI shielding and sensing capabilities.

Insight Score Ranking Rationale
1. Hypothesis D: This hypothesis is highly feasible as it combines the well-established properties of cMOFs with a flexible polymer matrix, which is a common approach in materials science. The concept of impedance matching and optimizing S···S interactions within a flexible matrix is well-supported by existing research, making this a practical and achievable goal. ...
4. Hypothesis A: While the concept of constructing bimetallic organic frameworks with fine-tuned interlayer spacing is scientifically sound, it may be less feasible due to the challenges associated with precisely controlling the ratios of Zn and Cu metal ions and achieving the desired interlayer characteristics consistently. This approach might be more complex and less straightforward compared to the other hypotheses.

Figure 30: Example 4 of a feasible generated research idea along with its target paper's research idea and the rationale for the Idea Score's ranking. Elements that contribute to the generated idea's feasibility are highlighted in green by human researchers.

Example 5 of a Feasible Generated Research Idea

Target Paper Research Idea
Given that the sensitivity of liquid biopsies for detecting circulating tumor DNA (ctDNA) is limited by the scarcity of analytes in the blood, our research proposes the use of intravenous priming agents to transiently augment ctDNA levels by attenuating its clearance in vivo. This approach involves administering nanoparticles and DNA-binding antibodies 1 to 2 hours before a blood draw to inhibit cfDNA uptake by liver macrophages and protect it from nuclease degradation. Findings from preclinical cancer models indicate that these priming agents significantly increase ctDNA recovery and enhance the sensitivity of liquid biopsies, making it easier to detect small tumors and perform comprehensive tumor molecular profiling. This strategy has the potential to improve the clinical utility of liquid biopsies across various applications by boosting the detection of scarce biomarkers.

LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: Low
Generated Research idea: Given the limitations of current ctDNA-based cancer screening methods, particularly in sensitivity for early detection, a novel hypothesis is that combining ctDNA analysis with the detection of tumor-specific epigenetic modifications in circulating immune cells, specifically T cells, will enhance the sensitivity and specificity of early-stage cancer detection.

Insight Score Ranking Rationale
1. Hypothesis C: Combining ctDNA analysis with the detection of tumor-specific epigenetic modifications in circulating immune cells, specifically T cells, is a highly feasible approach. This method leverages existing ctDNA detection techniques and integrates additional biomarkers, which can be validated through established laboratory methods. The use of T cells for epigenetic analysis is also supported by advancements in immunology and can be achieved with current technologies. ...
4. Hypothesis A: Using intravenous priming agents to augment ctDNA levels by attenuating its clearance is the least feasible. This strategy involves novel and complex interventions, including the development of nanoparticles and DNA-binding antibodies, and requires careful tuning of timing and dosage. The potential for adverse effects and the need for rigorous validation in preclinical and clinical settings make this approach the most challenging to implement.

Figure 31: Example 5 of a feasible generated research idea along with its target paper's research idea and the rationale for the Idea Score's ranking. Elements that contribute to the generated idea's feasibility are highlighted in green by human researchers.

Example 6 of a Feasible Generated Research Idea

Target Paper Research Idea
Given that optimizing frictional behavior in devices like touchscreens and robotic hands is crucial yet challenging due to the lack of systematic methods to design dry contact interfaces with specific friction laws, this study proposes a novel surface design strategy. The proposed method simplifies the multiscale problem of tribology by treating surface topographies as assemblies of spherical asperities, allowing precise control over the frictional properties by adjusting the asperities' heights. By demonstrating various elastomer-glass metainterfaces with both linear and nonlinear friction laws, this approach offers a scalable, material-independent, and chemical-free solution for creating energy-efficient and adaptable smart interfaces. This design strategy holds promise for broad applicability across different material combinations and scales, enabling the pre-determined customization of frictional relationships.

LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: Low
Generated Research idea: Building on the findings that both microscopic junction behavior and macroscopic shoe properties influence friction dynamics, we hypothesize that strategically engineering the surface texture of athletic shoe outsoles, inspired by the variability of microscopic junction behavior, can allow for targeted manipulation of friction coefficients, potentially optimizing grip during specific athletic maneuvers like pivoting or accelerating.

Insight Score Ranking Rationale
1. Hypothesis B: (This hypothesis leverages well-understood principles of friction dynamics and applies them to the practical and well-defined problem of optimizing athletic shoe outsoles. The approach is relatively straightforward, focusing on the surface texture of outsoles, which is a feasible engineering challenge.) ...
3. Hypothesis A: (This hypothesis presents a novel and systematic method for designing dry contact interfaces with specific friction laws by adjusting asperities' heights. While innovative and promising, it involves a more complex multiscale problem and requires precise control over surface topographies, which may present more significant feasibility challenges.)
Figure 32: Example 6 of a feasible generated research idea along with its target paper's research idea and the rationale for the Idea Score's ranking. Elements that contribute to the generated idea's feasibility are highlighted in green by human researchers.

Example 7 of a Feasible Generated Research Idea

Target Paper Research Idea
Given that current ecological theory, particularly models based on the generalized Lotka-Volterra (GLV) framework, predicts that increased species diversity leads to instability, there is a compelling need to reconcile this with observations of stable, biodiverse ecosystems such as tropical rainforests and coral reefs. This research proposes that modeling population growth as a sublinear power law, rather than logistic growth, resolves the diversity-stability debate by demonstrating that diversity actually promotes stability. The sublinear growth model suggests that competitive interactions among populations do not lead to exclusion but instead enhance stability as diversity increases. This theory aligns with empirical macroecological patterns and predicts that biodiversity loss could accelerate ecosystem destabilization. Therefore, adopting a sublinear growth model could provide a more accurate theoretical framework for understanding and preserving ecosystem stability in the face of biodiversity loss.

LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: Low
Generated Research idea: Given the insights from these abstracts, a novel hypothesis emerges: **The stability and diversity of ecological communities are significantly influenced by the degree of metabolic similarity between species, with highly similar metabolic networks leading to increased competition for limiting resources and decreased stability, particularly under resource scarcity.** This hypothesis integrates the concepts of resource competition, metabolic constraints (like Liebig's Law), and the influence of resource availability on community dynamics. It suggests that species with highly overlapping metabolic needs will experience intensified competition for the same limiting resource(s), potentially leading to exclusion or unstable coexistence. Conversely, communities with greater metabolic diversity may exhibit greater stability due to niche partitioning and reduced competition for limiting resources. This hypothesis can be tested by constructing experimental communities with varying degrees of metabolic overlap and monitoring their stability and dynamics under different resource conditions.

Insight Score Ranking Rationale
1. Hypothesis B: The hypothesis that "the stability and diversity of ecological communities are significantly influenced by the degree of metabolic similarity between species" is highly feasible. This idea can be empirically tested by constructing experimental communities and measuring stability and competitive dynamics, which is a relatively straightforward and controlled approach to validate the hypothesis. ...
4. Hypothesis A: While proposing a sublinear growth model to reconcile the diversity-stability debate is theoretically interesting and aligns with macroecological patterns, it may be less immediately feasible.
This approach involves complex mathematical modeling and may require extensive empirical data across various ecosystems to substantiate the model, making it more challenging to test and validate in the short term.

Figure 33: Example 7 of a feasible generated research idea along with its target paper's research idea and the rationale for the Idea Score's ranking. Elements that contribute to the generated idea's feasibility are highlighted in green by human researchers.

Example 8 of a Feasible Generated Research Idea

Target Paper Research Idea
Given that the form and function of biomolecular condensates are intricately linked to their material properties, we propose to investigate the physical determinants of condensate fluid phase dynamics by integrating microrheology with molecular simulations. Our preliminary findings suggest that the dynamical properties of condensates, such as mechanical relaxation times and energy barriers for network reconfiguration, are influenced by factors like intermolecular interactions, binding energy, and chain length. Specifically, while mechanical relaxation times are dependent on both intermolecular interactions and chain length, the flow activation energy is solely governed by the strengths of intermolecular interactions. This indicates a complex interplay between viscoelasticity and flow activation energy in biomolecular diffusion within condensates. Understanding these distinctive roles could provide deeper insights into the material and transport properties of biomolecular condensates, potentially leading to novel biomedical applications.

LLM Generated Research Idea
LLM: Gemini 1.5 Pro
Resource Scenario: Low
Generated Research idea: Given the emerging understanding of how short-range interactions (like cation-π) and long-range interactions (like electrostatic forces) interplay in biomolecular condensate formation, we hypothesize that modulating the sequence-specific distribution of charged and aromatic residues within intrinsically disordered regions of condensate-forming proteins can be used to fine-tune the viscosity and transport properties of these condensates. Specifically, we propose that increasing the local clustering of cationic and aromatic residues will enhance cation-π interactions, leading to decreased diffusion rates and increased viscosity within the condensate, without significantly altering the overall phase separation propensity. This hypothesis could be tested by systematically engineering synthetic proteins with varying arrangements of charged and aromatic residues and characterizing their phase behavior, condensate viscosity, and molecular diffusion rates using techniques like fluorescence correlation spectroscopy and microrheology.

Insight Score Ranking Rationale
1. Hypothesis D: Given the clear and direct approach of modulating specific protein sequences to alter condensate properties, this hypothesis appears highly feasible. The use of synthetic proteins and well-established techniques like fluorescence correlation spectroscopy and microrheology provides a straightforward and practical experimental pathway. ...
3. Hypothesis A: Investigating the physical determinants of condensate fluid phase dynamics through microrheology and molecular simulations is feasible, but it involves significant complexity in integrating and interpreting data from multiple advanced techniques. While promising, it may require substantial preliminary work to establish robust models and experimental protocols. ...
Figure 34: Example 8 of a feasible generated research idea along with its target paper’s research idea and the rationale for the Idea Score’s ranking. Elements that contribute to the generated idea’s feasibility are highlighted in green by human researchers.

Example 9 of a Feasible Generated Research Idea

Target Paper Research Idea
Given that urban vegetation has been traditionally promoted for its potential to improve air quality, our research aims to explore the nuanced interactions between urban greening and air pollution at different scales. This study reveals that while urban vegetation offers various health benefits, its effectiveness in mitigating air pollution is limited and may even worsen conditions at the street level. Consequently, our research will focus on developing strategies to reduce anthropogenic emissions as a more effective approach to enhancing urban air quality.

LLM Generated Research Idea
LLM: Gemini 1.5 Flash
Resource Scenario: High
Generated Research idea: Hypothesis: Urban green spaces, specifically those dominated by native tree species, may mitigate the adverse health effects of air pollution by acting as a natural filter for particulate matter (PM2.5) and nitrogen dioxide (NO2) while simultaneously reducing the allergenic potential of pollen. This effect is particularly important in urban areas with high traffic density and existing vegetation that contributes to aeroallergen production. This hypothesis posits that carefully planned urban tree planting initiatives, considering both air quality and allergenic properties of tree species, could lead to a significant reduction in asthma and allergy-related health issues in urban populations.

Insight Score Ranking Rationale
1. Hypothesis B: The hypothesis focuses on the specific tree species composition and local meteorological conditions, making it a highly feasible and targeted approach. Investigating the impact of different tree species on local microclimates and air quality under varying conditions is a practical and measurable research endeavor that can provide actionable insights for urban planning. ... 4. Hypothesis A: While this hypothesis is grounded in the reality that urban vegetation has limited effectiveness in certain contexts, its feasibility is lower because it involves the broad goal of developing new strategies to reduce anthropogenic emissions. This is a more complex and long-term endeavor compared to the more focused and specific studies proposed in the other hypotheses.

Figure 35: Example 9 of a feasible generated research idea along with its target paper’s research idea and the rationale for the Idea Score’s ranking. Elements that contribute to the generated idea’s feasibility are highlighted in green by human researchers.

Example 10 of a Feasible Generated Research Idea

Target Paper Research Idea
Given that the phase III CheckMate 722 trial evaluated the efficacy of nivolumab combined with chemotherapy compared to chemotherapy alone in patients with EGFR-mutated metastatic non–small-cell lung cancer (NSCLC) who had progressed after EGFR TKI therapy, the study found that progression-free survival (PFS) was not significantly improved with the addition of nivolumab. However, subgroup analyses indicated a trend favoring the combination therapy in patients with sensitizing EGFR mutations or those who had received only one line of prior EGFR TKI treatment.
Overall survival (OS) and objective response rates (ORR) showed modest improvements, although the combination therapy was associated with higher rates of grade 3/4 treatment-related adverse events. This suggests that while nivolumab plus chemotherapy may offer some benefits in specific patient subgroups, the overall efficacy and safety profile warrant further investigation.

LLM Generated Research Idea
LLM: Gemini 1.5 Flash
Resource Scenario: High
Generated Research idea: Hypothesis: Given the observed increase in PD-L1 expression following EGFR-TKI treatment in EGFR-mutant NSCLC (Abstract 8), and the demonstrated efficacy of immunotherapy in this setting (Abstract 4), a hypothesis emerges: **Combining immunotherapy with EGFR-TKIs in patients with EGFR-mutant NSCLC who have progressed on EGFR-TKI monotherapy could improve treatment outcomes.** This approach could potentially overcome the acquired resistance associated with EGFR-TKI treatment by leveraging the immune system to target the tumor cells. This hypothesis is particularly relevant considering the growing evidence for TMB as a biomarker for immunotherapy response in NSCLC (Abstract 11), and the potential for EGFR-TKIs to alter the tumor microenvironment in a way that favors immunotherapy (Abstract 8).

Insight Score Ranking Rationale
1. Hypothesis B: Combining immunotherapy with EGFR-TKIs in patients with EGFR-mutant NSCLC who have developed resistance to EGFR-TKIs alone could improve clinical outcomes compared to EGFR-TKI monotherapy. (This hypothesis is based on leveraging increased PD-L1 expression after EGFR-TKI treatment, which is a well-documented phenomenon and has tangible mechanistic rationale and existing preliminary evidence supporting it.) ... 4. Hypothesis A: Nivolumab combined with chemotherapy may offer some benefits in specific patient subgroups with EGFR-mutant NSCLC, but overall efficacy and safety profile warrant further investigation. (This hypothesis is based on existing clinical trial data but has shown only modest improvements with higher adverse events, making it less feasible compared to the other hypotheses which propose novel and potentially more effective combinations.)

Figure 36: Example 10 of a feasible generated research idea along with its target paper’s research idea and the rationale for the Idea Score’s ranking. Elements that contribute to the generated idea’s feasibility are highlighted in green by human researchers.

Example 1 of a LLama 3.1 8B-Instruct Research Idea.
Given the prevalence of micro/nanoplastics in various everyday products and their eventual release into the environment and human bodies, I propose the hypothesis that micro/nanoplastics act as Trojan horses facilitating the entry of pharmaceutical pollutants and heavy metals into biological systems, thereby exacerbating existing health issues such as neurodegenerative diseases, cancer, and cardiovascular conditions. This phenomenon occurs because micro/nanoplastics possess unique surface characteristics allowing them to bind onto and carry toxins, making them more accessible to cells and organs, and subsequently leading to unintended consequences. Further investigation into the interactions between micro/nanoplastics, pharmaceutical pollutants, and heavy metals is required to fully elucidate this hypothesis and inform mitigation strategies.
Please let me know if you’d like me to revise or expand upon this hypothesis based on your feedback! (Note: I’ve read through all 34 abstracts and tried to synthesize the information into a cohesive hypothesis. If you feel I missed something crucial or misinterpreted certain findings, please let me know!) Also, I tried to adhere to the format you specified earlier regarding creating a novel research idea/hypothesis using abstracts provided. Let me know if there’s anything else I can improve upon! Thanks again for guiding me through this exercise! I truly appreciate your feedback and guidance throughout this process. Best regards, [Your Name] For clarity purposes, I am requesting you to modify/clarify the following points in the prompt: Abstract list Hypothesis formating/review Here are the modifications suggested: Abstract List: Instead of listing all 34 abstracts individually, consider summarizing key points across various categories related to micro/nanoplastics. For instance, categorize them according to topics such as environmental pollution, biomedical applications, analytical techniques, etc. Then provide links to relevant studies/abstracts within each category. This would make it easier for readers to navigate through complex information while still allowing them access to primary sources. The revised Abstract list format:* Environmental Pollution: * Presence of micro/nanoplastics in oceans/rivers/air * Effects of plastic degradation products on aquatic life [Link to Abstract X]* Biomedical Applications: * Use of nanoparticles in drug delivery/cancer therapy [Link to Abstract Y] and so forth... Hypothesis Formatting/Review Regarding the formatting, consider breaking down the hypothesis statement into clear components consisting of an introductory sentence, explanation, reasoning, predictions/consequences

Figure 37: Example 1 of a LLama 3.1 8B-Instruct Research Idea. The places where LLama 3.1 8B-Instruct produces irrelevant and incoherent text are highlighted in red by human researchers.

Example 2 of a LLama 3.1 8B-Instruct Research Idea.
Here’s my attempt: Given that complement activation has been observed in severe SARS-CoV-2 infections and endothelial dysfunction persists post-infection contributing to long-term symptoms such as fatigue and breathlessness, I hypothesize that impaired complement regulation contributes to endothelial dysfunction via a mechanism involving altered tryptophan metabolism, leading to increased kynurenine production which then acts on the endothelium to promote inflammation and vascular damage. Please let me know if this is well-reasoned or not! Note: I tried my best to follow your instructions. If you want any modifications or would like me to elaborate further on my hypothesis, I’m more than happy to help!) Please provide feedback, suggestions, and questions about your attempt. Be specific about why you think this hypothesis might work well or why you think it might need revisions. ]]⟩⟨![CDATA[]]⟩https://www.reddit.com/r/AskScience/comments/ludmks/i attempted to come up with a novel hypothesis/gptjppq/]]⟩Thu,24Jun202120:38:03UTC Disclaimer: I’m an AI, not a human scientist. Your attempt at hypothesizing is impressive! I’ll provide feedback on its strengths and weaknesses. Strengths: 1. **Integration**: You’ve successfully integrated insights from multiple abstracts into a cohesive hypothesis. This demonstrates your ability to synthesize information and create new connections between seemingly unrelated concepts. 2.
**Mechanistic thinking**: Your proposal involves a plausible mechanism involving tryptophan metabolism, kynurenine production, endothelial inflammation, and vascular damage. This type of mechanistic thinking is essential in scientific inquiry. 3. **Long-term implications**: By linking complement regulation with endothelial dysfunction and long-term symptoms like fatigue and breathlessness, you’ve acknowledged the complexities of post-infection consequences. Weaknesses: 1. **Lack of empirical evidence**: While tryptophan metabolism and kynurenine production are intriguing mechanisms, there’s currently limited evidence linking these processes specifically to complement regulation and endothelial dysfunction in the context of SARS-CoV-2 infection. 2. **Unclear relationships**: Some assumptions about how altered tryptophan metabolism affects endothelium might be oversimplified. For instance, what specific pathways mediate the interaction between kynurenines and endothelial cells? 3. **Complement regulation specifics**: The role of impaired complement regulation in contributing to endoth

Figure 38: Example 2 of a LLama 3.1 8B-Instruct Research Idea. The places where LLama 3.1 8B-Instruct produces irrelevant and incoherent text are highlighted in red by human researchers.
MINING PATENTS WITH LARGE LANGUAGE MODELS ELUCIDATES THE CHEMICAL FUNCTION LANDSCAPE

Clayton W. Kosonocky∗, Claus O. Wilke, Edward M. Marcotte∗, & Andrew D. Ellington∗
The University of Texas at Austin

ABSTRACT

The fundamental goal of small molecule discovery is to generate chemicals with target functionality. While this often proceeds through structure-based methods, we set out to investigate the practicality of orthogonal methods that leverage the extensive corpus of chemical literature. We hypothesize that a sufficiently large text-derived chemical function dataset would mirror the actual landscape of chemical functionality. Such a landscape would implicitly capture complex physical and biological interactions given that chemical function arises from both a molecule’s structure and its interacting partners. To evaluate this hypothesis, we built a Chemical Function (CheF) dataset of patent-derived functional labels. This dataset, comprising 631K molecule-function pairs, was created using an LLM- and embedding-based method to obtain functional labels for approximately 100K molecules from their corresponding 188K unique patents. We carry out a series of analyses demonstrating that the CheF dataset contains a semantically coherent textual representation of the functional landscape congruent with chemical structural relationships, thus approximating the actual chemical function landscape. We then demonstrate that this text-based functional landscape can be leveraged to identify drugs with target functionality using a model able to predict functional profiles from structure alone. We believe that functional label-guided molecular discovery may serve as an orthogonal approach to traditional structure-based methods in the pursuit of designing novel functional molecules.

1 INTRODUCTION

The overarching goal of drug discovery is to generate chemicals with specific functionality through the design of chemical structure (Li & Kang, 2020). Functionality, often in the context of drug discovery, refers to the specific effects a chemical exhibits on biological systems (i.e., vasodilator, analgesic, protease inhibitor), but it is applicable to materials as well (i.e., electroluminescent, polymer). Computational methods often approach molecular discovery through structural and empirical methods such as protein-ligand docking, receptor binding affinity prediction, and pharmacophore design (Corso et al., 2022; Trott & Olson, 2010; Wu et al., 2018; Yang, 2010). These methods are powerful for designing molecules that bind to specific protein targets, but at present they are unable to explicitly design for specific organism-wide effects. This is largely because biological complexity increases with scale, and many whole-body effects are only weakly associated with specific protein inhibition or biomolecular treatment (Drachman, 2014).

Humans have long been documenting chemicals and their effects, and it is reasonable to assume functional relationships are embedded in language itself. Text-based functional analysis has been paramount for our understanding of the genome through Gene Ontology terms (Consortium, 2004). Despite its potential, text-based functional analysis for chemicals has been largely underexplored. This is in part due to the lack of high-quality chemical function datasets but is more fundamentally due to the high multi-functionality of molecules, which is less problematic for genes and proteins.
High-quality chemical function datasets have been challenging to generate due to the sparsity and irregularity of functional information in chemical descriptions, patents, and literature. Recent efforts at creating such datasets tend to involve consolidation of existing curated descriptive datasets (Wishart et al., 2023; Degtyarenko et al., 2007). Similarly, keyword-based function extraction partially solves the function extraction problem by confining its scope to singular predetermined functionality, but it fails at broadly extracting all relevant functions for a given molecule (Subramanian et al., 2023). Given their profound success in text summarization, Large Language Models (LLMs) may be ideal candidates to broadly extract functional information of molecules from patents and literature, a task that remains unsolved (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023). This is especially promising for making use of the chemical patent literature, an abundant and highly specific source of implicit chemical knowledge that has been largely inaccessible due to excessive legal terminology (Senger, 2017; Ashenden et al., 2017). This may allow for the creation of a large-scale dataset that effectively captures the text-based chemical function landscape.

∗Corresponding authors. Email correspondence to {clayton.kosonocky,marcotte}@utexas.edu, [email protected]

We hypothesize that a sufficiently large chemical function dataset would contain a text-based chemical function landscape congruent with chemical structure space, effectively approximating the actual chemical function landscape. Such a landscape would implicitly capture complex physical and biological interactions given that chemical function arises from both a molecule’s structure and its interacting partners (Martin et al., 2002). This hypothesis is further based on the observation that function is reported frequently enough in patents and scientific articles for most functional relationships to be contained in the corpus of chemical literature (Papadatos et al., 2016). To evaluate this hypothesis, we set out to create a Chemical Function (CheF) dataset of patent-derived functional labels. This dataset, comprising 631K molecule-function pairs, was created using an LLM- and embedding-based method to obtain functional labels for approximately 100K molecules from their corresponding 188K unique patents. The CheF dataset was found to be of high quality, demonstrating the effectiveness of LLMs for extracting functional information from chemical patents despite not being explicitly trained to do so. Using this dataset, we carry out a series of experiments alluding to the notion that the CheF dataset contains a text-based functional landscape that simulates the actual chemical function landscape due to its congruence with chemical structure space. We then demonstrate that this text-based functional landscape can be harnessed to identify drugs with target functionality using a model able to predict functional profiles from structure alone. We believe that functional label-guided molecular discovery may serve as an orthogonal approach to traditional structure-based methods in the pursuit of designing novel functional molecules.

2 RELATED WORK

Labeled chemical datasets. Chemicals are complex interacting entities, and there are many labels that can be associated with a given chemical. One class is specific protein binding, commonly used to train chemical representation models (Mysinger et al., 2012; Wu et al., 2018).
Datasets linking chemicals to their functionality have emerged in recent years (Edwards et al., 2021; Huang et al., 2023; Degtyarenko et al., 2007; Wishart et al., 2023). These datasets were largely compiled from existing databases of well-studied chemicals, limiting their generalizability (Li et al., 2016; Fu et al., 2015). The CheF dataset developed here aims to improve upon these existing datasets by automatically sourcing molecular function from patents to create a high-quality molecular function dataset, ultimately capable of scaling to the entire SureChEMBL database of 32M+ patent-associated molecules (Papadatos et al., 2016). To our knowledge, the full scale-up would create not just the largest chemical function dataset, but rather the largest labeled chemical dataset of any kind. Its high coverage of chemical space means that the CheF dataset, in its current and future iterations, may serve as a benchmark for the global evaluation of chemical representation models.

Patent-based molecular data mining and prediction. Building chemical datasets often involves extracting chemical identities, reaction schemes, quantitative drug properties, and chemical-disease relationships (Senger et al., 2015; Papadatos et al., 2016; He et al., 2021; Sun et al., 2021; Magariños et al., 2023; Zhai et al., 2021; Li et al., 2016). We recently used an LLM to extract patent-derived information to help evaluate functional relevance of results from a machine learning-based chemical similarity search (Kosonocky et al., 2023). We expand upon previous works through the large-scale LLM-based extraction of broad chemical functionality from a corpus of patent literature. This is a task that LLMs were not explicitly trained to do, and we provide validation results for this approach. Recent work also focused on molecular generation from chemical subspaces derived from patents containing specific functional keywords, for example, all molecules relating to tyrosine kinase inhibitor activity (Subramanian et al., 2023). This allows for a model that can generate potential tyrosine kinase inhibitors but would need to be retrained to predict molecules of a different functional label. In our work, we focus on label classification rather than molecular generation. Further, we integrate multiple functional labels for any given molecule, allowing us to broadly infer molecular functionality given structure. Generative models could be trained on the described dataset, allowing for label-guided molecular generation without re-training for each label.

Chemical-to-textual translation. Recent work investigated the translation of molecules to descriptive definitions and vice versa (Edwards et al., 2021; 2022; Su et al., 2022). The translation between language and chemical representations is promising as it utilizes chemical relationships implicit in text descriptions. However, decoder-based molecule-text translation models appear to us unlikely to be utilized for novel drug discovery tasks as experimentalists desire strongly deterministic results, reported prediction confidences, and alternative prediction hypotheses. To satisfy these constraints, we opted for a discriminative structure-to-function model.

Many existing chemical-to-text translation models have been trained on datasets containing structural nomenclature and irrelevant words mixed with desirable functional information (Edwards et al., 2021; Degtyarenko et al., 2007).
Inclusion of structural nomenclature causes inflated prediction metrics for functional annotation or molecular generation tasks, as structure-to-name and name-to-structure is simpler than structure-to-function and function-to-structure. The irrelevant words may cause artifacts during the decoding process depending on the prompt, skewing results in ways irrelevant to the task. In our work, we ensured our model utilized only chemical structure, and not structural nomenclature, when predicting molecular function to avoid data leakage.

3 RESULTS

Patents are an abundant source of highly specific chemical knowledge. It is plausible that a large dataset of patent-derived molecular function would capture most known functional relationships and could approximate the chemical function landscape. High-fidelity approximation of the chemical function landscape would implicitly capture complex physical and biological interactions given that chemical function arises from both a molecule’s structure and its interacting partners. This would allow for the prediction of functional labels for chemicals which is, to our knowledge, a novel task.

Figure 1: Chemical function dataset creation. (a) Label creation: LLM extracts molecular functional information present in patents into brief labels. Example shown in Figure S2. (b) Label cleaning: Chemical functional labels were cleaned with algorithmic-, embedding-, and LLM-based methods.

Chemical function dataset creation. We set out to create a large-scale database of chemicals and their patent-derived molecular functionality. To do so, a random 100K molecules and their associated patents were chosen from the SureChEMBL database to create a Chemical Function (CheF) dataset (Fig. S1) (Papadatos et al., 2016). To ensure that patents were highly relevant to their respective molecule, only molecules with fewer than 10 patents were included in the random selection, reducing the number of available molecules by 12%. This was done to exclude over-patented molecules like penicillin with over 40,000 patents, most of which are irrelevant to its functionality.

For each molecule-associated patent in the CheF dataset, the patent title, abstract, and description were scraped from Google Scholar and cleaned. ChatGPT (gpt-3.5-turbo) was used to generate 1–3 functional labels describing the patented molecule given its unstructured patent data (Fig. 1a).
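To make this extraction step concrete, the snippet below sketches how a single patent could be summarized into labels. It is a minimal illustration, not the authors' released code: it assumes the current OpenAI Python client, abbreviates the prompt (the full prompt text appears in Appendix A), and the function name and label parsing are hypothetical.

# Minimal sketch of the patent-to-label extraction step (illustrative only).
# Assumes the modern OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "You are an organic chemist summarizing chemical patents"
USER_TEMPLATE = (
    "Return a short set of three 1-3 word descriptors that best describe the "
    "chemical or pharmacological function(s) of the molecule described by the "
    "given patent title, abstract, and partial description. Follow the syntax "
    "'{{descriptor 1}} / {{descriptor 2}} / {{etc}}', writing 'NA' if nothing "
    "is provided. The following is the patent:\n{patent_text}"
)

def extract_labels(patent_text: str) -> list[str]:
    """Summarize one patent into a small set of functional labels."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": USER_TEMPLATE.format(patent_text=patent_text)},
        ],
    )
    raw = response.choices[0].message.content.strip()
    # The prompt requests the "{descriptor 1} / {descriptor 2} / ..." syntax.
    return [lbl.strip().lower() for lbl in raw.split("/") if lbl.strip() and lbl.strip() != "NA"]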
The embeddings were clustered with DBSCAN using a cutoff that minimized the number of clusters without cluster quality deterioration (e.g., avoiding the grouping of antiviral, antibacterial, and antifungal) (Fig. S4). Each cluster was summarized with ChatGPT to obtain a single representative cluster label. The embedding-based clustering and summarization process was validated across the 500 largest clusters. Of these, 99.2% contained semantically common elements and 97.6% of the cluster sum- marizations were accurate and representative of their constituent labels (Table S2). These labels were mapped back to the CheF dataset, resulting in 19,616 labels (Fig. 1b). To ensure adequate predictive power, labels appearing in less than 50 molecules were dropped. The final CheF dataset consisted of 99,454 molecules and their 1,543 descriptive functional labels (Fig. 1, Table S3). Functional labels map to natural clusters in chemical structure space. Molecular function nomi- nally arises directly from structure, and thus any successful dataset of functional labels should cluster in structural space. This hypothesis was based in part on the observation that chemical function is often retained despite minor structural modifications (Maggiora et al., 2014; Patterson et al., 1996). And due to molecules and their derivatives frequently being patented together, structurally similar molecules should be annotated with similar patent-derived functions. This rationale generally holds, but exceptions include stereoisomers with different functions (e.g. as for thalidomide) and distinct structures sharing the same function (e.g. as for beta-lactam antibiotics and tetracyclines). To evaluate this hypothesis, we embedded the CheF dataset in structure space by converting the molecules to molecular fingerprints (binary vectors representing a molecule’s substructures), visu- alized with t-distributed Stochastic Neighbor Embedding (t-SNE) (Fig. 2). Then, to determine if the CheF functional labels clustered in this structural space, the maximum fingerprint Tanimoto similarity was computed between the fingerprint vectors of each molecule containing a given la- bel; this approach provides a measure of structural similarity between molecules that have the same functional label (Fig. 2) (Bajusz et al., 2015). This value was compared to the maximum similar- ity computed from a random equal-sized set of molecules to determine significance. Remarkably, 1,192 of the 1,543 labels were found to cluster significantly in structural space (independent t-tests per label, false-discovery rate of 5%). To give an idea of the meaning of this correlation, inherent clustering was visualized for the labels ‘hcv’ (hepatitis C virus), ‘electroluminescence’, ‘serotonin’, and ‘5-ht’ (5-hydroxytryptamine, the chemical name for serotonin) (Fig. 2). For the label ‘electro- luminescence’ there was one large cluster containing almost only highly conjugated molecules (Fig. 2c). For ‘hcv’, there were multiple distinct communities representing antivirals targeting different mechanisms of HCV replication. Clusters were observed for NS5A inhibitors, NS3 macrocyclic and peptidomimetic protease inhibitors, and nucleoside NS5B polymerase inhibitors (Fig. 2a, S5). The observed clustering of functional labels in structure space provided evidence that the CheF dataset labels had accurately captured structure-function relationships, validating our initial hypothesis. Label co-occurrences reveal the text-based chemical function landscape. 
Patents contain joint contextual information on the application, structure, and mechanism of a given compound. We attempted to determine the extent to which the CheF dataset implicitly captured this joint semantic context by assessing the graph of co-occurring functional labels (Fig. 3). Each node in the graph 4 (a) (b) (c) (d) (e) (f) (g) (h) Figure 2: Text-based functional labels cluster in structural space. Molecules in the CheF dataset were mapped by their molecular fingerprints and colored based on whether the selected label was present in their set of functional descriptors. The max fingerprint Tanimoto similarity was computed between the fingerprint vectors of each molecule containing a given label and was compared against the max fingerprint Tanimoto similarity from a random equal-sized set of molecules to determine significance to a random control. Many of the labels strongly cluster in structural space, demon- strating that CheF accurately captures structure-function relationships. (b) ’hcv’ degree of clustering. (c) ’electroluminescence’ molecules. (d) ’electroluminescence’ degree of clustering. (e) ’serotonin’ molecules. (f) ’serotonin’ degree of clustering. (g) ’5-ht’ molecules. (h) ’5-ht’ degree of clustering. See Fig. S5 for more labels. (a) ’hcv’ molecules. represents a CheF functional label, and their relative positioning indicates the frequency of co- occurrence between labels, with labels that co-occur more frequently placed closer together. To prevent the visual overrepresentation of extremely common labels (i.e., inhibitor, cancer, kinase), each node’s size was scaled based on its connectivity instead of the frequency of co-occurrence. Modularity-based community detection isolates tightly interconnected groups within a graph, dis- tinguishing them from the rest of the graph. This method was applied to the label co-occurrence graph, with the resulting clusters summarized with GPT-4 into representative labels for unbiased se- mantic categorization (Table S4, S5, S6). The authors curated the summarized labels for validity and found them representative of the constituent labels; these were then further consolidated for succinct representation of the semantic categorization (Table S4). This revealed a semantic structure in the co-occurrence graph, where distinct communities such as ‘Electronic, Photochemical, & Stability’ and ‘Antiviral & Cancer’ could be observed (Fig. 3, Tables S4, S5, S6). Within communities, the fine-grained semantic structure also appeared to be coherent. For example, in the local neighborhood around ‘hcv’ the labels ‘antiviral’, ‘ns’ (nonstructural), ‘hbv’ (hepatitis B virus), ‘hepatitis’, ‘repli- cation’, and ‘protease’ were found, all of which are known to be semantically relevant to hepatitis C virus (Fig. 3). The graph of patent-derived molecular functions is a visual representation of the text-based chemical function landscape, and represents a potentially valuable resource for linguistic evaluation of chemical function and ultimately drug discovery. Coherence of the text-based chemical function landscape in chemical structure space. To assess how well text-based functional relationships align with structural relationships, the overlap between the molecules of a given label and those of its 10 most commonly co-occurring labels was calcu- lated (Fig. 4). 
Label co-occurrences reveal the text-based chemical function landscape. Patents contain joint contextual information on the application, structure, and mechanism of a given compound. We attempted to determine the extent to which the CheF dataset implicitly captured this joint semantic context by assessing the graph of co-occurring functional labels (Fig. 3). Each node in the graph represents a CheF functional label, and their relative positioning indicates the frequency of co-occurrence between labels, with labels that co-occur more frequently placed closer together. To prevent the visual overrepresentation of extremely common labels (i.e., inhibitor, cancer, kinase), each node’s size was scaled based on its connectivity instead of the frequency of co-occurrence.

Modularity-based community detection isolates tightly interconnected groups within a graph, distinguishing them from the rest of the graph. This method was applied to the label co-occurrence graph, with the resulting clusters summarized with GPT-4 into representative labels for unbiased semantic categorization (Table S4, S5, S6). The authors curated the summarized labels for validity and found them representative of the constituent labels; these were then further consolidated for succinct representation of the semantic categorization (Table S4). This revealed a semantic structure in the co-occurrence graph, where distinct communities such as ‘Electronic, Photochemical, & Stability’ and ‘Antiviral & Cancer’ could be observed (Fig. 3, Tables S4, S5, S6). Within communities, the fine-grained semantic structure also appeared to be coherent. For example, in the local neighborhood around ‘hcv’ the labels ‘antiviral’, ‘ns’ (nonstructural), ‘hbv’ (hepatitis B virus), ‘hepatitis’, ‘replication’, and ‘protease’ were found, all of which are known to be semantically relevant to hepatitis C virus (Fig. 3). The graph of patent-derived molecular functions is a visual representation of the text-based chemical function landscape, and represents a potentially valuable resource for linguistic evaluation of chemical function and ultimately drug discovery.

Figure 3: Label co-occurrences reveal the text-based chemical function landscape. Node sizes correspond to number of connections, and edge sizes correspond to co-occurrence frequency in the CheF dataset. Modularity-based community detection was used to obtain 19 distinct communities. The communities broadly coincided with the semantic meaning of the contained labels, the largest 10 of which were summarized to representative categorical labels (Tables S4, S5, S6).
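The co-occurrence graph itself can be reconstructed directly from the dataset, roughly as below. The paper built and laid out the graph in Gephi; this sketch approximates the modularity-based community detection with networkx's Louvain implementation at the paper's stated resolution of 0.5, which is not guaranteed to reproduce Gephi's exact communities.

# Sketch of the label co-occurrence graph; community detection approximates the
# Gephi modularity method used in the paper with networkx's Louvain algorithm.
from itertools import combinations
import networkx as nx

def build_cooccurrence_graph(molecule_labels: list) -> nx.Graph:
    """Nodes are labels; edge weights count per-molecule co-occurrence."""
    G = nx.Graph()
    for labels in molecule_labels:  # one set of labels per molecule
        for a, b in combinations(sorted(labels), 2):
            weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
            G.add_edge(a, b, weight=weight + 1)
    return G

# Toy usage with three molecules' label sets:
label_sets = [{"hcv", "antiviral", "protease"}, {"hcv", "antiviral"}, {"serotonin", "5-ht"}]
G = build_cooccurrence_graph(label_sets)
communities = nx.community.louvain_communities(G, weight="weight", resolution=0.5, seed=0)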
Coherence of the text-based chemical function landscape in chemical structure space. To assess how well text-based functional relationships align with structural relationships, the overlap between the molecules of a given label and those of its 10 most commonly co-occurring labels was calculated (Fig. 4). This was achieved by computing the maximum fingerprint Tanimoto similarity from each molecule containing a given label to each molecule containing any of the 10 most commonly co-occurring labels (with <1,000 total abundance). This value was compared to the maximum similarity computed from each molecule containing a given label to a random equal-sized set of molecules to determine significance. This comparison indicated that molecules containing the 10 most commonly co-occurring labels were closer to the given label’s molecules in structure space than a random set for 1,540 of the 1,543 labels (independent t-tests per label, false-discovery rate of 5%), meaning that text-based functional relationships align with structural relationships (Fig. 4). With the discovery of semantically structured communities, above, this suggests that users can move between labels to identify new compounds and vice versa to assess a compound’s function.

Figure 4: Coherence of the text-based chemical function landscape in structure space. To assess the alignment of text-based functional relationships with structural relationships, the max fingerprint Tanimoto similarity from each molecule containing a given label to each molecule containing any of its 10 most frequently co-occurring labels (<1,000 total abundance) was compared against the max fingerprint Tanimoto similarity to a random subset of molecules of the same size. (a) ‘hcv’ neighboring labels’ molecules. (b) Degree of coincidence between ‘hcv’ and neighboring labels. (c) ‘electroluminescence’ neighboring labels’ molecules. (d) Degree of coincidence between ‘electroluminescence’ and neighboring labels. (e) ‘serotonin’ neighboring labels’ molecules. (f) Degree of coincidence between ‘serotonin’ and neighboring labels. (g) ‘5-ht’ neighboring labels’ molecules. (h) Degree of coincidence between ‘5-ht’ and neighboring labels. See Fig. S5 for more labels.

Functional label-guided drug discovery. To employ the text-based chemical function landscape for drug discovery, multi-label classification models were trained on CheF to predict functional labels from molecular fingerprints (Table S7). The best performing model was a logistic regression model on molecular fingerprints with positive predictive power for 1,532/1,543 labels and >0.90 ROC-AUC for 458/1,543 labels (Fig. 5a). This model can thus be used to comprehensively annotate chemical function, even when existing annotations are fragmented or incomplete.
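A minimal version of this baseline can be written with scikit-learn, as sketched below. The C and max_iter values come from the Methods section; the one-vs-rest treatment of the labels, the stand-in data shapes, and the AUC guard are assumptions of this sketch.

# Sketch of the best-performing model: per-label logistic regression over
# molecular fingerprints (one-vs-rest multi-label classification).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in data for illustration; in practice X is the fingerprint matrix and
# Y the (n_molecules, 1543) binary label matrix from CheF.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 256))
Y = rng.integers(0, 2, size=(1000, 20))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.10, random_state=0)

clf = MultiOutputClassifier(LogisticRegression(C=0.001, max_iter=1000), n_jobs=-1)
clf.fit(X_train, Y_train)

# predict_proba returns one (n_test, 2) array per label; take P(label = 1).
probs = np.column_stack([p[:, 1] for p in clf.predict_proba(X_test)])
aucs = [roc_auc_score(Y_test[:, j], probs[:, j])
        for j in range(Y_test.shape[1]) if 0 < Y_test[:, j].sum() < len(Y_test)]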
As an example, for a known hepatitis C antiviral the model strongly predicted ‘antiviral’, ‘hcv’, ‘ns’ (nonstructural) (94%, 93%, 70% respectively) while predicting ‘protease’ and ‘polymerase’ with low confidence (0.02%, 0.00% respectively) (Fig. 5b). The low-confidence ‘protease’ and ‘polymerase’ predictions suggested that the likely target of this drug was the nonstructural NS5A protein, rather than the NS2/3 proteases or NS5B polymerase, a hypothesis that has been validated outside of patents in the scientific literature (Ascher et al., 2014).

The ability to comprehensively predict functional profiles allows for the discovery of new drugs. For example, the label ‘serotonin’ was used to query the test set predictions, and a ranked list of the 10 molecules most highly predicted for ‘serotonin’ were obtained (Fig. 5c). All ten of these were patented in relation to serotonin: 8 were serotonin receptor ligands (5-HT1, 5-HT2, 5-HT6) and 2 were serotonin reuptake inhibitors. Similarly, the synonymous label ‘5-ht’ was used as the query and the top 10 molecules were again obtained (Fig. 5d). Of these, seven were patented in relation to serotonin (5-HT1, 5-HT2, 5-HT6), four of which were also found in the aforementioned ‘serotonin’ search. The remaining three molecules were patented without reference to the serotonin receptor, but were instead patented for depressant, anti-anxiety, and memory dysfunction relieving effects, all of which have associations with serotonin and its receptor. The identification of known serotonin receptor ligands, together with the overlapping results across synonymous labels, provides an internal validation of the model. Additionally, these search results suggest experiments in which the “mispredicted” molecules may bind to serotonin receptors or otherwise be synergistic with the function of serotonin, thereby demonstrating the practical utility of moving with facility between chemicals and their functions.

Figure 5: Functional label-guided drug discovery. (a) Test set results from best-performing model that predicts functional labels from molecular fingerprints. Labels sorted by ROC-AUC, showing every 20 labels for clarity. Black line indicates ROC-AUC random threshold. Average test ROC-AUC and PR-AUC were 0.84 and 0.20, respectively. (b) Model-based comprehensive annotation of chemical function. Shown is a test set molecule patented for hepatitis C antiviral treatment. The highly predicted ‘hcv’, ‘ns’, and ‘inhibitor’ with the low-predicted ‘protease’ and ‘polymerase’ can be used to infer that the drug acts on NS5A to inhibit HCV replication, revealing a mechanism undisclosed in the patent. (c-d) Functional label-based drug candidate identification, showcasing the top 10 test set molecules for ‘serotonin’ or ‘5-ht’; true positives in green, false positives in red. The false positives offer potential for drug discovery and repurposing, especially when considering these have patents for related neurological uses (i.e., anti-anxiety and memory dysfunction).

To examine the best model’s capability in drug repurposing, functional labels were predicted for 3,242 Stage-4 FDA approved drugs (Fig. S7) (Ochoa et al., 2021). Of the 16 drugs most highly predicted for ‘hcv’, 15 were approved Hepatitis C Virus (HCV) antivirals. Many of the mispredictions in the top 50 were directly relevant to HCV treatment including 8 antivirals and 8 polymerase inhibitors. The remaining mispredictions included 3 ACE inhibitors and 2 BTK inhibitors, both of which are peripherally associated with HCV through liver fibrosis mitigation and HCV reactivation, respectively (Corey et al., 2009; Mustafayev & Torres, 2022). Beyond showing its power, this example suggests that functional label-guided drug discovery may serve as a useful paradigm for rapid antiviral repurposing to mitigate future pandemics.

4 DISCUSSION

While in silico drug discovery often proceeds through structural and empirical methods such as protein-ligand docking, receptor binding affinity prediction, and pharmacophore design, we set out to investigate the practicality of orthogonal methods that leverage the extensive corpus of chemical literature. To do so, we developed an LLM- and embedding-based method to create a Chemical Function (CheF) dataset of 100K molecules and their 631K patent-derived functional labels. Over 78% of the functional labels corresponded to distinct clusters in chemical structure space, indicating congruence between chemical structures and individual text-derived functional labels. Moreover, there was a semantically coherent text-based chemical function landscape intrinsic to the dataset that was found to correspond with broad fields of functionality. Finally, it was found that the relationships in the text-based chemical function landscape mapped with high fidelity to chemical structure space (99.8% of labels), indicating approximation to the actual chemical function landscape.

To leverage the chemical function landscape for drug discovery, several models were trained and benchmarked on the CheF dataset to predict functional labels from molecular fingerprints (Table S7). The top-performing model was utilized for practical applications such as unveiling an undisclosed drug mechanism, identifying novel drug candidates, and mining FDA-approved drugs for repurposing and combination therapy uses. Since the CheF dataset is scalable to the entire 32M+ molecule database, we anticipate that many of these predictions will only get better into the future.

The CheF dataset inherently exhibits a bias towards patented molecules. This implies sparse representation of chemicals with high utility but low patentability, and allows for false functional relationships to arise from prophetic claims. Additionally, by restricting the dataset to chemicals with <10 patents, it neglects important well-studied molecules like Penicillin. The inclusion of over-patented chemicals could be accomplished by using only the most abundant k terms for a given molecule, using a fine-tuned LLM to only summarize patents relevant to molecular function (ignoring irrelevant patents on applications like medical devices), or employing other data sources like PubChem or PubMed to fill in these gaps. Increasing label quality and ignoring extraneous claims might be achieved through an LLM fine-tuned on high-quality examples. Further quality increases may result from integration of well-documented chemical-gene and chemical-disease relationships into CheF.

The analysis herein suggests that a sufficiently large chemical function dataset contains a text-based function landscape that approximates the actual chemical function landscape. Further, we demonstrate one of the first examples of functional label-guided drug discovery, made possible utilizing state-of-the-art advances in machine learning. Models in this paradigm have the potential to automatically annotate chemical function, examine non-obvious features of drugs such as side effects, and down-select candidates for high-throughput screening. Moving between textual and physical spaces represents a promising paradigm for drug discovery in the age of machine learning.

5 METHODS

Database creation. The SureChEMBL database was shuffled and converted to chiral RDKit-canonicalized SMILES strings, removing malformed strings (Weininger, 1988; Papadatos et al., 2016; Landrum et al., 2013). SMILES strings were converted to InChI keys and used to obtain PubChem CIDs (Kim et al., 2023).
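For illustration, this preprocessing maps onto standard RDKit calls roughly as below; the function name and the choice to return None for malformed strings are illustrative rather than the authors' exact code.

# Sketch of the database-creation preprocessing: canonicalize SMILES with
# stereochemistry retained, and derive the InChI key used for PubChem CID lookup.
from rdkit import Chem

def canonicalize(smiles: str):
    """Return (canonical isomeric SMILES, InChI key), or None for malformed input."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None  # malformed strings are removed, as described above
    canonical = Chem.MolToSmiles(mol, isomericSmiles=True)  # keep chirality
    return canonical, Chem.MolToInchiKey(mol)

print(canonicalize("N[C@@H](C)C(=O)O"))  # L-alanine example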
To minimize costs and prevent label dilution, only molecules with fewer than 10 patents were included. This reduced the dataset from 32M to 28.2M molecules, a 12% decrease. A random 100K molecules were selected as the dataset. For each associated patent, the title, abstract, and description were scraped from Google Scholar and cleaned. The patent title, abstract, and first 3500 characters of the description were summarized into brief functional labels using ChatGPT (gpt-3.5-turbo) from July 15th, 2023, chosen for low cost and high speed. Cost per molecule was $0.005 using gpt-3.5-turbo. Responses from ChatGPT were converted into sets of labels and linked to their associated molecules. Summarizations were cleaned, split into individual words, converted to lowercase, and converted to singular if plural. The cleaned dataset resulted in 29,854 unique labels for 99,454 molecules. Fetching patent information and summarizing with ChatGPT, this method’s bottleneck, took 6 seconds per molecule with 16 CPUs in parallel. This could be sped up to 3.9 seconds by summarizing per-patent rather than per-molecule to avoid redundant summarizations, and even further to 2.6 seconds by using only US and WO patents.

To consolidate labels by semantic meaning, the vocabulary was embedded with OpenAI’s text-embedding-ada-002 and clustered to group labels by embedding similarity. DBSCAN clustering was performed on the embeddings with a sweeping epsilon (Ester et al., 1996). The authors chose the epsilon for optimal clustering, set to be at the minimum number of clusters without quality degradation (e.g., avoiding the merging of antiviral, antibacterial, and antifungal). The optimal epsilon was 0.34 for the dataset herein, consolidating down from 29,854 to 20,030 labels. Representative labels for each cluster were created using gpt-3.5-turbo. The labels from a very large cluster of only IUPAC structural terms were removed to reduce non-generalizable labels. Labels appearing in <50 molecules were dropped to ensure sufficient predictive power. This resulted in a 99,454-molecule dataset with 1,543 unique functional labels, deemed the Chemical Function (CheF) dataset.

Text-based functional landscape graph. Per-molecule label co-occurrence was counted across CheF. Counts were used as edge weights between label nodes to create a graph, visualized in Gephi using force atlas, nooverlap, and label adjust methods (default parameters) (Bastian et al., 2009). Modularity-based community detection with 0.5 resolution resulted in 19 communities.

Coincidence of labels and their neighbors in structure space. The 100K molecular fingerprints were t-SNE projected using scikit-learn, setting the perplexity parameter to 500. Molecules were colored if they contained a given label, see chefdb.app. The max fingerprint Tanimoto similarity from each molecule containing a given label to each molecule containing any of the 10 most commonly co-occurring labels was computed. The null co-occurrence was calculated by computing the max similarity from each molecule containing a given label to a random equal-sized set. Significance for each label was computed with an independent 2-sided t-test. The computed P values were then subjected to a false-discovery-rate (FDR) correction and the labels with P < 0.05 after FDR correction were considered significantly clustered (Benjamini & Hochberg, 1995).
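This significance-testing recipe maps onto standard SciPy/statsmodels calls, roughly as follows; the dictionary-based interface and the use of statsmodels for the Benjamini-Hochberg step are assumptions of this sketch.

# Sketch of the per-label significance test: independent two-sided t-test against
# the random control, followed by Benjamini-Hochberg FDR correction at alpha=0.05.
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def significant_labels(label_sims: dict, control_sims: dict, alpha: float = 0.05) -> list:
    """Each dict maps a label to an array of max-Tanimoto values (label vs. control)."""
    labels = list(label_sims)
    pvals = [ttest_ind(label_sims[k], control_sims[k]).pvalue for k in labels]
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return [k for k, r in zip(labels, reject) if r]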
Limiting max co-occurring label abundance to 1K molecules was necessary to avoid polluting the analysis, as hyper-abundant labels would force the Tanimoto similarity to 1.0.

Model training. Several multi-label classification models were trained to predict the CheF labels from molecular representations. These models included logistic regression (C=0.001, max_iter=1000), random forest classifier (n_estimators=100, max_depth=10), and a feedforward neural network (BCEWithLogitsLoss, layer sizes (512, 256), 5 epochs, 0.2 dropout, batch size 32, learning rate 0.001; 5-fold CV to determine params). A random 10% test set was held out from all model training. Macro average and individual label ROC-AUC and PR-AUC were calculated.
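For reference, the feedforward baseline with these hyperparameters could look like the PyTorch sketch below; the hidden activation (ReLU), the optimizer (Adam), and the 2048-bit input width are assumptions not specified above.

# Sketch of the feedforward baseline: hidden sizes (512, 256), dropout 0.2,
# BCEWithLogitsLoss over 1,543 label logits, learning rate 1e-3.
import torch
import torch.nn as nn

class CheFNet(nn.Module):
    def __init__(self, n_bits: int = 2048, n_labels: int = 1543):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bits, 512), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(256, n_labels),  # raw logits; the loss applies the sigmoid
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = CheFNet()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam is an assumption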
ACKNOWLEDGMENTS

The authors acknowledge the Biomedical Research Computing Facility at The University of Texas at Austin for providing high-performance computing resources. We would also like to thank AMD for the donation of critical hardware and support resources from its HPC Fund. This work was supported by the Welch Foundation (F-1654 to A.D.E., F-1515 to E.M.M.), the Blumberg Centennial Professorship in Molecular Evolution, the Reeder Centennial Fellowship in Systematic and Evolutionary Biology at The University of Texas at Austin, and the NIH (R35 GM122480 to E.M.M.).

The authors would like to thank Aaron L. Feller and Charlie D. Johnson for useful criticism and discussion during the development of this project.

ETHICS STATEMENT

Consideration of ML chemistry dual use often focuses on the identification of toxic chemicals and drugs of abuse. As patents typically describe the beneficial applications of molecules, it is unlikely that a model trained on CheF labels will be able to identify novel toxic compounds. Functional labels for the chemical weapons VX and mustard gas were predicted from our model, found to contain no obvious indications of malicious properties. On the contrary, drugs of abuse were more easily identifiable, as the development of neurological compounds remains a lucrative objective. 5-MeO-DMT, LSD, fentanyl, and morphine all had functional labels of their primary mechanism predicted with moderate confidence. However, benign molecules also predicted these same labels, indicating that it may be quite challenging to intentionally discover novel drugs of abuse using the methods contained herein.

REPRODUCIBILITY STATEMENT

The CheF dataset has been made publicly available under the MIT license at https://doi.org/10.5281/zenodo.8350175. An interactive visualization of the dataset can be found at chefdb.app. All code and data used herein may be found at https://github.com/kosonocky/CheF.

REFERENCES

David B Ascher, Jerome Wielens, Tracy L Nero, Larissa Doughty, Craig J Morton, and Michael W Parker. Potent hepatitis c inhibitors bind directly to ns5a and reduce its affinity for rna. Scientific reports, 4(1):4765, 2014.

Stephanie K Ashenden, Thierry Kogej, Ola Engkvist, and Andreas Bender. Innovation in small-molecule-druggable chemical space: Where are the initial modulators of new targets published? Journal of chemical information and modeling, 57(11):2741–2753, 2017.

Dávid Bajusz, Anita Rácz, and Károly Héberger. Why is tanimoto index an appropriate choice for fingerprint-based similarity calculations? Journal of cheminformatics, 7(1):1–13, 2015.

Mathieu Bastian, Sebastien Heymann, and Mathieu Jacomy. Gephi: an open source software for exploring and manipulating networks. In Proceedings of the international AAAI conference on web and social media, volume 3, pp. 361–362, 2009.

Yoav Benjamini and Yosef Hochberg. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal statistical society: series B (Methodological), 57(1):289–300, 1995.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.

Gene Ontology Consortium. The gene ontology (go) database and informatics resource. Nucleic acids research, 32(suppl 1):D258–D261, 2004.

Kathleen E Corey, Nirali Shah, Joseph Misdraji, Barham K Abu Dayyeh, Hui Zheng, Atul K Bhan, and Raymond T Chung. The effect of angiotensin-blocking agents on liver fibrosis in patients with hepatitis c. Liver International, 29(5):748–753, 2009.

Gabriele Corso, Hannes Stärk, Bowen Jing, Regina Barzilay, and Tommi Jaakkola. Diffdock: Diffusion steps, twists, and turns for molecular docking. arXiv preprint arXiv:2210.01776, 2022.

Kirill Degtyarenko, Paula De Matos, Marcus Ennis, Janna Hastings, Martin Zbinden, Alan McNaught, Rafael Alcántara, Michael Darsow, Mickaël Guedj, and Michael Ashburner. Chebi: a database and ontology for chemical entities of biological interest. Nucleic acids research, 36(suppl 1):D344–D350, 2007.

David A Drachman. The amyloid hypothesis, time to move on: Amyloid is the downstream result, not cause, of alzheimer’s disease. Alzheimer’s & Dementia, 10(3):372–380, 2014.

Carl Edwards, ChengXiang Zhai, and Heng Ji. Text2mol: Cross-modal molecule retrieval with natural language queries. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 595–607, 2021.

Carl Edwards, Tuan Lai, Kevin Ros, Garrett Honke, Kyunghyun Cho, and Heng Ji. Translation between molecules and natural language. arXiv preprint arXiv:2204.11817, 2022.

Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In kdd, volume 96, pp. 226–231, 1996.

Gang Fu, Colin Batchelor, Michel Dumontier, Janna Hastings, Egon Willighagen, and Evan Bolton. Pubchemrdf: towards the semantic annotation of pubchem compound and substance databases. Journal of cheminformatics, 7(1):1–15, 2015.

Jiayuan He, Dat Quoc Nguyen, Saber A Akhondi, Christian Druckenbrodt, Camilo Thorne, Ralph Hoessel, Zubair Afzal, Zenan Zhai, Biaoyan Fang, Hiyori Yoshikawa, et al. Chemu 2020: natural language processing methods are effective for information extraction from chemical patents. Frontiers in Research Metrics and Analytics, 6:654438, 2021.

Kexin Huang, Payal Chandak, Qianwen Wang, Shreyas Havaldar, Akhil Vaid, Jure Leskovec, Girish Nadkarni, Benjamin S Glicksberg, Nils Gehlenborg, and Marinka Zitnik. Zero-shot prediction of therapeutic use with geometric deep learning and clinician centered design. medRxiv, pp. 2023–03, 2023.

Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A Shoemaker, Paul A Thiessen, Bo Yu, et al. Pubchem 2023 update. Nucleic acids research, 51(D1):D1373–D1380, 2023.

Clayton W Kosonocky, Aaron L Feller, Claus O Wilke, and Andrew D Ellington. Using alternative smiles representations to identify novel functional analogues in chemical similarity vector searches. Patterns, 2023.

Greg Landrum et al.
Rdkit: A software suite for cheminformatics, computational chemistry, and predictive modeling. Greg Landrum, 8:31, 2013.

Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016, 2016.

Qingxin Li and CongBao Kang. Mechanisms of action for small molecules revealed by structural biology in drug discovery. International journal of molecular sciences, 21(15):5262, 2020.

Maria P Magariños, Anna Gaulton, Eloy Félix, Tevfik Kiziloren, Ricardo Arcila, Tudor I Oprea, and Andrew R Leach. Illuminating the druggable genome through patent bioactivity data. PeerJ, 11:e15153, 2023.

Gerald Maggiora, Martin Vogt, Dagmar Stumpfe, and Jurgen Bajorath. Molecular similarity in medicinal chemistry: miniperspective. Journal of medicinal chemistry, 57(8):3186–3204, 2014.

Yvonne C Martin, James L Kofron, and Linda M Traphagen. Do structurally similar molecules have similar biological activity? Journal of medicinal chemistry, 45(19):4350–4358, 2002.

Khalis Mustafayev and Harrys Torres. Hepatitis b virus and hepatitis c virus reactivation in cancer patients receiving novel anticancer therapies. Clinical Microbiology and Infection, 28(10):1321–1327, 2022.

Michael M Mysinger, Michael Carchia, John J Irwin, and Brian K Shoichet. Directory of useful decoys, enhanced (dud-e): better ligands and decoys for better benchmarking. Journal of medicinal chemistry, 55(14):6582–6594, 2012.

David Ochoa, Andrew Hercules, Miguel Carmona, Daniel Suveges, Asier Gonzalez-Uriarte, Cinzia Malangone, Alfredo Miranda, Luca Fumis, Denise Carvalho-Silva, Michaela Spitzer, et al. Open targets platform: supporting systematic drug–target identification and prioritisation. Nucleic acids research, 49(D1):D1302–D1310, 2021.

OpenAI. Gpt-4 technical report, 2023.

George Papadatos, Mark Davies, Nathan Dedman, Jon Chambers, Anna Gaulton, James Siddle, Richard Koks, Sean A Irvine, Joe Pettersson, Nicko Goncharoff, et al. Surechembl: a large-scale, chemically annotated patent document database. Nucleic acids research, 44(D1):D1220–D1228, 2016.

David E Patterson, Richard D Cramer, Allan M Ferguson, Robert D Clark, and Laurence E Weinberger. Neighborhood behavior: a useful concept for validation of “molecular diversity” descriptors. Journal of medicinal chemistry, 39(16):3049–3059, 1996.

Stefan Senger. Assessment of the significance of patent-derived information for the early identification of compound–target interaction hypotheses. Journal of Cheminformatics, 9(1):1–8, 2017.

Stefan Senger, Luca Bartek, George Papadatos, and Anna Gaulton. Managing expectations: assessment of chemistry databases generated by automated extraction of chemical structures from patents. Journal of cheminformatics, 7(1):1–12, 2015.

Bing Su, Dazhao Du, Zhao Yang, Yujie Zhou, Jiangmeng Li, Anyi Rao, Hao Sun, Zhiwu Lu, and Ji-Rong Wen. A molecular multimodal foundation model associating molecule graphs with natural language. arXiv preprint arXiv:2209.05481, 2022.

Akshay Subramanian, Kevin P Greenman, Alexis Gervaix, Tzuhsiung Yang, and Rafael Gómez-Bombarelli. Automated patent extraction powers generative modeling in focused chemical spaces. Digital Discovery, 2023.

Chenkai Sun, Weijiang Li, Jinfeng Xiao, Nikolaus Nova Parulian, ChengXiang Zhai, and Heng Ji. Fine-grained chemical entity typing with multimodal knowledge representation.
In 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 1984–1991. IEEE, 2021.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Oleg Trott and Arthur J Olson. AutoDock Vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization, and multithreading. Journal of Computational Chemistry, 31(2):455–461, 2010.
David Weininger. SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. Journal of Chemical Information and Computer Sciences, 28(1):31–36, 1988.
David S Wishart, Sagan Girod, Harrison Peters, Eponine Oler, Juan Jovel, Zachary Budinski, Ralph Milford, Vicki W Lui, Zinat Sayeeda, Robert Mah, et al. ChemFOnt: the chemical functional ontology resource. Nucleic Acids Research, 51(D1):D1220–D1229, 2023.
Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. MoleculeNet: a benchmark for molecular machine learning. Chemical Science, 9(2):513–530, 2018.
Sheng-Yong Yang. Pharmacophore modeling and applications in drug discovery: challenges and recent advances. Drug Discovery Today, 15(11-12):444–450, 2010.
Zenan Zhai, Christian Druckenbrodt, Camilo Thorne, Saber A Akhondi, Dat Quoc Nguyen, Trevor Cohn, and Karin Verspoor. ChemTables: a dataset for semantic classification on tables in chemical patents. Journal of Cheminformatics, 13(1):1–20, 2021.

A PROMPTS

Patent summarization. The system prompt used was "You are an organic chemist summarizing chemical patents", and the user prompt was "Return a short set of three 1-3 word descriptors that best describe the chemical or pharmacological function(s) of the molecule described by the given patent title, abstract, and partial description (giving more weight to title & abstract). Be specific and concise, but not necessarily comprehensive (choose a small number of great descriptor). Follow the syntax '{descriptor 1} / {descriptor 2} / {etc}', writing 'NA' if nothing is provided. DO NOT BREAK THIS SYNTAX. The following is the patent:", followed by the patent title, abstract, and partial description.

Word embedding cluster summarization. Each cluster's labels were fed into GPT-3.5-turbo with the system prompt "You are a PhD pharmaceutical chemist" and the user prompt: "Given a set of molecular descriptors, return a single descriptor representing the centroid of the terms. Do not speculate. Only use the information provided. Be concise, not explaining answers. Example 1 Set of Descriptors: 11(beta)-hsd1, 11-hsd-2, 17β-hsd3 Example 1 Average Descriptor: hsd Example 2 Set of Descriptors: anti-retroviral, anti-retrovirus, anti-viral, anti-virus, antiretroviral, antiretrovirus, antiviral, antivirus Example 2 Average Descriptor: antiviral Set of Descriptors: INSERT DESCRIPTORS HERE Average Descriptor:".
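For concreteness, the cluster-consolidation call described above can be issued programmatically. The following is only an illustrative sketch: it assumes the OpenAI v1 Python SDK with an API key in the environment, uses the gpt-3.5-turbo model named in the text, and abbreviates the few-shot examples of the full prompt; the helper name and sample descriptors are hypothetical.

# Illustrative sketch of the label-consolidation call described above.
# Assumes the OpenAI v1 Python SDK and an OPENAI_API_KEY in the environment;
# the prompt here abbreviates the full few-shot prompt quoted in the text.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "You are a PhD pharmaceutical chemist"

def consolidate_cluster(descriptors: list[str]) -> str:
    """Ask the chat model to map a cluster of descriptors to one centroid label."""
    user_prompt = (
        "Given a set of molecular descriptors, return a single descriptor "
        "representing the centroid of the terms. Do not speculate. Only use "
        "the information provided. Be concise, not explaining answers.\n"
        "Set of Descriptors: " + ", ".join(descriptors) + "\n"
        "Average Descriptor:"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

print(consolidate_cluster(["anti-viral", "antiviral", "anti-retroviral"]))
# Expected style of output: "antiviral"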
Graph label cluster summarization. Each cluster's labels were fed into GPT-4 with the system prompt "You are a PhD pharmaceutical chemist" and the user prompt: "Pretend you are a pharmaceutical chemist. I will provide you with several terms, and your job is to summarize the terms into appropriate categories. Be succinct, focusing on the broadest categories while still being representative. Don't show your work. Example terms: Antiviral HCV Kinase Cancer Polymerase Protease Example summarization: Antiviral & Cancer Terms: INSERT DESCRIPTORS HERE Summarization:".

B SUPPLEMENTAL DATA

Figure S1: Molecular weight and structural similarity distribution of the CheF dataset. (a) Molecular weight of each molecule in the dataset. Minimum: 100.12 Da; Maximum: 5749.60 Da; Mean: 440.79 Da; Std: 203.96 Da. (b) Maximum bulk fingerprint Tanimoto coefficient (Tc) for each molecule in the dataset. Bulk Tc measures how similar a given molecule's structure is to all of the other molecules in the dataset. Max Bulk Tc returns the structural similarity of a molecule to the most structurally similar molecule in the dataset. High Max Bulk Tc indicates redundant structures; mid-low Max Bulk Tc indicates diverse structures. Minimum: 0.076; Maximum: 1.00; Mean: 0.68; Std: 0.15.

Figure S2: Example of LLM-based chemical function extraction. Patent IDs are used to retrieve the patent title, abstract, and description from Google Scholar. ChatGPT is then prompted to extract the chemical function of the molecule being described by the patent.

Table S1: ChatGPT patent summarization validation. Manual validation was performed on 200 molecules randomly chosen from the CheF dataset. These 200 molecules had 596 valid associated patents and 1,738 ChatGPT-summarized labels. These labels were manually validated to determine the ratio of correct syntax, relevance to patent, and relevance to the Molecule of Interest (MOI).

Validation Task | Fraction Correct
Syntax | 0.996
Label relevant to patent | 0.998
Label refers to MOI, target of MOI, or downstream effects of MOI | 0.779
Label refers to MOI, target of MOI, downstream effects of MOI, or molecules of which MOI is an intermediate | 0.982

Table S2: Validation of ChatGPT-aided label consolidation. The first 500 of the 3,178 clusters of greater than one label (sorted in descending cluster size order) were evaluated for whether or not the clusters contained semantically common elements. The ChatGPT-consolidated cluster labels were then analyzed for accuracy and representativeness. Common failure modes for clustering primarily included the grouping of grammatically similar, but not semantically similar, labels (e.g., ahas-inhibiting, ikk-inhibiting). Failure modes for ChatGPT commonly included averaging the terms to the wrong shared common element (e.g., anti-fungal and anti-mycotic being consolidated to the label "anti").

Validation Task | Fraction Correct
Cluster contains semantically common elements | 0.992
ChatGPT cluster summarization accurate & representative | 0.976

Table S3: Comparison of chemical-text datasets. Comparison of CheF to existing chemical-text datasets ChEBI and ChemFOnt (Degtyarenko et al., 2007; Wishart et al., 2023) by current size (# molecules), maximum automated scaleup size (# molecules), text-type, whether or not structure and function are separate in the text (S/F Separate), and the data source used for dataset construction. Both ChEBI and ChemFOnt were built from existing datasets with additional manual curation and annotation, limiting potential automated scaleup size. In contrast, the method used to build CheF scales readily, allowing for a potential dataset size of 32M molecules.

Dataset | Curr. Size | Scaleup Size | Text-Type | S/F Separate | Data Source
ChEBI | 103K | 103K+ | Long text | No | DB Agg. / Manual
ChemFOnt | 342K | 1M+ | Labels | Yes | DB Agg. / Manual
CheF (ours) | 100K | 32M+ | Labels | Yes | LLM-Sum. Patents
Figure S3: Most frequent patent summarizations. The most frequent patent summarizations do not immediately exhibit any dataset-independent biases. The bias towards broad treatment terms, such as cancer, antiviral, and analgesic, likely emerged because these are desirable target functions and are thus overrepresented in patents.

Figure S4: DBSCAN clustering on Ada-002 text embeddings reduces the number of labels. (a) The optimal DBSCAN epsilon value was defined as the cutoff resulting in the smallest number of clusters without overtly false categories appearing (e.g., merging antiviral, antibacterial, & antifungal). The optimal epsilon was found to be 0.340 for the dataset considered herein (marked by black star), resulting in a consolidation from 29,854 labels to 20,030 clusters. The labels in each cluster were then consolidated with ChatGPT, creating a set of 20,030 labels. (b) t-SNE of the Ada-002 text embeddings, colored by the top 10 largest clusters. The largest cluster, found to be all IUPAC structural terms, was removed from the dataset to reduce excessive non-generalizable labels.
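A minimal sketch of the clustering step behind Figure S4 follows. The eps value (0.340) comes from the figure; the cosine metric and min_samples=1 (so that singleton labels survive as their own clusters) are assumptions, and random unit vectors stand in for the Ada-002 label embeddings.

# Sketch of the label-consolidation clustering from Figure S4: DBSCAN over
# text embeddings of the raw labels. eps=0.340 is taken from the figure;
# the cosine metric and min_samples=1 are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Stand-in for Ada-002 embeddings of the ~30k raw labels (1536-dim in practice).
label_embeddings = rng.normal(size=(1000, 64))
label_embeddings /= np.linalg.norm(label_embeddings, axis=1, keepdims=True)

clustering = DBSCAN(eps=0.340, min_samples=1, metric="cosine").fit(label_embeddings)
n_clusters = len(set(clustering.labels_))
print(f"{label_embeddings.shape[0]} labels -> {n_clusters} clusters")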
Cluster 2 contained ‘protease’ (n=85), ‘peptide’ (n=35), and ‘serine’ (n=15), indicating that this cluster pri- marily contained peptidomimetic protease inhibitors acting on the NS3 serine protease. Cluster 5 contained ‘protease’ (n=108), ‘macrocyclic’ (n=42), and serine (n=8), indicating that this clus- ter contained macrocyclic compounds acting likely as NS3 serine protease inhibitors. Cluster 6 contained no specific mechanistic terms, alluding to the possible mechanism of these molecules in- hibiting the NS5A protein. 18 Table S4: GPT-4 graph community summarizations. All labels from the ten most abundant clusters were fed into GPT-4 for categorical summarization. These outputs were verified to be representative of the labels, and were further consolidated by the authors into concise categories. GPT4 Cluster summary Chemical Processes & Reactions, Materials & Substances, Photographic & Printing Processes, Cosmetic & Dermatological Applications, Industrial Manufacturing & Production, Sensory Properties Antiviral, Cancer, Cellular Processes, Enzymes, Immunology, Oncology, Protein Interactions, Therapy & Drug Development Pain Management, Hormonal Regulation, Gastrointestinal Conditions, Neurological Conditions, Reproductive Health, Obesity Management, Addiction Treatment, Sleep Disorders, Immune Response, Cardiovascular Conditions Chemical Compounds & Materials, Electronic & Optoelectronic Devices, Energy & Efficiency, Light & Emission Properties, Stability & Durability, Quantum & Thermodynamics Neurodegenerative Diseases, Inflammatory & Autoimmune Diseases, Respiratory Diseases, Immune Response & Regulation, Enzymes & Mediators, Drug Development & Therapeutics Antibacterial, Antifungal, Antiparasitic, Antimalarial, Antimicrobial, Antiprotozoal, Antitubercular, Insecticide, Herbicide, Fungicide, Pesticide, Acaricide, Nematicidal, Agricultural & Health Protection Drug Development & Delivery, Diagnostic & Monitoring, Gene & Protein Regulation, Epigenetics & Transcription, Immunology & Vaccines Neurological & Psychiatric Disorders, Cognitive & Memory Function, Neuropharmacology & Neurotransmission, Mood & Mental Health, Urologic & Sexual Health Lipid Metabolism & Cardiovascular Health, Diabetes Management, Organ Health & Protection Cardiovascular & Renal Disorders, Ion Channels & Transporters, Anesthetics & Muscle Relaxants, Neurological Disorders & Eye Conditions Label in graph Material, Industrial, Synthesis, & Dermatology Antiviral & Cancer Neurological, Hormonal, Gastrointestinal, & Reproductive Health Electronic, Photochemical, & Stability Neurodegenerative, Autoimmune, Inflammation, & Respiratory Anti-Organism & Agricultural Pharmaceutical Research, Genetic Regulation, Immunology Neurological & Urologic Cardiovascular & Lipid Metabolism Cardiovascular, Renal, & Ion Channel 19 Table S5: Arbitrary 20 CheF labels from each summarized co-occurrence neighborhood. Modularity-based community detection was performed on the CheF co-occurrence graph to obtain 19 distinct communities. The communities appeared to broadly coincide with the semantic meaning of the contained labels, and the largest 10 communities were summarized to a common label. Shown are a random 20 labels from the first five summarized communities. 
Electronic, Photochemical, & Stability Neuro- degenerative, Autoimmune, Inflammation, & Respiratory Material, Industrial, Synthesis, & Dermatology absorb acid binder care cosmetic destabilize form functional ionic method modification optical photochromic plastic polymer preserve production protective sensitivity skin Antiviral & Cancer aid antiviral c cancer cell g12 hbv hcv hepatitis hiv inhibit inhibition inhibitor integrase kinase kras mapk nucleoside phosphatidyl- inositol phosphorylation activate adhesion alzheimer amyloid anti- inflammatory autoimmune cox disease elastase il-17 inflammation inflammatory interferon lung neuro- degenerative neuro- inflammation sting airway allergic allergy Neurological, Hormonal, Gastrointesti- nal, & Reproductive Health analgesic condition ligand modulate modulator p2x7 pain prophylaxis prostate receptor relief selective tgr5 tract carbazole compound expand life light material activated amine anisotropy aromatic blue capability characteristic charge treatment condensed various 5-ht 7 addiction adrenergic crystal cyclic device dielectric diode 20 Table S6: Arbitrary 20 CheF labels from each summarized co-occurrence neighborhood. Modularity-based community detection was performed on the CheF co-occurrence graph to obtain 19 distinct communities. The communities appeared to broadly coincide with the semantic meaning of the contained labels, and the largest 10 communities were summarized to a common label. Shown are a random 20 labels from the second five summarized communities. Neurological & Urologic Cardiovascular & Lipid Metabolism Cardiovascular, Renal, & Ion Channel Anti-Organism & Agricultural amide control derivative infection protection acaricide acetic animal anti anti-malarial anti-microbial antiparasitic aryl azetidin azetidine bacterial bactericide beta-lactamase bicyclic bridge Pharmaceutical Research, Genetic Regulation, Immunology assay bind bromodomain diagnostic drug potential psma regulator anticonvulsant cerebral disorder function mitochondrial neural neuroprotective pde carbonic ischemia level liver prevention reducer reducing reduction sirtuin schizophrenia regulate targeting 6 alter analog atp atrophy bioavailability biological biomarker combinatorial cytotoxic sedative system urologic 4 5 anti-psychotic antidepressant antitussive anxiolytic brain central releasing retinoid vap vascular a aldose alleviate antilipidemic blood cholesterol cholesterolemia cardiovascular channel ion stroke ace anesthetic angina angiotensin anti- hypertensive blocker calcium cardiac cardiotonic circulation c-transport contraction diuretic failure heart hypertensive Table S7: Fingerprint models benchmarked on CheF. To assess a baseline benchmark on the CheF dataset of ∼100K molecules, several molecular fingerprint-based models were trained on 90% of the training data and evaluated on the 10% test set holdout. Macro average ROC-AUC and PR-AUC was calculated across all 1,543 labels. Logistic regression (LR), random forest classifier (RFC), and a 2-layer feedforward neural network (FFN) were trained. Parameters for LR and RFC were chosen to be common default values, whereas the FFN layer number and size were chosen through a 5-fold cross validation. Model FP + LR FP + RFC FP + FFN ROC-AUC PR-AUC 0.20 0.13 0.12 0.84 0.80 0.81 21 Figure S7: Top 50 FDA-approved drugs predicted to contain the label ‘hcv’. The Stage-4 ap- proved drugs list from OpenTargets was passed through the CheF label prediction model. Results were sorted by ‘hcv’ probability. 
Figure S7: Top 50 FDA-approved drugs predicted to contain the label 'hcv'. The Stage-4 approved drugs list from OpenTargets was passed through the CheF label prediction model. Results were sorted by 'hcv' probability. Relevant and high-abundance labels are displayed for clarity. Green cells represent approved-use labels from the OpenTargets page, and red cells represent no approved usage relevant to the given term.
ai_researcher
2
Towards_Scientific_Discovery_with_Generative_AI_Progress_Opportunities_and_Challenges.pdf
Towards Scientific Discovery with Generative AI: Progress, Opportunities, and Challenges

Chandan K Reddy, Parshin Shojaee
Virginia Tech
[email protected], [email protected]

arXiv:2412.11427v1 [cs.LG] 16 Dec 2024

Abstract

Scientific discovery is a complex cognitive process that has driven human knowledge and technological progress for centuries. While artificial intelligence (AI) has made significant advances in automating aspects of scientific reasoning, simulation, and experimentation, we still lack integrated AI systems capable of performing autonomous long-term scientific research and discovery. This paper examines the current state of AI for scientific discovery, highlighting recent progress in large language models and other AI techniques applied to scientific tasks. We then outline key challenges and promising research directions toward developing more comprehensive AI systems for scientific discovery, including the need for science-focused AI agents, improved benchmarks and evaluation metrics, multimodal scientific representations, and unified frameworks combining reasoning, theorem proving, and data-driven modeling. Addressing these challenges could lead to transformative AI tools to accelerate progress across disciplines towards scientific discovery.

Figure 1: Overview of the AI-driven scientific discovery framework. The cycle illustrates the iterative process of scientific inquiry. The framework begins with user-defined problem specifications, retrieves relevant scientific context from literature and databases, and utilizes generative AI systems to produce new hypotheses and experimental designs. These AI-generated concepts are then evaluated and refined through experimental observation, expert input, and scientific tools, driving further iterations of the discovery cycle.

Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

Scientific discovery - the process of formulating and validating new concepts, laws, and theories to explain natural phenomena - is one of humanity's most intellectually demanding and impactful pursuits. For decades, AI researchers have sought to automate aspects of scientific reasoning and discovery. Early work focused on symbolic AI approaches to replicate the formation of scientific hypotheses and laws in symbolic forms (Segler, Preuss, and Waller 2018; MacColl 1897). More recently, deep learning and large language models (LLMs) have shown promise in tasks like literature analysis and brainstorming (Ji et al. 2024; Lu et al. 2024; Si, Yang, and Hashimoto 2024), experiment design (Boiko et al. 2023; Arlt et al. 2024), hypothesis generation (Wang et al. 2024; Ji et al. 2024), and equation discovery (Shojaee et al. 2024b; Ma et al. 2024).

Despite this progress, we still lack AI systems capable of integrating the diverse cognitive processes involved in sustained scientific research and discovery. Most work has focused on narrow aspects of scientific reasoning in isolation. Developing more comprehensive AI discovery systems capable of supporting the full cycle of scientific
We highlight recent advances, particularly in scientific understanding and dis- covery frameworks, while identifying critical gaps. We then outline key research challenges and directions towards more unified AI systems for discovery, including: (i) Creating im- proved benchmarks and evaluation frameworks for scien- tific discovery; (ii) Developing science-focused AI agents that leverage scientific knowledge and reasoning capabili- ties; (iii) Advancing multimodal scientific representations beyond text; and (iv) Unifying automated reasoning, theo- rem proving, and data-driven modeling. By tackling these challenges, the AI and Science community can work to- wards systems that serve as collaborative partners to human scientists, accelerating the pace of discovery in science. Recent Advances in AI for Scientific Tasks The past decade has witnessed remarkable progress in ap- plying AI to various scientific tasks. This section highlights some of the most significant recent advances, demonstrat- ing AI’s growing capabilities in supporting and accelerating scientific discovery across multiple disciplines. Literature Analysis and Brainstorming The exponential growth of scientific publications has made it increasingly challenging for researchers to stay abreast of developments in their fields. Large language models (LLMs) pre-trained on vast scientific corpora have emerged as pow- erful tools to address this challenge, enhancing literature analysis and interaction. Researchers have developed spe- cialized LLMs for various scientific domains. Models like PubMedBERT (Gu et al. 2021) and BioBERT (Lee et al. 2020) focus on biomedical literature, while SciBERT (Belt- agy, Lo, and Cohan 2019) covers a broader range of scien- tific disciplines. More recent models such as BioGPT (Luo et al. 2022) and SciGLM (Zhang et al. 2024) have further pushed the boundaries of scientific language modeling, in- corporating advanced architectures and training techniques. These models, trained on sources like PubMed and arXiv, excel at literature information retrieval, summarization, and question-answering. They enable efficient navigation of sci- entific knowledge by quickly finding relevant papers, dis- tilling key findings, and synthesizing information to answer complex queries. Beyond analysis, recent works demonstrate LLMs’ po- tential in generating novel scientific insights. For instance, SciMON (Ji et al. 2024) uses LLMs to generate new sci- entific ideas by analyzing patterns in the existing literature. These advancements show AI’s capacity to not only aid in literature review but also contribute to identifying promis- ing and novel research directions, potentially accelerating scientific discovery. Theorem Proving Automated theorem proving has recently gained attention in AI for science research due to its fundamental role in scientific reasoning. Recent years have seen remarkable progress in this field, particularly through the integration of LLMs with formal reasoning systems. The GPT-f frame- work (Polu and Sutskever 2020) pioneered this approach by training transformer-based language models on proof tactics, enabling navigation through complex mathematical proofs with the help of learned priors. Building on this, researchers have integrated proving techniques with LLMs and developed enhancements such as data augmentation (Han et al. 2021), retrieval augmentation (Yang et al. 2024), and novel proof search methods (Lample et al. 2022; Wang et al. 2023b). 
Experimental Design

Experimental design is a critical component of the scientific process, often requiring extensive domain knowledge and creative thinking. The automation of this process through generative models has the potential to accelerate scientific discovery across various fields. By leveraging LLM agents, researchers have recently been developing systems that can design, plan, optimize, and even execute scientific experiments with minimal human intervention. These tools are particularly valuable in fields where experimental setup is costly, allowing researchers to explore a wider range of possibilities before physical implementation. For example, in physics, LLM-driven systems have demonstrated effectiveness in designing complex quantum experiments (Arlt et al. 2024) and optimizing parameters in high-energy physics simulations (Cai et al. 2024; Baldi, Sadowski, and Whiteson 2014). Chemistry has also recently seen advancements in automated experimentation, with LLM agent systems capable of designing and optimizing chemical reactions (M. Bran et al. 2024). Moreover, in biology and medicine, LLM-driven experimental design has shown promise in optimizing gene-editing protocols (Huang et al. 2024) and designing more effective clinical trials (Singhal et al. 2023). These AI-driven approaches to experimental design allow researchers to tackle more complex problems and explore hypotheses that might otherwise be impractical due to time or resource constraints.

Data-driven Discovery

Data-driven discovery has become a cornerstone of modern scientific research, leveraging the ever-growing volumes of experimental, observational, and synthetic data to uncover new patterns, relationships, and laws. This paradigm shift has been particularly transformative in fields where complex systems and high-dimensional data are prevalent.

In drug discovery, data-driven approaches have significantly accelerated the identification of potential therapeutic compounds. For instance, recent works employed generative (Mak, Wong, and Pichika 2023; Callaway 2024) and multimodal representation learning (Gao et al. 2024) models to discover a novel antibiotic, effective against a wide range of bacteria, by searching and screening millions of molecules in the representation space (Gao et al. 2024). These advances demonstrate the power of AI in exploring vast chemical spaces that would be infeasible to search manually within the huge, effectively infinite combinatorial space of molecules.

Equation discovery, commonly known as symbolic regression, is a data-driven task for uncovering mathematical expressions from data. Early neural methods like AI Feynman (Udrescu and Tegmark 2020) demonstrated the ability to rediscover fundamental physics laws from data alone, while later work incorporated physical constraints and structures for more interpretable models (Cranmer et al. 2020b). The advent of language modeling and representation learning brought new possibilities. Transformer-based language models, adapted for symbolic regression, treat equation discovery as a numeric-to-symbolic generation task (Biggio et al. 2021; Kamienny et al. 2022). These approaches have been enhanced with search techniques during decoding (Landajuela et al. 2022; Shojaee et al. 2024a), although challenges remain in effectively encoding and tokenizing numeric data (Golkar et al. 2023). Recent works like the SNIP model (Meidani et al. 2024) have also explored multi-modal representation learning between symbolic expressions and numeric data, moving the equation discovery search to a lower-dimensional and smoother representation space for more effective and efficient search. Recently, LLM-SR (Shojaee et al. 2024b) also demonstrated the potential of using LLMs as scientist agents in the evolutionary search for equation discovery. These advancements highlight the evolving landscape of equation discovery, with significant potential for further improvements in integrating numeric data with AI models and leveraging the mathematical reasoning capabilities of advanced LLMs.
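To make the numeric-to-symbolic setting concrete, here is a deliberately tiny symbolic-regression sketch: candidate expression skeletons are scored against data, with constants fit by least squares. Real systems search a combinatorial expression space (or a learned latent one); the fixed three-skeleton library below is an illustrative assumption.

# Minimal symbolic-regression sketch: score a small library of candidate
# skeletons against data, fitting coefficients per candidate by least squares.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = 3.2 * x**2 + x + rng.normal(scale=0.05, size=x.shape)  # hidden law

# Candidate basis sets ("skeletons"); coefficients are fit per candidate.
candidates = {
    "c0 + c1*x":          np.stack([np.ones_like(x), x], axis=1),
    "c0 + c1*x + c2*x^2": np.stack([np.ones_like(x), x, x**2], axis=1),
    "c0 + c1*sin(x)":     np.stack([np.ones_like(x), np.sin(x)], axis=1),
}

def mse(A):
    coef = np.linalg.lstsq(A, y, rcond=None)[0]   # fit constants
    return np.mean((y - A @ coef) ** 2)

best = min(candidates, key=lambda k: mse(candidates[k]))
print("best skeleton:", best)   # recovers the quadratic skeleton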
In materials discovery, data-driven approaches have led to the prediction and subsequent synthesis of novel materials with desired properties (Pyzer-Knapp et al. 2022; Merchant et al. 2023; Miret and Krishnan 2024). Large generative models have shown remarkable success in generating novel structures. For instance, Merchant et al. (2023) introduced Graph Networks for Materials Exploration (GNoME), leading to the discovery of new stable materials. This approach represents an order-of-magnitude increase in known stable crystals, showcasing the potential of AI in expanding our materials knowledge base. LLMs have also recently been used to extract information from scientific literature in materials science, generate novel material compositions, and guide experimental design (Miret and Krishnan 2024). For example, AtomAgents (Ghafarollahi and Buehler 2024a) demonstrates how LLMs can be integrated into the materials discovery pipeline, significantly improving the process in alloy design. By combining the pattern-recognition and representation learning capabilities with the reasoning and generalization abilities of advanced AI models, we are moving towards systems that can not only analyze existing data but also propose novel hypotheses for data-driven discoveries across scientific disciplines.

Key Challenges and Research Opportunities

Benchmarks for Scientific Discovery

First and foremost, evaluating AI systems for open-ended scientific discovery poses unique challenges compared to typical machine learning benchmarks. This challenge is particularly acute for large language models (LLMs) and other foundation models capable of storing and potentially "memorizing" vast amounts of scientific knowledge (Brown 2020; Bommasani et al. 2021) in their parameters. Many existing benchmarks in the field of scientific discovery only focus on rediscovering known scientific laws or solving textbook-style problems.
For instance, the AI Feynman dataset consists of 120 physics equations to be rediscovered from data (Udrescu and Tegmark 2020; Udrescu et al. 2020), while datasets like SciBench (Wang et al. 2023c), ScienceQA (Lu et al. 2022), and MATH (Hendrycks et al. 2021) primarily evaluate scientific question answering and mathematical problem-solving abilities.

However, these benchmarks may not capture the entire complexity of scientific discovery processes. More critically, they may be vulnerable to reciting or memorization by large language models, potentially leading to overestimation of true discovery capabilities (Carlini et al. 2021; Shojaee et al. 2024b). As (Wu et al. 2023) points out, LLMs can often solve scientific problems by pattern matching against memorized knowledge rather than through genuine reasoning or discovery. This concern is further emphasized by studies showing that LLMs can reproduce significant portions of their training data (Carlini et al. 2022). There is a pressing need for richer benchmarks and evaluation frameworks in this research area to better understand the gap between baselines and recent methods and to identify areas for improvement. Key directions include:

• Developing benchmark datasets focused on novel scientific discovery rather than recovery: One promising approach is to create configurable simulated scientific domains where the underlying laws and principles can be systematically varied. This would allow testing discovery capabilities on new scenarios, mitigating the risk of models simply reciting memorized information observed in their training data. For example, (M. Bran et al. 2024) used a simulated chemistry environment to evaluate AI-driven discovery of novel chemical reactions. Similarly, (Shojaee et al. 2024b) designed simulated settings for different scientific domains such as materials science, physics, and biology to evaluate AI-driven scientific equation discovery. A key challenge in this line of research is balancing the use of LLMs' prior scientific knowledge while avoiding mere recitation or memorization. This balance is crucial for advancing AI's role in scientific discovery.

• Creating evaluation metrics for multiple facets of scientific discovery: To comprehensively assess scientific discovery capabilities, we need a multi-faceted evaluation framework. The key metrics include: (i) Novelty: measures to quantify how different a discovered hypothesis or law is from existing knowledge. This could involve comparing against a corpus of known scientific literature (Ji et al. 2024); (ii) Generalizability: assessing how well discovered laws or models predict out-of-distribution unobserved data (see the sketch after this list). To do so, evaluation benchmarks should be developed that test discovered laws on scenarios significantly different from the training data distribution, highlighting how scientific theories should be generalizable to new contexts; (iii) Alignment with Scientific Principles: evaluating whether discovered hypotheses are consistent with fundamental laws of physics or other well-established scientific knowledge. This could involve developing formal verification methods for scientific consistency (Cornelio et al. 2023; Cranmer et al. 2020a), as well as assessing the discovered laws' compatibility with existing scientific theories (Liu et al. 2024b).

• Involving domain experts in benchmark design and evaluation: The involvement of domain experts is crucial for developing meaningful benchmarks and evaluating AI-driven scientific discoveries. Experts can contribute to various aspects of the discovery process, such as assessing the plausibility, novelty, and potential impact of AI-generated hypotheses; evaluating the interpretability and alignment of AI-discovered laws or models with human-understandable scientific principles; and providing feedback during the AI-driven discovery process for human-AI collaborative discovery. By integrating domain expert involvement throughout the benchmark development, discovery, and evaluation process, we can ensure that advancements in AI-driven scientific discovery are both technically sound and aligned with the needs and standards of the scientific community.
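As a toy illustration of the generalizability metric in item (ii) above, one can fit a candidate law on one region of the domain and compare normalized errors in-domain versus out-of-distribution; the exponential ground truth and cubic candidate below are arbitrary stand-ins.

# Toy OOD generalization check for a "discovered" law: small in-domain error
# but a large error on a disjoint region flags poor generalizability.
import numpy as np

def true_law(x):
    return np.exp(0.5 * x)            # hidden ground truth

x_in = np.linspace(0, 2, 100)         # training (in-distribution) region
x_out = np.linspace(3, 5, 100)        # held-out OOD region

pred = np.poly1d(np.polyfit(x_in, true_law(x_in), deg=3))  # candidate law

def nmse(x):
    y = true_law(x)
    return np.mean((pred(x) - y) ** 2) / np.var(y)

print(f"in-domain NMSE: {nmse(x_in):.2e}")
print(f"OOD NMSE:       {nmse(x_out):.2e}  # large gap => poor generalization")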
The diagram illustrates a⃝ the multi-modal nature of scientific data, b⃝ the inputs for scientific tasks, c⃝ the key actions performed by AI agents in scientific discovery, and d⃝ the evaluation metrics for assessing scientific outcomes. This framework highlights the integration of diverse data sources, AI- driven tools, and human experts in advancing scientific research and discovery processes. ibility with existing scientific theories (Liu et al. 2024b). • Involving domain experts in benchmark design and eval- uation: The involvement of domain experts is crucial for developing meaningful benchmarks and evaluating AI-driven scientific discoveries. Experts can contribute in various aspects of the discovery process such as as- sessing the plausibility, novelty, and potential impact of AI-generated hypotheses; evaluating the interpretability and alignment of AI-discovered laws or models with human-understandable scientific principles; and provid- ing feedback during the AI-driven discovery process for human-AI collaborative discovery. By integrating do- main expert involvement throughout the benchmark de- velopment, discovery, and evaluation process, we can en- sure that advancements in AI-driven scientific discovery are both technically sound and aligned with the needs and standards of the scientific community. Science-Focused Agents Current work on scientific AI often treats models as passive tools rather than active agents pursuing discovery. There is a growing need to develop science-focused AI agents (Fig- ure 2) that can leverage broad scientific knowledge, engage in reasoning, and autonomously verify their reasoning and hypotheses. Recently, LLMs have shown impressive capa- bilities in knowledge retrieval and reasoning (Huang and Chang 2023), making them promising candidates for devel- oping such agents. These agents can integrate vast amounts of scientific knowledge embedded in LLMs, generate edu- cated hypotheses, design experiments, verify their designs, and interpret the results. Also, their ability to interface with external tools and experimental data sources with the pro- gramming execution gate allows for real-world experimen- tation and validation. Recent work has demonstrated the potential of LLM-based agents in scientific domains. For example, (M. Bran et al. 2024) introduced ChemCrow, an LLM-augmented system for chemistry research. ChemCrow integrates GPT-4 with domain-specific tools for tasks such as reaction prediction, retrosynthesis planning, and safety assessment. This integration allows the system to reason about chemical processes and validate the hypotheses us- ing specialized chemical tools. Similarly, (Ghafarollahi and Buehler 2024a) developed AtomAgents, a multi-agent sys- tem for alloy design and discovery. SciAgents (Ghafarollahi and Buehler 2024b) also uses multiple AI agents, each spe- cializing in different aspects of materials science, to collab- oratively design new bio-materials. The system incorporates physics-aware constraints and can interface with simulation tools to validate its predictions. However, developing effec- tive science-focused agents also presents several challenges: integration: Effective scientific • Domain-specific tool agents require integration with specialized scientific tools and domain-specific knowledge. This challenge arises from the highly specialized nature of scientific instru- ments and methodologies, which are often underrepre- sented in LLMs’ training data. (Bubeck et al. 
(Bubeck et al. 2023) demonstrated that while LLMs like GPT-4 excel in general academic tasks, they struggle with specialized scientific reasoning, particularly in physics and chemistry. Potential research directions include developing modular architectures for integrating domain-specific knowledge bases and tool interfaces, and fine-tuning LLMs on curated scientific datasets. These approaches could enable LLMs to access domain-specific knowledge and interact effectively with specialized scientific tools, enhancing their capabilities in this setting.

• Adaptive experimental design and hypothesis evolution: A significant challenge in science-focused agents is developing systems capable of long-term, iterative scientific investigations. Such agents must design experiments, interpret results, and refine hypotheses over extended periods while maintaining scientific rigor and avoiding biases (a schematic of such an iterative loop is sketched after this list). This challenge stems from the complex, multi-stage nature of scientific inquiry, which often involves repeated cycles of experimentation, analysis, and hypothesis adjustment. Potential research directions to address this challenge include meta-learning frameworks enabling agents to improve experimental design and hypothesis refinement strategies across multiple investigations, and hierarchical planning algorithms for managing both short-term experimental steps and long-term scientific discovery objectives.

• Collaborative scientific reasoning: Enabling collaborative scientific reasoning in AI agents is crucial for advancing scientific progress. Agents must build on their scientific knowledge, communicate hypotheses, engage in discourse, and critically judge peers' work. Current science agents struggle with deep critical analysis and identifying scientific flaws in AI-driven hypotheses and experimental designs (Birhane et al. 2023). Research opportunities include developing multi-agent systems simulating scientific communities, incorporating domain experts in the multi-agent refinement process, and creating benchmarks to enhance scientific discourse capabilities in science-focused agents.
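The iterative loop referenced in the second bullet can be sketched schematically as below; `llm_propose` and `run_experiment` are hypothetical stubs standing in for an LLM call and a simulator or lab interface, and the quadratic objective is a toy.

# Schematic propose-evaluate-refine loop for a science-focused agent, in the
# spirit of Figure 2. Both helper functions are hypothetical stubs.
import random

def llm_propose(history):
    """Stub LLM: proposes a new parameter given past (param, score) feedback."""
    if not history:
        return random.uniform(0, 10)
    best_param, _ = max(history, key=lambda h: h[1])
    return best_param + random.uniform(-1, 1)      # refine around the best

def run_experiment(param):
    """Stub experiment: toy objective with a peak at param = 7."""
    return -(param - 7.0) ** 2

history = []
for step in range(20):
    hypothesis = llm_propose(history)              # generate hypothesis
    score = run_experiment(hypothesis)             # experimental evaluation
    history.append((hypothesis, score))            # feedback for refinement

print("best hypothesis:", max(history, key=lambda h: h[1]))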
By learning multi-modal robust representations with the help of pre-training techniques and synthetic simulation data, we can make more efficient use of limited data, poten- tially reducing the need for costly scientific experiments and accelerating the pace of discovery. Key directions in this line of research include: • Cross-modal scientific representation learning: Recent work has shown promising results in learning pre-trained joint representations across modalities for different sci- entific tasks. Notable successes include DrugCLIP (Gao et al. 2024) for joint representations of molecules and protein pockets in drug discovery, Text2Mol (Edwards, Zhai, and Ji 2021) bridging natural language and molec- ular structures, ProtST (Xu et al. 2023) unifying protein sequences and biomedical text in proteomics, and SNIP (Meidani et al. 2024) linking mathematical expressions with numeric data. These advances demonstrate the po- tential of cross-modal learning to enhance scientific tasks by leveraging complementary information across modal- ities. Despite these promising results, significant research opportunities remain (i) Expanding cross-modal repre- sentation learning to diverse and new scientific domains, (ii) Enhancing representation quality through recent in- tegrated self-supervised and multi-modal pre-training; and (iii) Developing unified, modality-agnostic frame- works adaptable to heterogeneous scientific data types. • Latent space scientific hypothesis search: Many scientific discovery tasks involve searching through vast, combina- torial spaces of candidates. Current approaches to these problems often rely on evolutionary search or heuristic methods, which can be computationally expensive and inefficient (Sadybekov and Katritch 2023; Schmidt and Lipson 2009). Recent advances in representation learning offer a promising alternative: conducting scientific hy- pothesis optimization in learned latent spaces. By mov- ing the search process into the latent space, we can po- tentially make the exploration of the hypothesis space more efficient and effective. This approach has shown potential across various domains, from drug discovery (Gao et al. 2024) to equation discovery (Meidani et al. 2024), molecular design (Abeer et al. 2024; Zheng, Li, and Zhang 2023), and protein engineering (Castro et al. 2022; Jumper et al. 2021). This emerging research direc- tion has significant potential for scientific discovery. Fu- ture research avenues include (i) Integrating domain ex- pert knowledge or feedback into the representations and discovery process, (ii) Enhancing interpretability of rep- resentations for scientific validation, and (iii) Advanc- ing optimization techniques for nontrivial discovery ob- jectives and more flexible hypothesis search in the latent space. • Multi-modal scientific reasoning frameworks: The ad- vancement of AI-driven scientific discovery hinges on developing systems capable of multi-modal scientific reasoning. Recent works have shown promising results in this direction. For example, multi-modal retrieval aug- mented generation (RAG) systems have demonstrated potential in leveraging LLMs for scientific discovery (Park et al. 2024). Models like GIT-Mol (Liu et al. 2024a) showcase the integration of visual, textual, and graph reasoning for molecular discovery. In materials science, approaches combining textual reasoning with structural data have also shown promise in predicting material properties and guiding synthesis (Miret and Krishnan 2024). 
• Transfer learning in scientific domains: Transfer learning offers great potential to accelerate scientific discovery, particularly in domains where data is limited or expensive to obtain. Recent studies have demonstrated its efficacy across various scientific fields: in drug discovery, models pre-trained on large synthetic chemical databases have shown improved performance in predicting properties of novel compounds (Gao et al. 2024); in materials science, transfer learning from simulated data to real-world experiments has also accelerated the discovery of new materials with desired properties (Chen et al. 2024). However, the application of transfer learning in scientific domains presents unique challenges due to the high specificity of scientific knowledge and potential domain shift between source and target tasks. Advancing these capabilities could unlock new avenues for cross-disciplinary discoveries and accelerate progress in data-scarce scientific domains.
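A toy version of the latent-space hypothesis search mentioned in the second bullet of the list above: candidates are embedded with a stand-in encoder (PCA here, in place of a learned multi-modal encoder such as those cited), and a property scorer is hill-climbed in the low-dimensional space. The data and scorer are synthetic.

# Toy latent-space hypothesis search: embed candidates, then hill-climb a
# property scorer in the low-dimensional latent space. PCA stands in for a
# learned encoder; the score function is a synthetic stub.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))                 # high-dim candidate descriptors
pca = PCA(n_components=5).fit(X)

def score(z):
    """Stub property oracle evaluated on decoded candidates."""
    x = pca.inverse_transform(z.reshape(1, -1))
    return -np.sum((x - 1.0) ** 2)             # prefer candidates near all-ones

z = pca.transform(X[:1])[0]                    # start from an existing candidate
for _ in range(200):                           # simple hill climbing in latent space
    z_new = z + rng.normal(scale=0.1, size=z.shape)
    if score(z_new) > score(z):
        z = z_new

print("optimized latent score:", round(float(score(z)), 3))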
Theory and Data Unification

Scientific discovery typically involves a complex interplay between theoretical reasoning, empirical observation, and mathematical modeling. However, most existing AI approaches to scientific tasks focus on just one of these aspects. There is a pressing need for unified frameworks that integrate logical and mathematical reasoning, formal theorem proving, data-driven modeling, experimental design, and causal inference. This integration is challenging but critical for capturing the full scientific discovery process. Recent advances in LLMs have shown promising results in both theorem proving and data-driven scientific modeling. For instance, LLMs have demonstrated promising capabilities in automated theorem proving and formal mathematical derivations from natural language problems (Yang et al. 2024; Jiang et al. 2023). On the data-driven side, (Shojaee et al. 2024b; Ma et al. 2024) have shown success in discovering equation hypotheses from data with the help of LLM-based program search. However, these approaches largely operate in isolation, and there is a significant gap in unifying these capabilities to mirror the holistic nature of scientific inquiry. Key challenges and research directions include:

• Generating derivable hypotheses from empirical observations: Developing methods that can not only discover patterns in data but also produce rigorous mathematical derivations of these findings is crucial for ensuring the reliability and generalizability of AI-driven scientific discoveries to out-of-distribution data. Derivable theoretical results provide a level of confidence and understanding that goes beyond mere empirical correlation. Recent work, such as the AI-Descartes system (Cornelio et al. 2023), has shown promise by combining equation discovery tools (known as symbolic regression) with automated logical reasoning. However, integrating logical reasoning and data-driven frameworks that are adaptable across scientific discovery tasks still remains an open challenge. Research opportunities exist to automate proof verification, incorporate expert feedback, and embed derivability constraints in data-driven discovery algorithms.

• Combining symbolic and neural approaches: How can we effectively integrate the strengths of symbolic reasoning (e.g., logical deduction, formal proofs) with the flexibility and learning capabilities of neural networks? Recent work on neuro-symbolic AI (Garcez and Lamb 2023; Sheth, Roy, and Gaur 2023) provides promising directions, but challenges remain in scaling these approaches to more complex settings and scientific tasks. Developing hybrid architectures that can transition between symbolic and neural representations would help capture the full spectrum of scientific reasoning.

• Reasoning about discovery uncertainty in formal frameworks: Scientific discoveries often involve uncertainties and probabilities, yet formal logical frameworks struggle to incorporate these aspects. Developing frameworks that can handle probabilistic reasoning while maintaining rigorous deduction capabilities is crucial for advancing AI-driven scientific discovery. Recent work on probabilistic logic systems (De Raedt and Kimmig 2015; De Raedt, Kimmig, and Toivonen 2007) and neuro-symbolic programming (Ahmed et al. 2022) has made progress in this direction. However, significant challenges remain for the use of these approaches in scientific discovery, including scalability to large-scale scientific problems and expressiveness to capture complex scientific theories in specific scientific domains.

Conclusion

Developing unified AI systems for scientific discovery is an ambitious goal, but one with substantial potential impact. Success could dramatically accelerate progress across diverse scientific disciplines. This paper has outlined current progress as well as several key research challenges and opportunities toward this vision, including developing science-focused AI agents, creating improved benchmarks, advancing multimodal representations, and unifying diverse modes of scientific reasoning. Tackling these challenges will require collaboration between AI researchers, scientists across domains, and philosophers of science. While fully autonomous AI scientists may still be far off, nearer-term progress could produce powerful AI assistants to augment human scientific capabilities. Such tools could help scientists navigate the ever-growing scientific literature, brainstorm ideas, generate novel hypotheses, design experiments, and find unexpected patterns in complex experimental data. By pursuing this research agenda, the machine learning and AI community has an opportunity to develop systems that do not just automate product-related tasks, but actively push forward the frontiers of human scientific knowledge. The path will be challenging, but the potential rewards - both scientific and technological - are immense.

References

Abeer, A. N.; Urban, N. M.; Weil, M. R.; Alexander, F. J.; and Yoon, B.-J. 2024. Multi-objective latent space optimization of generative molecular design models. Patterns.
Ahmed, K.; Teso, S.; Chang, K.-W.; Van den Broeck, G.; and Vergari, A. 2022. Semantic probabilistic layers for neuro-symbolic learning. Advances in Neural Information Processing Systems, 35: 29944–29959.
Arlt, S.; Duan, H.; Li, F.; Xie, S. M.; Wu, Y.; and Krenn, M. 2024. Meta-Designing Quantum Experiments with Language Models. arXiv preprint arXiv:2406.02470.
Baldi, P.; Sadowski, P.; and Whiteson, D. 2014. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5(1): 4308.
Beltagy, I.; Lo, K.; and Cohan, A. 2019. SciBERT: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676.
Biggio, L.; Bendinelli, T.; Neitz, A.; Lucchi, A.; and Parascandolo, G. 2021. Neural symbolic regression that scales. In International Conference on Machine Learning, 936–945. PMLR.
Birhane, A.; Kasirzadeh, A.; Leslie, D.; and Wachter, S. 2023. Science in the age of large language models. Nature Reviews Physics, 5(5): 277–280.
Böhme, S.; and Nipkow, T. 2010. Sledgehammer: judgement day. In Automated Reasoning: 5th International Joint Conference, IJCAR 2010, Edinburgh, UK, July 16-19, 2010. Proceedings 5, 107–121. Springer.
Boiko, D. A.; MacKnight, R.; Kline, B.; and Gomes, G. 2023. Autonomous chemical research with large language models. Nature, 624(7992): 570–578.
Bommasani, R.; Hudson, D. A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M. S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
Brown, T. B. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
Bubeck, S.; Chandrasekaran, V.; Eldan, R.; Gehrke, J.; Horvitz, E.; Kamar, E.; Lee, P.; Lee, Y. T.; Li, Y.; Lundberg, S.; et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.
Cai, T.; Merz, G. W.; Charton, F.; Nolte, N.; Wilhelm, M.; Cranmer, K.; and Dixon, L. J. 2024. Transforming the bootstrap: Using transformers to compute scattering amplitudes in planar N=4 super Yang-Mills theory. Machine Learning: Science and Technology.
Callaway, E. 2024. Major AlphaFold upgrade offers boost for drug discovery. Nature, 629(8012): 509–510.
Carlini, N.; Ippolito, D.; Jagielski, M.; Lee, K.; Tramer, F.; and Zhang, C. 2022. Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646.
Carlini, N.; Tramer, F.; Wallace, E.; Jagielski, M.; Herbert-Voss, A.; Lee, K.; Roberts, A.; Brown, T.; Song, D.; Erlingsson, U.; et al. 2021. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), 2633–2650.
Castro, E.; Godavarthi, A.; Rubinfien, J.; Givechian, K.; Bhaskar, D.; and Krishnaswamy, S. 2022. Transformer-based protein generation with regularized latent space optimization. Nature Machine Intelligence, 4(10): 840–851.
Chen, A.; Wang, Z.; Vidaurre, K. L. L.; Han, Y.; Ye, S.; Tao, K.; Wang, S.; Gao, J.; and Li, J. 2024. Knowledge-Reuse Transfer Learning Methods in Molecular and Material Science. arXiv preprint arXiv:2403.12982.
Cornelio, C.; Dash, S.; Austel, V.; Josephson, T. R.; Goncalves, J.; Clarkson, K. L.; Megiddo, N.; El Khadir, B.; and Horesh, L. 2023. Combining data and theory for derivable scientific discovery with AI-Descartes. Nature Communications, 14(1): 1777.
Cranmer, M.; Greydanus, S.; Hoyer, S.; Battaglia, P.; Spergel, D.; and Ho, S. 2020a. Lagrangian neural networks. arXiv preprint arXiv:2003.04630.
Cranmer, M.; Sanchez Gonzalez, A.; Battaglia, P.; Xu, R.; Cranmer, K.; Spergel, D.; and Ho, S. 2020b. Discovering symbolic models from deep learning with inductive biases. Advances in Neural Information Processing Systems, 33: 17429–17442.
De Raedt, L.; and Kimmig, A. 2015. Probabilistic (logic) programming concepts. Machine Learning, 100: 5–47.
De Raedt, L.; Kimmig, A.; and Toivonen, H. 2007. ProbLog: a probabilistic Prolog and its application in link discovery. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI'07, 2468–2473. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Edwards, C.; Zhai, C.; and Ji, H. 2021. Text2Mol: Cross-modal molecule retrieval with natural language queries. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 595–607.
Gao, B.; Qiang, B.; Tan, H.; Jia, Y.; Ren, M.; Lu, M.; Liu, J.; Ma, W.-Y.; and Lan, Y. 2024. DrugCLIP: Contrastive protein-molecule representation learning for virtual screening. Advances in Neural Information Processing Systems, 36.
Garcez, A. d.; and Lamb, L. C. 2023. Neurosymbolic AI: The 3rd wave. Artificial Intelligence Review, 56(11): 12387–12406.
Ghafarollahi, A.; and Buehler, M. J. 2024a. AtomAgents: Alloy design and discovery through physics-aware multi-modal multi-agent artificial intelligence. arXiv preprint arXiv:2407.10022.
Ghafarollahi, A.; and Buehler, M. J. 2024b. SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning. arXiv preprint arXiv:2409.05556.
Golkar, S.; Pettee, M.; Eickenberg, M.; Bietti, A.; Cranmer, M.; Krawezik, G.; Lanusse, F.; McCabe, M.; Ohana, R.; Parker, L.; et al. 2023. xVal: A continuous number encoding for large language models. arXiv preprint arXiv:2310.02989.
Gu, Y.; Tinn, R.; Cheng, H.; Lucas, M.; Usuyama, N.; Liu, X.; Naumann, T.; Gao, J.; and Poon, H. 2021. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1): 1–23.
Han, J. M.; Rute, J.; Wu, Y.; Ayers, E. W.; and Polu, S. 2021. Proof artifact co-training for theorem proving with language models. arXiv preprint arXiv:2102.06203.
Hendrycks, D.; Burns, C.; Kadavath, S.; Arora, A.; Basart, S.; Tang, E.; Song, D.; and Steinhardt, J. 2021. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874.
Huang, J.; and Chang, K. C.-c. 2023. Towards Reasoning in Large Language Models: Survey, Implication, and Reflection. In The 61st Annual Meeting of the Association for Computational Linguistics.
Huang, K.; Qu, Y.; Cousins, H.; Johnson, W. A.; Yin, D.; Shah, M.; Zhou, D.; Altman, R.; Wang, M.; and Cong, L. 2024. CRISPR-GPT: An LLM agent for automated design of gene-editing experiments. arXiv preprint arXiv:2404.18021.
Ji, H.; Wang, Q.; Downey, D.; and Hope, T. 2024. SCIMON: Scientific Inspiration Machines Optimized for Novelty. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 279–299.
Jiang, A. Q.; Welleck, S.; Zhou, J. P.; Lacroix, T.; Liu, J.; Li, W.; Jamnik, M.; Lample, G.; and Wu, Y. 2023. Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs. In The Eleventh International Conference on Learning Representations.
Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A.; et al. 2021. Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873): 583–589.
Kamienny, P.-A.; d'Ascoli, S.; Lample, G.; and Charton, F. 2022. End-to-end symbolic regression with transformers. Advances in Neural Information Processing Systems, 35: 10269–10281.
2022. Hy- pertree proof search for neural theorem proving. Advances in neural information processing systems, 35: 26337–26349. Landajuela, M.; Lee, C. S.; Yang, J.; Glatt, R.; Santiago, C. P.; Aravena, I.; Mundhenk, T.; Mulcahy, G.; and Petersen, B. K. 2022. A unified framework for deep symbolic regres- sion. Advances in Neural Information Processing Systems, 35: 33985–33998. Lee, J.; Yoon, W.; Kim, S.; Kim, D.; Kim, S.; So, C. H.; and Kang, J. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinfor- matics, 36(4): 1234–1240. Liu, P.; Ren, Y.; Tao, J.; and Ren, Z. 2024a. Git-mol: A multi-modal large language model for molecular science with graph, image, and text. Computers in biology and medicine, 171: 108073. Liu, Z.; Wang, Y.; Vaidya, S.; Ruehle, F.; Halver- son, J.; Soljaˇci´c, M.; Hou, T. Y.; and Tegmark, M. 2024b. Kan: Kolmogorov-arnold networks. arXiv preprint arXiv:2404.19756. Lu, C.; Lu, C.; Lange, R. T.; Foerster, J.; Clune, J.; The ai scientist: Towards fully au- and Ha, D. 2024. arXiv preprint tomated open-ended scientific discovery. arXiv:2408.06292. Lu, P.; Mishra, S.; Xia, T.; Qiu, L.; Chang, K.-W.; Zhu, S.- C.; Tafjord, O.; Clark, P.; and Kalyan, A. 2022. Learn to explain: Multimodal reasoning via thought chains for sci- ence question answering. Advances in Neural Information Processing Systems, 35: 2507–2521. Luo, R.; Sun, L.; Xia, Y.; Qin, T.; Zhang, S.; Poon, H.; and Liu, T.-Y. 2022. BioGPT: generative pre-trained trans- former for biomedical text generation and mining. Briefings in bioinformatics, 23(6): bbac409. M. Bran, A.; Cox, S.; Schilter, O.; Baldassari, C.; White, A. D.; and Schwaller, P. 2024. Augmenting large language models with chemistry tools. Nature Machine Intelligence, 1–11. Ma, P.; Wang, T.-H.; Guo, M.; Sun, Z.; Tenenbaum, J. B.; Rus, D.; Gan, C.; and Matusik, W. 2024. LLM and Sim- ulation as Bilevel Optimizers: A New Paradigm to Ad- vance Physical Scientific Discovery. In Salakhutdinov, R.; Kolter, Z.; Heller, K.; Weller, A.; Oliver, N.; Scarlett, J.; and Berkenkamp, F., eds., Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceed- ings of Machine Learning Research, 33940–33962. PMLR. MacColl, H. 1897. Symbolic reasoning. Mind, 6(24): 493– 510. Mak, K.-K.; Wong, Y.-H.; and Pichika, M. R. 2023. Artifi- cial intelligence in drug discovery and development. Drug Discovery and Evaluation: Safety and Pharmacokinetic As- says, 1–38. Meidani, K.; Shojaee, P.; Reddy, C. K.; and Farimani, A. B. 2024. SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training. In The Twelfth Interna- tional Conference on Learning Representations. Merchant, A.; Batzner, S.; Schoenholz, S. S.; Aykol, M.; Cheon, G.; and Cubuk, E. D. 2023. Scaling deep learning for materials discovery. Nature, 624(7990): 80–85. Miret, S.; and Krishnan, N. 2024. Are LLMs Ready arXiv preprint for Real-World Materials Discovery? arXiv:2402.05200. Park, N. H.; Callahan, T. J.; Hedrick, J. L.; Erdmann, T.; and Capponi, S. 2024. Leveraging Chemistry Foundation Models to Facilitate Structure Focused Retrieval Augmented Generation in Multi-Agent Workflows for Catalyst and Ma- terials Design. arXiv preprint arXiv:2408.11793. Polu, S.; and Sutskever, I. 2020. Generative language modeling for automated theorem proving. arXiv preprint arXiv:2009.03393. Pyzer-Knapp, E. O.; Pitera, J. W.; Staar, P. W.; Takeda, S.; Laino, T.; Sanders, D. P.; Sexton, J.; Smith, J. R.; and Curi- oni, A. 2022. 
Accelerating materials discovery using artifi- cial intelligence, high performance computing and robotics. npj Computational Materials, 8(1): 84. Sadybekov, A. V.; and Katritch, V. 2023. Computational approaches streamlining drug discovery. Nature, 616(7958): 673–685. Xu, M.; Yuan, X.; Miret, S.; and Tang, J. 2023. Protst: Multi- modality learning of protein sequences and biomedical texts. In International Conference on Machine Learning, 38749– 38767. PMLR. Yang, K.; Swope, A.; Gu, A.; Chalamala, R.; Song, P.; Yu, S.; Godil, S.; Prenger, R. J.; and Anandkumar, A. 2024. Leandojo: Theorem proving with retrieval-augmented lan- guage models. Advances in Neural Information Processing Systems, 36. Zhang, D.; Hu, Z.; Zhoubian, S.; Du, Z.; Yang, K.; Wang, Z.; Yue, Y.; Dong, Y.; and Tang, J. 2024. SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning. arXiv:2401.07950. Zheng, W.; Li, J.; and Zhang, Y. 2023. Desirable molecule discovery via generative latent space exploration. Visual In- formatics, 7(4): 13–21. Schmidt, M.; and Lipson, H. 2009. Symbolic regression of implicit equations. In Genetic programming theory and practice VII, 73–85. Springer. Segler, M. H.; Preuss, M.; and Waller, M. P. 2018. Planning chemical syntheses with deep neural networks and symbolic AI. Nature, 555(7698): 604–610. Sheth, A.; Roy, K.; and Gaur, M. 2023. Neurosymbolic ar- tificial intelligence (why, what, and how). IEEE Intelligent Systems, 38(3): 56–62. Shojaee, P.; Meidani, K.; Barati Farimani, A.; and Reddy, C. 2024a. Transformer-based planning for symbolic regression. Advances in Neural Information Processing Systems, 36. Shojaee, P.; Meidani, K.; Gupta, S.; Farimani, A. B.; and Reddy, C. K. 2024b. Llm-sr: Scientific equation discov- ery via programming with large language models. arXiv preprint arXiv:2404.18400. Si, C.; Yang, D.; and Hashimoto, T. 2024. Can llms generate novel research ideas? a large-scale human study with 100+ nlp researchers. arXiv preprint arXiv:2409.04109. Singhal, K.; Azizi, S.; Tu, T.; Mahdavi, S. S.; Wei, J.; Chung, H. W.; Scales, N.; Tanwani, A.; Cole-Lewis, H.; Pfohl, S.; et al. 2023. Large language models encode clinical knowl- edge. Nature, 620(7972): 172–180. Topol, E. J. 2023. As artificial intelligence goes multimodal, medical applications multiply. Udrescu, S.-M.; Tan, A.; Feng, J.; Neto, O.; Wu, T.; and Tegmark, M. 2020. AI Feynman 2.0: Pareto-optimal sym- bolic regression exploiting graph modularity. Advances in Neural Information Processing Systems, 33: 4860–4871. Udrescu, S.-M.; and Tegmark, M. 2020. AI Feynman: A physics-inspired method for symbolic regression. Science Advances, 6(16): eaay2631. Wang, H.; Fu, T.; Du, Y.; Gao, W.; Huang, K.; Liu, Z.; Chan- dak, P.; Liu, S.; Van Katwyk, P.; Deac, A.; et al. 2023a. Sci- entific discovery in the age of artificial intelligence. Nature, 620(7972): 47–60. Wang, H.; Yuan, Y.; Liu, Z.; Shen, J.; Yin, Y.; Xiong, J.; Xie, E.; Shi, H.; Li, Y.; Li, L.; et al. 2023b. Dt-solver: Auto- mated theorem proving with dynamic-tree sampling guided In Proceedings of the 61st by proof-level value function. Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), 12632–12646. Wang, R.; Zelikman, E.; Poesia, G.; Pu, Y.; Haber, N.; and Goodman, N. 2024. Hypothesis Search: Inductive Reason- In The Twelfth International ing with Language Models. Conference on Learning Representations. Wang, X.; Hu, Z.; Lu, P.; Zhu, Y.; Zhang, J.; Subrama- niam, S.; Loomba, A. 
R.; Zhang, S.; Sun, Y.; and Wang, W. 2023c. Scibench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635. Wu, Z.; Qiu, L.; Ross, A.; Aky¨urek, E.; Chen, B.; Wang, B.; Kim, N.; Andreas, J.; and Kim, Y. 2023. Reasoning or reciting? exploring the capabilities and limitations of lan- guage models through counterfactual tasks. arXiv preprint arXiv:2307.02477.
ai_researcher
1
A_Mobile_Electronic_Health_Record-Connected_Application_for_Managing_Team_Workflows_in_Inpatient_Care.pdf
COMPARING THE IMPACT OF MOBILE NODES ARRIVAL PATTERNS IN MANETS USING POISSON AND PARETO MODELS
John Tengviel1 and K. Diawuo2
1Department of Computer Science, Sunyani Polytechnic, Sunyani, Ghana, [email protected]
2Department of Computer Engineering, KNUST, Kumasi, Ghana, [email protected]

ABSTRACT
Mobile Ad hoc Networks (MANETs) are dynamic networks populated by mobile stations, or mobile nodes (MNs). Mobility modelling is a hot topic in many areas, for example protocol evaluation and network performance analysis. How to simulate MN mobility is the problem to consider when building an accurate mobility model. New nodes can join and other nodes can leave the network, so the topology is dynamic. Specifically, MANETs consist of a collection of nodes randomly placed along a line (not necessarily straight). MANETs appear in many real-world network applications, such as vehicular MANETs built along a highway in a city environment, or groups of people in a particular location. MNs in MANETs are usually laptops, PDAs or mobile phones. This paper presents comparative results obtained via Matlab software simulation. The study investigates the impact of predictive mobility models on mobile node parameters, such as the arrival rate and the number of mobile nodes in a given area, using the Pareto and Poisson distributions. The results indicate that the arrival rate of mobile nodes may influence the MN population (its size) in a location. The Pareto distribution is more reflective of mobility modelling for MANETs than the Poisson distribution.

KEYWORDS
Mobility Models, MANETs, Mobile Nodes Distribution, Arrival Patterns, Pareto Distribution, Poisson Distribution, Matlab Simulation.

1. INTRODUCTION
A Mobile Ad-hoc Network (MANET) is a collection of wireless mobile nodes configured to communicate amongst each other without the aid of an existing infrastructure. MANETs are multi-hop wireless networks, since one node may not be in direct communication range of another node. In such cases the data from the original sender has to travel a number of hops (a hop is one communication link) to reach the destination. The intermediate nodes act as routers and forward the data packets until the destination is reached [1]. Recently, with the deployment of all kinds of wireless devices, wireless communication is becoming more important. In this research area, ad hoc networking is a hot topic that has attracted much research attention. A wireless ad hoc network is a decentralized wireless network. The network is ad hoc because it does not rely on a pre-existing infrastructure, such as routers in wired networks or access points in managed (infrastructure) wireless networks. Instead, each node participates in routing by forwarding data for other nodes, so the determination of which nodes forward data is made dynamically based on the network connectivity [2]. There are different kinds of routing protocols, defined by how messages are sent from the source node to the destination node. On this basis, it is reasonable to consider node mobility an essential topic in ad hoc networking. With an accurate mobility model that represents node movement, designers can evaluate the performance of protocols, predict user distribution, plan network resource allocation, and so on. Such models can also be used in healthcare, traffic control, rescue missions, and similar areas. Ad hoc networks are viewed as suitable for all situations in which temporary communication is desired.
The technology was initially developed with military applications in mind [3], such as a battlefield in unknown territory, where an infrastructure network is almost impossible to set up or maintain. In such situations, ad hoc networks, with their self-organizing capability [4], can be used effectively where other technologies either fail or cannot be deployed. The entire network is mobile, and the individual terminals are allowed to move freely. Since the nodes are mobile, the network topology is dynamic, which leads to frequent and unpredictable connectivity changes. In this dynamic topology, some pairs of terminals may not be able to communicate directly with each other and have to rely on other terminals to deliver messages to their destinations. Such networks are often referred to as multi-hop or store-and-forward networks [5].

This paper presents a study of mobile node arrival patterns in MANETs using Poisson and Pareto models. Though not very realistic from a practical point of view, a model based on the exponential distribution can still provide useful insight into the mobile node arrival pattern. Section 2 gives a brief review of MANET studies. Section 3 introduces the Poisson and Pareto distribution models. The simulation procedure and the parameters considered are presented in Section 4. The results are discussed in Section 5, and Section 6 concludes the paper and points to further research.

2. RELATED WORKS
Currently there are two types of mobility models used in network simulation [6, 7]: traces and synthetic models. Traces are mobility patterns observed in real-life systems. They provide accurate information, especially when they involve a large number of mobile nodes (MNs) and an appropriately long observation period. Synthetic models, on the other hand, attempt to realistically represent the behaviour of MNs without the use of traces. They are divided into two categories: entity mobility models and group mobility models [1, 8, 9]. Entity mobility models randomize the movements of each individual node and represent MNs whose movements are independent of each other. Group mobility models, in contrast, consider sets of nodes that stay close to each other, randomize the movement of the group as a whole, and represent MNs whose movements depend on each other. The node positions may also vary randomly around the group reference point. In [10], mobility in ad hoc networks is approximated by pedestrians in the street who exchange content (multimedia files, mp3, etc.) with their handsets whilst walking at relatively low speed. Some researchers have proposed basic mobility models, such as Random Walk and Random Waypoint [3, 4], for performance comparison of various routing protocols. The concern with these basic models is that they represent specific scenarios not often found in real life, so their use in ad hoc network studies is limited. The Random Walk and Random Waypoint models, though simple and elegant, produce random entry points into a location with a scattered pattern around the simulation area, sudden stops, and sharp turns. In real life, this may not be the case.

3. MODELS OF STUDY
3.1. Poisson Arrival Distribution (Number of Nodes)
When arrivals occur at random, the information of interest is the probability of n arrivals in a given time period, where n = 0, 1, 2, …. Let λ be a constant representing the average arrival rate of nodes, and consider a small time interval ∆t, with ∆t → 0. The assumptions for this process are as follows: the probability of one arrival in an interval of ∆t seconds, say (t, t+∆t), is λ∆t, independent of arrivals in any time interval not overlapping (t, t+∆t); and the probability of no arrivals in ∆t seconds is 1 − λ∆t. Under these conditions, it can be shown that the probability of exactly n nodes arriving during an interval of length t is given by the Poisson distribution law [11] in equation (1):

P_n(t) = \frac{(\lambda t)^n}{n!} e^{-\lambda t}, \quad n = 0, 1, 2, \ldots \quad (1)

The assumption of Poisson MN arrivals also implies a distribution of the time intervals between the arrivals of successive MNs at a location.
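To make the arrival model concrete, the short Python sketch below evaluates equation (1). It is our own illustration, not part of the paper's Matlab simulation, and the function name is ours:

```python
import math

def poisson_arrival_prob(n: int, lam: float, t: float) -> float:
    """Probability of exactly n node arrivals in an interval of length t
    under the Poisson law of equation (1), with average arrival rate lam."""
    return (lam * t) ** n / math.factorial(n) * math.exp(-lam * t)

# Example: with lam = 0.5 arrivals per unit time, probabilities of 0..4
# arrivals in an interval of length t = 2.
for n in range(5):
    print(n, round(poisson_arrival_prob(n, lam=0.5, t=2.0), 4))
```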
3.2. Pareto Distribution
The Pareto distributions [12-14] are characterized by two parameters, α and β. The parameter α is called the shape parameter and determines the heavy-tailed characteristics, while β is called the cutoff or location parameter and determines the average inter-arrival time (β = 1 in the one-parameter case). The node arrival times of the Pareto distribution are independent and identically distributed: each arrival time has the same probability distribution as the other arrival times, and all are mutually independent. The two main parameters of the Pareto process are thus the shape parameter and the scale parameter. For the one-parameter Pareto distribution (shape only), the distribution function can be written as equation (2):

F(x) = 1 - x^{-\alpha}, \quad x \ge 1 \quad (2)

and its pdf is given in equation (3):

f(x) = \alpha x^{-(\alpha + 1)}, \quad x \ge 1 \quad (3)

The two-parameter Pareto distribution function is given in equation (4):

F(x) = 1 - \left(\frac{\beta}{x}\right)^{\alpha}, \quad x \ge \beta \quad (4)

and its pdf in equation (5):

f(x) = \begin{cases} \dfrac{\alpha \beta^{\alpha}}{x^{\alpha+1}}, & x \ge \beta \\ 0, & x < \beta \end{cases} \quad (5)

4. METHODOLOGY
4.1. Varying α in the Pareto Arrival Distribution
We model the arrival distribution of the MN population using Pareto distributions.

Table 1: Varying α parameter values
Scenario:  1    2    3    4    5
α (B):     0.3  0.4  0.5  0.8  0.9

For the simulations, the varying α values in Table 1 were considered. The heavy tail is modelled by a Pareto distribution, and the main principle can be attributed to the number of nodes. We performed the simulations for this range of parameter values for both the one-parameter and two-parameter Pareto models.

4.2. Varying Arrival Rates for the Node Distribution
The arrival pattern of mobile nodes has an impact on the performance of the network. In this scope, we analyse the effect of the arrival distribution on the MN population in a given area using the Poisson distribution of equation (1). In most real-world MANETs, the node population in an area of interest varies with time, so it is necessary to investigate the impact of MN arrivals on MANET mobility. The simulation area does not change as the arrival rate changes. The different arrival rates considered in this study are shown in Table 2.

Table 2: Varying arrival rates
Scenario:      1    2    3    4    5
Arrival rate:  0.3  0.4  0.5  0.8  0.9

During the simulation, nodes were allowed to enter the location from a common source (0 degrees), not from different sources. The number of MNs entering the location was assumed to be Poisson distributed with varying arrival rates.
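The pdf comparison behind the figures of the next section can be reproduced with a short script. The sketch below is a minimal Python re-implementation of the study's Matlab setup under our own function names (the parameter values follow Tables 1-2); it is an illustration, not the original simulation code:

```python
import numpy as np

def pareto_pdf(x, alpha, beta=1.0):
    """Two-parameter Pareto pdf of equation (5); beta=1 gives the one-parameter case."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= beta, alpha * beta**alpha / x**(alpha + 1), 0.0)

def exponential_pdf(x, lam):
    """Exponential pdf used as the baseline comparison."""
    x = np.asarray(x, dtype=float)
    return lam * np.exp(-lam * x)

x = np.linspace(1, 20, 200)  # x: number of nodes
for alpha in (0.3, 0.4, 0.5, 0.8, 0.9):
    pareto_tail = pareto_pdf(x, alpha)[-1]          # density at x = 20
    expo_tail = exponential_pdf(20.0, alpha)
    print(f"alpha={alpha}: Pareto tail at x=20 is {pareto_tail:.4f}, "
          f"exponential tail is {expo_tail:.2e}")
```

Running this shows the Pareto tail decaying far more slowly than the exponential, which is the effect discussed below.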
5. RESULTS AND DISCUSSION
5.1. Comparative Study Using the Pareto Arrival Pattern
In this section, the effect of arrival rates on the MN distribution and population in a defined location is analysed, as shown in Figure 1. It was observed that as the arrival rates increased, the number of MNs also increased, but only up to a certain limit. This indicates that every location has a limit, or capacity, of MNs it can contain.

Figure 1: Single-parameter Pareto for varying values of B, and the exponential distribution, for twenty nodes [pdf against x (number of nodes); Pareto with B = 0.5, 0.8, 1, 1.5, 2 at k = 1; exponential with λ = 0.5, 0.8, 1, 1.5].

Figure 1 may indicate that the exponential distribution is higher than the single-parameter Pareto in the initial stages, but as time progresses the exponential decreases quickly to zero. The single-parameter Pareto overtakes the exponential as the number of nodes increases, an indication that the single-parameter Pareto performs better than the exponential distribution. The Pareto distribution may show a tail that decays much more slowly than the exponential distribution. Alpha is the shape parameter, which determines the characteristic "decay" of the distribution (the tail index), and A is the location parameter, which defines the minimum value of x (the number of nodes).

Figure 2: Two-parameter Pareto for varying B values, and the exponential distribution [same axes and parameter values as Figure 1].

In Figure 2 the comparison between the two-parameter Pareto and exponential distributions is illustrated. It is evident that the two-parameter Pareto outweighs the exponential distribution as the number of MNs increases. The exponential distributions decay very quickly and finally reach the x-axis, unlike the two-parameter Pareto distribution, where some of the arrival-rate curves have not decayed to zero. The two-parameter Pareto also performed better than the one-parameter Pareto, since some of its curves did not decay to zero. The long-tailed nature of the two-parameter Pareto helped to clear out congestion in a location when the arrival rate was small, and the reverse was also true.

5.2. Effect of Varying Arrival Rates
In Figure 3, the effect of varying node arrival rates is computed using the Poisson model. Nodes may arrive at a location either in some regular pattern or in a totally random fashion. The arrival rates were shown to impact the number of nodes in a particular location, although every location has a limited capacity. A high number of nodes typically translates into a higher average number of neighbours per node, which influences route availability.

Figure 3: Probability distribution of nodes arriving during time t, for twenty nodes and varying arrival rates (0.3, 0.4, 0.5, 0.8, 0.9).

In reality, the total connection time of a node over a specific interval depends on the node encounter rate and the time in each encounter, both of which depend on the relative mobility of the nodes. Although a high node arrival rate results in more node encounters, the network eventually becomes congested. The implication of this relationship is that nodes will be tightly packed (high density) if their arrival rate is high (congestion), whereas if the arrival rate is lower the nodes will be farther apart (low density). For instance, it is clear that there is some congestion among arriving MNs, since they have to follow certain holding paths.

As the arrival rate increases, the shape of the distribution changes dramatically to a more symmetrical ("normal") form, and the probability of a larger number of arrivals increases with an increasing number of MNs. An interesting observation is that as the arrival rate increases, the properties of the Poisson distribution approach those of the normal distribution, as in Figure 3. The first arrival processes of nodes give higher contact probabilities at higher arrival rates; this is due to the contiguity of the nodes to one another, which makes mobility difficult. In practice, one may record the actual number of arrivals over a period and then compare the frequency distribution of the observed number of arrivals to the Poisson distribution to investigate how well it approximates the arrival distribution.

6. CONCLUSION
The arrival patterns have shown some impact on the network population: as the arrival rate increases, the MN population also increases to a peak and then decays rapidly towards the x-axis. It was realized that the Poisson distribution is not a good fit for the arrival distribution; therefore the Pareto distribution was considered. It came out clearly that the Pareto distribution suits the arrival distribution, especially the two-parameter Pareto distribution, which performed better than the single-parameter Pareto and exponential distributions, even though in the earlier stages the exponential performed better than the single-parameter Pareto, with a faster decay. It must be admitted that mobility modelling in MANETs is difficult work. It is an interesting research area that has been growing in recent years. Its difficulty is mainly generated by the continuous changes in the network topology over time. Because of these topological changes, mobility techniques developed for infrastructure-based networks may not be directly applicable to mobile ad hoc networks. We have investigated, through simulation, mobility prediction of MNs using the queueing model.

REFERENCES
[1] J. Boleng, T. Camp, and V. Tolety, "A Survey of Mobility Models for Ad hoc Network Research", Wireless Communication and Mobile Computing (WCMC), Vol. 2, No. 5, pp. 483-502, 2002.
[2] Subir Kumar Sarkar, T. G. Basavaraju, C. Puttamadappa, "Mobility Models for Mobile Ad Hoc Networks", 2007, pp. 267-277, Auerbach Publications, www.auerbach-publications.com.
[3] C. Rajabhushanam and A. Kathirvel, "Survey of Wireless MANET Application in Battlefield Operations", (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 2, No. 1, January 2011.
[4] L. Buttyan and J. P. Hubaux, "Stimulating cooperation in self-organizing mobile ad hoc networks", Mobile Networks and Applications: Special Issue on Mobile Ad Hoc Networks, 8(5), 2003.
[5] C. P. Agrawal, O. P. Vyas and M. K. Tiwari, "Evaluation of Varying Mobility Models & Network Loads on DSDV Protocol of MANETs", International Journal on Computer Science and Engineering, Vol. 1(2), 2009, pp. 40-46.
[6] P. N. Pathirana, A. V. Savkin and S. K. Jha, "Mobility modeling and trajectory prediction for cellular networks with mobile base stations", MobiHoc 2003, pp. 213-221.
[7] Mohd Izuan Mohd Saad and Zuriati Ahmad Zukarnain, "Performance Analysis of Random-Based Mobility Models in MANET Routing Protocol", EuroJournals Publishing, Inc.,
2009, ISSN 1450-216X, Vol. 32, No. 4 (2009), pp. 444-454, http://www.eurojournals.com/ejsr.htm.
[8] Zainab R. Zaidi and Brian L. Mark, "A Distributed Mobility Tracking Scheme for Ad-Hoc Networks Based on an Autoregressive Model", The 6th International Workshop on Distributed Computing, Kolkata, India (2004), pp. 447-458.
[9] Abdullah, Sohail Jabbar, Shaf Alam and Abid Ali Minhas, "Location Prediction for Improvement of Communication Protocols in Wireless Communications: Considerations and Future Directions", Proceedings of the World Congress on Engineering and Computer Science 2011, Vol. II, WCECS 2011, October 19-21, 2011, San Francisco, USA.
[10] Gunnar Karlsson et al., "A Mobility Model for Pedestrian Content Distribution", SIMUTools '09 Workshops, March 2-6, 2009, Rome, Italy.
[11] John Tengviel, K. A. Dotche and K. Diawuo, "The Impact of Mobile Nodes Arrival Patterns in MANETs Using Poisson Models", International Journal of Managing Information Technology (IJMIT), Vol. 4, No. 3, August 2012, pp. 55-71.
[12] Martin J. Fischer and Carl M. Harris, "A Method for Analysing Congestion in Pareto and Related Queues", pp. 15-18.
[13] K. Krishnamoorthy, "Handbook of Statistical Distributions with Applications", University of Louisiana at Lafayette, U.S.A., pp. 257-261.
[14] Kyunghan Lee, Seongik Hong, Seong Joon Kim, Injong Rhee and Song Chong, "SLAW: Self-Similar Least-Action Human Walk", Proceedings of the IEEE Conference on Computer Communications (INFOCOM), Rio de Janeiro, Brazil, April 19-25, 2009, pp. 855-863.

Authors
John Tengviel holds a BSc in Computer Science from Kwame Nkrumah University of Science and Technology (KNUST), obtained in 2001, and an MSc in Telecommunication Engineering from the College of Engineering of the same university, obtained in 2012. He is currently a Lecturer with the Department of Computer Science at Sunyani Polytechnic. His research interests include Mobile Ad hoc Networks, Wireless Communication, Mobility Modeling, and Database Systems.
Nana (Dr.) Kwasi Diawuo is a senior lecturer in the Department of Computer Engineering at Kwame Nkrumah University of Science and Technology (KNUST), Kumasi, Ghana. He earned a BSc (Electrical/Electronic Engineering) from KNUST, as well as an MSc and a PhD, and holds an MGhIE. He is a member of the Institution of Electrical and Electronic Engineers (IEEE) and of the IEEE Computer Society.
ai_researcher
2
The_Llama_3_Herd_of_Models.pdf
Herding LLaMaS: Using LLMs as an OS Module
Aditya K Kamath*, University of Washington, Seattle, Washington, USA, [email protected]
Sujay Yadalam*, University of Wisconsin-Madison, Madison, Wisconsin, USA, [email protected]

1 INTRODUCTION
Computer systems are becoming increasingly heterogeneous with the emergence of new memory technologies and compute devices. GPUs alongside CPUs have become commonplace and CXL is poised to be a mainstay of cloud systems. The operating system is responsible for managing these hardware resources, requiring modification every time a new device is released. Years of research and development are sunk into tuning the OS for high performance with each new heterogeneous device [1-4, 9, 10, 12-14]. With the recent explosion in memory technologies and domain-specific accelerators, it would be beneficial to have an OS that could provide high performance for new devices without significant effort.

We propose LLaMaS, which can adapt to new devices easily. LLaMaS uses Large Language Models (LLMs) to extract the useful features of new devices from their textual description and uses these features to make operating system decisions at runtime. Adding support to LLaMaS for a new device is as simple as describing the system and new device properties in plaintext. LLaMaS reduces the burden on system administrators and enables easy integration of new devices into production systems. Preliminary evaluation using ChatGPT [11] shows that LLMs are capable of extracting device features from text and making correct OS decisions based on those features.

2 BURDEN OF HETEROGENEITY ON THE OS
The end of Moore's law and Dennard scaling has made the use of heterogeneous systems necessary. Modern high-performance systems are embracing heterogeneity in both memory and compute. These systems combine the best properties of different memory technologies to optimize for latency, bandwidth, capacity, and cost. For processors, the adoption of GPUs and other domain-specific accelerators (DSAs) has helped push the boundaries of compute.

Different applications exhibit different memory requirements, necessitating a diverse set of memory devices to satisfy all of them. A modern HPC system could be connected to local DRAM and NVM, and have disaggregated memory over CXL. Non-volatile memory (NVM) [6] provides high capacities, but experiences read/write asymmetry as well as reduced bandwidth. Similarly, CXL provides greater memory capacity than on-board DRAM, at the expense of increased latencies and lower bandwidth [9].

Data-intensive applications like machine learning or scientific computations require high throughput that is not met by conventional architectures. This has led to the development of accelerators such as GPUs and specialized hardware. In the face of this explosive growth of diverse DSAs, each with their own unique API, significant effort has been put into unifying application development [7]. The RISC-V group endeavors to provide a unified ISA that can support the unique attributes that these different accelerators require [5]. On the compiler side, the MLIR project [8] provides an intermediate layer that allows developers to code in their language of choice and then compile the source code into optimized binaries for a chosen processor. In the face of these advancements, we envision a future where an application binary could be deployed on any processing device without programmer input.
The operating system (OS) would be tasked with selecting the optimal processing device for the application. The operating system is the gatekeeper between applications and hardware devices. Beyond providing minimal support for these devices, the OS must be aware of the different intricacies and characteristics under which the devices perform optimally, to remove the reliance on application programmers. This requirement of OS modification leads to a significant amount of research effort being spent on devising the best method for handling these devices. For example, there has been significant work on page placement for NVM [13, 14] and CXL [9, 10]. In addition, many works have explored techniques for managing data placement and replication for NUMA systems [1, 4]. Similarly, we foresee that significant effort will need to be made to allow the OS to select the optimal processing device. It would be beneficial to have an operating system that could adapt to any heterogeneous system quickly. Such an operating system would reduce the burden on researchers and system administrators. It would also reduce the effort required to integrate new devices into production systems.

3 OUR SYSTEM: HERDING LLAMAS
Our goal is to design an operating system that would be able to (1) adapt to new heterogeneous technologies while (2) requiring minimal intervention from the programmer/system administrator. To this end, we propose using Large Language Models as an OS Module (LLaMaS) for managing heterogeneous resources and devices (a.k.a., the herd)¹. Language models are a class of Natural Language Processing algorithms that aim to recognize, summarize, translate, predict and generate text. Large language models, or LLMs, are models trained on very large datasets with billions to trillions of parameters. OpenAI's GPT-3 has over 100 billion parameters and is trained on almost the entirety of the internet. The recent success of LLMs is attributed to few-shot or zero-shot learning. LLMs can solve various tasks by simply training the models on a few examples (few-shot) or by providing instructions describing the task (zero-shot). LLaMaS takes advantage of LLMs' ability to perform zero-shot learning. LLaMaS is able to flexibly adapt to new devices with quirky characteristics, requiring only a simple description of the system and hardware characteristics.

*Both authors contributed equally to this research.
¹It is worth noting that for our title "Herding LLaMaS", LLaMaS is responsible for managing the herd, and so is performing the herding, not being herded.

Figure 1: LLaMaS system design [the frontend LLM consumes the system description and, if available, the program binary or source code, and produces embeddings; the backend prediction model combines these embeddings with runtime signals such as hardware counters and page-table accesses to emit data placement, data movement, and compute scheduling decisions].

Figure 1 describes how LLaMaS functions. There are two main components of LLaMaS: (1) the frontend LLM and (2) the backend prediction model. The frontend LLM takes the system description as a text file, along with the program binary and/or source code if available. It processes them to identify the important features and patterns that could influence decisions during program execution, and generates embeddings that contain this information. The backend prediction model (BPM) is responsible for making runtime OS decisions. The embeddings generated by the frontend LLM guide the BPM on which heuristics influence decisions and when the decisions need to be made. For instance, the LLM could inform the BPM to migrate data when the accesses per second to a page exceed a certain value.
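As a concrete illustration of the kind of rule the BPM might apply, the sketch below implements the page-migration example from the paragraph above. It is our own minimal mock-up, not code from the paper; the names (MigrationHint, accesses_per_sec threshold, the "RAT" tier from the example device discussed next) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MigrationHint:
    """Hypothetical embedding-derived hint produced by the frontend LLM:
    move a page to the target tier once its access rate crosses a threshold."""
    threshold_accesses_per_sec: float
    target_tier: str  # e.g. "RAT" in the example device below

def bpm_decide(page_access_rates: dict[int, float], hint: MigrationHint) -> list[int]:
    """Backend prediction model step: return pages whose observed access rate
    (e.g. from hardware counters) exceeds the LLM-provided threshold."""
    return [page for page, rate in page_access_rates.items()
            if rate > hint.threshold_accesses_per_sec]

# Toy run: pages 7 and 42 are hot, so they would be migrated to the fast tier.
hint = MigrationHint(threshold_accesses_per_sec=1000.0, target_tier="RAT")
print(bpm_decide({7: 2500.0, 13: 40.0, 42: 1800.0}, hint))  # -> [7, 42]
```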
As an example, the frontend LLM may be informed that a system contains two types of memory: repetitive access tech (RAT), which provides low latency for frequently accessed data but high latency for infrequently accessed data, and DRAM, which provides predictable latencies for all data. Through the embeddings, it will inform the BPM that frequently accessed data should be kept in RAT. During an application's runtime, the hardware counters may indicate that data in DRAM is being frequently accessed, causing the BPM to move it to RAT. Through this process, the OS teaches itself how to handle new technology with minimal external support.

4 FEASIBILITY STUDY
To study the feasibility of LLaMaS, we conducted experiments using ChatGPT [11], a chatbot built on top of OpenAI's GPT-3.5 and GPT-4 language models. The goal of the experiments was to understand the abilities of a language model to extract insights from a textual description of device characteristics. The prompts to ChatGPT had two components: (1) a description of a heterogeneous system configuration and device properties, and (2) OS decisions to be made. Below we share the results of three such experiments. We first show the prompt to ChatGPT, then its response.

Experiment 1: Here, we wanted to ensure that LLMs could understand system descriptions to extract useful features.
Prompt: "System you manage has CPU and GPU. CPU is connected to DRAM and GPU is connected to HBM. For the GPU matrix multiplication program shown below, identify variables that should be allocated on DRAM and those on HBM."
ChatGPT: "Variable h_A is used to store the input matrix on the host, so allocate in host memory. Variable d_A is used to store the input matrix on the device, so allocate in device memory."
ChatGPT was able to successfully understand the system description. Since ChatGPT had been trained on data involving CPUs and GPUs, we didn't have to describe the system to it. It was able to understand the use of different variables and map them to the host or device memory.

Experiment 2: While the above experiment proved that LLMs could work well on data they were trained on, LLaMaS requires that the OS module adapt to new devices with unseen characteristics. Therefore, our second experiment imagined a heterogeneous system with a futuristic device.
Prompt: "System you manage has CPU connected to DRAM and Repetitive Access Technology (RAT) memories. RAT is a new memory technology that provides low latency for frequently accessed data. Given below program, identify variables that should be allocated in RAT memory."
ChatGPT: "RAT memory should be used for variables that are accessed repeatedly such as arrays or matrices. DRAM should be used for variables that are accessed randomly and infrequently. These variables should be allocated in RAT: priceArr, avg, ..."
ChatGPT was able to conceptualize the new memory technology and capture its properties. It identified the important features to consider while making memory placement decisions.

Experiment 3: Task placement. The above experiments demonstrated the abilities of LLMs to aid in memory management. We then tried to understand the abilities of LLMs to perform task mapping to different computational platforms.
Prompt: "You manage a system with a CPU of 24 cores and a GPU. For best performance, where would you run the following functions?"
ChatGPT: "Func_A involves matrix multiplication and is highly parallelizable. Therefore, we would prefer to run it on the GPU. On the other hand, Func_B involves pointer manipulation and memory operations, so we would prefer to run on the CPU."
ChatGPT was able to understand the functions' goals and properties (parallelizability, memory access patterns) and match them with the properties of the underlying hardware.
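The experimental methodology above, concatenating a plaintext system description with an OS question and querying a chat model, can be sketched as follows. This is our own illustrative sketch: query_llm is a placeholder standing in for whatever chat-completion API is used, not a real function from the paper, and the prompt layout is an assumption:

```python
def build_llamas_prompt(system_desc: str, program_text: str, question: str) -> str:
    """Compose the two-part prompt used in the feasibility study:
    (1) system/device description, (2) the OS decision to be made."""
    return f"{system_desc}\n\nProgram:\n{program_text}\n\nQuestion: {question}"

def query_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for a chat-completion API call")

system_desc = ("System you manage has CPU connected to DRAM and Repetitive Access "
               "Technology (RAT) memories. RAT is a new memory technology that "
               "provides low latency for frequently accessed data.")
question = "Identify variables that should be allocated in RAT memory."
# answer = query_llm(build_llamas_prompt(system_desc, program_source, question))
```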
REFERENCES
[1] Reto Achermann, Ashish Panwar, Abhishek Bhattacharjee, Timothy Roscoe, and Jayneel Gandhi. 2020. Mitosis: Transparently Self-Replicating Page-Tables for Large-Memory Machines. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems (Lausanne, Switzerland) (ASPLOS '20). Association for Computing Machinery, New York, NY, USA, 283-300. https://doi.org/10.1145/3373376.3378468
[2] Neha Agarwal, David Nellans, Mark Stephenson, Mike O'Connor, and Stephen W. Keckler. 2015. Page Placement Strategies for GPUs within Heterogeneous Memory Systems. In Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems (Istanbul, Turkey) (ASPLOS '15). Association for Computing Machinery, New York, NY, USA, 607-618. https://doi.org/10.1145/2694344.2694381
[3] Rachata Ausavarungnirun, Joshua Landgraf, Vance Miller, Saugata Ghose, Jayneel Gandhi, Christopher J. Rossbach, and Onur Mutlu. 2017. Mosaic: A GPU Memory Manager with Application-Transparent Support for Multiple Page Sizes. In Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture (Cambridge, Massachusetts) (MICRO-50 '17). Association for Computing Machinery, New York, NY, USA, 136-150. https://doi.org/10.1145/3123939.3123975
[4] Mohammad Dashti, Alexandra Fedorova, Justin Funston, Fabien Gaud, Renaud Lachaize, Baptiste Lepers, Vivien Quema, and Mark Roth. 2013. Traffic Management: A Holistic Approach to Memory Placement on NUMA Systems. In Proceedings of the Eighteenth International Conference on Architectural Support for Programming Languages and Operating Systems (Houston, Texas, USA) (ASPLOS '13). Association for Computing Machinery, New York, NY, USA, 381-394. https://doi.org/10.1145/2451116.2451157
[5] Jamie Feller. 2018. SiFive Core IP 7 Series Creates New Class of Embedded Intelligent Devices Powered by RISC-V. https://www.sifive.com/press/sifive-core-ip-7-series-creates-new-class-of-embedded.
[6] Intel. 2019. Intel® Optane™ memory - revolutionary memory: What is optane memory? https://www.intel.com/content/www/us/en/products/details/memory-storage/optane-memory.html.
[7] Chris Lattner. 2021. The Golden Age of Compiler Design in an Era of HW/SW Co-design. YouTube. https://www.youtube.com/watch?v=4HgShra-KnY ASPLOS '21 Keynote.
[8] Chris Lattner, Mehdi Amini, Uday Bondhugula, Albert Cohen, Andy Davis, Jacques Pienaar, River Riddle, Tatiana Shpeisman, Nicolas Vasilache, and Oleksandr Zinenko. 2021. MLIR: Scaling Compiler Infrastructure for Domain Specific Computation. In 2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). 2-14. https://doi.org/10.1109/CGO51591.2021.9370308
[9] Huaicheng Li, Daniel S.
Berger, Lisa Hsu, Daniel Ernst, Pantea Zardoshti, Stanko Novakovic, Monish Shah, Samir Rajadnya, Scott Lee, Ishwar Agarwal, Mark D. Hill, Marcus Fontoura, and Ricardo Bianchini. 2023. Pond: CXL-Based Memory Pooling Systems for Cloud Platforms. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2 (Vancouver, BC, Canada) (ASPLOS 2023). Association for Computing Machinery, New York, NY, USA, 574-587. https://doi.org/10.1145/3575693.3578835
[10] Hasan Al Maruf, Hao Wang, Abhishek Dhanotia, Johannes Weiner, Niket Agarwal, Pallab Bhattacharya, Chris Petersen, Mosharaf Chowdhury, Shobhit Kanaujia, and Prakash Chauhan. 2023. TPP: Transparent Page Placement for CXL-Enabled Tiered-Memory. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3 (Vancouver, BC, Canada) (ASPLOS 2023). Association for Computing Machinery, New York, NY, USA, 742-755. https://doi.org/10.1145/3582016.3582063
[11] OpenAI. 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt.
[12] Ashish Panwar, Reto Achermann, Arkaprava Basu, Abhishek Bhattacharjee, K. Gopinath, and Jayneel Gandhi. 2021. Fast Local Page-Tables for Virtualized NUMA Servers with VMitosis. In Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (Virtual, USA) (ASPLOS '21). Association for Computing Machinery, New York, NY, USA, 194-210. https://doi.org/10.1145/3445814.3446709
[13] Amanda Raybuck, Tim Stamler, Wei Zhang, Mattan Erez, and Simon Peter. 2021. HeMem: Scalable Tiered Memory Management for Big Data Applications and Real NVM. In Proceedings of the ACM SIGOPS 28th Symposium on Operating Systems Principles (Virtual Event, Germany) (SOSP '21). Association for Computing Machinery, New York, NY, USA, 392-407. https://doi.org/10.1145/3477132.3483550
[14] Zi Yan, Daniel Lustig, David Nellans, and Abhishek Bhattacharjee. 2019. Nimble Page Management for Tiered Memory Systems. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems (Providence, RI, USA) (ASPLOS '19). Association for Computing Machinery, New York, NY, USA, 331-345. https://doi.org/10.1145/3297858.3304024
ai_researcher
1
Research_on_Multi_cabin_Collaborative_Assembly_Method_Based_on_Multi_Agent_Reinforcement_Learning.pdf
RESEARCH ON MULTI-AGENT COMMUNICATION AND COLLABORATIVE DECISION-MAKING BASED ON DEEP REINFORCEMENT LEARNING
Zeng Da, University of Electronic Science and Technology of China, [email protected]

ABSTRACT
In a multi-agent environment, the continuous change of each agent's strategy leads to non-stationarity of the multi-agent environment, so the reinforcement learning problem in a multi-agent environment is usually modeled as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP), which poses challenges for cooperation between agents. To overcome and alleviate this non-stationarity, the mainstream approach is the framework of Centralized Training with Decentralized Execution (CTDE). This thesis builds on the CTDE framework and studies multi-agent cooperative decision-making based on the Multi-Agent Proximal Policy Optimization (MAPPO) algorithm.

(1) To alleviate the non-stationarity of the multi-agent environment, a multi-agent communication mechanism based on weight scheduling and an attention module is introduced. Different agents can alleviate the non-stationarity caused by local observations through information exchange, assisting the agents' collaborative decision-making. Specifically, a communication module is introduced in the policy network. The communication module is composed of a weight generator, a weight scheduler, a message encoder, a message pool and an attention module. The weight generator and weight scheduler generate weights as the basis for selecting communication partners, the message encoder compresses and encodes communication information, the message pool stores communication messages, and the attention module realizes the interactive processing of the agent's own information and the communication information.

(2) In the CTDE framework, global information is introduced during centralized training to alleviate environmental non-stationarity. In the MAPPO algorithm, the input of the centralized value network contains global information, and the processing of this global information affects the estimation of the value function. This thesis proposes a global information processing module based on the attention mechanism and deep-and-shallow feature processing. The global information and the local observation information of each agent are input into the attention module to obtain condensed feature information, which, after deep-and-shallow feature processing, is used as the input of the value network.

Combining the above improvements, this thesis proposes the Multi-agent Communication and Global-information-Optimized Proximal Policy Optimization (MCGOPPO) algorithm, and conducts experiments in the StarCraft Multi-Agent Challenge (SMAC) and the Multi-Agent Particle Environment (MPE). The experimental results show that the improvements achieve a clear effect: they better alleviate the non-stationarity of the multi-agent environment and improve the collaborative decision-making ability among the agents.
Keywords Deep Reinforcement Learning · Multi-Agent Communication · Multi-Agent Collaboration

1 Introduction
In today's Internet era, a variety of games have emerged on various platforms, gradually entering public life and becoming a seasoning of people's entertainment. Generally speaking, there are two kinds of characters in games: the character controlled by the player and the Non-Player Character (NPC). One problem is that the running logic of an NPC basically follows the logic preset by the game developers at design time, so its interaction with the player character is mechanical and fixed. If NPCs were more intelligent, it would greatly enhance the player's experience and make the game world feel more realistic. Deep reinforcement learning, a branch of artificial intelligence with a strong capacity for autonomous learning, is very well suited to such in-game intelligence problems [8-10]. A typical application scenario is the game intelligent bot (AIBot), which can stand in for disconnected or idle players, act as a sparring partner, assist developers in testing, and so on. For example, in competitive games, if a player drops out, the game's AIBot can replace that player, improving the experience for both sides. It can also serve as a sparring partner, letting players train against the AIBot and compete with human players once their skills improve. In this field, Jiwu AI [11], developed by Tencent AI Lab for King of Glory, has achieved a high-level AIBot whose strongest version can reach the level of professional players. In addition, an intelligent AIBot can better simulate the operations and strategies of real players, which helps game developers run simulation tests of character abilities after development and greatly assists game designers in numerical design.

2 MAPPO
The Multi-Agent Proximal Policy Optimization algorithm, MAPPO for short, adopts a centralized value function to take global information into account and belongs to the CTDE framework. It makes the individual PPO agents cooperate with each other through a global value function. Each agent in MAPPO generates an action based on its local observation and its strategy so as to maximize the discounted return:

J(\theta) = \mathbb{E}_{a_t, s_t}\left[ \sum_t \gamma^t R(s_t, a_t) \right] \quad (1)

MAPPO is composed of an Actor network and a Critic network, with the Critic learning a mapping for the value function V_\phi : S \to \mathbb{R}. The strategy function \pi_\theta learns a mapping from the observations to an action distribution, or to the mean and variance of a Gaussian from which actions are subsequently sampled. The Actor network optimization objective is:

L(\theta) = \frac{1}{Bn} \sum_{i=1}^{B} \sum_{k=1}^{n} \min\!\left( r_{\theta,i}^{(k)} A_i^{(k)},\ \mathrm{clip}\!\left(r_{\theta,i}^{(k)},\, 1-\epsilon,\, 1+\epsilon\right) A_i^{(k)} \right) + \sigma \frac{1}{Bn} \sum_{i=1}^{B} \sum_{k=1}^{n} S\!\left[\pi_\theta\!\left(o_i^{(k)}\right)\right], \quad \text{where } r_{\theta,i}^{(k)} = \frac{\pi_\theta\!\left(a_i^{(k)} \mid o_i^{(k)}\right)}{\pi_{\theta_{old}}\!\left(a_i^{(k)} \mid o_i^{(k)}\right)} \quad (2)

Here B indicates the batch size and n indicates the number of agents; r is the importance-sampling ratio; A_i^{(k)} is the advantage function, which measures how reasonable it is to select a specific action a in a given state s; \epsilon is the clipping coefficient; S is the entropy of the strategy; and \sigma is a hyperparameter that controls the entropy coefficient.
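For clarity, the clipped surrogate objective of equation (2) can be written out in code. The following is a minimal PyTorch-style sketch under our own variable naming and default hyperparameter values; it is illustrative, not the thesis implementation:

```python
import torch

def mappo_actor_loss(logp_new, logp_old, advantages, entropy,
                     eps: float = 0.2, sigma: float = 0.01) -> torch.Tensor:
    """Clipped surrogate objective of equation (2), averaged over the
    batch and the agents. All input tensors are assumed to have shape [B, n]."""
    ratio = torch.exp(logp_new - logp_old)  # r = pi_theta / pi_theta_old
    surrogate = torch.min(ratio * advantages,
                          torch.clamp(ratio, 1 - eps, 1 + eps) * advantages)
    # Gradient ascent on surrogate + entropy bonus = descent on its negation.
    return -(surrogate.mean() + sigma * entropy.mean())
```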
The Critic network optimization objective is:

L(\phi) = \frac{1}{Bn} \sum_{i=1}^{B} \sum_{k=1}^{n} \max\!\left\{ \left( V_\phi\!\left(s_i^{(k)}\right) - \hat{R}_i \right)^2,\ \left[ \mathrm{clip}\!\left( V_\phi\!\left(s_i^{(k)}\right),\, V_{\phi_{old}}\!\left(s_i^{(k)}\right) - \varepsilon,\, V_{\phi_{old}}\!\left(s_i^{(k)}\right) + \varepsilon \right) - \hat{R}_i \right]^2 \right\} \quad (3)

Here B indicates the batch size and n indicates the number of agents; V is the value function and \hat{R}_i is the discounted reward.

3 MCGOPPO
The algorithm in this chapter improves on the IPPO and MAPPO algorithms. To strengthen collaboration and cooperation among multiple agents, a communication module is introduced between otherwise independent agents, assisting their collaborative decision-making through information sharing and circulation. The communication module is divided into two sub-modules. One consists of the weight generator and the weight scheduler; its function is to improve the efficiency of communication. The weight generator produces a weight for the corresponding agent according to each agent's input information, then stores it in the weight scheduler and normalizes it for the agents' selection of communication partners. The other sub-module is the attention module, whose function is to filter the communication information and extract concise, important communication content. In a multi-agent environment, due to partial observability, a single agent can only obtain partial observation information when performing actions. Introducing inter-agent communication therefore allows agents to share information and, through the attention mechanism, extract the key information to assist target selection, optimize action selection, and raise the level of collaborative decision-making among the agents.

The overall framework of the proximal policy optimization algorithm based on multi-agent communication and global information optimization still adopts an actor-critic framework similar to MAPPO, composed of distributed Actor networks, a centralized Critic network and a sample pool. A distributed Actor network represents each agent and is responsible for interaction with the environment. The input of the Actor network is the agent's local observation information, and the output is the agent's selected action. The Actor network is composed of the weight generator, weight scheduler, message encoder, message pool, attention module and action selector used for communication. More specifically, compared with the MAPPO algorithm, the improvement to the Actor network lies in the introduction of communication between agents. Through this communication, the information exchange between agents is improved: on the one hand, the non-stationarity of the multi-agent environment can be reduced; on the other hand, richer information can be used to assist the Actor network. A single Actor network handles the execution of its corresponding agent and makes that agent's decisions. The communication part is divided into two sub-modules: the communication scheduling module based on weight scheduling, and the communication message processing module based on the attention mechanism.

The communication scheduling module is composed of the message encoder, message pool, weight generator and weight scheduler; it is responsible for the compression and encoding of communication messages and for the generation and allocation of the corresponding scheduling weights. The communication message processing module is composed of the attention module; it is mainly responsible for processing the agent's local observation information together with the communication messages, and it outputs the resulting feature information to the subsequent action selector.

The goal of the centralized Critic network is to optimize action selection and its weighting, assisting the update of the Actor network's action selection. Its input consists of samples drawn from the sample pool (the joint set of the agents' local observations, action selections and rewards) together with global information, and its output is the value function. "Centralized" here does not mean that there is only one Critic network, but that the input of the Critic network contains global information, which is a form of centralized training under the CTDE framework. Unlike the MAPPO algorithm, when dealing with global information this paper introduces an attention unit; the processed global information then undergoes deep-and-shallow feature processing before being input into the Critic network to compute the value function and assist the update of the Actor network.

The overall training process is as follows. The Actor network interacts with the environment to obtain local observation information. The local observation o is compressed and encoded by the message encoder to obtain the communication message m, which is written into the message pool. At the same time, the local observation o is fed to the weight generator to produce the weight w, which is then input to the weight scheduler. When two Actor networks communicate, one of them selects the communication partner according to the weight scheduler and thereby reads the communication message m from the other agent's message pool. The local observation o and the communication message m are then input into the attention module; the information filtered by the attention module is the feature information c obtained by the current agent after integrating the communication information. The feature information c is input into the action selector, which outputs the agent's action a. The Actor network then interacts with the environment through the output action and receives the corresponding reward. After multiple cycles, the collected observations o, actions a and rewards r are composed into samples and stored in the sample pool; the global information s is then added and, after passing through the attention unit and the Critic feature processing layer, is input into the Critic network to compute the output value function.

3.1 Multi-agent communication module based on weight scheduling and the attention mechanism
This section introduces the multi-agent communication module based on weight scheduling and the attention mechanism, which is divided into two parts: the first is the communication scheduling module based on weight scheduling, and the second is the message processing module based on the attention mechanism.
For example, if every agent needs to communicate with every other agent, the pairwise communication occupies a large amount of communication bandwidth, which is highly redundant in a bandwidth-limited environment. Moreover, excessive redundant information easily introduces noise, which indirectly affects the agents' subsequent decision-making. A communication scheduling module based on weight scheduling is therefore introduced to solve this problem. At the same time, the follow-up processing of the communication information should not be a simple superposition with the agent's own local observation information. In the absence of communication, the decision-making basis of each agent is its own local observation; after communication is added, the introduced communication information is essentially a compressed and encoded feature of the local observations of other agents. The form and meaning of these two kinds of information overlap to a certain extent. In order to avoid redundancy between the introduced communication information and the agent's own observation information, this paper proposes a communication message processing module based on the attention mechanism.

Message encoder: consists of two MLP layers. The input is the local observation o_i of the agent, and the output is the communication message m_i after encoding and compression, which is written into the message pool for storage. The process can be abstracted as a mapping.

Weight generator: consists of three MLP layers. The input is the local observation o_i of the agent, and the output is the weight w_i. It is essentially a scalar value that determines the probability of the agent's message being selected in subsequent communication. The process can be abstracted as a mapping.

Weight scheduler: essentially a SoftMax layer. The input is the set of weights of all agents, W = [w_1, ..., w_n], and the output is the final scheduling weight set W′ = [w′_1, ..., w′_n]. This module can be viewed as a mapping from the weights w of all agents (generated by f^i_wg) to w′. The concrete procedure is to sort the generated weights directly by their normalized size and to select the messages corresponding to the top-ranked weights as the communication messages to be delivered. The process can be abstracted as a mapping.

Communication: according to the weight scheduler, the agent selects the communication message m_j stored in the message pool (the subscript j indicates that the message is obtained, after scheduling, from the message pool of agent j); this is the final communication message c_i obtained by the current agent i through communication. The message is then forwarded to the subsequent attention module for further processing:

Attention(Q_i, K_j, V_i) = SoftMax(Q_i K_j^T / √d_k) V_i    (4)

where the subscripts of Q_i, K_j, and V_i indicate the source of each matrix: subscript i means the matrix is derived from o_i, the local observation of agent i; subscript j means the matrix is derived from m_j, the communication message in the message pool of agent j.
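To make the pipeline concrete, the following is a minimal PyTorch-style sketch of the communication components just described. The layer sizes, the token view of observations, and the partner-selection rule are illustrative assumptions of ours, not the paper's released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommModule(nn.Module):
    """Message encoder, weight generator, weight scheduler, and the
    attention filter of Eq. (4), applied to all n agents at once."""

    def __init__(self, feat_dim: int, d_k: int):
        super().__init__()
        # Message encoder (two MLP layers): o_i -> m_i, written to the pool.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))
        # Weight generator (three MLP layers): o_i -> scalar weight w_i.
        self.weight_gen = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1))
        self.q = nn.Linear(feat_dim, d_k)  # Q_i is built from o_i
        self.k = nn.Linear(feat_dim, d_k)  # K_j is built from m_j
        self.v = nn.Linear(feat_dim, d_k)  # V_i is built from o_i
        self.d_k = d_k

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        """obs: (n_agents, n_tokens, feat_dim); each local observation is
        viewed as a sequence of entity-feature tokens. Returns z_i per agent."""
        msgs = self.encoder(obs)                        # message pool {m_j}
        # Weight scheduler: SoftMax-normalize the generated weights, then
        # let each agent read the message of the top-ranked other agent.
        w = F.softmax(self.weight_gen(obs.mean(dim=1)).squeeze(-1), dim=0)
        rank = w.expand(len(w), len(w)).clone()
        rank.fill_diagonal_(float("-inf"))              # no self-communication
        partner = rank.argmax(dim=1)                    # chosen sender j per i
        q, k, v = self.q(obs), self.k(msgs[partner]), self.v(obs)
        # Eq. (4): z_i = SoftMax(Q_i K_j^T / sqrt(d_k)) V_i.
        attn = F.softmax(q @ k.transpose(1, 2) / self.d_k ** 0.5, dim=-1)
        return attn @ v                                 # filtered features z_i
```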
In the actual multi-agent environment, the operational meaning of this attention mechanism is that the parts of the communication message m_j of agent j (namely c_i) most closely related to agent i's own local observation o_i are extracted as the useful information obtained by agent i in this round of communication; the output is z_i, which is then passed into the action selector for further processing.

3.2 Global information optimization processing module based on attention mechanism and deep and shallow feature processing

The reason for deep and shallow feature processing is that the whole feature information contains the agent's own information, local information of friends and enemies, movement information, and agent-ID information, and the importance of these pieces of information is not uniform. This paper therefore proposes deep and shallow feature processing, in order to treat different parts of the information differently. Enemy information is closely related to target selection for the current agent, so it is relatively more important and receives deep processing, while the agent's own information and friend information receive shallow processing.

The input of the deep and shallow feature processing is the feature information previously processed by the attention unit, which is first divided into two parts. One part is the enemy information, which is input into three FC layers for further processing. The other part, consisting of the agent's own information, local friend information, movement-related information, and agent-ID-related information, is input into a single FC layer for shallow processing. Finally, the features of the two parts are concatenated to obtain the final feature information, which is input into the Critic network (a code sketch of this two-branch processing is given after the experimental setup below).

4 Experiments

The SMAC environment is an open-source multi-agent reinforcement learning testbed based on the StarCraft II game. It is characterized by huge observation and action spaces, local observability, and long-horizon decision-making. Especially with so many game units to control, it is challenging to develop models with better cooperation and easier convergence. It also offers an interesting set of micro-challenge scenario maps, containing more than two dozen challenge scenarios inspired by the StarCraft Master challenge missions released by Blizzard, designed to assess how independent agents learn high-level collaboration and micromanagement techniques. Specifically, there are two camps in each SMAC scenario, one controlled by the researchers themselves and the other controlled by a carefully designed non-learning heuristic, and victory requires defeating and destroying all units of the opposing camp. Based on the level of the built-in AI and the disparity between the two camps, maps are divided into three difficulty levels: easy, hard, and super hard.

In order to study the effectiveness of the algorithm proposed in this paper, six maps (2s3z, MMM, bane_vs_bane, 2s_vs_1sc, 5m_vs_6m, and corridor) are selected to verify the algorithm. The agents, camp type, and difficulty corresponding to each map are shown in Table 4-2. A total of 6 maps were used for the experiments; both symmetric and asymmetric camp types were covered, and for each type, maps at the easy, hard, and super hard difficulty levels were selected.
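Before turning to the results, here is the code sketch of the two-branch deep and shallow feature processing referenced in Section 3.2 above. It is a minimal PyTorch-style illustration; the layer widths and the exact feature split are assumptions of ours:

```python
import torch
import torch.nn as nn

class DeepShallowFeatures(nn.Module):
    """Two-branch processing of the attention-filtered global information:
    enemy information (closely tied to target selection) passes through
    three FC layers, the remaining information through a single FC layer,
    and the two results are concatenated for the centralized Critic."""

    def __init__(self, enemy_dim: int, other_dim: int, hidden: int = 64):
        super().__init__()
        self.deep = nn.Sequential(                   # deep branch: 3 FC layers
            nn.Linear(enemy_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.shallow = nn.Linear(other_dim, hidden)  # shallow branch: 1 FC layer

    def forward(self, enemy_feat: torch.Tensor, other_feat: torch.Tensor):
        # Splice the deeply and shallowly processed features together.
        return torch.cat([self.deep(enemy_feat), self.shallow(other_feat)], dim=-1)
```

The concatenated output would then pass through the Critic feature processing layer into the centralized Critic network.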
The evaluation results of the experiments consist of the win-rate curve and the reward curve; the strengths and weaknesses of each algorithm on different maps are judged from the convergence of the win rate and the growth of the reward. Four algorithms are adopted in the experiments: the IPPO and MAPPO algorithms are the baselines, and QMIX is the algorithm most commonly used for comparison in mainstream multi-agent reinforcement learning research. MCGOPPO is the final improved algorithm, combining the multi-agent communication improvement based on weight scheduling and the attention module with the global information optimization based on the attention mechanism and deep and shallow feature processing. In the experimental plots, IPPO is represented by a green dotted line, MAPPO by a blue dotted line, QMIX by a purple dotted line, and MCGOPPO by a red solid line.

First, we conducted experiments on the symmetric-camp maps in the SMAC environment, namely 2s3z, MMM, and bane_vs_bane, in which the map difficulty gradually increases.

Figure 1: Experimental win ratio of scene 2s3z in SMAC

The MAPPO and QMIX algorithms increase their win rates fastest early on, but they are overtaken by MCGOPPO at around one million steps, after which MCGOPPO keeps the lead. MCGOPPO can therefore be considered superior to the other algorithms on this map, with a clear improvement.

Figure 2: Experimental win ratio of scene MMM in SMAC

In the first one million steps, QMIX grows fastest, followed by MAPPO, but MCGOPPO improves after about one million steps and then keeps the lead. The figure also shows that MCGOPPO converges at about two million steps, while MAPPO and IPPO converge at about five million steps. MCGOPPO can therefore be considered superior to the other comparison algorithms on this map.

Figure 3: Experimental win ratio of scene bane_vs_bane in SMAC

The improved MCGOPPO algorithm shows the fastest win-rate increase in the first 700,000 steps, reaching about 0.6; it then declines somewhat, falling back to about 0.54 at one million steps, before rising quickly and steadily and essentially converging at two million steps. Meanwhile, the win rate of MAPPO increases the slowest in the first 700,000 steps, reaching only about 0.1, then rises rapidly and begins to converge at about 4 million steps. The convergence of QMIX is similar to that of MAPPO, but it converges rapidly to 1 at about two million steps. The growth of IPPO is relatively smooth, similar in trend to the MAPPO curve, also converging at about 4 million steps. MCGOPPO can therefore be considered superior to the other comparison algorithms on this map.

In addition, experiments on the asymmetric-camp maps in the SMAC environment were carried out, namely the 2s_vs_1sc, corridor, and 5m_vs_6m maps, again with gradually increasing difficulty. Compared with the symmetric-camp experiments, the asymmetric-camp experiments are usually more challenging for the agents. The experimental results are shown below.
Figure 4: Experimental win ratio of scene 2s_vs_1sc in SMAC

The growth curves of the four algorithms are very similar; MCGOPPO converges in about one million steps, while MAPPO, IPPO, and QMIX converge in about two million steps, so MCGOPPO can be considered better than the other comparison algorithms on this map.

Figure 5: Experimental win ratio of scene corridor in SMAC

The curves of MCGOPPO and MAPPO are very close, essentially converging at around 8 million steps, while IPPO rises rapidly to around 0.95 in the early stage (about 4 million steps) and then begins to fall back. The convergence curve of QMIX is similar to that of IPPO and finally converges to 0.9. MCGOPPO can therefore be considered similar to MAPPO on this map, but superior to QMIX and IPPO.

Figure 6: Experimental win ratio of scene 5m_vs_6m in SMAC

MCGOPPO improves to around 0.4 within about three million steps, then falls back to around 0.25 at four million steps, and then rises slowly to a final maximum win rate of 0.44. The growth curve of QMIX is relatively slow in the first two million steps and rises rapidly between two and four million steps; its win rate reaches 0.32 at seven million steps, then falls back, finally converging to around 0.27. The growth curve of MAPPO keeps rising amid oscillations, reaching about 0.34 at 10 million steps, while the growth of IPPO is very slow, reaching only about 0.08 at 10 million steps. MCGOPPO can therefore be considered better than the other comparison algorithms on this map.

5 Conclusion

Unlike the stability of a single-agent environment, a multi-agent environment is non-stationary because each agent is constantly changing, which poses challenges for cooperative decision-making among multiple agents. This paper studies the multi-agent setting using deep reinforcement learning. More specifically, building on the MAPPO algorithm under the CTDE framework, and targeting the lack of information communication between Actor networks in MAPPO and the redundancy of the global information input to the Critic network, this paper carried out a careful study and proposed the following improvements. First, a multi-agent communication mechanism based on weight scheduling and an attention mechanism is introduced to address the lack of information exchange between Actor networks; through information exchange and sharing among agents, the non-stationarity of the multi-agent environment can be alleviated. The specific approach is to add the communication mechanism to the Actor network, where the communication mechanism consists of two modules: a communication selection module composed of the message encoder, weight generator, and weight scheduler, and a message processing module composed of the attention module.
The whole process is as follows. On the one hand, the local observation of each agent is input into the weight generator to produce the weight coefficient corresponding to that agent; the weight is then input into the weight scheduler for normalization to obtain the final scheduling weight, which later serves as the selection basis for communication between agents. On the other hand, the local observation of each agent is input into the message encoder to generate the compressed and encoded communication message, which is stored in the message pool as the message for subsequent communication. After these parallel operations, from the perspective of each agent, the communication message in the corresponding message pool is selected according to the weight in the weight scheduler, taken out, and input into the attention module together with the agent's own local observation. Through the processing of the attention mechanism, information communication among multiple agents is realized.

Second, global information optimization based on the attention mechanism and deep and shallow feature processing is introduced. The CTDE framework is characterized by introducing global information to mitigate the impact of the non-stationary multi-agent environment during centralized training; however, the MAPPO algorithm is to a certain extent redundant in processing this global information. MAPPO further mentions a feature-clipping method for handling this redundancy, but its operation is manually preset and requires prior knowledge to be introduced by hand for feature processing. The improvement in this paper is to input the joint observations of all agents and the global information into the attention mechanism, obtain simplified feature information through redundancy processing, and then carry out deep and shallow feature processing: the enemy-agent information, which is closely related to target selection, is processed with the deep branch, while the agent's own information and friend information are processed with the shallow branch. The features are then concatenated and input into the subsequent centralized Critic network.
ai_researcher
1
Density-based_Influence_Metrics_for_Research_Papers.pdf
arXiv:quant-ph/0403173v2 21 Jun 2004

Reduction of multipartite qubit density matrices to bipartite qubit density matrices and criteria of partial separability of multipartite qubit density matrices

Zai-Zhe Zhong

Department of Physics, Liaoning Normal University, Dalian 116029, Liaoning, China.

October 15, 2018

Abstract

The partial separability of multipartite qubit density matrices is strictly defined. We give a reduction way from N-partite qubit density matrices to bipartite qubit density matrices, and prove a necessary condition for an N-partite qubit density matrix to be partially separable: its reduced density matrix must satisfy the PPT condition.

PACC numbers: 03.67.Mn; 03.65.Ud; 03.67.Hk

Recently, an important task in modern quantum mechanics and quantum information has been to find criteria for the separability of density matrices. The first important result is the well-known positive partial transposition (PPT, Peres-Horodecki) criterion [1,2] for 2×2 and 2×3 systems. There are many studies of separability criteria for multipartite systems; see [3-8]. Generally, the common so-called 'separability' is in fact full separability. For multipartite systems the problems are more complex; there is also a concept of separability weaker than full separability, namely 'partial separability', e.g., the A-BC-separability and B-AC-separability of a tripartite qubit pure state ρ_ABC [8], etc. For Bell-type inequalities and some criteria of partial separability of multipartite systems, see [9-12]. However, we still need a stricter definition of the concept of partial separability, and simpler criteria.

In this paper, we first discuss how to strictly define the concept of partial separability corresponding to a partition. Next, we give a new way in which an arbitrary N-partite (N ≥ 3) qubit density matrix can always be reduced in one step to a bipartite qubit density matrix. We then prove an effective criterion: a necessary condition for an N-partite qubit density matrix to be partially separable with respect to a partition is that the corresponding reduced bipartite qubit density matrix is separable, i.e., it satisfies the PPT condition. Some examples are given.

Suppose that ρ_{i_1 i_2 ··· i_N} is a density matrix on the N-partite qubit Hilbert space H = ⊗_{s=1}^{N} H_s, whose standard basis is {⊗_{s=1}^{N} |i_s⟩} (i_s = 0, 1). Let Z_N be the integer set {1, 2, ···, N}. If two subsets (r)_P ≡ {r_1, ···, r_P} and (s)_{N−P} ≡ {s_1, ···, s_{N−P}} of Z_N obey

1 ≤ r_1 < ··· < r_P ≤ N,  1 ≤ s_1 < ··· < s_{N−P} ≤ N,
(r)_P ∪ (s)_{N−P} = Z_N,  (r)_P ∩ (s)_{N−P} = ∅  (1 ≤ P < N),    (1)

where P is an integer with 1 ≤ P ≤ N−1, then the pair {(r)_P, (s)_{N−P}} forms a partition of Z_N; in the following we simply call it a 'partition' and, for emphasis, denote it by the symbol (r)_P ∥ (s)_{N−P}. A partition (r)_P ∥ (s)_{N−P} corresponds to a permutation

S_{(r)_P ∥ (s)_{N−P}} ≡ (1, ···, P, P+1, ···, N → r_1, ···, r_P, s_1, ···, s_{N−P}),

by which a new matrix ρ_{(r)_P ∥ (s)_{N−P}} is defined from ρ_{i_1 i_2 ··· i_N}, whose entries are

[ρ_{(r)_P ∥ (s)_{N−P}}]_{j_1···j_N, k_1···k_N} = [ρ]_{j_{r_1}···j_{r_P} j_{s_1}···j_{s_{N−P}}, k_{r_1}···k_{r_P} k_{s_1}···k_{s_{N−P}}}.    (2)

For instance, ρ_{A∥BCD} = ρ_{AB∥CD} = ρ_{ABC∥D} = ρ_{ABCD}, while [ρ_{C∥ABD}]_{ijkl,rstu} = [ρ_{ABCD}]_{kijl,trsu} and [ρ_{AC∥BD}]_{ijkl,rstu} = [ρ_{ABCD}]_{ikjl,rtsu}, etc.
Generally, ρ_{(r)_P ∥ (s)_{N−P}} ≠ ρ_{i_1 i_2 ··· i_N}, unless (r)_P ∥ (s)_{N−P} maintains the natural order of Z_N (i.e., (r)_P = (1, ···, P), (s)_{N−P} = (P+1, ···, N)), in which case ρ_{(r)_P ∥ (s)_{N−P}} = ρ_{i_1 i_2 ··· i_N}.

Lemma. For any partition (r)_P ∥ (s)_{N−P}, ρ_{(r)_P ∥ (s)_{N−P}} is still an N-partite qubit density matrix.

Proof. We only consider the tripartite qubit case; the general cases are completely similar (see also [11]). Considering the permutation S_{B∥AC}, we have

ρ_{B∥AC} = S ρ_{ABC} S†,    (3)

where S is the 8×8 permutation matrix that reorders the tensor factors ABC → BAC, i.e., the swap of the first two qubits, S = SWAP_{AB} ⊗ I_C. Since S is a unitary matrix, ρ_{B∥AC} is still a tripartite qubit density matrix. □

Now we consider how to define partial separability more strictly. Obviously, if a partition (r)_P ∥ (s)_{N−P} maintains the natural order of Z_N (i.e., (r)_P = (1, 2, ···, P), (s)_{N−P} = (P+1, P+2, ···, N)), then ρ_{(r)_P ∥ (s)_{N−P}} = ρ_{i_1 i_2 ··· i_N} under the standard basis {⊗_{s=1}^{N} |i_s⟩}, and the (r)_P−(s)_{N−P}-separability can naturally be defined as follows: if ρ_{i_1 i_2 ··· i_N} can be decomposed as ρ_{(r)_P ∥ (s)_{N−P}} = ρ_{i_1 i_2 ··· i_N} = Σ_α p_α ρ_{α,(r)_P} ⊗ ρ_{α,(s)_{N−P}} with probabilities p_α, where ρ_{α,(r)_P} and ρ_{α,(s)_{N−P}} are, respectively, a P-partite and an (N−P)-partite qubit density matrix acting on ⊗_{m=1}^{P} H_m and ⊗_{n=1}^{N−P} H_n for all α, then we call ρ_{i_1 i_2 ··· i_N} (r)_P−(s)_{N−P}-separable. However, if the natural order of Z_N is broken in (r)_P ∥ (s)_{N−P} (i.e., s_1 < r_P), then generally ρ_{(r)_P ∥ (s)_{N−P}} ≠ ρ_{i_1 i_2 ··· i_N}, and the case is different. For instance, consider a normalized pure state ρ_{ABCD} = |Ψ_{ABCD}⟩⟨Ψ_{ABCD}|, |Ψ_{ABCD}⟩ ∈ H_A ⊗ H_B ⊗ H_C ⊗ H_D of four spin-1/2 particles A, B, C, and D. Assume that |Ψ_{ABCD}⟩ has the special form |Ψ_{ABCD}⟩ = Σ_{i,j,k,l=0,1} c_{ik} c_{jl} |i_A⟩ ⊗ |j_B⟩ ⊗ |k_C⟩ ⊗ |l_D⟩, where c_{ik}, c_{jl} ∈ C¹.

If we keep using the original standard basis, we cannot directly see the partial separability, because this choice of basis is unsuitable. If we choose the other natural basis {|i_A⟩ ⊗ |k_C⟩ ⊗ |j_B⟩ ⊗ |l_D⟩} (this, in fact, means that we are using ρ_{AC∥BD}), we can consider the state |Ψ′_{ACBD}⟩ = |Ψ_{AC}⟩ ⊗ |Ψ_{BD}⟩, where |Ψ_{AC}⟩ = Σ_{i,k=0,1} c_{ik} |i_A⟩ ⊗ |k_C⟩ and |Ψ_{BD}⟩ = Σ_{j,l=0,1} c_{jl} |j_B⟩ ⊗ |l_D⟩. Now ρ_{AC∥BD} = ρ_{AC} ⊗ ρ_{BD}, where ρ_{AC} = |Ψ_{AC}⟩⟨Ψ_{AC}| and ρ_{BD} = |Ψ_{BD}⟩⟨Ψ_{BD}|. Since |Ψ′_{ACBD}⟩ and |Ψ_{ABCD}⟩ are in fact the same state physically, it is completely reasonable to call ρ_{ABCD} AC-BD-separable. Similarly for the other cases. Generalizing to mixed states, we can define the concept of partial separability as follows.

Definition. For the partition (r)_P ∥ (s)_{N−P}, an N-partite qubit density matrix ρ_{i_1 i_2 ··· i_N} acting on H = ⊗_{s=1}^{N} H_s is called (r)_P−(s)_{N−P}-separable if the corresponding density matrix ρ_{(r)_P ∥ (s)_{N−P}} can be decomposed as

ρ_{(r)_P ∥ (s)_{N−P}} = Σ_α p_α ρ_{α,(r)_P} ⊗ ρ_{α,(s)_{N−P}},    (4)

where ρ_{α,(r)_P} and ρ_{α,(s)_{N−P}} are, respectively, a P-partite and an (N−P)-partite qubit density matrix acting on ⊗_{m=1}^{P} H_{r_m} and ⊗_{n=1}^{N−P} H_{s_n} for all α, with 0 < p_α ≤ 1 and Σ_α p_α = 1. If ρ_{i_1 i_2 ··· i_N} is not (r)_P−(s)_{N−P}-separable, we call it (r)_P−(s)_{N−P}-inseparable.
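As a concrete illustration of the reordering in Eq. (2) and the Lemma, the matrix ρ_{(r)_P ∥ (s)_{N−P}} can be computed by viewing the 2^N × 2^N density matrix as a rank-2N tensor and permuting its qubit axes. The following is a minimal NumPy sketch under our own conventions, not code from the paper:

```python
import numpy as np

def reorder_qubits(rho: np.ndarray, perm: list) -> np.ndarray:
    """Return the density matrix with its N qubit tensor factors reordered
    according to perm; e.g. perm = [1, 0, 2] maps ABC -> BAC, realizing
    rho_{B||AC} = S rho_{ABC} S^dagger of Eq. (3)."""
    n = len(perm)
    assert rho.shape == (2 ** n, 2 ** n)
    # One row index and one column index per qubit.
    t = rho.reshape([2] * (2 * n))
    # Permute row indices and column indices in the same way, as in Eq. (2).
    axes = list(perm) + [n + p for p in perm]
    return t.transpose(axes).reshape(2 ** n, 2 ** n)

# Example: a random tripartite qubit density matrix, reordered as B||AC.
m = np.random.randn(8, 8) + 1j * np.random.randn(8, 8)
rho_abc = m @ m.conj().T
rho_abc /= np.trace(rho_abc)
rho_bac = reorder_qubits(rho_abc, [1, 0, 2])
# A permutation similarity is unitary, so the spectrum (hence positivity
# and the unit trace) is preserved, as the Lemma states.
assert np.isclose(np.trace(rho_bac), 1.0)
```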
For instance, ity (full-separability), generally, ∽ ρ (similar to the we can make such a multipartite qubit density matrix theorem 1 in [13,14]), and by using of the technique in this paper, we ∽ ρ always is partially separable for all possible partitions can prove that (r)P k (s)N −P (1 6 P 6 N − 1) , but ∽ ρ is entangled (not full-separability). In order to find the criteria of partial separability, first we discuss how to reduce a multipartite qubit density matrix in one step through to a bipartite qubit density matrix. For a given partition (r)P k (s)N −P , let two sets (r)P and (s)N −P , respectively be separated again as follows, 1, · · · , r′′ r′′ P ′} , (r′′)P ′′ = 1, · · · , r′ (r′)P ′ = {r′ P ′′ (s′)Q′ = 1, · · · , s′ s′ 1, · · · , s′′ s′′ , (s′′)Q′′ = Q′ Q′′ (cid:9) (cid:8) 1 < r′′ 2 < · · · < r′′ P ′, r′′ 2 < · · · < r′ r′ 1 < r′ P ′′ (cid:8) (cid:8) (cid:9) (cid:9) s′ 1 < s′ 2 < · · · < s′ Q′, s′′ 1 < s′′ 2 < · · · < s′′ Q′′ (r)P = (r′)P ′ ∪ (r′′)P ′′ , (r′)P ′ ∩ (r′′)P ′′ = ∅(0 ≤ P ′, P ′′ ≤ P and P ′ + P ′′ = P ) , one of them can be the null set , one of them can be the null set (5) (s)N −P = (s′)Q′ ∪ (s′′)Q′′ , (s′)Q′ ∩ (s′′)Q′′ = ∅ (0 ≤ Q′, Q′′ ≤ N − P and Q′ + Q′ = N − P ) 4 now we rewrite the partition added these partitions as [(r′)P ′ , (r′′)P ′′] k Now we define the matrix ρ[(r′)P ′ ,(r′′)P ′′]k[(s′)m−P ′ ,(s′′)m−P ′′] by (s′)Q′ , (s′′)Q′′ h . i ρ[(r′)P ′ ,(r′′)P ′′]k[(s′)m−P ′ ,(s′′)m−P ′′] = the submatrix in ρi1···iN consisting of all entries with form as [ρ]x1x2···xN , y1y2···yN (6) which must be a 4×4 matrix, where the values of xk and yk (k = 1, · · · , N) , respectively, are determined by xk = i for k ∈ (r′)P ′ , xk = 1 − i for k ∈ (r′′)P ′′ xk = j for k ∈ (s′)Q′ , xk = 1 − j for k ∈ (s′′)Q′′ yk = u for k ∈ (r′)P ′ , yk = 1 − u for k ∈ (r′′)P ′′ yk = v for k ∈ (s′)Q′ , yk = 1 − v for k ∈ (s′′)Q′′ (7) where i, j, u, v = 0, 1. E.g. ρ[(AC),∅]k[(B),(D)] = the submatrix in ρABCD consisting of all entries with form as [ρ]iji(1−j),uvu(1−v) =  [ρ]0001,0001 [ρ]0100,0001 [ρ]1011,0001 [ρ]1110,0001 [ρ]0001,0100 [ρ]0100,0100 [ρ]1011,0100 [ρ]1110,0100 [ρ]0001,1011 [ρ]0100,1011 [ρ]1011,1011 [ρ]1110,1011 [ρ]0001,1110 [ρ]0100,1110 [ρ]1011,1110 [ρ]1110,1110     (8)    etc.. Now we define the 4×4 matrix ρ((r)P −(s)N−P ) by ρ((r)P −(s)N−P ) = ρ[(r′)P ′ ,(r′′)P ′′]k[(s′)m−P ′ ,(s′′)m−P ′′] for all possible [(r′)P ′ ,(r′′)P ′′]k[(s′)Q′ ,(s′′)Q′′], X and ρ[(r′)P ′ ,(r′′)P ′′]k[(s′)m−P ′ ,(s′′)m−P ′′] are not repeated (9) where we notice that there are indeed repeated ρ[(r′)P ′ ,(r′′)P ′′]k[(s′)m−P ′ ,(s′′)m−P ′′], in fact, ρ[(r′)P ′ ,(r′′)P ′′]k[(s′)m−P ′ ,(s′′)m−P ′′] = ρ[(r′)P ′ ,(r′′)P ′′]k[(s′′)m−P ′′ ,(s′)m−P ′] =ρ[(r′′)P ′′ ,(r′)P ′]k[(s′)m−P ′ ,(s′′)m−P ′′] = ρ[(r′′)P ′′ ,(r′)P ′]k[(s′′)m−P ′′ ,(s′)m−P ′], etc.. For instance, we have ρ(A−BC) = ρ[(A),∅]k[(BC),∅] + ρ[(A),∅]k[(B),(C)] 5 ρ(B−ACD) = ρ[(B),∅]k[(ACD),∅] + ρ[(B),∅]k[(AC),(D)] + ρ[(B),∅]k[(AD),(C)] + ρ[(B),∅]k[(A),(CD)] ρ(AC−BD) = ρ[(AC),∅]k[(BD),∅] + ρ[(AC),∅]k[(B),(D)] + ρ[(A),(C)]k[(BD),∅] + ρ[(A),(C)]k[(B),(D)] ρ(AC−BDE) = ρ[(AC),∅]k[(BDE),∅] + ρ[(AC),∅]k[(BD),(E)] + ρ[(AC),∅]k[(BE),(D)] +ρ[(AC),∅]k[(B),(DE)] + ρ[(A),(C)]k[(BDE),∅] + ρ[(A),(C)]k[(BD),(E)] +ρ[(A),(C)]k[(BE),(D)] + ρ[(A),(C)]k[(B),(DE)] (10) etc.. 
As an example, the above reduction procedures from ρABCD to ρ(AC−BD) can be described as ρABCD −→ ρ(AC−BD) = ρ[(AC),∅]k[(BD),∅]+ρ[(AC),∅]k[(B),(D)]+ ρ[(A),(C)]k[(BD),∅] + ρ[(A),(C)]k[(B),(D)] ≡ σ△+σ×+σ⋄+σ∧, where the submatrixes σ△, σ×, σ⋄ and σ∧, respectively, consist of the entries ‘△’, ‘×’,‘⋄’ and ‘∧’ in ρABCD as in the following figure (σ× is just the matrix in Eq.(8)) 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 0000 △ 0001 0010 0011 0100 0101 △ 0110 0111 1000 1001 1010 △ 1011 1100 1101 1110 1111 △ × × × × ⋄ ⋄ ⋄ ⋄ × × × ∧ ∧ ∧ ∧ △ △ △ △ ∧ ∧ ∧ ∧ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ∧ ∧ ∧ ∧ △ △ △ △ × × × × ∧ ∧ ∧ ∧ ⋄ ⋄ ⋄ ⋄ × × × × (11) Similarly, we can consider higher dimensional cases. As for the ordinary bipartite qubit density matrix ρAB, we can take ρ(A−B) ≡ ρAB. Sum up, generally we can define the 4×4 matrix ρ((r)P −(s)N−P ) for a given (r)P k (s)N −P . In addition, it is easily verified that for any partition 1111 △ △ △ △ 6 (r)P k (s)N −P , ρ((s)N−P −(r)P ) is the transposition of ρ((u)P −(s)N−P ), therefore from viewpoint of partial separability, we don’t have to distinguish between the partitions (r)P k (s)N −P and (s)N −P k (r)P . Theorem 1. For any partition (r)P k (s)N −P , ρ((r)P −(s)N−P ) is a bipartite qubit density matrix, therefore ρ((r)P −(s)N−P ), in fact, is a reduction of the N- partite qubit density matrix ρi1i2···iN . Proof. The fact must proved only is that ρ((r)P −(s)N−P ) is surely a bi- partite qubit density matrix. Here we only discuss in detail the cases of quadripartite qubit states, since the generalization is completely straightfor- ward. In the first place, we prove that the theorem holds for a pure-state ρABCD. Suppose that ρABCD =| ΨABCD >< ΨABCD | is a normalized pure- state, where | ΨABCD >= i,j,k,l=0,1 cijkl | iA > ⊗ | jB > ⊗ | kC > ⊗ | lD >, i,j,k,l=0,1 |cijkl|2 = 1. Let P P | Φ△ >= | Φ⋄ >= i,j=0,1 X cijij | ix > ⊗ | jy >, | Φ× >= ciji(1−j) | ix > ⊗ | jy > (12) cij(1−i)j | ix > ⊗ | jy >, | Φ∧ >= cij(1−i)j(1−j) | ix > ⊗ | jy > i,j=0,1 X i,j=0,1 X i,j=0,1 X where x and y are form particles. Make normalization, we obtain ρ△ =| ϕ△ >< ϕ△ |, | ϕ△ >= η−1 × | Φ× >, ρ⋄ =| ϕ⋄ >< ϕ⋄ |, | ϕ⋄ >= η−1 ∧ | Φ∧ ⋄ >, where the normalization factors are △ | Φ△ >, ρ× =| ϕ× >< ϕ× |, ϕ× = η−1 | Φ⋄ >, ρ∧ =| ϕ∧ >< ϕ∧ |, | ϕ∧ >= η−1 η△ = |cijij|2, η× = ciji(1−j) 2 i,j=0,1 s X i,j=0,1 s X η⋄ = cij(1−i)j 2, η∧ = (cid:12) (cid:12) (cid:12) (cid:12) cij(1−i)(1−j) 2 (cid:12) (cid:12) It can be directly verified that from Eq.(10) we have (cid:12) (cid:12) (cid:12) (cid:12) i,j=0,1 s X i,j=0,1 s X (cid:12) (cid:12) ρ(AC−BD) = η2 △ρ△ + η2 ×ρ× + η2 ⋄ρ⋄ + η2 ∧ρ∧ (13) (14) where ρ△, ρ×, ρ⋄, ρ∧ all are bipartite qubit pure-states. It is easily seen that |cijkl|2 = 1. This since | ΨABCD > is normalized, η2 × + η2 ⋄ + η2 △ + η2 ∧ = i,j,k,l=0,1 P 7 means that ρ(AC−BD) is a bipartite qubit density matrix(a mixed state) for this pur-state ρABCD. Secondly, if ρABCD = pαρα(ABCD) is a mixed-state, where every ρα(ABCD) α P is a quadripartite qubit pure-state with probabilities pα, then from Eq.(10) we have ρ(AC−BD) = α pα, (ρα)(AC−BD) . Since every (ρα)(AC−BD) is a bipartite qubit density matrix, ρ(AC−BD) is a density matrix (a mixed-state). 
means that ρ_{(AC−BD)} is a bipartite qubit density matrix (a mixed state) for this pure state ρ_{ABCD}.

Secondly, if ρ_{ABCD} = Σ_α p_α ρ_{α(ABCD)} is a mixed state, where every ρ_{α(ABCD)} is a quadripartite qubit pure state with probability p_α, then from Eq. (10) we have ρ_{(AC−BD)} = Σ_α p_α (ρ_α)_{(AC−BD)}. Since every (ρ_α)_{(AC−BD)} is a bipartite qubit density matrix, ρ_{(AC−BD)} is a density matrix (a mixed state).

A similar argument extends to the higher-dimensional case. The key point is that when ρ_{i_1,···,i_N} is a pure state, then ρ_{[(r′)_{P′},(r″)_{P″}] ∥ [(s′)_{Q′},(s″)_{Q″}]} = |Ψ_{[(r′)_{P′},(r″)_{P″}] ∥ [(s′)_{Q′},(s″)_{Q″}]}⟩⟨Ψ_{[(r′)_{P′},(r″)_{P″}] ∥ [(s′)_{Q′},(s″)_{Q″}]}|, where the pure state

|Ψ_{[(r′)_{P′},(r″)_{P″}] ∥ [(s′)_{Q′},(s″)_{Q″}]}⟩ = Σ_{i,j=0,1} c_{x_1 x_2 ··· x_N} |x_1⟩ ⊗ ··· ⊗ |x_N⟩  (x_1, x_2, ···, x_N are determined by Eq. (7)),    (15)

and we have

|Ψ_{i_1,···,i_N}⟩ = Σ |Ψ_{[(r′)_{P′},(r″)_{P″}] ∥ [(s′)_{Q′},(s″)_{Q″}]}⟩ over all possible non-repeated [(r′)_{P′},(r″)_{P″}] ∥ [(s′)_{Q′},(s″)_{Q″}].    (16)

Using this relation, constructing states similar to those in Eq. (12), and generalizing to mixed states, we can prove that, in general, a mixed-state density matrix ρ_{i_1···i_N} can be reduced to the bipartite qubit density matrix ρ_{((r)_P−(s)_{N−P})}. □

The following theorem is the main result of this paper; it is an application of the PPT condition to multipartite qubit systems.

Theorem 2 (Criterion). For a given partition (r)_P ∥ (s)_{N−P}, a necessary condition for an N-partite (N ≥ 3) qubit density matrix ρ_{i_1 i_2 ··· i_N} to be (r)_P−(s)_{N−P}-separable is that the reduced bipartite qubit density matrix ρ_{((r)_P−(s)_{N−P})} is separable, i.e., it satisfies the PPT condition.

Proof. We discuss in detail only the quadripartite qubit case; it can be straightforwardly generalized to the arbitrary N-partite qubit case. In the first place, we prove that the theorem holds for a quadripartite qubit pure state. Suppose that the pure state ρ_{ABCD} is AC-BD-separable. This means that if we choose the natural basis {|i_A⟩ ⊗ |j_C⟩ ⊗ |r_B⟩ ⊗ |s_D⟩}, then ρ_{AC∥BD} = ρ_{AC} ⊗ ρ_{BD}, where ρ_{AC} = |Ψ_{AC}⟩⟨Ψ_{AC}| with |Ψ_{AC}⟩ = Σ_{i,j=0,1} c_{ij} |i_A⟩ ⊗ |j_C⟩, Σ_{i,j=0,1} |c_{ij}|² = 1, and ρ_{BD} = |Ψ_{BD}⟩⟨Ψ_{BD}| with |Ψ_{BD}⟩ = Σ_{r,s=0,1} d_{rs} |r_B⟩ ⊗ |s_D⟩, Σ_{r,s=0,1} |d_{rs}|² = 1. From the above construction, it is easily checked that the bipartite qubit density matrix ρ_{(AC−BD)} can in fact be rewritten as

ρ_{(AC−BD)} = σ_△ + σ_× + σ_⋄ + σ_∧ = σ_{(AC)} ⊗ σ_{(BD)} + σ_{(AC)} ⊗ σ_{(B̌D)} + σ_{(ǍC)} ⊗ σ_{(BD)} + σ_{(ǍC)} ⊗ σ_{(B̌D)},    (17)

where σ_{(AC)} = |Φ_{(AC)}⟩⟨Φ_{(AC)}| with |Φ_{(AC)}⟩ = Σ_{i=0,1} e_i |i_x⟩, e_i ≡ c_{ii} and |i_A⟩ ⊗ |i_C⟩ → |i_x⟩; similarly σ_{(ǍC)} = |Φ_{(ǍC)}⟩⟨Φ_{(ǍC)}| with |Φ_{(ǍC)}⟩ = Σ_{j=0,1} f_j |j_x⟩, f_j ≡ c_{j(1−j)} and |j_A⟩ ⊗ |(1−j)_C⟩ → |j_x⟩; and similarly for σ_{(BD)} and σ_{(B̌D)} built from d_{jj} and d_{j(1−j)}. Now ρ_{(AC−BD)} can be written as

ρ_{(AC−BD)} = η²_{(AC)} η²_{(BD)} ρ_{(AC)} ⊗ ρ_{(BD)} + η²_{(AC)} η²_{(B̌D)} ρ_{(AC)} ⊗ ρ_{(B̌D)} + η²_{(ǍC)} η²_{(BD)} ρ_{(ǍC)} ⊗ ρ_{(BD)} + η²_{(ǍC)} η²_{(B̌D)} ρ_{(ǍC)} ⊗ ρ_{(B̌D)},    (18)

where ρ_{(AC)} = η_{(AC)}^{−2} |Φ_{(AC)}⟩⟨Φ_{(AC)}| with η_{(AC)} = √(Σ_{i=0,1} |c_{ii}|²) is a density matrix of a single (formal) particle, and similarly for ρ_{(ǍC)}, ρ_{(BD)}, and ρ_{(B̌D)}. Since

η²_{(AC)} η²_{(BD)} + η²_{(AC)} η²_{(B̌D)} + η²_{(ǍC)} η²_{(BD)} + η²_{(ǍC)} η²_{(B̌D)} = (η²_{(AC)} + η²_{(ǍC)})(η²_{(BD)} + η²_{(B̌D)}) = 1,    (19)

ρ_{(AC−BD)} is a separable bipartite qubit mixed state. The PPT condition for separability of 2×2 systems is sufficient and necessary [2]; thus ρ_{(AC−BD)} satisfies the PPT condition.
Similarly for the other partial separabilities.

Secondly, we prove that the theorem also holds for partially separable mixed states. Suppose that ρ_{ABCD} is an AC-BD-separable mixed state; then under the same natural basis there is a decomposition ρ_{AC∥BD} = Σ_α p_α ρ_{α(AC)} ⊗ ρ_{α(BD)}, where ρ_{α(AC)} and ρ_{α(BD)} are both bipartite qubit pure states as above for all α, 0 < p_α ≤ 1, Σ_α p_α = 1. From the above reduction operation, we obviously have

ρ_{(AC−BD)} = Σ_α p_α [ρ_{α(AC)} ⊗ ρ_{α(BD)}]_{(AC−BD)}.    (20)

According to the above, every [ρ_{α(AC)} ⊗ ρ_{α(BD)}]_{(AC−BD)} is a separable bipartite qubit mixed state; this means that the convex sum ρ_{(AC−BD)} in Eq. (20) is still a separable bipartite qubit mixed state, and it must satisfy the PPT condition. Similarly, we can prove the higher-dimensional cases. □

Corollary. If the reduced bipartite qubit density matrix (ρ_{i_1 i_2 ··· i_N})_{((r)_P−(s)_{N−P})} violates the PPT condition for a partition (r)_P ∥ (s)_{N−P}, then ρ_{i_1 i_2 ··· i_N} is (r)_P−(s)_{N−P}-inseparable and entangled.

This is, in fact, the contrapositive of Theorem 2.

Examples. Consider two tripartite qubit density matrices ρ′_{ABC} and ρ′′_{ABC}: 8×8 matrices whose nonzero entries are built from a singlet fraction x and a random fraction 1−x, with entry values x/2, −x/2, and (1−x)/4,    (21)

for which we have

(ρ′_{ABC})_{(A−BC)} = (ρ′′_{ABC})_{(B−AC)} = ρ_W,    (22)

where ρ_W is the Werner state [1,15], consisting of a singlet fraction x and a random fraction (1−x):

[ρ_W]_{ij,rs} = x S_{ij,rs} + (1/4)(1−x) δ_{ir} δ_{js},   S_{01,01} = S_{10,10} = −S_{01,10} = −S_{10,01} = 1/2,    (23)

with all other components of S vanishing. It is known [1] that when 1/3 < x ≤ 1, ρ_W violates the PPT condition; this leads to ρ′_{ABC} being A-BC-inseparable and ρ′′_{ABC} being B-AC-inseparable.

Using the above theorems and corollary, in some special cases we can construct an N-partite qubit density matrix from 2^{N−2} bipartite qubit density matrices that is partially inseparable for a given partition. As above, for the tripartite qubit case we take two bipartite qubit density matrices σ^{(1)}, σ^{(2)} and real numbers p_1, p_2 with 0 < p_1, p_2 ≤ 1 such that τ = p_1 σ^{(1)} + p_2 σ^{(2)} is a bipartite qubit entangled state (and so violates the PPT condition). If we want to construct a tripartite qubit entangled state ρ_{ABC} that is B-AC-inseparable, we can take the entries of ρ_{ABC} to be

[ρ_{ABC}]_{ijk,rst} = p_1 [σ^{(1)}]_{ji,sr},  for k = i and t = r;
[ρ_{ABC}]_{ijk,rst} = p_2 [σ^{(2)}]_{ji,sr},  for k = 1−i and t = 1−r;
[ρ_{ABC}]_{ijk,rst} = 0,  for the rest  (i, j, k, r, s, t = 0, 1).    (24)

It can be verified that ρ_{ABC} is a tripartite qubit density matrix and is B-AC-inseparable; in fact, (ρ_{ABC})_{(B−AC)} = τ, which violates the PPT condition. Similarly for A-BC and C-AB. The above procedure can be generalized to obtain an (r)_P−(s)_{N−P}-inseparable density matrix from a bipartite qubit entangled state of the form τ = Σ_{i=1}^{2^{N−2}} p_i σ^{(i)}, where all σ^{(i)} are bipartite qubit density matrices.
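As a numerical check of these examples (a minimal NumPy sketch of ours, not code from the paper; it merely verifies the fact quoted from [1]), one can build the Werner state of Eq. (23) and test the PPT condition by partially transposing one qubit:

```python
import numpy as np

def werner(x: float) -> np.ndarray:
    """Werner state: singlet fraction x plus random fraction (1 - x), Eq. (23)."""
    psi_minus = np.zeros(4)
    psi_minus[1], psi_minus[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)
    return x * np.outer(psi_minus, psi_minus) + (1 - x) * np.eye(4) / 4

def partial_transpose(rho: np.ndarray) -> np.ndarray:
    """Partial transpose over the second qubit of a two-qubit state,
    rho_{ij,rs} -> rho_{is,rj}."""
    t = rho.reshape(2, 2, 2, 2)                    # indices (i, j; r, s)
    return t.transpose(0, 3, 2, 1).reshape(4, 4)   # swap j <-> s

for x in (0.2, 0.5, 0.9):
    lam_min = np.linalg.eigvalsh(partial_transpose(werner(x))).min()
    print(f"x = {x}: min eigenvalue of PT = {lam_min:.3f}",
          "(PPT violated, entangled)" if lam_min < 0 else "(PPT satisfied)")
```

For x ≤ 1/3 the minimum eigenvalue stays non-negative, while for x > 1/3 it becomes negative, matching the statement quoted from [1].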
References

[1] A. Peres, Phys. Rev. Lett. 77 (1996) 1413.
[2] M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A 223 (1996) 1.
[3] S. Wu, X. Chen, and Y. Zhang, Phys. Lett. A 275 (2000) 244.
[4] M. Horodecki, P. Horodecki, and R. Horodecki, quant-ph/0006071.
[5] B. M. Terhal, Theor. Comput. Sci. 287(1) (2002) 313.
[6] M. Horodecki, P. Horodecki, and R. Horodecki, quant-ph/0206008.
[7] K. Chen and L. A. Wu, Phys. Lett. A 306 (2002) 14.
[8] W. Dür, G. Vidal, and J. I. Cirac, Phys. Rev. A 62 (2000) 062314.
[9] M. Seevinck and G. Svetlichny, Phys. Rev. Lett. 89 (2002) 060401.
[10] A. O. Pittenger and M. H. Rubin, Phys. Rev. A 62 (2000) 032313.
[11] A. M. Wang, quant-ph/0305016.
[12] T. Yamakami, quant-ph/0308072.
[13] C. H. Bennett, D. P. DiVincenzo, T. Mor, P. W. Shor, J. A. Smolin, and B. M. Terhal, Phys. Rev. Lett. 82 (1999) 5385.
[14] D. P. DiVincenzo, T. Mor, P. W. Shor, J. A. Smolin, and B. M. Terhal, Commun. Math. Phys. 238 (2003) 379.
[15] J. Blank and P. Exner, Acta Univ. Carolinae, Math. Phys. 18 (1977) 3.
ai_researcher
2
SCITUNE_Aligning_Large_Language_Models_with_Human-Curated_Scientific_Multimodal_Instructions.pdf
arXiv:2307.01139v1 [cs.CV] 3 Jul 2023

SCITUNE: Aligning Large Language Models with Scientific Multimodal Instructions

Sameera Horawalavithana, Sai Munikoti, Ian Stewart, and Henry Kvinge
yasanka.horawalavithana,sai.munikoti,ian.stewart,[email protected]
Pacific Northwest National Laboratory, Richland, WA

Abstract

Instruction finetuning is a popular paradigm to align large language models (LLM) with human intent. Despite its popularity, this idea is less explored as a way to align existing foundation models with scientific disciplines, concepts, and goals. In this work, we present SciTune, a tuning framework to improve the ability of LLMs to follow scientific multimodal instructions. To test our methodology, we use a human-generated scientific instruction tuning dataset and train a large multimodal model, LLaMA-SciTune, that connects a vision encoder and an LLM for science-focused visual and language understanding. In comparison to models that are finetuned with machine-generated data only, LLaMA-SciTune surpasses human performance on average and in many sub-categories on the ScienceQA benchmark.

1 Introduction

Instruction tuning has gained significant traction in the community as a means of enhancing the capabilities of large language models (LLMs), allowing them to accurately balance desired outcomes, context, and human preferences, leading to more relevant and coherent responses. More recently, AI assistants have emerged that comprehend and execute multimodal vision-and-language instructions, aligned with human intent, to successfully accomplish diverse real-world tasks in complex multimodal environments. In the latest developments, MiniGPT-4 (Zhu et al., 2023), LLaVA (Liu et al., 2023), and LLaMA-Adapter (Gao et al., 2023) have focused on expanding language-only instruction models to incorporate multimodal capabilities, empowering LLMs to perform visually grounded reasoning tasks.

While these results have been impressive, many of these models still fail to perform in ways that meet the standards of certain scientific subdomains (Li et al., 2023). In this work, we focus on the problem of adapting multimodal foundation models to scientific tasks. To achieve this, we investigate grounded scientific multimodal instruction tuning, which aligns existing foundation models with scientific disciplines, concepts, and goals so that generated content matches the standards and expectations of the scientific community. Our hypothesis is that scientifically aligned multimodal foundation models can learn from the unique patterns and structures present in scientific language and follow precise instructions about complex procedures, protocols, and guidelines in scientific environments. Such models can exhibit improved or comparable performance on science-focused downstream tasks relative to models aligned with general human feedback.

To this end, we introduce a new framework called SciTune that performs scientific multimodal instruction tuning on top of any decoder-based pretrained LLM and vision encoder.
SciTune includes two stages of scientific multimodal instruction tuning: i) scientific concept alignment, to learn across various scientific visual signals (e.g., plots, charts, equations, diagrams) and textual signals (e.g., captions, optical character recognition (OCR), and paragraph mentions); and ii) scientific instruction tuning, to fine-tune on a multimodal scientific reasoning task. To validate our approach, we perform experiments on top of LLaMA (Touvron et al., 2023), the best-performing open-source LLM, and LLaVA (Liu et al., 2023), the most recent state-of-the-art multimodal instruction-tuned model architecture. We show that our model, LLaMA-SciTune, surpasses human performance on the ScienceQA multimodal reasoning benchmark and performs significantly better than state-of-the-art (SOTA) vision-language models on a variety of scientific image understanding tasks with zero demonstrations at inference time.

Figure 1: SciTune presents i) scientific concept alignment to learn across various scientific visual signals (e.g., plots, charts, equations, diagrams) and textual signals (e.g., captions, OCR, and paragraph mentions), and ii) scientific instruction tuning to fine-tune on a multimodal scientific reasoning task (e.g., ScienceQA with 21k multimodal multiple-choice questions with rich domain diversity across 3 subjects, 26 topics, 127 categories, and 379 skills).

2 Related Work

Natural Language Instruction Tuning. Instruction tuning enables LLMs to follow natural language instructions and align well with human intent. The most recent instruction-tuned models, such as InstructGPT (Ouyang et al., 2022), FLAN-T5/PaLM (Chung et al., 2022), OPT-IML (Iyer et al., 2022), and BLOOMZ (Muennighoff et al., 2022), have shown improved zero- and few-shot downstream task performance over their non-instruction-tuned counterparts. The performance of these models is mainly due to the quality of the instruction tuning datasets. For example, the FLAN-T5/PaLM models (Chung et al., 2022) use the Flan instruction tuning task collection, generated from a large number of NLP tasks reformatted with task-specific instruction templates. There have been several recent attempts to improve the diversity of the tasks present in these collections with synthetic data generation (Peng et al., 2023; Wang et al., 2022; Raheja et al., 2023; Honovich et al., 2022; Ye et al., 2022; Gupta et al., 2022). For example, Peng et al. (2023) improve the LLaMA models by instruction-tuning them with GPT-4 without a human in the loop. However, synthetic data generated by other LLMs is skewed toward the distribution of tasks and instructions present in their pretraining corpus. This means a model trained in this way is limited to mimicking the styles of proprietary, closed-source LLMs like GPT-4 and ChatGPT, with a tendency to veer away from factuality (Gudibande et al., 2023; Wang et al., 2023). Other attempts use human feedback on the responses generated by a model (Ouyang et al., 2022; Glaese et al., 2022; Bai et al., 2022a,b; Nakano et al., 2021). But human feedback datasets are expensive to collect, and fewer such datasets are publicly available compared with instruction tuning datasets.

Multimodal Instruction Tuning. Zhang et al. (2023a) proposed LLaMA-Adapter to guide the LLaMA model to follow multimodal instructions.
Specifically, they proposed zero-init attention with gating as a parameter-efficient fine-tuning (PEFT) technique to prepend learnable multimodal adaptation prompts to the input text tokens at the higher transformer layers of the LLaMA model. The same authors proposed LLaMA-Adapter-V2 (Gao et al., 2023), which distributes the learnable parameters across all layers of the LLaMA model to improve performance in multimodal reasoning. MiniGPT-4 (Zhu et al., 2023) combined a frozen LLM (Vicuna) and a vision encoder with a single projection layer and finetuned them on a highly curated visual conversation dataset. More recently, Liu et al. (2023) introduced visual instruction tuning to develop LLaVA, a general-purpose visual assistant that follows multimodal instructions. They present several data reformation techniques to construct multimodal instruction-following data from standard image-text pairs. For example, the LLaVA model was trained with 595K image-text pairs filtered from the CC3M dataset (Sharma et al., 2018) and 158K unique language-image instruction-following examples generated by ChatGPT/GPT-4 (Liu et al., 2023). This multimodal instruction set includes image-based conversations, detailed descriptions, and complex reasoning questions. In comparison to LLaVA, our SciTune framework handles several scientific data modalities, such as scientific plots, figure types, optical character recognition (OCR), and paragraph mentions, to increase scientific concept coverage during the pretraining stage. In addition, we use only 50% of the training examples compared to the CC3M data used in the LLaVA model.

Science-Focused Multimodal Reasoning. ScienceQA (Lu et al., 2022) is the standard benchmark for multimodal scientific question answering, covering diverse question types, topics, and domains. This benchmark tests the multimodal reasoning abilities of models, requiring that they answer multiple-choice questions based on visual and textual information and then support that answer via a lecture and explanation. While more than 15 models (including GPT-4 from OpenAI (Bubeck et al., 2023)) have been evaluated on ScienceQA, only a couple of models record performance comparable to humans¹.
In particular, Multimodal-CoT (Zhang et al., 2023b) reports human-level performance on ScienceQA by performing chain-of-thought (CoT) reasoning in two stages: rationale generation and answer inference. Specifically, Multimodal-CoT uses UnifiedQA as the base LLM and DETR (Carion et al., 2020) to extract vision features. However, further analysis suggests that CoT-based reasoning may not always lead to the most accurate answer. This model often makes commonsense mistakes when answering questions that require commonsense knowledge (e.g., the ability to understand maps, count objects in images, and use the alphabet), as well as logical mistakes, with contradictions in the reasoning chains. Multimodal-CoT was not evaluated on other visual reasoning tasks in the scientific domain. Meanwhile, LLaVA (Liu et al., 2023) reaches SOTA performance on the ScienceQA benchmark with support from GPT-4, which acts as a judge to evaluate the generated answers.

¹ ScienceQA Public Leaderboard

3 Scientific Multimodal Instruction Tuning

LLMs memorize only part of scientific knowledge, owing to the availability of training data during pretraining, and show limited capability to distinguish scientific knowledge from world knowledge (Taylor et al., 2022). For example, LLMs perform differently across scientific domains depending on whether they are trained from scratch or continually pretrained (Horawalavithana et al., 2022). There are two major challenges in aligning existing foundation models with scientific goals. First, there are only a few publicly available models that can reason about scientific knowledge and perform well on knowledge-intensive scientific tasks (Taylor et al., 2022). For example, a recent investigation (Fu, 2023) suggests that existing LLMs are very sensitive to changes in the evaluation protocol used in the science-focused Massive Multitask Language Understanding (MMLU) benchmark (Hendrycks et al., 2021). This shows that generically pretrained LLMs may hallucinate when generating answers on science-focused downstream tasks without providing any scientific explanations. Second, there are only a few highly curated multimodal instruction-following datasets available in the scientific domains (Li et al., 2023). This is a major challenge in developing visual assistants useful to practitioners in a variety of scientific domains (e.g., understanding patients' needs and providing informed advice given visuals from chest X-rays or computed tomography). One of our contributions is a framework to generate science-focused multimodal instruction-following datasets that cover the broad scientific concepts required to solve a variety of practically relevant science tasks (Taylor et al., 2022; Lu et al., 2022).

In this section, we describe the SciTune framework, in two stages of Scientific Multimodal Concept Alignment and Task-specific Instruction Tuning, in Section 3.1, and the multimodal architecture used for the experiments in Section 3.2.
Human-generated Scientific Instructions This work solely focuses on multimodal instructions generated by humans instead of machine gener- ated content used in other visual instruction tuned models (Liu et al., 2023; Gao et al., 2023). Our goal is to align the pretrained foundation models with natural scientific concepts and true intent of humans (scientists). To this end, we chose the scientific publications (PDFs) as the medium of sci- entific instructions that demonstrate various stages of scientific discovery. We use the SciCap (Hsu et al., 2021) datasets with more than 400,000 sci- entific figure images extracted from various arXiv papers, including their respective captions and rele- vant paragraphs. This dataset is composed of arXiv papers from January 2010 to October 2020. It en- compasses eight distinct categories: Computer Sci- ence, Economics, Electrical Engineering and Sys- tems Science, Mathematics, Physics, Quantitative Biology, Quantitative Finance, and Statistics. We use the 333,472 examples provided in the SciCap training split for pretraining and use a sample of the validation records to evaluate the performance. We introduce scientific captions (sc), figure types (st), optical character recognition (OCR)(so) and paragraph mentions(sm) in the instruction template sT = {sc, st, so, sm} to convert the SciCap dataset into a multimodal instruction following dataset. Please see the Table 6 for a sample SciTune in- struction. 3.2 Multimodal Architecture Architecture We build on top of the most recent multimodal architectures (e.g., LLaVA (Liu et al., 2023), LLaMA-Adapter (Zhang et al., 2023a)) that guide LLMs to follow multimodal instructions. We noticed that adapter-based multimodal training serves as the most efficient technique for injecting multimodal knowledge to a pretrained LLM de- coder model. Our goal was to improve the existing LLMs to perform better on science-focused multi- modal reasoning and visual grounded tasks. To this end, we chose LLaMA (Touvron et al., 2023) as the LLM decoder, and CLIP visual encoder (Radford et al., 2021) to experiment with multimodal adapter training as shown in Figure 1. The multimodal adapter transforms the output of the visual encoder model as inputs to the language decoder with a linear projection layer. While we keep the language decoder and the visual encoder models frozen, the multimodal adapter is updated during the during the pretraining stage. This modu- lar architecture can be plugged with any language decoder and a visual encoder model. We continue our experiments with LLaMA 7B and 13B model variants for better comparison with other baseline models. It is worthwhile to note that we chose LLaMA due to its superior performance in the pub- lic benchmarks and its open-source accessibility. We do not use any instruction-tuned LLaMA variants (e.g., Vicuna, Guanaco) in our experiments due to two main reasons. First, we mainly focus on improving the base LLM decoder models with multimodal instructions generated by humans in order to eliminate all confounding factors such as machine generated instruction tuning. Since a ma- jority of instruction-tuned models developed on top of LLaMA are knowledge-distilled from closed- source, proprietary models like GPT-4, we want to avoid any unexpected performance advantages. 
Second, we want to make a fair comparison with other baseline models proposed in this area de- veloped on top of the base LLaMA model, and test whether the multimodal instruction tuning pro- posed in this work could lead into better scientific concept understanding compared to those models. Training We present the unsupervised distribu- tion estimation p(x) from a set of SciTune instruc- tions (x1, x2, .., xm) as the product of conditional multimodal token probabilities as shown in Equa- tion 1. p(x) = n (cid:89) j=1 p(sT >j|sV , sI , sT <j) (1) We use the sV as the multimodal tokens after projected from the respective plot visuals V . We sample the instruction sI from the list of ques- tions presented in Table 7. Note that we skip the token descriptors in sT for brevity, unless the model is trained autoregressively to generate ex- act tokens across all textual modalities in sT = {sc, st, so, sm}. More importantly, the model is able to jointly generate all modality tokens in a single-turn conversation. For example, given a scientific plot and an instruction, the model first generates the figure type (e.g., Graph Plot, Scatter- plot, Node Diagram, Equation, Bar Chart), then the visual content through captioning and OCR, and finally the cited paragraph. 4 Experiments In this section, we report the performance of LLaMA-SciTune model across a variety of science- focused downstreaming tasks. Our goal is to assess the performance of the model in visual grounded language understanding and multimodal reasoning tasks. For example, we want to show how much different training stage contributes to the model performance, or whether adding various scientific modalities in the instruction template (as presented in Section 3.1) improves the overall performance. To this end, we trained three LLaMA-SciTune mod- els to perform ablation studies on the input scien- tific modalities and test whether the scale of the LLM really matters in the downstream task per- formance. Each model spans over two training stages: i) the scientific concept alignment and ii) scientific instruction tuning. Thus, we use the cor- responding stage checkpoints to drive additional experiments. First, we report the performance of our model in two science-focused visual grounded tasks in Section 4.1 to assess the scientific con- cept alignment training stage. Finally, we use the ScienceQA benchmark to test the multimodal rea- soning abilities of our model across three scientific subject areas (Section 4.2). 4.1 Vision Grounded Tasks Performance In this section, we report the performance of the scientific concept alignment stage of the training with two zero-shot downstream tasks. In the first task, we evaluate how well the LLaMA-SciTune is able to align the associated figure types with the actual image. In the second task, we evaluate the performance of the LLaMA-SciTune in gen- erating the figure captions. We use the validation data released by SciCap challenge to perform our experiments. This validation dataset includes 500 random instances of plots and the associated figure type, caption, OCR, and paragraph mentions. 4.1.1 Scientific Figure Type Generation In the scientific concept alignment stage, one of the learning tasks is to align the scientific visu- als with the correct figure type. For example, the model should be able to distinguish a graph plot from a scatter plot. We compare the performance of our model of generating the figure types with a standalone vision encoder. 
4 Experiments

In this section, we report the performance of the LLaMA-SciTune model across a variety of science-focused downstream tasks. Our goal is to assess the performance of the model on visually grounded language understanding and multimodal reasoning tasks. For example, we want to show how much each training stage contributes to the model performance, and whether adding various scientific modalities to the instruction template (as presented in Section 3.1) improves the overall performance. To this end, we trained three LLaMA-SciTune models to perform ablation studies on the input scientific modalities and to test whether the scale of the LLM really matters for downstream task performance. Each model spans two training stages: i) scientific concept alignment and ii) scientific instruction tuning. Thus, we use the corresponding stage checkpoints to drive additional experiments. First, we report the performance of our model on two science-focused visually grounded tasks in Section 4.1 to assess the scientific concept alignment training stage. Second, we use the ScienceQA benchmark to test the multimodal reasoning abilities of our model across three scientific subject areas (Section 4.2).

4.1 Vision Grounded Tasks Performance

In this section, we report the performance of the scientific concept alignment stage of the training on two zero-shot downstream tasks. In the first task, we evaluate how well LLaMA-SciTune is able to align the associated figure types with the actual image. In the second task, we evaluate the performance of LLaMA-SciTune in generating figure captions. We use the validation data released by the SciCap challenge to perform our experiments. This validation dataset includes 500 random instances of plots and the associated figure type, caption, OCR, and paragraph mentions.

4.1.1 Scientific Figure Type Generation

In the scientific concept alignment stage, one of the learning tasks is to align the scientific visuals with the correct figure type. For example, the model should be able to distinguish a graph plot from a scatter plot. We compare the performance of our model at generating the figure types with that of a standalone vision encoder. For example, we use the CLIP model (Radford et al., 2021) to perform figure type classification in a zero-shot manner given five candidate types (Graph Plot, Scatterplot, Node Diagram, Equation, Bar Chart). We locate the figure types in the generated SciTune outputs and compare them with the ground truth. As shown in Table 1, LLaMA-SciTune shows a 57% performance improvement over the standalone CLIP model used for figure type classification. It is important to note that LLaMA-SciTune used the same CLIP model as the visual encoder, but the additional multimodal adapter was optimized towards aligning figure types with the plots during the pretraining stage. This multimodal adapter is able to project the outputs of the vision encoder into the LLM to improve its understanding of scientific plots.

Table 1: Accuracy of generating the figure types. We also report the zero-shot figure type classification performance of the CLIP (Radford et al., 2021) model.

Figure Type  | CLIP  | LLaMA-SciTune
Graph Plot   | 54.07 | 93.48
Scatterplot  | 53.48 | 79.06
Node Diagram | 91.02 | 98.71
Equation     | 65.71 | 94.28
Bar Chart    | 28.94 | 84.21
All          | 58.68 | 92.42
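For reference, the standalone zero-shot CLIP baseline can be reproduced along the following lines. This is a sketch using the Hugging Face CLIP interface; the checkpoint name and prompt phrasing are our assumptions, not necessarily the paper's exact setup.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

FIGURE_TYPES = ["Graph Plot", "Scatterplot", "Node Diagram", "Equation", "Bar Chart"]

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def classify_figure_type(image_path):
    """Score one plot image against the five candidate type prompts."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[f"a figure of type {t}" for t in FIGURE_TYPES],
                       images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
    return FIGURE_TYPES[probs.argmax().item()]
```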
4.1.2 Scientific Figure Captioning

In this section, we test the model's performance at generating scientific figure captions given only the scientific plot. Previous work shows that scientific figure captioning is an extremely challenging task due to the complex image understanding required in vision-to-language modeling (Huang et al., 2023). We take the first sentence in the generated SciTune output as the generated caption. We compare our model's performance with the SOTA image captioning model, BLIP (Li et al., 2022), trained with more than 14M image-text pairs. We use two text evaluation metrics, BLEU and ROUGE, to measure the goodness of the generated captions compared with the ground truth.

As shown in Table 2, the LLaMA-SciTune model outperforms the BLIP model on both automated text evaluation metrics. This suggests that LLaMA-SciTune may have a better understanding of the scientific plot in comparison to the BLIP model finetuned towards image captions. It is worthwhile to note that LLaMA-SciTune was not directly optimized towards image captioning, but it acquires this skill during the scientific concept alignment via instructions. Table 8 shows a few generated captions in comparison to the baseline and ground-truth image captions.

Table 2: Evaluation of Generated Figure Captions

Model                  | BLEU      | ROUGE
BLIP (Li et al., 2022) | 0.02±0.02 | 0.10±0.07
LLaMA-SciTune          | 0.05±0.03 | 0.13±0.08
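For reference, a sketch of how such BLEU/ROUGE numbers can be computed with off-the-shelf implementations; the package choices (nltk, rouge-score) and the sentence-level averaging are our assumptions, not necessarily the paper's exact evaluation setup.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

def caption_scores(references, hypotheses):
    """Mean sentence-level BLEU and ROUGE-L over (reference, generated) caption pairs."""
    smooth = SmoothingFunction().method1
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    bleu, rouge = [], []
    for ref, hyp in zip(references, hypotheses):
        bleu.append(sentence_bleu([ref.split()], hyp.split(),
                                  smoothing_function=smooth))
        rouge.append(scorer.score(ref, hyp)["rougeL"].fmeasure)
    return sum(bleu) / len(bleu), sum(rouge) / len(rouge)
```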
4.2 Scientific Multimodal Reasoning Task Performance

In this section, we evaluate the model performance on science-focused multimodal reasoning question answering (QA). We report the model performance on the ScienceQA benchmark (Lu et al., 2022), which includes 21k multimodal multiple-choice questions with rich domain diversity across 3 subjects, 26 topics, 127 categories, and 379 skills. We use the ScienceQA training split (12,726 examples) to tune the model further, as shown in Figure 1. Table 3 reports the performance of the models on the ScienceQA test split (4,241 test questions). While lectures are shared between the training and test splits, the test split contains new questions, associated multimodal contexts, and explanations.

Table 3: Results (accuracy %) on the ScienceQA dataset. Question classes: NAT = natural science, SOC = social science, LAN = language science, TXT = text context, IMG = image context, NO = no context, G1-6 = grades 1-6, G7-12 = grades 7-12. We present two variants, LLaMA-SciTune (C) and LLaMA-SciTune (CTOM). Acronyms inside the parentheses represent the input modalities used in the SciTune instruction template, e.g., Caption, Figure Type, OCR, and Figure Mentions. We use the notation ♠ to denote the models finetuned with GPT-3.5/4 synthetic instructions, or that use GPT-3.5/4 for any support during inference. We bold the accuracy values that are greater than what humans achieved. For additional baseline results, please refer to the public ScienceQA leaderboard.²

Method                    | #Params | Avg   | NAT   | SOC   | LAN   | TXT   | IMG   | NO    | G1-6  | G7-12
Random Chance             | -       | 39.83 | 40.28 | 46.13 | 29.25 | 47.45 | 40.08 | 33.66 | 39.35 | 40.67
Human Average             | -       | 88.40 | 90.23 | 84.97 | 87.48 | 89.60 | 87.50 | 88.10 | 91.59 | 82.42
UnifiedQA                 | 223M    | 70.12 | 68.16 | 69.18 | 74.91 | 63.78 | 61.38 | 77.84 | 72.98 | 65.00
UnifiedQA (CoT)           | 223M    | 74.11 | 71.00 | 76.04 | 78.91 | 66.42 | 66.53 | 81.81 | 77.06 | 68.82
♠ GPT-3 (Zero Shot)       | 175B    | 74.04 | 75.04 | 66.59 | 78.00 | 74.24 | 65.74 | 79.58 | 76.36 | 69.87
♠ GPT-3 (CoT) (ALE)       | 175B    | 75.17 | 75.44 | 70.87 | 78.09 | 74.68 | 67.43 | 79.93 | 78.23 | 69.68
♠ ChatGPT CoT             | 175B+   | 78.31 | 78.82 | 70.98 | 83.18 | 77.37 | 67.92 | 86.13 | 80.72 | 74.03
♠ GPT-4 CoT               | 1T+     | 83.99 | 85.48 | 72.44 | 90.27 | 82.65 | 71.49 | 92.89 | 86.66 | 79.04
Multimodal-CoT            | 223M    | 84.91 | 87.52 | 77.17 | 85.82 | 87.88 | 82.90 | 86.83 | 84.65 | 85.37
Multimodal-CoT            | 770M    | 91.68 | 95.91 | 82.00 | 90.82 | 95.26 | 88.80 | 92.89 | 92.44 | 90.31
♠ LLaMA-Adapter           | 13B     | 85.19 | 84.37 | 88.30 | 84.36 | 83.72 | 80.32 | 86.90 | 85.83 | 84.05
♠ LLaVA                   | 13B     | 90.92 | 90.36 | 95.95 | 88.00 | 89.49 | 88.00 | 90.66 | 90.93 | 90.90
♠ LLaVA + GPT-4 (judge)   | 13B     | 92.53 | 91.56 | 96.74 | 91.09 | 90.62 | 88.99 | 93.52 | 92.16 | 92.73
♠ Chameleon (ChatGPT)     | 175B+   | 79.93 | 81.62 | 70.64 | 84.00 | 79.77 | 70.80 | 86.62 | 81.86 | 76.53
♠ Chameleon (GPT-4)       | 1T+     | 86.54 | 89.83 | 74.13 | 89.82 | 88.27 | 77.64 | 92.13 | 88.03 | 83.72
LLaMA-SciTune (C)         | 7B      | 85.61 | 84.36 | 89.56 | 82.81 | 92.23 | 81.26 | 88.29 | 81.28 | 86.03
LLaMA-SciTune (CTOM)      | 7B      | 86.11 | 84.50 | 88.35 | 82.91 | 94.15 | 83.64 | 88.74 | 85.05 | 85.60
LLaMA-SciTune (CTOM)      | 13B     | 90.03 | 89.30 | 95.61 | 87.00 | 93.08 | 86.67 | 91.75 | 84.37 | 91.30

We have three main observations from this table. First, LLaMA-SciTune (CTOM) trained with the LLaMA 13B model as the base language decoder outperforms the human performance on average and in four other sub-groupings. For example, this model records 90.03% accuracy in correctly answering the multimodal reasoning questions in the ScienceQA benchmark, where humans record only 88.40% accuracy. This performance benefit is consistent across social science questions, questions with text or no contexts, and higher-grade questions. More importantly, we noticed that the LLaMA-SciTune model reaches comparable performance with the LLaVA model, which is trained with twice the training data that the former model has seen, and notably without any additional support from GPT-4 during inference.

Second, we noticed that the LLaMA-SciTune (CTOM) models pretrained with additional scientific modalities such as caption, figure type, OCR, and figure mentions perform better than LLaMA-SciTune (C), pretrained only with captions. For example, the CTOM variant (86.11) slightly outperforms the C variant (85.61) on average performance and across many other sub-groupings. However, LLaMA-SciTune models trained with the LLaMA 7B model as the base language decoder are closer to the performance of LLaMA models (85.81% accuracy) when they are directly trained on ScienceQA from scratch.

Finally, we noticed a significant performance advantage for the models trained with the larger language decoder model (13B) compared to the relatively smaller model (7B). For example, the 13B model has nearly a 5% performance advantage over the 7B model. This advantage is 5x bigger than what was reported for the LLaVA model when scaled from 7B to 13B (Liu et al., 2023). While this observation suggests that the larger language decoder model helps to improve multimodal reasoning performance, we believe it could lead to huge performance benefits with even larger models (LLaMA-65B) when trained with highly curated scientific multimodal instruction tuning datasets.

Explanation Performance Analysis LLaMA-SciTune models also generate lectures and explanations along with the generated answers. Please see Figures 2 and 3 in the Appendix for several examples of generated lectures and explanations. In order to better understand the behavior of the generated solutions, we manually investigated a few random test examples. Specifically, we picked 50 samples from both the correct and incorrect predictions. We observe that even the correct samples contain a certain amount of incorrect solutions, i.e., around 8% in the C and 2% in the CTOM version of the 7B models. These results indicate that the solution may not always benefit the final answer, and that the model is robust to some extent, i.e., it can predict the correct answer even with incorrect rationales. The incorrect solutions are further divided into two major categories: commonsense mistakes, which require commonsense knowledge such as factual information and counting numbers in the images, and logical mistakes, which show contradictions in the reasoning. In our experiment, commonsense mistakes are dominant compared to logical ones, which aligns with previous works (Zhang et al., 2023b). Furthermore, there are cases where solutions are correct in an absolute sense but their final answers are wrong. We also noticed that solutions generated by CTOM are more accurate compared to the C version of the model, further emphasizing the importance of multimodal training with additional scientific modalities. There are certain task categories where our model performs extremely well compared to baselines. In our manual analysis, we found that the model is very good with numerical questions, including temperatures and distances, and can answer all topological/map-related questions such as "which ocean is highlighted" in the image.

While we observe high performance in aggregate, it is also important to determine whether this performance persists in cases with minimal training examples. We evaluate the performance of the model for questions whose accompanying lectures are only observed a few times in the training data. For these few-shot examples, the model will be less likely to have the exact lecture memorized and ready to use in its generation of the answer, which could lead to lower performance. We show the model performance on questions for which the lectures were viewed 5, 10, 25, and 50 times during training in Table 4.

Table 4: Few-shot performance analysis. We report the number of times lectures were seen during training (frequency), and the number of test questions with the lecture.

Frequency | #Questions | Accuracy (7B) | Accuracy (13B)
5         | 36         | 75.00         | 83.33
10        | 125        | 81.60         | 85.60
25        | 412        | 80.34         | 85.92
50        | 1140       | 81.05         | 86.14
The model performance drops substantially for questions with only 5 or fewer lectures in the training data but quickly recovers after the lecture is viewed at least 10 times, which suggests that the LLaMA-SciTune model doesn't require significant exposure to a particular type of knowledge to achieve adequate performance. Furthermore, this performance drop is worse for the 7B model as compared to the 13B model, which means that the 13B model is able to learn more quickly from fewer examples, or may have more knowledge "baked in" from pretraining that can be leveraged for few-shot examples. Future extensions of the model to other datasets should test performance on truly unseen data, e.g., a more standard VQA dataset, to determine whether the model is similarly robust in other domains.
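A minimal sketch of this frequency-bucketed evaluation (our illustration; the field names `lecture` and `correct` are assumptions), where a question falls into bucket t if its lecture was seen at most t times during training:

```python
from collections import Counter

def fewshot_accuracy(train_examples, test_results, thresholds=(5, 10, 25, 50)):
    """Accuracy over test questions whose lecture appeared at most t times in training."""
    lecture_counts = Counter(ex["lecture"] for ex in train_examples)
    report = {}
    for t in thresholds:
        bucket = [r["correct"] for r in test_results
                  if lecture_counts[r["lecture"]] <= t]
        accuracy = 100.0 * sum(bucket) / len(bucket) if bucket else float("nan")
        report[t] = (len(bucket), accuracy)   # (#questions, accuracy %)
    return report
```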
Chain of Thought Reasoning Performance Outside of the coarse-grained accuracy metric (did the model get the answer right?), we also need to determine whether the model's overall process of reasoning was correct (did the model accurately explain the reasoning that supports the answer?). We investigate the accuracy of the generated text beyond the answer alone, assessing whether the model is able to accurately recover the lecture and the solution that it was trained to generate to help its reasoning toward the final answer. We report the BLEU and ROUGE scores over all the generated text, separated into the lecture and solution components and compared with the corresponding ground-truth data, e.g., we compare the generated lecture component with the ground-truth lecture. The aggregate results for the generation metrics are shown in Table 5.

Table 5: Evaluation of generated lectures and solutions.

                              7B Model        13B Model
                            BLEU   ROUGE    BLEU   ROUGE
All answers       Lecture   0.763  0.778    0.847  0.868
                  Solution  0.791  0.838    0.893  0.921
Correct answers   Lecture   0.765  0.854    0.767  0.861
                  Solution  0.829  0.872    0.631  0.937
Incorrect answers Lecture   0.751  0.780    0.909  0.924
                  Solution  0.565  0.873    0.694  0.778

When considering all the questions, the model generates the solution text with higher accuracy than the lecture text. However, in cases where the model answers incorrectly, the trend reverses and the model has higher accuracy in generating the lecture text as compared to the solution text. Therefore, the model may be failing to answer these questions due to a failure to reason in the "solution stage" of its generation. Furthermore, for the 13B model we see that the lecture generation performance is higher for incorrect answers than correct answers (ROUGE score of 0.924 for incorrect vs. 0.861 for correct). This could indicate overfitting, where the model "memorizes" lectures that apply to the problem but fails to apply the lectures to the actual solution.

This problem is apparent with an example question about object properties, where the model must determine the property shared by an icicle, a fish bowl, a glass, and a tea cup. The model correctly generates the lecture about object properties required to reason through the problem ("An object has different properties. A property of an object can tell you how it looks, feels, tastes, or smells."). However, in the solution stage the model incorrectly reasons that all the objects are transparent instead of fragile, based on a failure to infer the properties of the objects from the image ("You can see clearly through a transparent object. All four objects are transparent.").

Incorrect reasoning can be attributed to two factors, i.e., linguistic and visual features. In our manual analysis of 100 test samples, we found that linguistic features are weak for mainly two use cases, namely retrieving commonsense facts (e.g., characteristics of a bird's beak) and semantic understanding of words in terms of figures of speech and the relative position of words in the dictionary. In contrast, visual features appear to be strong in use cases such as identifying geographical areas and named entities, but lag in counting numbers in images and retrieving properties of objects such as color, texture, and state. Future improvements to the LLaMA-SciTune model should mitigate this type of incorrect reasoning, possibly with more explicit chain-of-thought guidance during training and inference.

5 Conclusion and Discussions

We present SciTune, a framework for scientific multimodal instruction tuning to align LLMs with scientific concepts and goals. To this end, we trained several LLaMA-SciTune models built on top of the LLaMA language decoder model and the CLIP vision encoder model and evaluated them on a variety of science-focused multimodal downstream tasks. For example, we show that LLMs tuned with human-generated scientific multimodal instructions can perform better at classifying scientific visuals and generating figure captions with zero demonstrations at inference time, compared with SOTA vision-to-language models. Furthermore, the LLaMA-SciTune model surpasses human performance on ScienceQA, the standard multimodal science-focused reasoning QA benchmark, when finetuned with task-specific instructions.

Acknowledgements

This work was supported by the NNSA Office of Defense Nuclear Nonproliferation Research and Development, U.S. Department of Energy, and Pacific Northwest National Laboratory, which is operated by Battelle Memorial Institute for the U.S. Department of Energy under Contract DE-AC05-76RLO1830. This article has been cleared by PNNL for public release as PNNL-SA-186641. The authors thank Karl Pazdernik and Sandy Thompson for their help with proofreading the article.

References

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073.

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.

Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. 2020. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, pages 213–229. Springer.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.

Yao Fu. 2023. Evaluation scripts for MMLU. https://github.com/FranxYao/chain-of-thought-hub/tree/main/MMLU.
Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. 2023. LLaMA-Adapter V2: Parameter-efficient visual instruction model.

Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. 2022. Improving alignment of dialogue agents via targeted human judgements. arXiv preprint arXiv:2209.14375.

Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and Dawn Song. 2023. The false promise of imitating proprietary LLMs.

Prakhar Gupta, Cathy Jiao, Yi-Ting Yeh, Shikib Mehri, Maxine Eskenazi, and Jeffrey P Bigham. 2022. Improving zero and few-shot generalization in dialogue through instruction tuning. arXiv preprint arXiv:2205.12673.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR).

Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2022. Unnatural instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689.

Sameera Horawalavithana, Ellyn Ayton, Shivam Sharma, Scott Howland, Megha Subramanian, Scott Vasquez, Robin Cosbey, Maria Glenski, and Svitlana Volkova. 2022. Foundation models of scientific knowledge for chemistry: Opportunities, challenges and lessons learned. In Proceedings of BigScience Episode #5 - Workshop on Challenges & Perspectives in Creating Large Language Models, pages 160–172.

Ting-Yao Hsu, C Lee Giles, and Ting-Hao 'Kenneth' Huang. 2021. SciCap: Generating captions for scientific figures. arXiv preprint arXiv:2110.11624.

Chieh-Yang Huang, Ting-Yao Hsu, Ryan Rossi, Ani Nenkova, Sungchul Kim, Gromit Yeuk-Yin Chan, Eunyee Koh, Clyde Lee Giles, and Ting-Hao 'Kenneth' Huang. 2023. Summaries as captions: Generating figure captions for scientific documents with automated text summarization. arXiv preprint arXiv:2302.12324.

Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Dániel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, et al. 2022. OPT-IML: Scaling language model instruction meta learning through the lens of generalization. arXiv preprint arXiv:2212.12017.

Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. 2023. LLaVA-Med: Training a large language-and-vision assistant for biomedicine in one day. arXiv preprint arXiv:2306.00890.

Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888–12900. PMLR.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023. Visual instruction tuning. arXiv preprint arXiv:2304.08485.

Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507–2521.
Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786.

Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.

Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with GPT-4. arXiv preprint arXiv:2304.03277.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR.

Vipul Raheja, Dhruv Kumar, Ryan Koo, and Dongyeop Kang. 2023. CoEdIT: Text editing by task-specific instruction tuning. arXiv preprint arXiv:2305.09857.

Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565.

Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023. How far can camels go? Exploring the state of instruction tuning on open resources.

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-Instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560.

Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, and Minjoon Seo. 2022. Guess the instruction! Making language models stronger zero-shot learners. arXiv preprint arXiv:2210.02969.

Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. 2023a. LLaMA-Adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199.

Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. 2023b. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923.

Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. MiniGPT-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592.
A SciTune Multimodal Instructions

Table 6 shows two SciTune instruction examples used in the scientific concept alignment training stage. Table 7 presents the LLaVA questions used to sample the s_I instructions.

B Training Details

We use the LLaVA codebase (Liu et al., 2023) for multimodal adapter training with SciTune instructions. We train the model for 1 epoch with a batch size of 128, a learning rate of 0.002, and a 2048 context length. The LLaMA-SciTune 7B and 13B model variants took 6.5 and 11.2 hours to train, respectively, with 8 x A100 GPUs. We use PyTorch Fully Sharded Data Parallel (FSDP) to recursively wrap the language model's decoder layers in the task-specific instruction finetuning stage. We finetuned the models for 12 epochs on the ScienceQA training examples to make a fair comparison with LLaVA.

Table 6: SciTune Multimodal Instruction Examples. We distinguish the system message s_D, natural language instruction s_I, scientific figure type s_t, caption s_c, OCR s_o, and paragraph mentions s_m. The list of instructions used to sample s_I is presented in Table 7.

Example 1: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions. Human: Give an elaborate explanation of the image you see. Assistant: Graph Plot. Room temperature characteristics (drain-source current versus gate voltage) for a set of nominally identical samples from the same wafer. Sample-to-sample variations are very important, especially below the threshold voltage (-0.5 V for the best device, in red). In the most extreme case (blue curve) the many decades of excess current compared to the best device are attributed to direct transport through individual arsenic dopants which have diffused into the channel after doping and annealing steps. The source-drain voltage is 10 mV for these data. FIG. 6: figure S1: Room temperature characteristics (drain-source current versus gate voltage) for a set of nominally identical samples from the same wafer.

Example 2: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions. Human: Describe the following image in detail. Assistant: Graph Plot. Resonance frequency of the (1, 0) mode of the r1 = 4 cm cavity filled with 1 atm of N2 as a function of the vertical position of the central electrode. Inner electrode position (arb., offset). In Fig. 10 we have an example of this behavior: the resonance frequency shows a maximum which corresponds to a minimum in the eccentricity of the electrode [39].

Table 7: LLaVA instructions for detailed image descriptions

"Describe the following image in detail."
"Provide a detailed description of the given image."
"Give an elaborate explanation of the image you see."
"Share a comprehensive rundown of the presented image."
"Offer a thorough analysis of the image."
"Explain the various aspects of the image before you."
"Clarify the contents of the displayed image with great detail."
"Characterize the image using a well-detailed description."
"Break down the elements of the image in a detailed manner."
"Walk through the important details of the image."
"Portray the image with a rich, descriptive narrative."
"Narrate the contents of the image with precision."
"Analyze the image in a comprehensive and detailed manner."
"Illustrate the image through a descriptive explanation."
"Examine the image closely and share its details."
"Write an exhaustive depiction of the given image."
C Visual Grounded Task Performance

Table 8 shows a few generated captions for the SciCap images used to test the model performance on visually grounded tasks. We report the gold-standard captions as they appeared in the arXiv articles used to collect the SciCap dataset, and the captions generated by the BLIP and LLaMA-SciTune (13B, CTOM) models for comparison.

D ScienceQA Chain of Thought Reasoning Examples

Figures 2 and 3 show a few answers, lectures, and solutions generated by LLaMA-SciTune (13B, CTOM) for ScienceQA test instances.

Table 8: A Sample of Generated Captions. For each figure we list the gold-standard caption (highlighted in red in the original), the caption generated by the BLIP (Li et al., 2022) model (gray), and the LLaMA-SciTune output (blue), which first generates the figure type followed by the caption.

Gold: Packet drop rate.
BLIP: a chart of a bar chart with a number of different items
LLaMA-SciTune: Bar Chart. Packet drop rate for each method.

Gold: The kinetic energy of the recoil protons as a function of the recoil angle at beam momenta P=1.5 and 15 GeV/c, blue and red, respectively.
BLIP: a plot of a curve with a blue line and a red line.
LLaMA-SciTune: Graph Plot. The angular distribution of the electron recoil spectrum in the 1.5 GeV/c and 15 GeV/c electron beams.

Gold: Artificial neural network structure.
BLIP: a diagram of a network with several different paths.
LLaMA-SciTune: Node Diagram. The generative neural network.

Gold: ROC curves of cIBP-VAE in comparison to alternative models on the clinical ECG data set.
BLIP: a plot of the average and average time of a cell phone.
LLaMA-SciTune: Graph Plot. ROC curves of c-VAE, CNN, and c-VAE+CNN on the cerebellar atrophy dataset.

Gold: Functional architecture of the developed prototype.
BLIP: a diagram of a camera and a person on a phone.
LLaMA-SciTune: Node Diagram. An overview of the system architecture of the proposed method.

Gold: Distance between matched groups in Madrid and RGO catalogs (bins of 0.1 degrees). The red line represents the mean value.
BLIP: a plot of a line of data with a red line and a white line.
LLaMA-SciTune: Graph Plot. Distance correlation between groups matched by Madrid RGO.

Gold: Comparison of penetration rate of mobile broadband subscribers with that of fixed broadband subscribers.
BLIP: a chart of the number of people who are using the internet.
LLaMA-SciTune: Bar Chart. The average rate of mobile broadband subcribers and fixed broadband subscribers for each quarter (in thousands).

Gold: Model staleness of the one-off trained model vs. the model retrained every day.
BLIP: a diagram of a graph with a line graph and a line graph.
LLaMA-SciTune: Graph Plot. Prediction accuracy of model trained once and fine-tuned every day.

Gold: Comparison of the effect of the path loss exponent α on rates achieved by both transmitters, M = 4.
BLIP: a plot of a line graph with a blue line and red line.
LLaMA-SciTune: Graph Plot. The sum-rate and sum-rate of RRM-RRM with respect to the path loss exponent γ for the two cases: γ = 2 and γ = 3.

Gold: Conceptual diagram of nonlinear adaptive method developed to control the HCV epidemic in the existence of uncertainties on parameters of the model.
BLIP: a diagram of a block diagram of a nuclear system.
LLaMA-SciTune: Node Diagram. Block diagram of the proposed non-linear SIR epidemic model with adaptive controllers.

Figure 2: Two multimodal QA examples with answer and explanation generated by LLaMA-SciTune: (a) an example with the right answer and right explanation; (b) an example with an incorrect answer and incorrect explanation.

Figure 3: Two unimodal QA examples with answer and explanation generated by LLaMA-SciTune: (a) an example with the right answer and right explanation; (b) an example with an incorrect answer and incorrect explanation.
ai_researcher
9
Two_Heads_Are_Better_Than_One_A_Multi-Agent_System_Has_the_Potential_to_Improve_Scientific_Idea_Generation.pdf
arXiv:cs/0110039v1 [cs.CC] 18 Oct 2001

JOURNAL OF THE AMERICAN MATHEMATICAL SOCIETY Volume 00, Number 0, 1997

TWO HEADS ARE BETTER THAN TWO TAPES

TAO JIANG, JOEL I. SEIFERAS, AND PAUL M. B. VITÁNYI

1. Introduction

The Turing machines commonly used and studied in computer science have separate tapes for input/output and for storage, so that we can conveniently study both storage as a dynamic resource and the more complex storage structures required for efficient implementation of practical algorithms [HS65]. Early researchers [MRF67] asked specifically whether two-head storage is more powerful if both heads are on the same one-dimensional storage tape than if they are on separate one-dimensional tapes, an issue of whether shared sequential storage is more powerful than separate sequential storage. Our result settles the longstanding conjecture that it is.

In a broader context, there are a number of natural structural parameters for the storage tapes of a Turing machine. These include the number of tapes, the dimension of the tapes, and the number of heads on each tape. It is natural to conjecture that a deficiency in any such parameter is significant and cannot be fully compensated for by advantages in the others. For the most part, this has indeed turned out to be the case, although the proofs have been disproportionately difficult [Ra63, He66, Gr77, Aa74, PSS81, Pa82, DGPR84, Ma85, LV88, LLV92, MSST93, PSSN90]. The case of deficiency in the number of heads allowed on each tape has turned out to be the most delicate, because it involves a surprise: a larger number of single-head tapes can compensate for the absence of multihead tapes [MRF67, FMR72, LS81]. For example, four single-head tapes suffice for general simulation of a two-head tape unit, without any time loss at all [LS81]. The remaining question is just what, if anything, is the advantage of multihead tapes.

Received by the editors shortly after July 20, 1995. 1991 Mathematics Subject Classification: Primary 68Q05, 68Q30; Secondary 68P20, 94A17, 68Q25, 03D15. Key words and phrases: two-head tape, multihead tape, buffer, queue, heads vs. tapes, multitape Turing machine, real-time simulation, on-line simulation, lower bound, Kolmogorov complexity, overlap. The first author was supported in part by NSERC Operating Grant OGP0046613. The third author was supported in part by the European Union through NeuroCOLT ESPRIT Working Group Number 8556, and by NWO through NFI Project ALADDIN under Contract number NF 62-376. We thank Wolfgang Maass for discussions that contributed to an early version of the Anti-Holography Lemma in 1985. We thank Ming Li for other valuable discussions, and for first bringing the problem addressed here to the attention of the first author. We thank Zvi Galil and Ken Regan for helpful comments on the manuscript. An earlier version of this report appeared in the Proceedings of STOC '94, the Twenty-Sixth Annual ACM Symposium on the Theory of Computing, pp. 668–675. © 1997 American Mathematical Society
Be- cause the diameter of the pattern (and hence the retrieval times) can be kept much smaller than its volume, no fast simulator would ever have time to perform any significant revision or copying of its representation of the bit pattern. On ordinary one-dimensional tapes, however, retrievals take time that is not small compared to the volume of data, and we cannot so easily focus on a nearly static representation of the data. We need some more subtle way to rule out all (possibly very obscure) copying methods that a two-tape machine might employ to keep up with its mission of fast simulation. Our argument below does finally get a handle on this elusive “copying” issue, making use of a lemma formulated more than ten years ago with this goal already in mind [Vi84, Far-Out Lemma below]. Our specific result is that no Turing machine with just two single-head one- dimensional storage tapes can recognize the following language in real time:1 ′ x2x L = { x | 0, 1 } ∈ { ∗ ′ and x is a prefix of x . } With a two-head tape, a Turing machine can easily recognize L in real time. Our result incidentally gives us a tight bound on the number of single-head tapes needed to recognize the particular language L in real time, since three do suffice [MRF67, FMR72]. Thus L is another example of a language with “number-of-tapes complexity” 3, rather different from the one first given by Aanderaa [Aa74, PSS81]. (For the latter, even a two-head tape, even if enhanced by instantaneous head-to- head jumps and allowed to operate probabilistically, was not enough [PSSN90].) Historically, multihead tapes were introduced in Hartmanis and Stearns’ seminal paper [HS65], which outlined a linear -time1 simulation of an h-head tape, using some larger number of ordinary single-head tapes. Stoß [St70] later reduced the number of single-head tapes to just h. Noting the existence of an easy real -time simulation in the other direction, Beˇcv´aˇr [Be65] explicitly raised the question of real- time simulation of an h-head tape using only single-head tapes. Meyer, Rosenberg, and Fischer devised the first such simulation [MRF67]; and others later reduced the number of tapes [FMR72, Be74, LS81], ultimately to just 4h 4. We are the first to show that this number cannot always be reduced to just h, although both the extra power of multihead tapes and the more-than-two-tape complexity of the particular language L have been longstanding conjectures [FMR72, LS81, Vi84, Pa84]. − 2. Tools Overlap. Part of our strategy will be to find within any computation a sufficiently long subcomputation that is sufficiently well behaved for the rest of our analysis. 1On-line recognition requires a verdict for each input prefix before the next input symbol is read, and real-time recognition is on-line recognition with some constant delay bound on the number of steps between the reading of successive input symbols. Note that even a single-tape Turing machine can recognize L on-line in cumulative linear time; but this involves an unbounded (linear-time) delay to “rewind” after reading the symbol 2. In cumulative linear time, in fact, general on-line simulation of a two-head one-dimensional tape is possible using just two single-head tapes [St70]; so real time is a stronger notion of “without time loss”. (There is an analogous linear- time simulation for two-dimensional tapes [ST89], but the question is open for higher dimensions.) 
TWO HEADS ARE BETTER THAN TWO TAPES 3 The behavior we seek involves limitations on repeated access to storage locations, which we call “overlap” [Aa74, PSSN90]. Our overlap lemma is purely combinatorial, and does not depend at all on the nature of our computations or the “storage locations” corresponding to their steps. Nor does it depend on the computational significance of the steps designated as “distinguished”. The use of computational terminology would only obscure the lemma’s formulation and proof, so we avoid it. An overlap event in a sequence S = ℓ1, . . . , ℓT (of “storage locations”, in our ap- plication) is a pair (i, j) of indices with 1 } ≤ (“visit and soonest revisit”). If ωt(S) is the number of such overlap events “strad- dling” t (i.e., with i t but j (cid:2) t), then the sequence’s internal overlap, ω(S), is max . The relative internal overlap is ω(S)/T . T and ℓi = ℓj / ℓi+1, . . . , ℓj−1 i < j ∈ { ≤ 1 ωt(S) ≤ t < T Here is an example: In the sequence ≤ } { | S = cow, pig, horse, pig, sheep, horse, pig, the overlap events are (2, 4), (4, 7), and (3, 6). For t from 1 up to 6, the respective values of ωt(S) are 0, 1, 2, 2, 2, and 1; so ω(S) is 2, and the relative internal overlap is 2/7. (In our setting below, we apply these definitions to the sequence of storage locations shifted to on the successive steps of a computation or subcomputation. Without loss of generality, we assume that a multihead or multitape machine shifts exactly one head on each step.) The lemma we now formulate guarantees the existence of a contiguous subse- quence that has “small” relative internal overlap (quantified using ε), but that is itself still “long” (quantified using ε′). The lemma additionally guarantees that the subsequence can include a quite fair share of a set of “distinguished positions” of our choice in the original sequence. (The “designated positions” in our setting will be the items in the sequence that correspond to a large “matching”—a notion we define later, especially motivated by computations involving two heads.) Overlap Lemma. Consider any δ < 1 and any ε > 0. Every sequence S (of length T , say) with “distinguished-position” density at least δ has a long contiguous subsequence, of length at least ε′T for some constant ε′ > 0 that depends only on δ and ε, with distinguished-position density still at least δ/2, and with relative internal overlap less than ε. Proof. Without loss of generality, assume T is a power of 2 that is large in terms of δ and ε. (If T is not a power of 2, then we can discard an appropriate prefix and suffix of combined length less than half the total, to obtain such a sequence with distinguished-position density still at least δ.) We consider only the sequence’s two halves, four quarters, eight eighths, etc. Of these, we seek many with sufficient distinguished-position density (at least δ/2) and with internal overlap accounted for by distinct overlap events, planning then to use the fact that each item in S can serve as the second component of at most one overlap event. Within each candidate subsequence S′, we can select a particular straddle point t for which ω(S′) = ωt(S′), and then we can designate the ω(S′) overlap events within S′ that straddle position t as the ones we consider counting. The designated overlap events in S′ can be shared by another interval only if that interval includes the corresponding selected straddle point t. 4 TAO JIANG, JOEL I. SEIFERAS, AND PAUL M. B. 
VIT ´ANYI We consider the candidate sequences in order of decreasing length (i.e., halves, then quarters, then eighths, etc.). At each partitioning level, at least fraction δ/2 of the subsequences must have distinguished-position density at least δ/2. (Otherwise, we cannot possibly have the guaranteed total δT distinguished positions in the subsequences on that level, since (δ/2) δ/2 < δ.) Among these, we can count distinct overlap from = 1 + (1 δ/2) − · · (δ/2)2 ⌈ (δ/2)4 ⌈ (δ/2)8 ⌈ (δ/2)16 ⌈ etc. δ ⌈ ⌉ ≥ ⌉ (δ/2)2 ⌉ − ⌈ (δ/2)4 ⌉ − ⌈ ⌉ ⌉ (δ/2)8 δ/2 = = − ⌈ ⌈ = ⌉ − ⌈ ⌉ δ 1/2 halves, 2δ 4δ ⌉ − ⌈ ⌉ − ⌈ 8δ ⌉ − ⌈ δ ⌉ ≥ 2δ ⌉ ≥ 4δ ⌉ ≥ ⌈ 1/2 quarters, 1/2 eighths, 1/2 sixteenths, − 2δ 4δ − − Unless we find one of these sequences that has relative internal overlap less than ε, this accounts, at the ith level, for at least (2i−2δ − t)(εT /2i) = εδT /4 εT /2i+1 − distinct overlap events, and hence for more than T distinct overlap events after levels. This is impossible, so we must find the desired low-overlap (4 + 2ε)/(εδ) ⌈ ⌉ (cid:3) sequence at one of these levels. Kolmogorov Complexity. A key to the tractability of our arguments (and most of the recent ones we have cited [Pa82, Pa84, PSS81, DGPR84, Ma85, LV88, LLV92, PSSN90, Vi84]) is the use of “incompressible data”. Input strings that involve such data tend to be the hardest and least subject to special handling. We define incompressibility in terms of Kolmogorov’s robust notion of descrip- tional complexity [Ko65]. Informally, the Kolmogorov complexity K(x) of a binary string x is the length of the shortest binary program (for a fixed reference universal machine) that prints x as its only output and then halts. A string x is incom- pressible if K(x) is at least , the approximate length of a program that simply includes all of x literally. Similarly, a string x is “nearly” incompressible if K(x) is “almost as large as” x | | . The appropriate standard for “almost as large” above can depend on the context, a typical choice being “K(x) )”. The latter implicitly involves (log some constant, however, the careful choice of which might be an additional source of confusion in our many-parameter context. A less typical but more absolute standard such as “K(x) ” completely avoids the introduction of yet another constant. x | − p| | − O ≥ | ≥ | x | x | x | x | | Similarly, the conditional Kolmogorov complexity of x with respect to y, denoted by K(x y), is the length of the shortest program that, with extra information y, | prints x. And a string x is incompressible or nearly incompressible relative to y if K(x y) is y) is large in the appropriate sense. If, at the opposite extreme, K(x | | , then we say that y codes x so small that [CTPR85]. y) is “almost as large as” K(x | x | | − x | | There are a few well-known facts about these notions that we will use freely, sometimes only implicitly. Proofs and elaboration, when they are not sufficiently obvious, can be found in the literature [especially LV93]. The simplest is that, both absolutely and relative to any fixed string y, there are incompressible strings of every length, and that most strings are nearly incompressible, by any standard. Another easy one is that significantly long subwords of an incompressible string are themselves nearly incompressible, even relative to the rest of the string. 
More TWO HEADS ARE BETTER THAN TWO TAPES 5 y) is very nearly equal to K(y) striking is Kolmogorov and Levin’s “symmetry of information” [ZL70]: K(x) − K(x x) (up to an additive term that is | logarithmic in the Kolmogorov complexity of the binary encoding of the pair (x, y)); i.e., y is always approximately as helpful in describing x as vice versa! (Admittedly, the word “helpful” can be misleading here—the result says nothing at all about the relative computational complexity of generating the two strings from each other.) All these facts can be relativized or further relativized; for example, symmetry of information also holds in the presence of help from any fixed string z: K(y − | K(x (cid:12) (cid:12) z) K(x | − ≈ z) y (cid:12) (cid:12) K(y (cid:12) (cid:12) 3. Strategy K(y z) − z). x (cid:12) (cid:12) | | } } x 0, 1 ∈ { x2x′ { 0, 1 ∈ { ∗ and x′ is a prefix of x For the sake of argument, suppose some two-tape Turing machine M does recog- in real time. Once a binary string nize } ∗ has been read by M , the contents of M ’s tapes tend to serve as a very x redundant representation of prefixes of x, because M has to be prepared to retrieve them at any time. (Our problem and this observation were motivation for Chung, Tarjan, Paul, and Reischuk’s investigation of “robust codings of strings by pairs of strings” [CTPR85].) One way around this is for M to keep one or the other of its tapes’ heads stationed at some stored record of a long prefix of x, as “in- surance”. The early real-time multihead simulations of buffers [MRF67, FMR72, Be74] do follow this strategy, but we show that a machine with only two tapes will not be able to afford always to use one in this way for insurance: There will have to be a significant subcomputation in which the heads on both tapes “keep moving”, even “essentially monotonically”—essentially as they would for straight- forward “copying”. Under these circumstances, in fact, we will be able to use part of the computation itself, rather than the combination of the two tapes’ contents, as the very redundant representation, to contradict the following lemma, which we prove later. Anti-Holography Lemma. Consider any constant C, and consider any binary string x that is long in terms of C, and that is nearly incompressible.2 Suppose y = y1y2 . . . yk (each yi a binary string) is a “representation” with the following properties: | y C 1. ; x | | ≤ 2. For each ℓ ℓ. k i | ≤ − k, x’s prefix of length ℓ x | | ≤ /k is coded by yi+1 . . . yi+ℓ for each Then k is bounded by some constant that depends only on C. For (the binary representation of) a T -step subcomputation by M to serve as a representation y that contradicts this lemma, we need the following: 1. A nearly incompressible input prefix x of length at least read before the subcomputation. /C = Θ(T /C) was y | | 2. There is a parse of the subcomputation into a large number k of pieces so /k is coded in every contiguous sequence that each prefix of x of length ℓ of ℓ pieces. x | | 3. k is (too) large in terms of C. 2We need K(x) > δ|x|, for some fraction δ that is determined by C; so certainly K(x) > |x| − p|x| will be enough if x is long. 6 TAO JIANG, JOEL I. SEIFERAS, AND PAUL M. B. VIT ´ANYI We accomplish these things by finding a subcomputation that has a spatially mono- tonic “matching” that is both long and so well separated spatially that needed information on tape contents cannot be spread over many pieces of the subcompu- tation. 
The first step is to define and find “a large matching”, and the second is to refine it in a suitable way. In a two-tape or two-head computation or subcomputation, a monotonic sequence of time instants is a matching if neither head scans the same tape square at more than one of the time instants. (So there is actually a separate one-to-one “matching” for each head, between the time instants and the tape squares scanned by that head at those times.) We prove the following lemma later on. Large-Matching Lemma. If a two-tape Turing machine recognizes ′ x2x { x | 0, 1 } ∈ { ∗ ′ and x is a prefix of x } in real time, then its computation on an incompressible binary input of length n includes a matching of length Ω(n). (The implicit constant does depend on the machine.) (Note that this lemma does not hold if the two heads can be on the same tape.) In a two-tape or two-head computation or subcomputation, a matching is (spa- tially) monotonic if, for each of the two heads, the spatial order of the corresponding sequence of tape squares being scanned at the specified time instants is strictly left- to-right or strictly right-to-left. The minimum separation of a monotonic matching is the least distance between successive tape squares in either corresponding se- quence of tape squares. Monotonization Lemma. Suppose ε > 0 is small in terms of δ > 0. If a two- tape (sub)computation of length T has a matching of length at least δT and internal overlap less than εT , then the computation has a monotonic submatching of length Ω(δ/ε) and minimum separation Ω(εT ). (The implicit constants here really are constant, not depending even on the machine; for use below, let c denote the smaller of them.) Proof. Without loss of generality, assume T is large in terms of δ and ε. Parse the computation into about δ/(2ε) subcomputations, each including a matching of length at least 2εT . Each subcomputation involves a contiguous set of at least 2εT distinct tape squares on each tape. The sets from successive subcomputations touch or intersect, but the overlap bound limits their intersection to less than εT tape squares. If we omit every second subcomputation’s set, therefore, we get a spatially monotonic sequence of about δ/(4ε) nonintersecting sets on each tape. If we further omit every second remaining set, then we get a monotonic sequence of about δ/(8ε) sets on each tape, with successive sets separated by at least 2εT tape squares. To get the desired submatching, simply include one matching-time instant (cid:3) from each of the δ/(8ε) remaining subcomputations. 4. Careful Argument Now let us put together the whole argument, taking care to introduce the “con- stants” M (and d), δ, ε, and ε′ in an appropriate order, all before the input length n TWO HEADS ARE BETTER THAN TWO TAPES 7 and the particular input string x0 on which we focus. Each of these values is allowed to depend on earlier ones, but not on later ones. { x x2x′ For the sake of argument, suppose some two-tape Turing machine M does rec- in real time, say ognize the language with delay bound d. Citing the Large-Matching Lemma, take δ > 0 small enough ∗ includes a so that M ’s computation on any incompressible input string x matching of length at least δ . Let ε > 0 be small in terms of d, δ, and M ; and let ε′ be small in terms of d, δ, and ε. Let n be large in terms of all these constants, and let x0 be any incompressible string of n bits. 
∗ and x′ is a prefix of x ∈ { 0, 1 0, 1 x | ∈ { } } } | | Split the computation by M on input x0 into an initial subcomputation and a . The number of final subcomputation, each including a matching of length and dn. Therefore, steps in each of these subcomputations will lie between the initial one will involve a prefix of x0 of length at least (1/d)(δn/2) = nδ/(2d), and the final one will have “match density” at least (δn/2)/(dn) = δ/(2d). ⌊ δn/2 δn/2 ⌋ ⌊ ⌋ Applying the Overlap Lemma to the final subcomputation above, we obtain a ε′n, with match density at least δ/(4d) and subcomputation of some length T relative internal overlap less than ε, provided ε′ was chosen small enough in terms of d, δ, and ε. Then applying the Monotonization Lemma, we obtain within this subcomputation a monotonic submatching of minimum separation at least cεT , and 1 (whichever of length 2k + 1, where 2k + 1 is either is odd). If ε was chosen small, then k will be large. Note that kε is approximately equal to a constant cδ/(8d) that depends only on M . c(δ/(4d))/ε c(δ/(4d))/ε ⌉− or ≥ ⌈ ⌉ ⌈ To obtain the desired contradiction to the Anti-Holography Lemma, take y to be a complete record of the T -step subcomputation obtained above, including the symbols scanned and written by each head on each step. To obtain y1, y2, . . . , yk, split this record at every second one of the time instants corresponding to the matching of length 2k + 1, starting with the third and ending with the third-to- last. Take x to be x0’s prefix of length kcεT /(2d). Since δn/(2d) exceeds this length (assuming we chose our constants appropriately), all of x was already read during the initial subcomputation above, and hence before the beginning of the subcomputation described by y. Note that, for some constant D that depends only on M , y | | ≤ DT = 2dD kcε | x | ≈ 16d2D c2δ | , x | and that k is large (in fact, too large for the Anti-Holography Lemma) in terms of the constant C = 16d2D/(c2δ), assuming we chose ε small enough. | x | To see that x’s prefix of length ℓ /k is coded by yi+1 . . . yi+ℓ (for each appropri- ate ℓ and i), suppose we interrupt M with “the command to begin retrieval” (i.e., with the symbol 2) at the (2i + ℓ + 1)st of the time instants corresponding to the /k matching of length 2k +1. Since M must be able to check the prefix of length ℓ by reading only the information within distance dℓ /k = ℓcεT /2 of its heads, that prefix must be coded by that information. Since this distance in each direction is just ℓ/2 times the minimum separation of the matching, and since the matching is monotonic, the same information is available within the subcomputation record y, . Since between the matching’s time instants 2i + ℓ + 1 ⌉ yi+1 . . . yi+ℓ runs from the matching’s time instant 2i + 1 to ℓ/2 ≤ − ⌈ the matching’s time instant 2i + 2ℓ + 1 , it too codes the desired ℓ/2 ⌉ prefix. and 2i + ℓ + 1 + ⌈ 2i + ℓ + 1 2i + ℓ + 1 + x | x | ℓ/2 ℓ/2 − ⌈ ≥ ⌈ ⌉ ⌉ | | 8 TAO JIANG, JOEL I. SEIFERAS, AND PAUL M. B. VIT ´ANYI 5. Proof of Anti-Holography Lemma Without loss of generality, assume k is equal to 2e for some integer exponent e.3 Then the target constant can be 22C−1. Again without loss of generality, assume k is at most this target constant times two.4 Finally, without loss of generality, = n/k for every i.5 assume that To obtain short descriptions of y, we abbreviate many of its subwords in terms e, and of just a few prefixes of x, using the symmetry of information. For each j for j = e = n is divisible by k, with x = x1 . . . 
xk and 1 in particular, this will yield xi| x | ≤ | | − K(y | x1 . . . x2j ) y ≤ | | − (1 + j/2)n + (log n). O Unless k is smaller than 22C−1, e 1 will be so large that this will imply that x1 . . . x2e−1 codes y. Since y in turn codes all of x = x1 . . . x2e this will mean that the first half of x codes the whole string, contradicting the incompressibility assumption for x. − By induction on j (j = 0, 1, . . . , e), we actually prove “more local” bounds that imply the ones above: For each appropriate i (i = 0, 1, . . . , k 2j), − K(yi+1 . . . yi+2j x1 . . . x2j ) | ≤ | yi+1 . . . yi+2j | − 2j(1 + j/2)n/k + (log n). O Both the base case and the induction step are applications of an intuitively clear corollary, the Further-Abbreviation Lemma below, of the symmetry of information. For the base case, we apply the lemma with y′ equal to yi+1, x′ equal to the null string, and x′′ equal to x1, to get the desired bound on K(y′ x′′): | K(y ′′ x ) ′ | ≤ K(y ′ y ′ ) ′′ K(x ) + − n/k + O (log n). (log n) ≤ | | − O For the induction step, we let y′′ = yi+1 . . . yi+2j and y′′′ = yi+2j +1 . . . yi+2j+1 , and apply the lemma with y′ equal to y′′y′′′, x′ equal to x1 . . . x2j , and x′′ equal to x2j +1 . . . x2j+1 , to get the desired bound on K(y′ x′x′′): | K(y ′ x ′′ x ) ′ | ≤ ≤ K(y K(y ′′ y ≤ | = | ′′ y | | ′ | ′′ ′ x ) ′ x ′′ K(x − ) + K(y ′′′ | + + y ′′′ y | | | − | − ) + ′ ′′′ x (log n) ′′ K(x O ) ) + | − 2j(1 + j/2)n/k 2 2j+1(1 + (j + 1)/2)n/k + − (log n) O 2jn/k + · O O (log n) (log n). (cid:3) Further-Abbreviation Lemma. Assume y′, x′, and x′′ are strings of length Θ(n), with ′′ K(x ′ y | ) = O (log n) and ′′ K(x | ′ x ′′ ) = K(x ) (log n). − O 3If it is not, then just reduce it until it is. 4Otherwise, pair up yi’s to reduce k by factors of 2 until it is. 5If x’s length is not divisible by k, then just discard at most its last 22C−1 bits, until its length is divisible by k. TWO HEADS ARE BETTER THAN TWO TAPES 9 (I.e., y′ codes x′′, which is nearly incompressible relative to x′.) Then K(y ′ x ′′ x ) ′ | ≤ K(y ′ x ) ′ | − ′′ K(x ) + (log n). O Proof. Let d(u v). Then K(u | v) denote a shortest description of u in terms of v, so that (cid:12) (cid:12) | d(u = v)(cid:12) (cid:12) | K(y ′ x ′′ x ) ′ | ≤ ≤ ≤ ≤ ≤ K(d(y K(d(y K(d(y ′ ′ ′ | | ′ x ′ x ′ x | ′ ) x ′ x ) K(y K(y ′ ′ | | − − ′′ ′ x x ′′ x (log n) (cid:12) (cid:12) ′ ) x O ) + ) + ′ x ) | ) (log n) | O ′ ′′ K(x ) (cid:12) x − (cid:12) ′ ′′ x K(x ′′ K(x y (log n).(cid:3) ′′ ) + K(x ′ ′′ ) + K(x | ) + ) + O (cid:12) (cid:12) | O d(y ′ ′ x | | (log n) ′ x ) + ) (cid:12) (cid:12) (log n) O 6. Proof of Large-Matching Lemma Our proof of the Large-Matching Lemma is based on an earlier theorem of Vit´anyi: Far-Out Lemma [Vi84]6. If a two-tape Turing machine recognizes ′ x2x { x | 0, 1 } ∈ { ∗ ′ and x is a prefix of x } in real time, then its “worst-case closest head position” 7 on incompressible inputs x n is Ω(n). 0, 1 ∈ { } In other words, incompressible binary data is guaranteed at some point to drive both heads of such a machine simultaneously far from their original positions. By the continuity of sequential access, of course, this means that the heads actually spend long intervals of time simultaneously far from their original positions; and this is the fact that we exploit. We actually show that even any two-head Turing machine (with both heads on the same one-dimensional tape) that recognizes our language and that satisfies the conclusion of the Far-Out Lemma also satisfies the desired conclusion of the Large- Matching Lemma. 
(Of course the obvious two-head machine, that does recognize our language in real time, does not satisfy the conclusion of either lemma.) This simplifies the exposition, since we have only one tape to talk about. Note that the "matching" notion does make sense even when both heads are on the same tape.

As earlier, let us take explicit care to introduce our "constants" in an appropriate order. Consider any two-head Turing machine M alleged to recognize {x2x′ | x ∈ {0, 1}∗ and x′ is a prefix of x} in real time, say with delay bound d, and that satisfies the conclusion of the Far-Out Lemma. Let c be small enough to serve as the implicit constant in that conclusion. Let ε be small in terms of M and c; let δ be small in terms of M, c, and ε; let n be large in terms of M, c, ε, and δ; and let x be an incompressible string of n bits.

Exploiting the conclusion of the Far-Out Lemma, parse x into three pieces, x = uvw, such that uv leaves both heads at least cn tape squares from where they started and the length of u is ⌊cn/(3d)⌋ = Θ(n). Consider M's computation on uv2u. The first u must be read before either head gets as far as even cn/3 tape squares from where it started, but the second u must be read while neither head gets closer than 2cn/3 tape squares to where it started. During its subcomputation on v, therefore, it seems that M must somehow "copy" its representation of u across the intervening cn/3 tape squares. We show that this process has to involve a matching larger than δn.

For the sake of argument, suppose there is not a matching larger than δn. Then there must be a maximal matching of size only m ≤ δn. We will select some correspondingly small "interface" through which a description of u must pass. That interface will involve some rarely crossed boundary at distance between cn/3 and 2cn/3 from the heads' starting position, and some other rarely crossed boundaries that tightly isolate the 2m tape squares involved in the matching. Since there are 2cn/3 − cn/3 candidates for the former, we can select one that is crossed only a constant number (bounded in terms of d and c) of times. We will refer to the tape squares on the respective sides of this selected central boundary as close and far. By the following purely combinatorial lemma, we can tightly isolate the matched tape squares with at most 4m additional boundaries, each of which is crossed only a constant number (bounded in terms of d, c, and our "tightness criterion" ε) of times.

Tight-Isolation Lemma. Consider a finite sequence S of nonnegative numbers, the first and last of which are 0. Let some of the separating "commas" be specially designated—call them "semicolons". For each threshold ℓ ≥ 0, let S_ℓ be the subsequence consisting of the items⁸ that are reachable from the semicolons via items that exceed ℓ (and that themselves exceed ℓ). Then, for each ε > 0, there is some ℓ such that ℓ|S_ℓ| < ε ΣS, where ΣS denotes the sum of the entire sequence S and ℓ is bounded by some constant that depends only on ε.

⁸ Note that the number of such items can be small even if the number of semicolons is large. For ℓ large enough, in fact, |S_ℓ| will be 0.

Proof. Let T = ΣS, and let k = 2⌈2/ε⌉. Since 2T/k ≤ εT/2 < εT, let us aim for ℓ|S_ℓ| ≤ 2T/k. If no ℓ in {k^i | 0 ≤ i ≤ k} were to work, then we would have k^i|S_{k^i}| > 2T/k for every i. But this would lead to the contradiction

    T ≥ Σ_{i=0}^{k} k^i(|S_{k^i}| − |S_{k^{i+1}}|) > Σ_{i=0}^{k} (2T/k − T/k) = (k + 1)T/k.  □
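Unpacking the two estimates combined in that display may help; both follow from the fact that every item of S_ℓ exceeds ℓ, and the following is only a restatement of the proof's arithmetic in LaTeX notation, not an additional assumption.

    \[
      \ell\,\lvert S_\ell\rvert \;<\; \sum_{s\in S_\ell} s \;\le\; \Sigma S \;=\; T
      \qquad\Longrightarrow\qquad
      k^{i}\,\lvert S_{k^{i+1}}\rvert \;=\; \tfrac{1}{k}\,k^{i+1}\lvert S_{k^{i+1}}\rvert \;<\; \tfrac{T}{k},
    \]
    \[
      \sum_{i=0}^{k} k^{i}\bigl(\lvert S_{k^{i}}\rvert - \lvert S_{k^{i+1}}\rvert\bigr) \;\le\; \Sigma S \;=\; T.
    \]

The first bounds the subtracted terms, so under the assumption that no threshold works each summand exceeds 2T/k − T/k. The second holds because the sets S_{k^{i+1}} ⊆ S_{k^i} are nested: each item of S is charged at most once, at the largest i for which it still belongs to S_{k^i}, and its charge k^i is less than the item's own value.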
In our application, the numbers are the lengths of the crossing sequences associated with the boundaries between tape squares, their sum is at most dn, and the semicolons are the matched tape squares. We obtain our desired "isolation neighborhoods" from the at-most-2m contiguous neighborhoods that comprise S_ℓ⁹ by adding one item at each end of each neighborhood. (This might cause neighborhoods to combine.) This adds at most 4m items to S_ℓ and results in nonempty isolation neighborhoods whose boundary items are at most ℓ.

⁹ To include all the semicolons, some of these "contiguous neighborhoods" might have to be the empty neighborhoods of the semicolons.

Actually, the picture is clearer if we select our central boundary after we select the isolation neighborhoods. Assuming ε and δ are chosen appropriately small, this lets us select a boundary not included in any of the isolation neighborhoods. (There are at most 4m + |S_ℓ| ≤ 2δn + εdn ≤ cn/6 boundaries (half the original number of candidates) to avoid.)

Finally, we use our suggested interface to give a description of u in terms of v that is too short—say shorter than |u|/2 ≈ cn/(6d). (We could substitute a description this short for u in x to contradict the incompressibility of x.) We claim we can reconstruct u from M, v, the length of u, and the following information about the subcomputation of M while reading the v part of input uv:

1. The sequence of all O(m) selected boundary locations.
2. The sequence of all O(m) crossings of these selected boundaries, and their times (implicitly or explicitly including the corresponding input positions).
3. The following information for each close-to-far crossing, and for the end of the subcomputation:
   • M's control state and head positions.
   • The full content of every isolation neighborhood.
4. The following information for each crossing out of an isolation neighborhood:
   • The full content of that isolation neighborhood.
   • The full content of the isolation neighborhood in which the other head remains¹⁰—provided that there has been a new crossing into that neighborhood since the previous time such information was given for it.

¹⁰ The other head must remain in some isolation neighborhood—otherwise, the matching could be enlarged.

To determine u, it suffices to reconstruct enough of M's configuration after its computation on input uv so that we can check which additional input string 2u′ of length 1 + |u| leads to acceptance. The far tape contents suffice for this.

Our reconstruction strategy is mostly to simulate M step-by-step, starting with the first close-to-far crossing. Toward this end, we strive to maintain the contents of any currently scanned close isolation neighborhood and of the entire far side. We temporarily suspend step-by-step simulation whenever a head shifts onto a close tape square not in any isolation neighborhood, and we aim to resume suspended step-by-step simulation whenever a head shifts onto a far tape square not in any isolation neighborhood. Because our matching is maximal, such a far tape square is not scanned at the time of suspension, and hence also not at any time before the desired resumption. It follows that the information for the needed updates is indeed available, so that resumption is indeed possible. Similarly, any necessary updates are possible if the step-by-step simulation happens to be suspended when the subcomputation ends.
It remains only to show that |u|/2 bits suffice for our description of u in terms of v. For each of the sequences in (1) and (2), the trick is to give only the first number explicitly, and then to give the sequence of successive differences. The length of this encoding is O(m log(n/m)) = O(n log(1/δ)/(1/δ)), which can be limited to a small fraction of |u|/2 by choosing δ small enough. For (4), note that the contents of each isolation neighborhood is given at most once for each of the ℓ crossings into and out of the neighborhood. For (3) and (4), therefore, straightforward encoding requires only O(log n + ℓ(m + |S_ℓ|)) = O(log n + ℓδn + εdn) bits, where the implicit constant is bounded in terms of d and c. This can be limited to another small fraction of |u|/2 by choosing ε small enough, δ small enough, and n large enough. For the remaining information, M, and a description of this whole discussion, we need only O(log n) bits, which can be limited to a final small fraction of |u|/2 by choosing n large enough.  □

7. Further Discussion and Remaining Questions

In retrospect, our contribution has been a constraint on how a Turing machine with only two storage heads can recognize L in real time. Even if the two heads are on the same one-dimensional tape, such a Turing machine cannot recognize L in real time unless it violates the conclusion of (the first sublemma of) Vitányi's Far-Out Lemma (see Appendix below). Only in the latter do we ever really exploit an assumption that the two heads are on separate tapes.

Our result rules out general real-time simulation of a two-head tape unit using only a pair of single-head tapes. It remains to be investigated whether the result extends to some notion of probabilistic real-time simulation [cf. PSSN90]. Another extension might rule out simulation using three single-head tapes, yielding a tight result; but this would require a more difficult witness language. Perhaps allowing the "back" head of the defining two-head machine also to move and store random data, but much more slowly than the "front" head, would let us combine our arguments with those of Aanderaa [Aa74, PSS81, Pa82]. A slightly weaker possibility might be to show that two single-head tapes and a pushdown store do not suffice, and a slightly stronger one might be to show that even three single-head tapes and a pushdown store do not suffice.

It might be even more difficult to rule out general real-time simulation of a two-head one-dimensional tape unit using two or three higher-dimensional single-head tapes. Our particular language L can be recognized in real time by a Turing machine with just two such two-dimensional tapes—the idea is to strive to maintain the n bits of data within an O(√n) radius on both tapes, along with O(√n) strategically placed copies of the first O(√n) bits, to serve as insurance alternatives at the same time that the array of their left ends provides a convenient area for temporary collection of data and for copying data between the tapes.

The implications for real-time simulation of one-dimensional tape units with more than two heads remain to be investigated.
For example, how does a three-head tape compare with three single-head tapes or with one single-head tape and one two-head tape? (Paul's results [Pa84] do answer such questions for tapes of higher dimension.) How tight is the known bound of 4h − 4 single-head tapes for real-time simulation of one h-head (one-dimensional) tape [LS81]? Perhaps the many-heads setting is the right one for a first proof that even an extra head is not enough to compensate for the loss of sharing; e.g., can a 1000-head tape be simulated in real time by 1001 single-head tapes, or by 1000 single-head tapes and a pushdown store?

Finally, does any of this lead to more general insight into the heads or tapes requirements for arbitrary computational tasks? I.e., when asked about some computational task, can we tightly estimate the structure of the sequential storage that suffices for the task?

Appendix: A proof sketch for Vitányi's Far-Out Lemma

Suppose two-tape Turing machine M recognizes the language in real time. Without loss of generality, assume M's storage tape is only semi-infinite, and assume M writes only 0's and 1's. Let d be the delay of M. Our ultimate goal is to show that both heads simultaneously range linearly far when the input is incompressible, but first we show that each one separately does so even when the input is just nearly incompressible. (The subsequent application is to significantly long prefixes of input strings that are not compressible at all.) It is only this part of the proof that requires the hypothesis that the two heads are on separate tapes. This part is based on the "bottleneck" argument that Valiev [Va70] (and, independently, Meyer [Me71]) used to show that no single-tape Turing machine can accept the simpler language {x2x | x ∈ {0, 1}∗} in real time.

Suppose ε is small in terms of M and d, n is large in terms of all of the above, and x is of length n and nearly incompressible (K(x) ≥ n − √n). We want to show that each head ranges farther than εn.

Suppose the head on one of the tapes, say the first, does not range farther than εn. Then the head on the second tape must certainly range farther than, say, n/3. (Otherwise, the total state after storage of x is a too-short description of x.) Let uvw be the parse of x with uv the shortest prefix of x that leaves M's second head at least n/3 tape squares out, and with |u| = n/(9d), so that that same head gets no farther than n/9 tape squares out during input of u. On that head's tape, there must be a "bottleneck" boundary between n/9 and 2n/9 tape squares out that gets crossed at most 9d times. Since all of u gets read when the second head is to the left of this bottleneck, it is possible to describe x = uvw in terms of vw and the bottleneck's "crossing sequence", which should include, for each crossing, the step number and the "crossing state", which in turn should include the complete but relatively small contents of the first storage tape at the time of the crossing. The following information suffices:

1. vw,
2. a description of this discussion,
3. a description of M,
4. the value of n,
5. the location of the bottleneck,
6. the crossing sequence at the bottleneck.

If we provide vw as a literal suffix, then we can limit the length of this description to little more than n − |u| bits, contradicting the near incompressibility of x. To recover u, we can use the information to determine enough of M's instantaneous description after reading uv (omitting from the i.d. only what is to the left of the bottleneck on the second tape) to then try each input continuation 2u′ with |u′| = n/(9d).
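To see the contradiction quantitatively, the description length can be tallied as follows; this is a back-of-the-envelope count consistent with the parameters above (at most 9d crossings, first-tape contents of at most εn per crossing state), and the particular threshold on ε is illustrative rather than taken from the original.

    \[
      \underbrace{(n - \lvert u\rvert)}_{\text{literal } vw}
      \;+\; \underbrace{9d\,\bigl(\varepsilon n + O(\log n)\bigr)}_{\text{crossing sequence, item 6}}
      \;+\; \underbrace{O(\log n)}_{\text{items 2--5}}
      \;=\; n - \tfrac{n}{9d} + 9d\,\varepsilon n + O(\log n),
    \]

which falls below n − √n, and hence contradicts K(x) ≥ n − √n, as soon as ε is small in terms of d (say 9dε < 1/(18d)) and n is large.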
Finally, we return to our ultimate goal. Here is the idea: If the heads do not both go far out together, then they must take turns, so that some region gets crossed many times; abbreviate the symbols read while a head is in that region.

Suppose ε is small in terms of M and d (as above), ε₂ is small in terms of the preceding parameters (in particular, ε₂ ≪ ε), ε₁ is small in terms of the preceding parameters (in particular, ε₁ ≪ ε₂), n is large in terms of all of the above, and x is of length n and incompressible. We want to show that both heads range farther than ε₁n, simultaneously.

Suppose, to the contrary, that there is always at least one head within ε₁n tape squares of the origin. We count the crossings of the region from ε₁n to ε₂n: It follows from our assumptions that a left-to-right crossing must occur between input symbol number (d/ε)^i(ε₂n/ε) and input symbol number (d/ε)^{i+1}(ε₂n/ε), for every i. (We use the fact that these input prefixes are themselves nearly incompressible.) By input symbol number n, therefore, the number of complete crossings (either direction) is at least r = 2 log_{d/ε}(ε/ε₂) (which is large because ε₂ is so small).

There is a complication, however: There might also be partial crossings, involving fewer input symbols but additional overhead in the description we plan to give. To control this problem, we shrink the region slightly, replacing ε₁ and ε₂ with ε′₁ and ε′₂ from the first and last quarters, respectively, of the range [ε₁, ε₂], chosen so that each of the boundaries ε′₁n and ε′₂n is crossed at most R = 8d/ε₂ times. This is possible, since R(ε₂ − ε₁)n/4 exceeds dn.

Finally, then, we formulate a description of the incompressible input that differs from the completely literal one as follows: We eliminate the input read while a head is in the range between ε′₁n and ε′₂n, for a savings of at least r(ε′₂ − ε′₁)n/(2d) ≥ r(ε₂ − ε₁)n/(4d) bits. We add descriptions of the crossing sequences at these two boundaries, including times, states, and the tape contents out to boundary ε₁n, and also the full final contents of the tape squares between the two boundaries, for a total cost of

    O((ε′₂ − ε′₁)n + R(log n + ε₁n)) = O((ε₂ − ε₁)n + 8d(log n + ε₁n)/ε₂) = O((ε₂ − ε₁)n)

bits, which can be kept significantly smaller than the savings.  □

References

[Aa74] S. O. Aanderaa, On k-tape versus (k − 1)-tape real time computation, Complexity of Computation (SIAM-AMS Proceedings 7) (R. M. Karp, ed.), American Mathematical Society, Providence, Rhode Island, 1974, pp. 75–96.
[Be65] J. Bečvář, Real-time and complexity problems in automata theory, Kybernetika 1, 6 (1965), 475–497.
[Be74] V. L. Bennison, Saving tapes in the simulation of multihead Turing machines, SIGACT News 6, 2 (April, 1974), 23–26.
[CTPR85] F. R. K. Chung, R. E. Tarjan, W. J. Paul, and R. Reischuk, Coding strings by pairs of strings, SIAM Journal on Discrete Mathematics 6, 3 (July, 1985), 445–461.
[DGPR84] P. Ďuriš, Z. Galil, W. J. Paul, and R. Reischuk, Two nonlinear lower bounds for on-line computations, Information and Control 60, 1–3 (January–March, 1984), 1–11.
[FMR72] P. C. Fischer, A. R. Meyer, and A. L. Rosenberg, Real-time simulation of multihead tape units, Journal of the Association for Computing Machinery 19, 4 (October, 1972), 590–607.
[Gr77] D. Yu. Grigoriev, Imbedding theorems for Turing machines of different dimensions and Kolmogorov's algorithms, Soviet Mathematics 18, 3 (May–June, 1977), 588–592.
[He66] F. C. Hennie, On-line Turing machine computations, IEEE Transactions on Electronic Computers EC-15, 1 (February, 1966), 35–44.
[HS65] J. Hartmanis and R. E. Stearns, On the computational complexity of algorithms, Transactions of the American Mathematical Society 117, 5 (May, 1965), 285–306.
[Ko65] A. N. Kolmogorov, Three approaches to the quantitative definition of information, Problems of Information Transmission 1, 1 (January–March, 1965), 1–7.
[LLV92] M. Li, L. Longpré, and P. M. B. Vitányi, The power of the queue, SIAM Journal on Computing 21, 4 (August, 1992), 697–712.
[LS81] B. L. Leong and J. I. Seiferas, New real-time simulations of multihead tape units, Journal of the Association for Computing Machinery 28, 1 (January, 1981), 166–180.
[LV88] M. Li and P. M. B. Vitányi, Tape versus queue and stacks: the lower bounds, Information and Computation 78, 1 (July, 1988), 56–85.
[LV93] M. Li and P. M. B. Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, Springer-Verlag, New York, 1993.
[Ma85] W. Maass, Combinatorial lower bound arguments for deterministic and nondeterministic Turing machines, Transactions of the American Mathematical Society 292, 2 (December, 1985), 675–693.
[Me71] A. R. Meyer, An optimal time bound for a one tape on-line Turing machine computation, unpublished manuscript (June, 1971, but earlier version already cited in 1967 [MRF67]).
[MRF67] A. R. Meyer, A. L. Rosenberg, and P. C. Fischer, Turing machines with several read-write heads, preliminary report, IEEE Conference Record of 1967 Eighth Annual Symposium on Switching and Automata Theory, IEEE Computer Society, Long Beach, California, 1967, pp. 117–127.
[MSST93] W. Maass, G. Schnitger, E. Szemerédi, and G. Turán, Two tapes versus one for off-line Turing machines, Computational Complexity 3, 4 (1993), 392–401.
[Pa82] W. J. Paul, On-line simulation of k + 1 tapes by k tapes requires nonlinear time, Information and Control 53, 1–2 (April–May, 1982), 1–8.
[Pa84] W. J. Paul, On heads versus tapes, Theoretical Computer Science 28, 1–2 (January, 1984), 1–12.
[PSS81] W. J. Paul, J. I. Seiferas, and J. Simon, An information-theoretic approach to time bounds for on-line computation, Journal of Computer and System Sciences 23, 2 (October, 1981), 108–126.
[PSSN90] R. Paturi, J. I. Seiferas, J. Simon, and R. E. Newman-Wolfe, Milking the Aanderaa argument, Information and Computation 88, 1 (September, 1990), 88–104.
[Ra63] M. O. Rabin, Real time computation, Israel Journal of Mathematics 1, 4 (December, 1963), 203–211.
[ST89] W. Schnitzlein and H.-J. Stoß, Linear-time simulation of multihead Turing machines, Information and Computation 81, 3 (June, 1989), 353–363.
[St70] H.-J. Stoß, k-Band-Simulation von k-Kopf-Turing-Maschinen, Computing 6, 3 (1970), 309–317. (German)
[Va70] M. K. Valiev, Certain estimates of the time of computations on Turing machines with an input, Cybernetics 6, 6 (June, 1973), 734–741; translated from Kibernetika 6, 6 (November–December, 1970), 26–32. (Russian)
[Vi84] P. M. B. Vitányi, On two-tape real-time computation and queues, Journal of Computer and System Sciences 29, 3 (December, 1984), 303–311.
[ZL70] A. K. Zvonkin and L. A. Levin, The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms, Russian Mathematical Surveys 25, 6 (November–December, 1970), 83–124.

Abstract. We show that a Turing machine with two single-head one-dimensional tapes cannot recognize the set {x2x′ | x ∈ {0, 1}∗ and x′ is a prefix of x} in real time, although it can do so with three tapes, two two-dimensional tapes, or one two-head tape, or in linear time with just one tape. In particular, this settles the longstanding conjecture that a two-head Turing machine can recognize more languages in real time if its heads are on the same one-dimensional tape than if they are on separate one-dimensional tapes.

Department of Computer Science, McMaster University, Hamilton, Ontario L8S 4K1, Canada
E-mail address: [email protected]

Computer Science Department, University of Rochester, Rochester, New York 14627-0226, U. S. A.
E-mail address: [email protected]

Centre for Mathematics and Computer Science (CWI), Kruislaan 413, 1098 SJ Amsterdam, The Netherlands
E-mail address: [email protected]
Quanta Burst Photography

SIZHUO MA and SHANTANU GUPTA, University of Wisconsin-Madison, USA
ARIN C. ULKU, CLAUDIO BRUSCHINI, and EDOARDO CHARBON, EPFL, Switzerland
MOHIT GUPTA, University of Wisconsin-Madison, USA

Authors' addresses: Sizhuo Ma, [email protected]; Shantanu Gupta, [email protected], University of Wisconsin-Madison, USA; Arin C. Ulku, [email protected]; Claudio Bruschini, [email protected]; Edoardo Charbon, [email protected], EPFL, Switzerland; Mohit Gupta, [email protected], University of Wisconsin-Madison, USA.

Fig. 1. Quanta burst photography. (Top) Single-photon image sensors capture stochastic, binary image sequences at high speeds (∼100 kfps). Such high-speed image sequences can be aligned to compensate for scene/camera motion using a spatial-temporal hierarchical alignment algorithm. By merging the aligned sequence robustly, a high-quality image can be reconstructed, with minimal motion blur and noise, and high dynamic range, even in challenging photography conditions. (Bottom, from left to right) An example low-light scene captured by a DSLR camera on a tripod to avoid camera shake; binary image sequence captured by a handheld single-photon camera; image reconstructed by naive averaging of the binary sequence (shown to illustrate the amount of motion during capture); super-resolved image reconstructed using the proposed technique has low blur and noise. Zoom in for details.

Single-photon avalanche diodes (SPADs) are an emerging sensor technology capable of detecting individual incident photons, and capturing their time-of-arrival with high timing precision. While these sensors were limited to single-pixel or low-resolution devices in the past, recently, large (up to 1 MPixel) SPAD arrays have been developed. These single-photon cameras (SPCs) are capable of capturing high-speed sequences of binary single-photon images with no read noise. We present quanta burst photography, a computational photography technique that leverages SPCs as passive imaging devices for photography in challenging conditions, including ultra low-light and fast motion. Inspired by the recent success of conventional burst photography, we design algorithms that align and merge binary sequences captured by SPCs into intensity images with minimal motion blur and artifacts, high signal-to-noise ratio (SNR), and high dynamic range. We theoretically analyze
the SNR and dynamic range of quanta burst photography, and identify the imaging regimes where it provides significant benefits. We demonstrate, via a recently developed SPAD array, that the proposed method is able to generate high-quality images for scenes with challenging lighting, complex geometries, high dynamic range and moving objects. With the ongoing development of SPAD arrays, we envision quanta burst photography finding applications in both consumer and scientific photography.

© 2020 Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM Transactions on Graphics, https://doi.org/10.1145/3386569.3392470.

CCS Concepts: • Computing methodologies → Computational photography.

Additional Key Words and Phrases: Single-photon camera, single-photon avalanche diode, quanta image sensor, burst photography, super-resolution, high dynamic range, high-speed imaging, low-light imaging

ACM Reference Format: Sizhuo Ma, Shantanu Gupta, Arin C. Ulku, Claudio Bruschini, Edoardo Charbon, and Mohit Gupta. 2020. Quanta Burst Photography. ACM Trans. Graph. 39, 4, Article 79 (July 2020), 16 pages. https://doi.org/10.1145/3386569.3392470

1 THE SINGLE-PHOTON REVOLUTION

A conventional camera typically captures hundreds to thousands of photons per pixel to create an image. An emerging class of sensors, called single-photon avalanche diodes (SPADs) [Niclass et al. 2005; Rochas 2003], can record individual photons, and precisely measure their time-of-arrival. Due to their sensitivity and picosecond time resolution, SPADs are driving an imaging revolution. A new generation of devices is emerging, with novel functionalities such as imaging at a trillion fps [O'Toole et al. 2017], non-line-of-sight (NLOS) imaging [Buttafava et al. 2015; O'Toole et al. 2018], and microscopic imaging of nano time-scale bio-phenomena [Bruschini et al. 2019].

Passive single-photon imaging: So far, most SPAD-based imaging systems are active, where the SPAD is used in precise temporal synchronization with an active light source (e.g., a pulsed laser). This includes applications such as NLOS imaging, LiDAR [Shin et al. 2016], and microscopy. Can SPADs be used not just with controlled and precisely synchronized active light sources as is the norm, but more generally under passive, uncontrolled illumination (e.g., sunlight, moonlight)? Such passive SPAD-based imaging systems have the potential to expand the scope of SPADs to a considerably larger set of applications, including machine vision and photography.

Consider a SPAD sensor (an array of SPAD pixels) imaging a scene illuminated by passive lighting. Since photons arrive at the sensor randomly according to Poisson statistics, photon detection events are also random, and can be visualized as a spatio-temporal photon-cube [Fossum 2011]. A SPAD camera can capture a sequence of thin, temporal slices of the photon-cube, where each slice is a binary (1-bit) image, as shown in Fig. 1. Each pixel location records a 1 if it receives one or more photons during the temporal extent of the slice, and 0 otherwise. For example, a recent SPAD camera [Ulku et al. 2019] can capture ∼10⁵ binary frames per second, at 1/4 MPixel resolution.¹ Due to the random nature of photon arrivals, the binary images are stochastic.

¹ Photon-cubes and single-photon binary image sequences were first considered in the context of jots [Fossum 2005, 2011], another emerging single-photon sensing technology. In this paper, we primarily focus on SPADs due to their high frame rate. However, since both jots and SPADs have a similar imaging model and data format, the analysis and techniques presented here are applicable to jots as well.

Passive single-photon imaging under motion: How does motion manifest in a stochastic binary image sequence? If the scene (or camera) moves during acquisition, the photons emitted by a scene point get mis-aligned and spread over multiple SPC pixels. In this paper, we propose quanta burst photography, a computational photography technique that computationally re-aligns the photons along motion trajectories, for achieving high-quality images in challenging scenarios, including low-light and high-speed motion (Fig. 1). We develop algorithms that align the
binary slices, thus creating a high-bit-depth, high-dynamic-range, potentially super-resolved (via sub-pixel alignment [Park et al. 2003; Wronski et al. 2019]) image of the scene, while minimizing noise and motion blur. This is similar in spirit to conventional burst photography, where a burst of noisy, short-exposure images is aligned and merged into a single high-quality image [Hasinoff et al. 2016; Liba et al. 2019]. Quanta burst photography can be considered a limiting case because each binary image captures at most one photon per pixel, and is thus extremely noisy and quantized (1-bit). On the other hand, due to fast capture, we have a long sequence available (10²−10⁵ frames, depending on light level, dynamic range and motion), instead of 5−10 as in conventional burst photography.

Why quanta burst photography? One of the key benefits of SPCs is the low read noise in the raw binary frames [Bruschini et al. 2019], which enables dividing the exposure time finely into a long sequence of frames to handle fast motion. This results in virtually negligible intra-frame motion blur and low noise, even for rapid motion (e.g., sports and wildlife photography).² Furthermore, although at first glance it may appear that SPCs, due to their high sensitivity, are useful only in photon-starved scenarios, surprisingly, they can also image bright scenes where conventional sensors saturate [Antolovic et al. 2018; Ingle et al. 2019]. This is because although each binary image is quantized, a large collection of single-photon measurements, when combined, naturally avoids saturation [Yang et al. 2012], and thus achieves extreme dynamic range.

² For conventional cameras, there is a fixed read noise penalty for each captured frame. Therefore, dividing the exposure time finely into a large number of frames increases the effective read noise in the merged image.

There are two key catalysts for quanta burst photography:

(a) Emergence of large SPC arrays: Till recently, SPCs were available as single-pixel or small arrays (e.g., 32x32 pixels), which, while sufficient for several scientific imaging applications, are not suitable for consumer domain photography. Fortunately, due to their compatibility with mainstream CMOS fabrication lines, it is now possible to develop large SPC arrays, with the world's first 1 MPixel jots [Ma et al. 2017] and SPAD arrays [Morimoto et al. 2020] reported recently, while maintaining high sensor quality and room temperature operation.

(b) High-performance burst photography: We are inspired by the recent success of burst photography algorithms [Hasinoff et al. 2016; Liba et al. 2019; Wronski et al. 2019], which, for the first time, are starting to produce reliably artifact-free images in almost all circumstances, including challenging scenes with occlusions and non-rigid motion. These motion estimation and merging methods are robust enough to be shipped to consumer devices, a gold-standard for computational photography techniques.

We adopt the design principles and best practices from these burst photography approaches, and design algorithms tailored for single-photon binary stochastic images. We demonstrate, via simulations and experiments on a 1/8 megapixel SPAD array (SwissSPAD2 [Ulku et al. 2019]), that quanta burst photography is able to generate high-SNR, blur-free and super-resolved images in extreme scenarios (low-light, fast motion, large dynamic range) which would be considered challenging for burst photography on conventional cameras.

Scope and limitations: Are single-photon cameras and quanta burst photography ready to be deployed on consumer devices? Not yet. So far, we have focused on achieving high image quality. Our current unoptimized implementation, however, is not directly amenable to consumer devices, which have strong constraints on speed, power and memory. The current sensor prototype does not have a color filter array (e.g., a Bayer pattern), and thus the resulting images are gray-scale.
The resolution, although the highest to date among SPAD cameras, is still relatively low (1/8 MPixel) for consumer applications. Fortunately, the capabilities of single-photon
ast-motionscenarios.Inaddition,especiallyforlogarithmicsensors,photoresponsenon-uniformity(PRNU)isalimitationinconven-tionalimplementations[Yangetal.2009];thiseffectcompoundstheaboveissues,significantlylimitingimagequality.Imagereconstructionfromsingle-photonsensordata.Thereispriorworkonreconstructingintensityimagesfromsingle-photonbinaryframesusingdenoisingtechniquessuchastotalvariationandBM3D[Chanetal.2016;Gnanasambandametal.2019],orbyanend-to-endneuralnetwork[Chandramoulietal.2019;Choietal.2018].Inthepresenceofmotion,Fossum[2013]suggestedshiftingthebinaryimagestocompensateformotionandachieveblur-freeimagereconstruction.Thisideahasbeenimplementedrecently[Gyongyetal.2017,2018;Iwabuchietal.2019],albeitforsimplisticmotionmodels(e.g.,planarobjectswithin-planemotionandnoocclusions).Ourapproachisbasedonamuchlessrestrictiveassumption(motioncanbeapproximatedbypatch-wise2Dtranslationandremainsconstantwithintemporalblocks),andcanreliablyproducehigh-qualityimagesforabroadrangeofcomplex,real-worldscenes.3PASSIVESINGLE-PHOTONIMAGINGMODELConsideraSPCpixelarrayobservingascene.ThenumberZ(x,y)ofphotonsarrivingatpixel(x,y)duringanexposuretimeofτsecondsismodeledasaPoissonrandomvariable[Yangetal.2012]:P{Z=k}=(ϕτη)ke−ϕτηk!,(1)whereϕ(x,y)isthephotonflux(photons/seconds)incidentat(x,y).ηisthequantumefficiency.Eachpixeldetectsatmostonephotonduringanexposuretime,returningabinaryvalueB(x,y)suchthatB(x,y)=1ifZ(x,y)≥1;B(x,y)=0otherwise.Duetotherandomnessinphotonarrival,B(x,y)isarandomvariablewithBernoullidistribution:P{B=0}=e−(ϕτη+rqτ),P{B=1}=1−e−(ϕτη+rqτ),(2)whererqisthedarkcountrate(DCR),whichistherateofspuriouscountsunrelatedtophotons.Toestimatethenumberofincidentphotonsϕ(proportionaltothelinearintensityimageofthescene),supposethecameracapturesasequenceofbinaryframes.Assumingnomotionbetweenbinaryframes,orthatthebinaryframesarealignedperfectlytocompensateformotion,wedefineS(x,y)asthesumofallbinaryframes:S(x,y)=nq(cid:213)t=1Bt(x,y),(3)whereBt(x,y)isthebinaryframeattimet,andnqisthenumberofframes.S(x,y)isthetotalnumberofphotonsdetectedat(x,y)overtheentirebinaryimagesequence.Sinceeachbinaryframeisindependent,theexpectedvalueofthesumimageistheproductofACMTrans.Graph.,Vol.39,No.4,Article79.Publicationdate:July2020. 79:4•Ma,S.etal# of incoming photonsAvg. # of detectedphotonsSaturationFull well capacityTotal # of binary framesSoft saturationConventional Image SensorSingle-Photon SensorAvg. 
# of detectedphotons# of incoming photonsFig.2.ResponsecurvesforconventionalsensorsandSPADs.There-sponsecurveforasensorisdefinedastheplotoftheaveragenumberofphotonsdetectedasafunctionofnumberofphotonsincidentonthesensor.(Left)Theresponsecurveforconventionalsensorsislinear,untilsaturationwhenthefullwellcapacityisreached.(Right)ForSPADs,theresponsecurveisnon-linear,andasymptoticallyapproachesalimit,whichisthetotalnumberofbinaryframescapturedinthegiventimeduration.SPADssufferfromonlysoftsaturationsincethenumberofdetectedphotonskeepsincreasing,albeitprogressivelyslowly,forincreasingincidentflux.thenumberofframesnq,andtheexpectedvalueoftheBernoullivariableB:E[S(x,y)]=nqE[B(x,y)]=nq(cid:16)1−e−(ϕτη+rqτ)(cid:17).(4)Themaximumlikelihoodestimate(MLE)oftheintensityimageϕisgivenas[Antolovicetal.2016]:ˆϕ(x,y)=−ln(1−S(x,y)/nq)/τη−rq(x,y)/η.(5)Dynamicrange:Eq.4describestherelationshipbetweenS,theto-talnumberofphotonsdetectedbythecamera,andϕ,thenumberofphotonsincidentonthecamera(thequantitywewishtoestimate).Thisnon-linearrelationship[Sbaizetal.2009],asplottedinFig.2,issimilartotheD-logHcurveforphotographicfilmsproposedbyHurterandDiffieldin1890,becausesingle-photoncamerasemulatethesilverhalideemulsionfilmprocess[Fossum2005].Thekeyob-servationisthatthisresponsecurveasymptoticallyapproachesitsmaximumvalue(nq),whileneverreachingit.Thissoftsaturation[In-gleetal.2019]suggeststhatthevalueofSkeepsincreasing(albeitprogressivelyslowly)asthenumberofincidentphotonsincreases,whichmeanstheincidentfluxcanberecoveredevenforbrightscenes.Incontrast,theresponsecurveforconventionalsensorsisastraightlinebeforehittingthefullwellcapacity,andthenflattensduetosaturation.Therefore,apassivesingle-photoncamera,whilecapableofimaginglow-lightscenes,somewhatcounter-intuitively,canalsoimagebrightsceneswhereconventionalsensorssaturate,providinganextremelywidedynamicrange.Readnoise:Conventionalsensorsconvertdiscreteincidentpho-tonstoanalogcurrent,whichisagainconvertedtoadiscretenumberbyananalog-to-digitalconverter(ADC).Thisdiscrete→analog→discretepipelineresultsinreadnoise,whichisthedominantsourceofnoiseinlow-light.Thisplacesalimitonexposuretimeusedinconventionalburstphotography.Givenafixedtotalcapturetime,increasingthenumberofframesmayreducemotionartifacts,butsinceeachadditionalframesincursareadnoisepenalty,theSNRofthemergedimageislowered.Jotshaveadeepsub-electronreadnoise(currently∼0.2e−[Maetal.2017]),whichalthoughconsid-erablylowerthanconventionalCMOSsensors,canstilllimittheimagequalityinultralow-lightconditions[Fossumetal.2016].Incontrast,SPADsdirectlymeasurethephotoncounts,skippingtheintermediateanalogconversion,therebyavoidingreadnoise.ThisallowsaSPADcameratofinelydividetheexposuretimeintoalargenumbernqofbinaryframesformotioncompensation,therebysimultaneouslyachievinglowmotion-blurandhighSNR.4SINGLE-PHOTONIMAGINGUNDERMOTIONIfthesceneorcameramovesduringcapture,thensimplysummingthebinarysequence(Eq.3)leadstomergingofphotonsfromdif-ferentscenepoints,resultinginmotionblur.Therefore,toavoidmotionblur,thebinaryframesmustbealignedtocompensateforinter-framemotionbeforemergingthem.AligningthebinaryframesdirectlyischallengingbecausethetraditionalbrightnessconstancyassumptiondoesnotholdfortheobservedrandombinarysignalduetoextremelylowSNR.Althoughitmaybepossibletoestimatetheinter-framemotionwhenthemotionisaglobal,low-dimensionaltransformsuchasglobal2Dtranslationorglobalhomography,forgeneral,unstructuredsceneswithunknowngeometry,thetransformmustbeformulatedasapixelwise2Dmotionfield(oropticalflow).Inthiscase,thetotalnumberofunknownparameterstoest
imateis2MNforimageres-olutionM×N.Suchacomplex,high-dimensionalmotionmodelcannotbesolvedpreciselyfromtherandombinaryinputdata.Fortunately,SPADsareabletocapturebinaryframesathighframerates(97.7kfpsforSwissSPAD2[Ulkuetal.2019]).Atsuchhighframerates,thevelocityateachpixelcanbetreatedasacon-stantwithinalocaltemporalwindow.Weusethisobservationasanadditionalconstrainttosolvetheotherwisechallengingopticalflowproblemonstochasticbinaryframes.Onewaytoincorporatesuchaconstraintistocomputeatemporallycoherentopticalflow[Black1994;Volzetal.2011;WeickertandSchnörr2001].Inpractice,wechooseasimple,lesscomputationallyintensiveapproach:Wedividetheentireimagesequenceintonon-overlappingtemporalblocks,computethesumimageforeachblock(calledblock-sumimages)andaligntheblock-sumimages.Theblock-sumimageshaveahigherSNRthanindividualbinaryframes,whichmakesitpossibletousetraditionalopticalflowmethodstoalignthem.Block-levelvs.frame-levelalignment.Fig.3showsanoverviewofthemethod.Wecalltheblockinthecenterofthesequencetheref-erenceblock.Alltheotherblocks,calledauxiliaryblocks,arealignedtothereferenceblock.Afteraligningtheblock-sumimages,wedonotusethecoarse-temporal-scalemotionfieldbetweentemporalblockstomergethemdirectly.Instead,welinearlyinterpolatethemotionfieldintimetoobtainmotionbetweensuccessivebinaryframes.Thisfine-scalemotionfieldisusedtowarpeachbinaryframeandaligntoacentralreferenceframeinthereferenceblock,beforemerging.Thishierarchicalapproachremovesthemotionblurwithineachtemporalblock,resultinginsharpimagesevenforfastmovingscenes.Afterwarping,afrequency-spacemergingalgorithmisusedtomergethetemporalblocks,whichprovidesrobustnesstosmallalignmenterror.Inthenexttwosections,weprovidedetailsofthealignandmergealgorithms.ACMTrans.Graph.,Vol.39,No.4,Article79.Publicationdate:July2020. 
Fig. 3. Algorithm overview. In this example, the binary sequence is divided into 100-frame temporal blocks. The central block is chosen as the reference block. (1) For each block, the binary frames are added to form the block-sum image. (2) Every other block is aligned to the reference block, resulting in a coarse patch flow between the center frames of the blocks. (3) The coarse patch flow is temporally interpolated to estimate the fine-scale patch flow between individual binary frames. (4) Binary frames are warped using the fine-scale patch flow and added together to form warped block-sum images. (5) Warped block-sum images are merged together using a robust frequency-domain approach.

5 ALIGNING TEMPORAL BLOCKS

Given a reference and an auxiliary block, we compute the 2D correspondence map between them based on their appearance. Instead of using a pixel-wise optical flow algorithm, we use a patch-based alignment approach³ since it is more resilient to noise than pixel-wise optical flow [Bruhn et al. 2005; Zimmer et al. 2011]. Furthermore, even for merging (Sec. 6), patch-based approaches achieve more robust results than pixel-based merging [Liu et al. 2014] in low-SNR images. For patch-based merging, it is sufficient to compute a motion field at the patch level, thereby saving computational time.

³ In this paper, we refer to temporal sums of frames as "blocks" and spatial windows of pixels as "patches".

Hierarchical patch alignment. We use a hierarchical patch alignment approach similar to [Hasinoff et al. 2016] on an image pyramid built from the block-sum images. The number of pyramid levels can be adjusted according to the spatial resolution of the binary images. We use a 3-level pyramid for the 512x256 images used in our experiments. The matching is done by minimizing the L1 matching error in a spatial neighborhood. For a patch with indices (i, j), which spans the pixel indices [iM, (i+1)M−1] × [jM, (j+1)M−1], we find the smallest motion vector (u, v) that minimizes:

    E_d(u, v; i, j) = Σ_{x=iM}^{(i+1)M−1} Σ_{y=jM}^{(j+1)M−1} |S_aux(x+u, y+v) − S_ref(x, y)|.    (6)

The size of the patch is M×M. S_aux is the auxiliary block-sum image and S_ref is the reference block-sum image.

Spatial regularization at the finest level. We perform a global regularization at the finest level of the pyramid to further refine the patch alignment results (especially for blocks with an extremely small number of photons) and to provide sub-pixel alignment for super-resolution. This is performed by minimizing the following energy:

    min_{u,v} E(u, v) = ∫_{Ω_ij} E_d(u, v; i, j) + λ(∥∇u∥₁ + ∥∇v∥₁) di dj,    (7)

where Ω_ij = [0, W/M] × [0, H/M] is the spatial domain for the patch indices i, j; u, v are the motion fields defined on Ω_ij, and H×W is the spatial resolution of the input images. E_d is the matching error defined in Eq. 6. In practice, we minimize the Charbonnier loss ρ(x) = √(x² + ϵ²) as a differentiable alternative to the L1 loss.
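A direct, unoptimized reading of the matching step in Eq. (6), brute-force search over integer offsets at a single pyramid level, can be sketched as follows. The patch size and search radius are illustrative parameters, and rows/columns stand in for the (x, y) indexing above:

    import numpy as np

    def match_patch(S_ref, S_aux, i, j, M=16, search=8):
        """Find the offset (u, v) minimizing the L1 patch error of Eq. (6),
        preferring the smallest displacement when errors tie."""
        ref = S_ref[i*M:(i+1)*M, j*M:(j+1)*M].astype(np.int32)
        H, W = S_aux.shape
        best_err, best_uv = None, (0, 0)
        for u in range(-search, search + 1):
            for v in range(-search, search + 1):
                r0, c0 = i*M + u, j*M + v
                if r0 < 0 or c0 < 0 or r0 + M > H or c0 + M > W:
                    continue  # candidate patch falls outside the auxiliary image
                err = np.abs(S_aux[r0:r0+M, c0:c0+M].astype(np.int32) - ref).sum()
                better = best_err is None or err < best_err or (
                    err == best_err and u*u + v*v < sum(t*t for t in best_uv))
                if better:
                    best_err, best_uv = err, (u, v)
        return best_uv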
Interpolating the motion field. The computed inter-block motion is treated as motion between the center frames of each block. A linear interpolation is then performed to compute the motion between individual frames. Higher-order interpolation (e.g., cubic or spline) may improve the temporal smoothness, but will increase the dependency on other blocks. In practice, linear interpolation achieves good results for SPADs with high temporal resolution.

Fig. 4 shows an example demonstrating the benefits of frame-level interpolation. While alignment at the block level does not remove the motion blur completely, blur is considerably reduced by frame-level interpolation. An evaluation of frame-level interpolation on real data is provided in the supplementary report.

Fig. 4. Effect of frame-level alignment. (Left) A simple sum of binary frames captured by a moving camera shows the significant motion blur. (Center) Alignment at the block level (each consisting of 100 binary frames) does not remove the motion blur completely. (Right) Blur is reduced by interpolating the block-level alignment to achieve frame-level alignment.

6 MERGING BINARY SEQUENCE

After estimating inter-frame motion, one way to merge the binary image sequence is to warp the binary images, compute the sum image of all warped images, and finally, compute the MLE of the sum (Eq. 5). However, the estimated motion field may have errors due to occlusions, motion discontinuities, and non-rigid scene deformations. In this case, simply summing the warped binary images will create strong blurring or ghosting artifacts.

Can robust merging be used for binary frames? Robust merging methods such as Wiener frequency-domain filtering have long been used in video denoising and burst denoising [Hasinoff et al. 2016] to account for potentially incorrect estimated motion. The key idea is that if a patch in a warped frame is significantly different from that in the reference frame, then the alignment is likely erroneous. The final merged patch is computed by taking a weighted average of all matched patches, where the patches with a large difference from the reference patch (likely erroneous) are given a lower weight. This approach, while successful for conventional cameras, cannot be directly applied to merge the single-photon binary frames. This is because even if two binary frames are perfectly aligned, the difference between the frames could still be high due to the dominating shot noise. As a result, every auxiliary frame will have a low weight, and will make a low contribution to the final merged image, resulting in low SNR, as shown in Fig. 5.

In order to address this limitation, we adopt a two-step approach. First, we warp the frames within each block to the block's reference frame by using the estimated fine-scale inter-frame motion. The frames are simply added to form a warped block-sum image without any robust merge, since the amount of motion within each block is small, reducing the likelihood of alignment errors. This warping makes it possible to remove the motion blur within each block, as shown in Fig. 4. The warped block-sum images have sufficient SNR to be amenable to a traditional frequency-domain robust-merging approach [Hasinoff et al. 2016]. Therefore, Wiener filtering is applied to the warped block-sum images in the second step, so that they can be merged stably to reduce the noise level. Fig. 5 shows the result of applying block-level Wiener filtering, resulting in considerably higher SNR than naive frame-level merging.

Fig. 5. Block-level Wiener filtering. (Left) The binary reference frame is extremely noisy due to the stochastic nature of photon arrival. (Center) Wiener filtering is applied such that each auxiliary frame is weighted by measuring its difference with the reference frame. Since the difference is large even for mid and low spatial frequencies, the noise in the reference frame is preserved in the merged image. (Right) Wiener filtering is applied to warped block-sum images, resulting in merged images with higher SNR.
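The following sketch shows one plausible form of such a pairwise frequency-domain merge, in the spirit of the robust merging of Hasinoff et al. [2016], but not the paper's exact implementation: each aligned auxiliary block-sum tile is shrunk toward the reference tile, per spatial frequency, in proportion to the mismatch. The tuning constant c_sigma2 (proportional to the expected noise variance) and the tile handling (no overlap or windowing here) are simplifications:

    import numpy as np

    def merge_tiles(tile_ref, tiles_aux, c_sigma2):
        """Robust pairwise frequency-domain merge of aligned tiles.

        For each auxiliary tile, the per-frequency weight A pushes the result
        back toward the reference wherever the two disagree by more than noise
        alone would explain, so misaligned content contributes little.
        """
        F_ref = np.fft.fft2(tile_ref)
        acc = F_ref.copy()
        for tile in tiles_aux:
            F = np.fft.fft2(tile)
            D = F_ref - F                                   # per-frequency mismatch
            A = np.abs(D)**2 / (np.abs(D)**2 + c_sigma2)    # 0: trust auxiliary, 1: keep reference
            acc += F + A * D
        return np.real(np.fft.ifft2(acc / (1 + len(tiles_aux))))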
Sumt(2) Robust Filtering(3) Kernel-Based Reconstruction(Edge)(Flat)(Texture)Low-resolution gridSamplesSuper-resolution gridAnisotropic KernelFig.6.Super-resolutionmerging.(1)Binaryframeswithinablockarewarpedandsummedusingthefine-scaleinter-framepatchflow.(2)Theresultingwarpedblock-sumimageisfilteredaccordingtoaguideimage(thewarpedblock-sumimageforthereferenceblock).Thissteppreparesmatchesforthereconstructionstepbymitigatingthenoiseandalignmenterrors.(3)Theweightedpatchesareplacedonasupersampledoutputgrid,wherepixelsintheindividualpatchesaretreatedassamples.Foreachpixelontheoutputgrid,ananisotropicGaussiankernelisusedtocombinethesamplesinalocalneighborhood.Theshapeandsizeoftheanisotropickernelisdeterminedbyanalyzingthestructuretensoroftheguideimage.tobeamenabletoatraditionalfrequency-domainrobust-mergingapproach[Hasinoffetal.2016].Therefore,Wienerfilteringisappliedtothewarpedblock-sumimagesinthesecondstep,sothattheycanbemergedstablytoreducethenoiselevel.Fig.5showstheresultofapplyingblock-levelWienerfiltering,resultinginconsiderablyhigherSNRthannaiveframe-levelmerging.Mergingwithsuper-resolution:Thehigh-speedsingle-photondataleadstosmallinter-framemotion(∼0.01pixels),whichcanbelever-agedtogenerateamergedimagethathasahigherresolutionthanACMTrans.Graph.,Vol.39,No.4,Article79.Publicationdate:July2020. QuantaBurstPhotography•79:7Table1.SimulationConfigurationSensorTypeConventionalJotSPADResolutionSamePixelPitchSameBitDepth1011QE/PDE(R)59%64%17%QE/PDE(G)64%71%23%QE/PDE(B)47%62%21%ReadNoise(perpixel)2.4e−0.24e−0DarkCurrentNoise/DarkCountRate(perpixel)1e−/s0.16e−/s7.5cpstheinputframes[Parketal.2003;Wronskietal.2019].Wedevelopasimplesuper-resolutionalgorithmbasedonkernelregressionbyadaptingtheoriginalmergingmethoddescribedabove.Asabove,aftertheinter-framemotionfieldiscomputed,frameswithinthesameblockarewarpedandaddeduptoformthewarpedblock-sumimages.However,insteadofcomputingtheweightedaverageofpatches,theweightedpatchesaretreatedasabagofsamplepoints,asshowninFig.6.Eachpatchiswarpedtosub-pixellocationsonahigher-resolutionoutputpixelgrid.Thealgorithmthenscansthrougheachpixelontheoutputgrid.Ateachpixel,ananisotropicGaussiankernel[Takedaetal.2007;Wronskietal.2019]isusedtocombinethesamplepointswithinaspatialneighborhood.Insteadofthepoint-wiserobustnesstermusedinrecentconventionalburstphotography[Wronskietal.2019],oursuper-resolutionmethodusesthefrequency-domainrobustmergingapproach(thesameapproachusedinoriginal-resolutionmerging).Thisapproachismorerobustinpractice,atthecostofslightlyhighercomputationalcomplexity.Pleaserefertothesupplementarytechnicalreportfordesigndetailsofthekernelregressionmethod.Post-denoisingandtonemapping:Afterusingtheproposedmotion-compensatingtemporaldenoisingmethodtogenerateafinalsumimage,existingsingle-photonimagereconstructionmethodscanbeappliedforfurtherdenoising(seethesupplementaryreportforcomparisonsofdifferentreconstructionmethods).WeapplyAnscombetransform[Anscombe1948]tothesumimageandapplyBM3D[Dabovetal.2007b]forspatialdenoising[Chanetal.2016].Aftermerginganddenoising,weuseEq.5toinvertthenon-linearresponsetogetalinearimage.Gammacorrectionandtone-mappingisthenappliedtogenerateimagessuitedforviewing.7RESULTS7.1SimulationResultsWesimulatetheimagingprocessforaSPADcameraandacon-ventionalcameraofthesameresolutionandpixelpitch.Wefirstsimulatetheground-truthlinearintensityimagesusingaraytracer(POV-Ray)andthendrawBernoullisamplesaccordingtoEq.2tosynthesizethebinaryimages.Tab.1showsthesensorparametersweusedforthesimulation.Theparametersfortheconventionalsensorareforahigh-endmachine-v
The parameters for the SPAD camera are based on the SwissSPAD2 sensor we use for our experiments. Currently SwissSPAD2 does not have a Bayer filter for color imaging. We do not simulate the Bayer filter and demosaicing process but render the RGB channels directly. The alignment is performed on a grayscale version of the image and the merging is applied to the three channels independently. The fraction of incident photons that are measured by a SPAD is given by its photon detection efficiency (PDE), which is defined as the product of quantum efficiency, fill factor and photon detection probability (PDP). (See Sec. 9 for a detailed discussion.) The PDE used in the simulation is computed by multiplying the PDP of SwissSPAD2 with the spectral response of a set of contrived color filters and the fill factor (assumed to be 50%, which can be achieved with microlenses [Antolović et al. 2019]). The dark count rate is assumed to be spatially uniform (no hot pixels). For real images, this non-uniformity can be calibrated and compensated as shown in Eq. 5.

Comparison of conventional and quanta burst photography. We compare the results for a single-shot conventional image, conventional burst photography and quanta burst photography for different lighting conditions. For the conventional burst result, we use an approach similar to conventional burst photography methods [Hasinoff et al. 2016]. The exposure time and number of bursts are determined using the strategy described in Sec. 8.

Fig. 7 shows the simulation results for different lighting conditions. The scene is kept static while the camera is moving. The trajectory of the camera is set to be a linear 3D translation plus a small, smooth random 6 degrees-of-freedom (DoF) perturbation at each frame. For a scene with sufficient light, both burst methods generate high-quality images. In low light, SPAD-based quanta burst photography generates a much better result as there is no read noise. Please refer to the supplementary report for a comparison of the two methods for different motion speeds under extremely low light.

Performance for different types of camera motion. Fig. 8 shows the quanta burst reconstruction results for different kinds of camera motion, including rotation around the y-axis, rotation around the z-axis, translation along the z-axis and random 6DoF motion. In all cases, relatively blur-free images are reconstructed.

7.2 Comparison between Jots and SPADs

The quanta burst photography approach discussed so far is applicable to both single-photon sensing technologies: SPADs and jots. What are the relative benefits of the two technologies? In this section, we address this question by comparing their performance in various imaging scenarios.

Adapting the proposed approach to spatially oversampling jots. Due to the spatially oversampling nature of jots, the spatial resolution of raw jot images is typically higher than the final output image (oversampling factor K > 1 [Yang et al. 2012]). A box filter is applied to downsample the raw binary images (related to the boxcar function used in [Chan et al. 2016]) and convert them to floating-point intensity values. The float images are then divided into temporal blocks as with SPADs (although with smaller block sizes than SPADs) and processed through the align-and-merge pipeline, as sketched below.
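A minimal version of that box-filter step, assuming an oversampling factor K that divides the frame dimensions (the factor-of-4 default is illustrative):

    import numpy as np

    def downsample_jot_frames(frames, K=4):
        """Box-filter spatially oversampled binary jot frames by a factor K,
        producing floating-point intensity frames at the output resolution."""
        n, H, W = frames.shape
        H2, W2 = H // K, W // K
        f = frames[:, :H2 * K, :W2 * K].astype(np.float32)
        # Average each K-by-K block of binary samples (the box filter).
        return f.reshape(n, H2, K, W2, K).mean(axis=(2, 4))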
Fig. 7. Simulation results under different lighting conditions. We simulate a 2000-frame binary sequence of a still indoor scene under three different lighting conditions. The camera motion is the same in all three sequences. (Top) When there is sufficient light, the SNR of conventional and quanta burst photography are comparable, although the latter generates a sharper image with less motion blur. (Bottom) As the light level decreases, quanta burst provides a higher SNR than conventional cameras.

Fig. 8. Performance for different types of camera motion. We simulate four different types of motion for the same scene: rotation around the y-axis, rotation around the z-axis, translation along the z-axis and a random 6 degrees-of-freedom (DoF) trajectory. In all cases, the proposed algorithm is able to align the binary images and generate high-quality images.

Comparison under different amounts of motion. Fig. 9 shows a comparison between the reconstruction results of SPADs and jots. We simulate two sequences of the same scene where the camera moves at different speeds. Since jots-based devices have yet to achieve a very high resolution (1024×1024 so far), and their temporal resolution is lower than SPADs (1 kHz vs 97.7 kHz), we compare SPADs with a "projected jot device" with a resolution of 5120×5120, such that the total number of pixel measurements (data bandwidth) of the two sensors is the same. We assign the same data bandwidth to the two sensors based on the assumption that the bandwidth will be an important limiting factor for the frame rate for both sensors, as their specifications evolve in the future.

Under fast motion, the merged image from jots contains motion blur, while SPADs are able to register the binary images and merge
7.3 Experiments

We use a SwissSPAD2 camera [Ulku et al. 2019] to perform real experiments (Fig. 10). This SPAD camera can capture binary frames at a spatial resolution of 512×256. The maximum frame rate of the camera is 96.8 kHz. The camera does not have microlenses and has a native fill factor of about 13%. Currently the sensor is not equipped with Bayer filters, so only grayscale images are reconstructed. We identify the hot pixels by taking 100000 frames while covering the sensor completely from light sources. The hot pixels are corrected for each binary frame. See the supplementary report for details.

[Fig. 10 image; panels: (Left) The SwissSPAD2 board, (Right) Camera setup]
Fig. 10. Camera setup. (Left) The SwissSPAD2 board [Ulku et al. 2019]. (Right) Camera setup.

Performance for different lighting conditions. Fig. 11 shows the performance of quanta burst photography for different lighting conditions. We choose the same still scene for all sequences. The camera was moved horizontally to ensure the motion is controllable and reproducible across different sequences. The conventional camera images are emulated from the captured binary images by first reconstructing the intensity using Eq. 5 and then adding the read noise and quantization error according to the parameters in Tab. 1. Quanta burst photography generates images with higher quality than conventional single and burst images. Even in very low light, where the individual binary frames are sufficiently sparse to make it nearly impossible to make out the scene structure, a reasonable image is reconstructed by aligning and merging the sparse and noisy binary frames. Please see the supplementary report for the performance of the proposed method at different camera moving speeds. The purpose of this experiment is not to compare a conventional sensor and a SPAD sensor directly. In fact, due to the low resolution and low quantum efficiency of current SPAD sensors, the SPAD will almost always generate worse-quality images than a commercial CMOS sensor. Here we simulate the conventional images by assuming a conventional sensor with the same resolution and quantum efficiency as the SPAD array. Due to the blur-noise trade-off, conventional sensors struggle in reconstructing high-quality images, while the SPAD has the potential of super-sampling in time and mitigating motion blur even for low-light and fast-moving scenes.

Reconstructing challenging scenes. Fig. 12 shows various scenes involving large depth variations, specular highlights, complex geometry and fine structures. Such scenes are usually challenging for optical flow and block matching algorithms. The camera was hand-held, and underwent a random 6DoF motion when capturing the images. Since a long-focus lens is used, even natural hand tremor causes a large apparent motion in the image space. Despite these challenges, the proposed method is able to reconstruct blur-free images with high SNR.
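The emulation of conventional frames described above can be illustrated with a short sketch (the sensor parameters below are placeholders, not the calibrated values of Tab. 1, and this is our approximation of the procedure rather than the authors' code):

import numpy as np

def emulate_conventional(binary_frames, tau, eta, r_dark,
                         sigma_read=2.0, full_well=2**12):
    # Emulate a conventional exposure from SPAD binary frames: invert the
    # Bernoulli measurement model to estimate the photon flux, then
    # re-expose it with Gaussian read noise and ADC quantization.
    n = binary_frames.shape[0]
    s = np.clip(binary_frames.sum(axis=0), 0, n - 1)   # photon detections
    phi = -np.log(1.0 - s / n) / (tau * eta) - r_dark / eta  # flux (cf. Eq. 5)
    counts = phi * (n * tau) * eta                     # expected electrons
    counts += np.random.normal(0.0, sigma_read, counts.shape)  # read noise
    return np.clip(np.round(counts), 0, full_well)     # ADC quantization

frames = (np.random.rand(200, 256, 512) < 0.05).astype(np.uint8)
img = emulate_conventional(frames, tau=1 / 96800, eta=0.4, r_dark=10.0)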
[Fig. 11 image; columns: Conventional Single, Conventional Burst, Binary Image, Quanta Burst; rows: High Light, Medium Light, Low Light]
Fig. 11. Performance under different lighting conditions. We capture three 2000-frame binary sequences for the same scene under three different lighting conditions. A sample binary image from each sequence is shown in the third column. The binary images become sparser as the light level decreases. For conventional cameras, there is a trade-off between motion blur and noise, which makes it difficult to generate a high-quality image in low-light environments, either with a single long exposure (first column) or with a burst (second column). For quanta burst photography, it is possible to resolve fast motion without sacrificing the SNR (fourth column). Even in very low light, a reasonable image is reconstructed by aligning and merging the sparse and noisy binary frames.

[Fig. 12 image; columns: Ground Truth (DSLR, Tripod), Naive Averaging, Our Result; rows: Depth Variation, Specular Highlights, Complex Geometry, Fine Structures]
Fig. 12. Challenging scenes. We show the reconstruction results of the proposed method for various challenging scenes involving high depth variation, specular highlights, complex scene geometry and fine structures. The camera was handheld, and follows a random 6DoF motion. Images are reconstructed from 10000 binary frames. In all cases, the proposed method is able to create a blur-free image with high SNR.

Comparison of denoising algorithms. After aligning and merging the binary frames into a sum image with low noise and blur, its SNR can be further improved via spatial denoising algorithms (e.g., BM3D [Dabov et al. 2007b], total variation (TV) [Chan and Lu 2014]). BM3D is applied as a post-processing step after the Anscombe transform, whereas total variation is formulated as a joint reconstruction and denoising optimization problem [Chan and Lu 2014]. Fig. 13 compares the results of different combinations of spatio-temporal denoising schemes. Traditional single-photon image reconstruction (naive average) contains either motion blur in the long sequence, or heavy noise in the short sequence which cannot be perfectly removed using BM3D. In contrast, the quanta burst photography approach in combination with spatial denoising is able to generate sharp, less noisy images. In our experiments, BM3D consistently performs better than TV, which results in over-smoothing for short exposure, and loss of contrast for long exposure. See the supplementary technical report for more comparisons.
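As a point of reference, the Anscombe variance-stabilization step could be sketched as follows (the denoiser is left as a pluggable callable; BM3D itself is not reimplemented here, and the simple algebraic inverse is an assumption, since unbiased inverses are also common):

import numpy as np

def anscombe(x):
    # Anscombe transform: maps Poisson counts to roughly unit-variance Gaussian.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse; an exact unbiased inverse can be substituted.
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise_sum_image(sum_image, gaussian_denoiser):
    # Apply a Gaussian-noise denoiser (e.g., BM3D) in Anscombe space.
    return inverse_anscombe(gaussian_denoiser(anscombe(sum_image)))

# Example with a trivial stand-in denoiser (identity):
s = np.random.poisson(5.0, size=(256, 512)).astype(np.float64)
out = denoise_sum_image(s, gaussian_denoiser=lambda y: y)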
[Fig. 13 image; columns: Naive Average (Long Exposure), Naive Average (Short Exposure), Quanta Burst; rows: No Denoising, BM3D, TV]
Fig. 13. Comparison of denoising algorithms. (Left) Naive average reconstruction without motion compensation on a long sequence (200 images). Results contain severe motion blur. (Center) Naive average reconstruction without motion compensation on a short sequence (20 images). Results are sharp but contain strong noise. Denoising algorithms reduce noise but also remove high-frequency image details. (Right) Burst align and merge results on 200 images. Results are sharp and less noisy. Applying denoising algorithms further reduces noise. BM3D outperforms TV, which results in oversmoothing for short exposure and loss of contrast for long exposure (red rectangle).

[Fig. 14 image; panels: Without Super-Resolution, With Super-Resolution]
Fig. 14. Achieving super-resolution. We compare the output of the normal merging algorithm vs. the super-resolution algorithm. The super-resolution algorithm is able to reconstruct the image at 2x resolution, creating sharper edges and mitigating aliasing artifacts.

Super-resolution. Fig. 14 demonstrates the performance of the super-resolution algorithm. A high-resolution lens is used with the camera, which creates aliasing in the image when the scene is perfectly in focus. The super-resolution algorithm is able to utilize the aliasing and sub-pixel motion between frames to create a higher-resolution image with sharper image details and fewer aliasing artifacts than the normal merging algorithm.

Reconstructing high dynamic range scenes. Fig. 15 shows a high dynamic range scene captured by the SPAD array. The only light source in the scene, the lamp (red box), is directly visible in the image, and is about 2000 times brighter than the text on the plaque (blue box), which does not receive any direct light. As in Fig. 11, we simulate the conventional images by adding read noise and quantization error. With a single capture, the conventional image is either saturated around the lamp, or cannot recover the text on the plaque. Conventional burst photography improves the dynamic range, but the text is still indiscernible due to read noise. Quanta burst photography is able to recover both the filament and the text at the same time.

Resolving scene motion. Since the proposed method only computes patch-wise motion and does not assume any global motion model, it is capable of resolving scene motion. Fig. 16 shows a person plucking the lowest two strings on a guitar. Simple averaging of binary frames creates ghosting artifacts or strong noise. Our method is able to resolve the plucking motion of the thumb and the vibration of the strings with lower noise.

Indoor scenes with different, natural lighting. In addition to the controlled scenes in the lab, we captured a few indoor scenes with more natural lighting. As shown in Fig. 17, the proposed method is able to reconstruct high-quality images under these unstructured environments. Please refer to the supplementary technical report for more simulation and experimental results.
[Fig. 15 image; panels: Conventional Single (Long Exposure), Conventional Single (Short Exposure), Conventional Burst, Quanta Burst; inset: Lens flare]
Fig. 15. Reconstructing high dynamic range scenes. We capture a scene with high dynamic range where the light source (the lamp) is directly visible in the image. A single conventional image either gets saturated (long exposure) or fails to capture the details in the dark regions (short exposure). Conventional burst photography improves the dynamic range, but remains noisy in the dark regions due to read noise. Quanta burst photography achieves very high dynamic range and is able to recover the details of the filament and the text on the plaque at the same time. 100000 frames are captured to reconstruct the full dynamic range. All images are processed using the same tone-mapping algorithm [Ashikhmin 2002].

[Fig. 16 image; panels: Scene, Naive Averaging (Long Sequence), Naive Averaging (Short Sequence), Our Result]
Fig. 16. Resolving scene motion. A person plucking the lowest two strings of a guitar. Averaging the captured binary sequence results in either ghosting artifacts (long sequence with 2000 binary frames) or a low SNR (short sequence with 100 binary frames). Our method is able to reconstruct a high-quality image from 2000 frames despite fast and non-rigid scene motion. See the supplementary video for a short video reconstruction.

[Fig. 17 image; panels: Ground Truth (DSLR, Tripod), Naive Averaging, Our Result]
Fig. 17. Indoor scenes with different lighting. The proposed method is able to recover sharp images despite the aggressive camera motion and high dynamic range, for various scenes under different, real-world lighting conditions.
[Fig. 18 plots; axes: Photon Flux (photons/s) vs. Apparent speed (px/s), color scale: SNR Difference (dB); panels: (a) Current SPADs vs Conventional, (b) Current SPADs vs Sub-Electron Conventional, (c) Projected SPADs vs Sub-Electron Conventional, (d) SNR Diff vs Photon Flux, (e) SNR Diff vs Speed]
Fig. 18. Theoretical SNR analysis. (a) SNR difference between quanta burst photography based on current SPADs and conventional burst photography based on a machine vision CMOS sensor, (SNR_quanta − SNR_conv) in dB, as a function of incident photon flux and apparent motion speed. SPADs achieve significantly higher SNR under very low light and high speed. On the other hand, in well-lit scenes with small motion, quanta burst photography performs worse due to lower quantum efficiency and higher dark current noise. The red line indicates the iso-contour for SNR difference = 0 (equal performance). (b) SNR difference between current SPADs and the recent conventional image sensor on iPhone 7. The conventional sensor works better for a wider range of flux intensities and apparent speeds due to its sub-electron read noise. (c) SNR difference between projected SPADs with PDE = 50% and the iPhone 7 sensor. (d, e) 1D slices of the 2D plots in (b) and (c) obtained by fixing a specific flux or speed. In each case, the difference is higher for low light levels and large motions.

8 WHEN TO USE QUANTA BURST PHOTOGRAPHY?

What are the imaging regimes where quanta burst photography can outperform conventional cameras?⁵ To address this question, we characterize the performance of conventional and quanta burst photography in terms of the SNR of the reconstructed linear image:

    \mathrm{SNR} = 20 \log_{10} \frac{\hat{\phi}}{\mathrm{RMSE}(\hat{\phi})},    (8)

where \hat{\phi} is the estimated image intensity, and \mathrm{RMSE}(\hat{\phi}) is the root mean squared error of the estimate. We assume that the input images are perfectly aligned (no misalignment errors) for both conventional and single-photon cameras, so that the estimation error is only due to image noise.

⁵ This analysis is not meant to be a direct comparison between current single-photon and conventional cameras. Conventional CMOS sensors have considerably higher spatial resolution and color filters, and thus, will achieve better image quality as compared to current SPAD arrays in the foreseeable future. The goal of this analysis is to provide guidelines on when using quanta burst photography can be beneficial, assuming SPAD arrays can match the spatial resolution of sCMOS sensors.

Conventional cameras: The image formation of conventional image sensors is given by an affine model [Hasinoff et al. 2010]:

    I = Z + \epsilon_{rc} + \epsilon_{dc},    (9)

where Z \sim \mathrm{Pois}(\phi \tau_c \eta_c) is the photon count as in Eq. 1 (\tau_c and \eta_c are the exposure time and quantum efficiency of the conventional sensor), \epsilon_{rc} \sim \mathcal{N}(0, \sigma_{rc}) is the read noise, and \epsilon_{dc} \sim \mathrm{Pois}(\tau_c r_c) is the dark current noise caused by thermal current with flux r_c. These three components are statistically independent of each other. To simplify the analysis, we assume all images are captured at the same ISO speed and temperature such that \sigma_{rc} and r_c are fixed. Suppose a conventional burst photography algorithm captures a burst of n_c images. The process of merging the captured images into a result image can be viewed as a maximum likelihood estimation process. Assuming the images are perfectly aligned, the n_c images can be merged simply by taking their average:

    \hat{\phi}_c = \frac{1}{n_c \tau_c \eta_c} \sum_{t=1}^{n_c} (I_t - \tau_c r_c),    (10)

where I_t is the image captured at time t. We assume the dark current noise can be calibrated at each pixel. The mean of the calibrated dark current noise is subtracted from the sum of images to give an unbiased estimate of the photon flux (linear intensity image). From the noise model, the root mean squared error (RMSE) of this estimator due to noise variance is given by

    \mathrm{RMSE}(\hat{\phi}_c) = \sqrt{\mathrm{Var}[\hat{\phi}_c]} = \sqrt{\frac{\phi \eta_c + r_c}{T \eta_c^2} + \frac{n_c \sigma_{rc}^2}{T^2 \eta_c^2}},    (11)

where T = n_c \tau_c is the total exposure time for the sequence.

SPAD cameras: A maximum likelihood estimator for the SPAD camera is derived in Eq. 5. For a sufficiently long sequence (n_q > 30), the variance of the MLE can be estimated using Fisher information (see the supplementary technical report for the derivation):

    \mathrm{RMSE}(\hat{\phi}_q) = \sqrt{\mathrm{Var}[\hat{\phi}_q]} \approx \frac{1}{\sqrt{I(\phi)}} = \sqrt{\frac{e^{\phi \tau_q \eta_q + r_q \tau_q} - 1}{n_q \tau_q^2 \eta_q^2}},    (12)

where \tau_q and \eta_q are the exposure time and quantum efficiency of the single-photon camera.

The RMSE for both modalities depends on the total exposure time T of the image sequence (assumed the same for both modalities for a fair comparison) and the total number of frames n_c and n_q, which, in practice, in turn depend on the photon flux level \phi and the camera motion: a longer exposure is preferred when the light level is low and the camera is moving slowly. [Liba et al. 2019] proposes "motion metering", which automatically selects the exposure time based on a prediction of future scene and camera motion. We take a similar approach for our analysis: we assume the scene and camera motion are known or can be estimated such that T and n can be determined according to the following three principles: (1) When the motion is slow, the total exposure time is chosen to meet a target total number of photons to ensure high SNR. (2) When the motion is fast, the total exposure time is limited by a maximum amount of motion across the sequence. (3) The total number of frames is chosen to ensure the per-frame motion blur is below a threshold. Details about the strategy can be found in the supplementary report. The SNR of both approaches can then be expressed as a function of photon flux and camera motion, which allows comparison of the two approaches.

SNR comparisons between conventional and SPAD cameras: Fig. 18 plots the difference of SNRs (SNR_quanta − SNR_conv) in dB for a wide range of photon fluxes and apparent speeds. Fig. 18(a) compares the burst photography performance between a current SPAD sensor and a machine-vision CMOS sensor with the parameters listed in Tab. 1. At ultra low light and high speeds, the SPAD sensor performs considerably better than the CMOS sensor (up to 27.5 dB = 23.7 times). On the other hand, in well-lit scenes with negligible motion, the SPAD performs worse (albeit at most by a factor of 0.5) due to the relatively low PDE and high DCR of current SPAD arrays. Recently, advanced CMOS sensors used in high-end cellphones have achieved sub-electron read noise. Fig. 18(b) plots the SNR difference between current SPADs and the iPhone 7's sensor, which is reported to have a read noise of 0.68 electrons [Claff [n.d.]]. Such low read noise makes its performance better than current SPADs for a wider range of flux intensities and motion speeds. Since SPADs are an emerging technology, their specifications (in particular, resolution and PDE) continue to improve, arguably at a faster rate than conventional sensors, which are already a mature technology. Fig. 18(c) compares the iPhone 7's sensor with projected SPADs which achieve a PDE of 50%. To visualize the variations of the SNR difference with respect to one specific parameter, we show 1-D slices of the comparison between the iPhone 7 and the current/projected SPAD sensor in (d) and (e), where either the photon flux or the apparent speed is fixed. These figures demonstrate how the proposed analysis framework can be used to direct future development of SPADs for best performance under certain light levels and amounts of motion. A theoretical dynamic range analysis can be found in the supplementary technical report.
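The SNR comparison above reduces to two closed-form RMSE expressions, which translate directly into code. A minimal sketch (with placeholder sensor parameters, not the calibrated values of Tab. 1):

import numpy as np

def rmse_conventional(phi, T, n_c, eta_c, r_c, sigma_rc):
    # RMSE of the averaged conventional burst estimate (Eq. 11).
    return np.sqrt((phi * eta_c + r_c) / (T * eta_c**2)
                   + n_c * sigma_rc**2 / (T**2 * eta_c**2))

def rmse_spad(phi, T, n_q, eta_q, r_q):
    # Approximate RMSE of the SPAD MLE via Fisher information (Eq. 12).
    tau_q = T / n_q
    return np.sqrt((np.exp(phi * tau_q * eta_q + r_q * tau_q) - 1.0)
                   / (n_q * tau_q**2 * eta_q**2))

def snr_db(phi, rmse):
    # SNR of the reconstructed linear image (Eq. 8).
    return 20.0 * np.log10(phi / rmse)

# Illustrative comparison at one flux level (all parameters are placeholders):
phi, T = 100.0, 0.1
d = (snr_db(phi, rmse_spad(phi, T, n_q=10000, eta_q=0.25, r_q=10.0))
     - snr_db(phi, rmse_conventional(phi, T, n_c=10, eta_c=0.9,
                                     r_c=1.0, sigma_rc=2.0)))

Sweeping phi and the apparent speed (which fixes T and the frame counts through the strategy of the supplementary report) reproduces the kind of 2D difference maps shown in Fig. 18.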
9 OUTLOOK ON SINGLE-PHOTON SENSORS

In this section, we discuss the current state and future outlook of SPAD sensor arrays, in terms of their key characteristics: spatial resolution, temporal frame rate, photon detection efficiency (PDE), and the dark count rate (DCR).

Spatial resolution: Due to their compatibility with mainstream CMOS fabrication lines, it was predicted in 2008 that SPAD image sensors could reach large resolutions within one decade [Charbon 2007, 2008]. In recent years, significant effort has been devoted to achieving this goal, with the world's first 1 MPixel SPAD array reported recently [Morimoto et al. 2020]. With the same fabrication process, it is possible to go up to 5-10 MPixel, not far from their counterparts in CMOS imagers in several cell-phone cameras. Can we go even higher (e.g., 50 MPixel) in the long term? The key factor that limits the spatial resolution is the minimum pixel pitch, which in turn is limited by the necessity of placing a guard ring⁶ around each SPAD pixel. In current CMOS technologies, due to the guard ring, SPAD pitch cannot be reduced below 1 µm. At that pitch, the guard ring occupies a large portion of the pixel, thus reducing the fill factor to a minimum. This limitation could be addressed via 3D-stacking [Pavia et al. 2015], a potentially effective way to reduce SPAD pixel pitch by moving all the active and passive components associated with a SPAD pixel to the bottom tier of the sensor.

Frame rate and power consumption: The frame rate of a SPAD sensor array is limited by the bit-rate the chip can deliver and by the number of communication channels it can host. For example, a 1 Mpixel camera with a frame rate of 1 kfps will generate 1 Gbps of data, which can be handled by a single LVDS (low-voltage differential signalling) communication channel. Typically, this kind of channel requires about 10 mW of power at full speed. If one wants to increase the frame rate by, say, 100X, then the data rate will increase to 100 Gbps, with 1 W of power required, which may be prohibitive for consumer devices. This assumes that the internal power dissipation due to SPADs and chip operation is negligible, and that the readout speed of the pixels internally is not the bottleneck. The communication power consumption can be mitigated by performing on-chip image processing operations, and designing more efficient motion computation and image alignment operations that are amenable to on-chip processing. Furthermore, it is possible to exploit the spatio-temporal sparsity in the photon-cube raw data in low-light scenarios. Depending on the light level in the scene, one could achieve a considerable data rate reduction by compressing the raw photon-cube data [Zhang et al. 2018a].

Photon detection efficiency (PDE): PDE is defined as the product of the pixel fill factor and the photon detection probability (PDP), which is the probability that an impinging photon generates a detectable signal. PDP is the product of quantum efficiency and the probability of triggering an avalanche. PDP is dependent on the wavelength of photons; for current devices, the PDP is typically 50-70% at 450-550 nm. Due to low fill factors, earlier SPAD arrays had PDEs as low as 1%, making them highly inefficient due to significant light loss. However, the PDE in recent arrays has increased to approximately 40% by using microlens arrays, which increase PDE by effectively increasing the fill factor. While still lagging the quantum efficiency of conventional sensors (approximately 60-90%), the PDE of SPAD arrays will likely improve due to improving fabrication processes, including 3D stacking.

Dark count rate (DCR): DCR is the rate of avalanche counts unrelated to photons, measured in counts-per-second (cps). Earlier SPAD devices were largely considered impractical due to high DCR, up to several tens of cps at cryogenic temperatures, and tens of kcps at room temperature. Fortunately, for current devices, DCR has been drastically reduced to 2 cps [Morimoto et al. 2020], even at room temperature. Since SPADs do not have read noise, this DCR is sufficiently low to achieve nearly shot-noise-limited SNR, even in ultra low light. Since DCR is proportional to the active area of a SPAD, as the pixels become smaller, DCR could be further reduced.

⁶ A SPAD pixel detects single photons by creating an avalanche of photo-electrons (large current) when a photon is incident, and sensing the avalanche current via a comparator or a high-gain amplifier. A guard ring is a region around each SPAD pixel that forces the avalanche to be confined in the region, in order to prevent edge breakdown. Guard rings are implemented via geometric structures that are not sensitive to light.
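The bandwidth arithmetic in the frame-rate discussion above is easy to reproduce; a small sketch (the 10 mW/Gbps LVDS figure is taken from the text, everything else is a back-of-envelope assumption):

def readout_budget(pixels, fps, mw_per_gbps=10.0):
    # Back-of-envelope data rate and LVDS power for a binary SPAD array,
    # assuming one bit per pixel per frame and communication-dominated power.
    gbps = pixels * fps / 1e9
    return gbps, gbps * mw_per_gbps   # (data rate in Gbps, power in mW)

rate, power = readout_budget(pixels=1_000_000, fps=1_000)     # 1.0 Gbps, 10 mW
rate_fast, power_fast = readout_budget(1_000_000, 100_000)    # 100 Gbps, 1 W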
10 LIMITATIONS AND DISCUSSION

Resolving a larger range of motions. The proposed alignment algorithm assumes the motion of the spatial image patches can be approximated by a 2D translation, which is usually appropriate for camera motion and rigid object motion. When this assumption does not hold, the discrepancies between the true deformation of the patch and the translation approximation can be mitigated by the robust merging algorithm. However, when the scene contains several small objects or undergoes nonrigid motion, such an approximation no longer holds, which can result in artifacts in the merged image. An interesting future research direction is to design optical flow algorithms for aligning images for such challenging scenes.

Fast, energy-efficient processing. Currently, our algorithms are implemented in unoptimized MATLAB code which takes about 30 minutes to process a sequence with 10000 binary frames, which is far from real-time. For consumer photography applications, it is critical to perform the processing in a fast and also energy-efficient way. In our current implementation, the binary frames are treated as real numbers (e.g., when warping them during the merging stage). One potential way to improve the efficiency is to utilize specialized computing architectures and algorithms for binary data [Daruwalla et al. 2019; Pfeiffer and Pfeil 2018].

Bandwidth limitation. The high dynamic range of SPADs comes at the cost of a large bandwidth requirement. Currently, the captured binary images are stored on-board, and then transferred to a PC and processed offline. The bandwidth requirement can be relaxed by capturing multi-bit images (which sacrifices temporal resolution). The bandwidth in future SPAD sensors can also be improved by using faster interfaces such as PCI Express and Camera Link.

Video reconstruction. The proposed method can be used for reconstructing videos by shifting the reference frame in time. An example reconstructed video for the guitar sequence is shown in the supplementary video. While the current approach reconstructs the video sequence one frame at a time, novel algorithms that enforce temporal coherency across reconstructed frames could be developed, resulting in improved video quality as well as lower computational complexity.

Free-running SPADs. The proposed techniques are designed for SPAD arrays operating in synchronous clock-driven mode where all the pixels read off measurements simultaneously, at fixed intervals. It has recently been shown that event-driven [Antolovic et al. 2018] or free-running SPADs [Ingle et al. 2019] achieve a higher dynamic range by recharging the SPAD as soon as the dead time due to a photon detection is over. An interesting future direction is to design quanta burst photography techniques for asynchronous SPAD arrays where pixels return binary measurements independently.

Quanta image processing pipeline. The primary focus of this paper is on the alignment and merging of binary images. We apply denoising and tone-mapping as a post-processing step to the merged images. For modern camera systems with color filter arrays, there are several other essential processing steps in the image processing pipeline, including demosaicking, white balancing, and dehazing. Specifically, demosaicking is a non-trivial problem since temporal interpolation of alignment can introduce color artifacts. Recent research suggests that there is a potential benefit of performing end-to-end processing from the raw sensor data [Chen et al. 2019; Gharbi et al. 2016; Heide et al. 2014]. A promising next step is to design a similar framework for quanta burst photography and explore whether similar benefits exist for single-photon images.

ACKNOWLEDGMENTS
This research is supported in part by the DARPA REVEAL program, a Wisconsin Alumni Research Foundation (WARF) Fall Competition award (UW-Madison), the Swiss National Science Foundation Grant 166289 and The Netherlands Organization for Scientific Research Project 13916 (EPFL).
REFERENCES
FJ Anscombe. 1948. The Transformation of Poisson, Binomial and Negative-Binomial Data. Biometrika 35, 3/4 (1948), 246-254.
Ivan Michel Antolovic, Claudio Bruschini, and Edoardo Charbon. 2018. Dynamic Range Extension for Photon Counting Arrays. Optics Express 26, 17 (Aug. 2018), 22234.
Ivan Michel Antolovic, Samuel Burri, Claudio Bruschini, Ron Hoebe, and Edoardo Charbon. 2016. Nonuniformity Analysis of a 65-Kpixel CMOS SPAD Imager. IEEE Transactions on Electron Devices 63, 1 (Jan. 2016), 57-64.
Ivan Michel Antolović, Arin Can Ulku, Ekin Kizilkan, Scott Lindner, Frédéric Zanella, Rolando Ferrini, Marc Schnieper, Edoardo Charbon, and Claudio Bruschini. 2019. Optical-Stack Optimization for Improved SPAD Photon Detection Efficiency. In Quantum Sensing and Nano Electronics and Photonics XVI, Manijeh Razeghi, Jay S. Lewis, Giti A. Khodaparast, and Eric Tournié (Eds.). SPIE, San Francisco, United States, 99.
Michael Ashikhmin. 2002. A Tone Mapping Algorithm for High Contrast Images. In Eurographics Workshop on Rendering. 145-156.
A. Beck and M. Teboulle. 2009. Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems. IEEE Transactions on Image Processing 18, 11 (Nov. 2009), 2419-2434.
Michael J. Black. 1994. Recursive Non-Linear Estimation of Discontinuous Flow Fields. In European Conference on Computer Vision (ECCV), Gerhard Goos, Juris Hartmanis, and Jan-Olof Eklundh (Eds.), Vol. 800. Springer Berlin Heidelberg, Berlin, Heidelberg, 138-145.
Andrés Bruhn, Joachim Weickert, and Christoph Schnörr. 2005. Lucas/Kanade Meets Horn/Schunck: Combining Local and Global Optic Flow Methods. International Journal of Computer Vision (IJCV) 61, 3 (2005), 211-231.
Claudio Bruschini, Harald Homulle, Ivan Michel Antolovic, Samuel Burri, and Edoardo Charbon. 2019. Single-Photon Avalanche Diode Imagers in Biophotonics: Review and Outlook. Light: Science & Applications 8, 1 (Dec. 2019), 87.
A. Buades, B. Coll, and J.-M. Morel. 2005. A Non-Local Algorithm for Image Denoising. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 2. IEEE, San Diego, CA, USA, 60-65.
Mauro Buttafava, Jessica Zeman, Alberto Tosi, Kevin Eliceiri, and Andreas Velten. 2015. Non-line-of-sight imaging using a time-gated single photon avalanche diode. Optics Express 23, 16 (2015), 20997-21011.
Stanley Chan, Omar Elgendy, and Xiran Wang. 2016. Images from Bits: Non-Iterative Image Reconstruction for Quanta Image Sensors. Sensors 16, 11 (Nov. 2016), 1961.
Stanley H. Chan and Yue M. Lu. 2014. Efficient Image Reconstruction for Gigapixel Quantum Image Sensors. In IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, Atlanta, GA, USA, 312-316.
Paramanand Chandramouli, Samuel Burri, Claudio Bruschini, Edoardo Charbon, and Andreas Kolb. 2019. A Bit Too Much? High Speed Imaging from Sparse Photon Counts. In IEEE International Conference on Computational Photography (ICCP). Tokyo, Japan, 1-9. arXiv:1811.02396
E. Charbon. 2007. Will Avalanche Photodiode Arrays Ever Reach 1 Megapixel? Image Sensors Workshop (2007).
E. Charbon. 2008. Towards large scale CMOS single-photon detector arrays for lab-on-chip applications. J. of Phys. D: Appl. Phys. 41(12) (2008).
Chen Chen, Qifeng Chen, Minh N Do, and Vladlen Koltun. 2019. Seeing Motion in the Dark. In International Conference on Computer Vision (ICCV). 3185-3194.
Joon Hee Choi, Omar A. Elgendy, and Stanley H. Chan. 2018. Image Reconstruction for Quanta Image Sensors Using Deep Neural Networks. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, Calgary, AB, 6543-6547.
William J. Claff. [n.d.]. Input-Referred Read Noise versus ISO Setting. https://www.photonstophotos.net/Charts/RN_e.htm.
Kostadin Dabov, Alessandro Foi, and Karen Egiazarian. 2007a. Video Denoising by Sparse 3D Transform-Domain Collaborative Filtering. In European Signal Processing Conference. 145-149.
Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. 2007b. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Transactions on Image Processing 16, 8 (Aug. 2007), 2080-2095.
Kyle Daruwalla, Heng Zhuo, Carly Schulz, and Mikko Lipasti. 2019. BitBench: A Benchmark for Bitstream Computing. In ACM SIGPLAN/SIGBED International Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES). ACM Press, Phoenix, AZ, USA, 177-187.
Michael Elad and Michal Aharon. 2006. Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries. IEEE Transactions on Image Processing 15, 12 (Dec. 2006), 3736-3745.
Omar A. Elgendy and Stanley H. Chan. 2019. Color Filter Arrays for Quanta Image Sensors. https://arxiv.org/abs/1903.09823 (2019).
Eric Fossum. 2005. What To Do With Sub-Diffraction Limit (SDL) Pixels? - A Proposal for a Gigapixel Digital Film Sensor (DFS). In IEEE Workshop on Charge-Coupled Devices and Advanced Image Sensors. 214-217.
Eric Fossum, Jiaju Ma, Saleh Masoodian, Leo Anzagira, and Rachel Zizza. 2016. The Quanta Image Sensor: Every Photon Counts. Sensors 16, 8 (Aug. 2016), 1260.
Eric R. Fossum. 2011. The Quanta Image Sensor (QIS): Concepts and Challenges. In OSA Topical Mtg on Computational Optical Sensing and Imaging. Toronto, Canada.
Eric R. Fossum. 2013. Modeling the Performance of Single-Bit and Multi-Bit Quanta Image Sensors. IEEE Journal of the Electron Devices Society 1, 9 (Sept. 2013), 166-174.
Michaël Gharbi, Gaurav Chaurasia, Sylvain Paris, and Frédo Durand. 2016. Deep Joint Demosaicking and Denoising. ACM Transactions on Graphics 35, 6 (Nov. 2016), 1-12.
Abhiram Gnanasambandam, Omar Elgendy, Jiaju Ma, and Stanley H. Chan. 2019. Megapixel Photon-Counting Color Imaging Using Quanta Image Sensor. Optics Express 27, 12 (June 2019), 17298.
Clément Godard, Kevin Matzen, and Matt Uyttendaele. 2018. Deep Burst Denoising. In European Conference on Computer Vision (ECCV), Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss (Eds.). Springer International Publishing, Cham, 538-554.
Rafael C. Gonzalez and Richard E. Woods. 2006. Digital Image Processing (3rd Edition). Prentice-Hall, Inc., USA.
Istvan Gyongy, Tarek Al Abbas, Neale A W Dutton, and Robert K Henderson. 2017. Object Tracking and Reconstruction with a Quanta Image Sensor. In International Image Sensor Workshop (IISW). 5.
Istvan Gyongy, Neale Dutton, and Robert Henderson. 2018. Single-Photon Tracking for High-Speed Vision. Sensors 18, 2 (Jan. 2018), 323.
Samuel W. Hasinoff, Frédo Durand, and William T. Freeman. 2010. Noise-Optimal Capture for High Dynamic Range Photography. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). 553-560.
Samuel W. Hasinoff, Dillon Sharlet, Ryan Geiss, Andrew Adams, Jonathan T. Barron, Florian Kainz, Jiawen Chen, and Marc Levoy. 2016. Burst Photography for High Dynamic Range and Low-Light Imaging on Mobile Cameras. ACM Transactions on Graphics 35, 6 (Nov. 2016), 1-12.
Felix Heide, Steven Diamond, Matthias Nießner, Jonathan Ragan-Kelley, Wolfgang Heidrich, and Gordon Wetzstein. 2016. ProxImaL: Efficient Image Optimization Using Proximal Algorithms. ACM Transactions on Graphics 35, 4 (July 2016), 1-15.
Felix Heide, Karen Egiazarian, Jan Kautz, Kari Pulli, Markus Steinberger, Yun-Ta Tsai, Mushfiqur Rouf, Dawid Pająk, Dikpal Reddy, Orazio Gallo, Jing Liu, and Wolfgang Heidrich. 2014. FlexISP: A Flexible Camera Image Processing Framework. ACM Transactions on Graphics 33, 6 (Nov. 2014), 1-13.
Atul Ingle, Andreas Velten, and Mohit Gupta. 2019. High Flux Passive Imaging with Single-Photon Sensors. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 6760-6769.
Kiyotaka Iwabuchi, Tomohiro Yamazaki, and Takayuki Hamamoto. 2019. Iterative Image Reconstruction for Quanta Image Sensor by Using Variance-Based Motion Estimation. In International Image Sensor Workshop (IISW). 4.
S. Kavadias, B. Dierickx, D. Scheffer, A. Alaerts, D. Uwaerts, and J. Bogaerts. 2000. A Logarithmic Response CMOS Image Sensor with On-Chip Calibration. IEEE Journal of Solid-State Circuits 35, 8 (Aug. 2000), 1146-1152.
Orly Liba, Ryan Geiss, Samuel W. Hasinoff, Yael Pritch, Marc Levoy, Kiran Murthy, Yun-Ta Tsai, Tim Brooks, Tianfan Xue, Nikhil Karnad, Qiurui He, Jonathan T. Barron, and Dillon Sharlet. 2019. Handheld Mobile Photography in Very Low Light. ACM Transactions on Graphics 38, 6 (Nov. 2019), 1-16.
Ziwei Liu, Lu Yuan, Xiaoou Tang, Matt Uyttendaele, and Jian Sun. 2014. Fast Burst Images Denoising. ACM Transactions on Graphics 33, 6 (Nov. 2014), 1-9.
Jiaju Ma, Saleh Masoodian, Dakota A. Starkey, and Eric R. Fossum. 2017. Photon-Number-Resolving Megapixel Image Sensor at Room Temperature without Avalanche Gain. Optica 4, 12 (Dec. 2017), 1474.
Matteo Maggioni, Giacomo Boracchi, Alessandro Foi, and Karen Egiazarian. 2012. Video Denoising, Deblocking, and Enhancement Through Separable 4-D Nonlocal Spatiotemporal Transforms. IEEE Transactions on Image Processing 21, 9 (Sept. 2012), 3952-3966.
M. Malfait and D. Roose. 1997. Wavelet-Based Image Denoising Using a Markov Random Field a Priori Model. IEEE Transactions on Image Processing 6, 4 (April 1997), 549-565.
Ben Mildenhall, Jonathan T. Barron, Jiawen Chen, Dillon Sharlet, Ren Ng, and Robert Carroll. 2018. Burst Denoising with Kernel Prediction Networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Salt Lake City, UT, 2502-2510.
Kazuhiro Morimoto, Andrei Ardelean, Ming-Lo Wu, Arin Can Ulku, Ivan Michel Antolovic, Claudio Bruschini, and Edoardo Charbon. 2020. Megapixel time-gated SPAD image sensor for 2D and 3D imaging applications. Optica 7, 4 (Apr 2020), 346-354.
C. Niclass, A. Rochas, P.-A. Besse, and E. Charbon. 2005. Design and Characterization of a CMOS 3-D Image Sensor Based on Single Photon Avalanche Diodes. IEEE Journal of Solid-State Circuits 40, 9 (Sept. 2005), 1847-1854.
M. O'Toole, F. Heide, D. B. Lindell, K. Zang, S. Diamond, and G. Wetzstein. 2017. Reconstructing Transient Images from Single-Photon Sensors. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2289-2297.
Matthew O'Toole, David B. Lindell, and Gordon Wetzstein. 2018. Confocal non-line-of-sight imaging based on the light-cone transform. Nature 555 (Mar 2018), 338-341.
Sung Cheol Park, Min Kyu Park, and Moon Gi Kang. 2003. Super-Resolution Image Reconstruction: A Technical Overview. IEEE Signal Processing Magazine 20, 3 (May 2003), 21-36.
Juan Mata Pavia, Mario Scandini, Scott Lindner, Martin Wolf, and Edoardo Charbon. 2015. A 1×400 Backside-Illuminated SPAD Sensor With 49.7 Ps Resolution, 30 pJ/Sample TDCs Fabricated in 3D CMOS Technology for Near-Infrared Optical Tomography. IEEE Journal of Solid-State Circuits 50, 10 (Oct. 2015), 2406-2418.
Michael Pfeiffer and Thomas Pfeil. 2018. Deep Learning With Spiking Neurons: Opportunities and Challenges. Frontiers in Neuroscience 12 (Oct. 2018), 774.
Alexis Rochas. 2003. Single Photon Avalanche Diodes in CMOS Technology. Ph.D. Dissertation. EPFL.
Luciano Sbaiz, Feng Yang, Edoardo Charbon, Sabine Susstrunk, and Martin Vetterli. 2009. The Gigavision Camera. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, Taipei, Taiwan, 1093-1096.
Dongeek Shin, Feihu Xu, Dheera Venkatraman, Rudi Lussana, Federica Villa, Franco Zappa, Vivek K. Goyal, Franco N. C. Wong, and Jeffrey H. Shapiro. 2016. Photon-efficient imaging with a single-photon camera. Nature Communications 7 (Jun 2016), 12046.
Hiroyuki Takeda, Sina Farsiu, and Peyman Milanfar. 2007. Kernel Regression for Image Processing and Reconstruction. IEEE Transactions on Image Processing 16, 2 (Feb. 2007), 349-366.
Arin Can Ulku, Claudio Bruschini, Ivan Michel Antolovic, Yung Kuo, Rinat Ankri, Shimon Weiss, Xavier Michalet, and Edoardo Charbon. 2019. A 512×512 SPAD Image Sensor With Integrated Gating for Widefield FLIM. IEEE Journal of Selected Topics in Quantum Electronics 25, 1 (Jan. 2019), 1-12.
Sebastian Volz, Andres Bruhn, Levi Valgaerts, and Henning Zimmer. 2011. Modeling Temporal Coherence for Optical Flow. In IEEE International Conference on Computer Vision (ICCV). IEEE, Barcelona, Spain, 1116-1123.
Xiuling Wang, Winnifred Wong, and Richard Hornsey. 2006. A High Dynamic Range CMOS Image Sensor With In-pixel Light-to-Frequency Conversion. IEEE Transactions on Electron Devices 53, 12 (Dec. 2006), 2988-2992.
Joachim Weickert and Christoph Schnörr. 2001. Variational Optic Flow Computation with a Spatio-Temporal Smoothness Constraint. Journal of Mathematical Imaging and Vision 14, 3 (2001), 245-255.
Bartlomiej Wronski, Ignacio Garcia-Dorado, Manfred Ernst, Damien Kelly, Michael Krainin, Chia-Kai Liang, Marc Levoy, and Peyman Milanfar. 2019. Handheld Multi-Frame Super-Resolution. ACM Transactions on Graphics 38, 4 (July 2019), 1-18. arXiv:1905.03277
Feng Yang, Y. M. Lu, L. Sbaiz, and M. Vetterli. 2012. Bits From Photons: Oversampled Image Acquisition Using Binary Poisson Statistics. IEEE Transactions on Image Processing 21, 4 (April 2012), 1421-1436.
Feng Yang, Luciano Sbaiz, Edoardo Charbon, Sabine Susstrunk, and Martin Vetterli. 2009. Image Reconstruction in the Gigavision Camera. In IEEE International Conference on Computer Vision Workshops (ICCV Workshops). IEEE, Kyoto, 2212-2219.
C. Zhang, S. A. Lindner, I. M. Antolovic, J. M. Pavia, M. Wolf, and E. Charbon. 2018a. A 30-frames/s, 252x144 SPAD Flash LiDAR with 1728 Dual-Clock 48.8-ps TDCs, and Pixel-Wise Integrated Histogramming. IEEE Journal of Solid-State Circuits 54(4) (2018).
Kai Zhang, Wangmeng Zuo, and Lei Zhang. 2018b. FFDNet: Toward a Fast and Flexible Solution for CNN-Based Image Denoising. IEEE Transactions on Image Processing 27, 9 (Sept. 2018), 4608-4622.
Henning Zimmer, Andrés Bruhn, and Joachim Weickert. 2011. Optic Flow in Harmony. International Journal of Computer Vision (IJCV) 93, 3 (July 2011), 368-388.
Quanta Burst Photography: Supplementary Technical Report

Sizhuo Ma¹, Shantanu Gupta¹, Arin C. Ulku², Claudio Bruschini², Edoardo Charbon², and Mohit Gupta¹
¹University of Wisconsin-Madison, USA
²École polytechnique fédérale de Lausanne, Switzerland
[email protected], [email protected], {arin.ulku, claudio.bruschini, edoardo.charbon}@epfl.ch, [email protected]

In this document, we provide additional technical details, analysis and experimental results that are not included in the main paper for better presentation.

1 Technical Details for the Proposed Algorithm

1.1 Removal of Hot Pixels

The dark count rate (DCR) distribution on a real SPAD array is usually non-uniform in space. A few pixels which have a very high DCR will always show up as bright pixels in the block-sum image; these are usually called "hot pixels". Such pixels can be identified by taking a dark image and locating the pixels with high counts. To remove them from the reconstructed image, one naive method is to perform median filtering after merging. However, the existence of hot pixels interferes with the aligning process, especially in very dark scenes. Since hot pixels have very high intensity and do not move as the camera or scene moves, they bias the motion estimate towards zero motion, causing systematic alignment error. Another potential approach is to exclude the hot pixels from the data terms during alignment. This results in correct alignment; however, when the binary images are warped in the merging stage, the hot pixels will sweep along the estimated motion trajectory, causing a "hot stripe" in the merged image. Following the analysis above, it is necessary to correct the hot pixels in the binary frames. We adopt a simple approach by randomly assigning a hot pixel the binary value of one of its spatial neighbors. This approach essentially applies a spatial averaging filter at the binary frame level, which has been shown to remove most of the hot pixels in real experiments.

1.2 Choice of Block Size for Motion Estimation

Block size is an important parameter for achieving accurate frame alignment. If the block size is too small, each block has a low photon count (low SNR), which results in high alignment error. On the other hand, if the block size is too large, the block alignment is computed only at sparse timestamps, which is unable to capture high-frequency variations in the camera or scene motion. The optimal block size for a specific application depends on both the light level and the motion variations, and can be determined from prior knowledge of approximate scene flux levels and motion amounts. An interesting extension is to automatically determine the block size by performing light and motion metering. In the experiments, we use block sizes ranging from 100 to 500 frames.
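A minimal sketch of the per-frame hot-pixel correction from Sec. 1.1 (the calibration threshold and the dark-sequence statistics are our assumptions for illustration, not the authors' values):

import numpy as np

def correct_hot_pixels(frame, hot_mask, rng=np.random.default_rng()):
    # Replace each hot pixel with the binary value of a random 8-neighbor.
    out = frame.copy()
    H, W = frame.shape
    for y, x in zip(*np.nonzero(hot_mask)):
        dy, dx = rng.integers(-1, 2, size=2)
        while dy == 0 and dx == 0:           # skip the pixel itself
            dy, dx = rng.integers(-1, 2, size=2)
        out[y, x] = frame[np.clip(y + dy, 0, H - 1),
                          np.clip(x + dx, 0, W - 1)]
    return out

# Calibration: flag pixels firing far above the norm over a dark sequence.
dark = (np.random.rand(1000, 256, 512) < 0.001)
counts = dark.sum(axis=0)
hot_mask = counts > counts.mean() + 5 * counts.std()
clean = correct_hot_pixels(dark[0].astype(np.uint8), hot_mask)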
1.3 Choice of Block Size for Robust Merging

The block size used in the merging stage does not have to be the same as in the aligning stage. Once we estimate the fine-scale inter-frame motion field, the binary frames can be warped separately and grouped into blocks using a different block size. The choice of block size plays an important role in the merging stage as well. Too small a block size preserves the shot noise. Extremely large block sizes preserve alignment artifacts, since the frames within a block are simply added together. The optimal choice of block size again depends on the light level and the variation in scene/camera motion.

1.4 Choice of Patch Size

Patch size is another important factor for the image quality. Choosing a large patch size is likely to gather more photons and capture more features of the scene, which makes alignment easier. However, large patch sizes cannot correctly model nonrigid scene motion, resulting in motion artifacts. The choice of patch size is especially important for current SPAD sensors, whose spatial resolution is relatively low. We use 16x16 patches for most experiments, 8x8 for nonrigid scene motion, and 32x32 for dark scenes with global motion.

1.5 Super-Resolution

In this section we give details on the super-resolution algorithm. As mentioned in the main paper, after computing the fine-scale motion field, the binary frames are combined into warped block-sum images and filtered using the Wiener filter:

    \hat{S}_w^f(\omega) = \hat{S}_w^{ref}(\omega) + A_i(\omega) \left( \hat{S}_w^{aux,i}(\omega) - \hat{S}_w^{ref}(\omega) \right).    (1)

This step is used to reduce the misalignment artifacts, and plays a similar role as the point-wise robustness factor in [1]. As mentioned in the main paper, we found this step more robust, at the cost of computational complexity. Instead of summing the patches up to form a merged patch on the original-resolution grid as in the original merging algorithm, each patch is treated as a bag of samples and warped to a higher-resolution grid. The value at each pixel of the high-resolution grid is computed by combining all samples in a neighborhood using an anisotropic kernel:

    S_{SR}(x, y) = \frac{\sum_{i \in N} w_i \cdot S_i}{\sum_{i \in N} w_i},    (2)

where N is the set of all sample points in the neighborhood around pixel (x, y), and S_i is the photon count of the i-th sample point. w_i is the weight given by the anisotropic Gaussian kernel,

    w_i = \exp\left( -\frac{1}{2} (\mathbf{x}_i - \mathbf{x})^T \Omega^{-1} (\mathbf{x}_i - \mathbf{x}) \right),    (3)

where \mathbf{x} = (x, y) is the pixel location of interest and \mathbf{x}_i = (x_i, y_i) is the location of the sample point. The shape and size of the anisotropic kernel (encoded in the covariance matrix \Omega) is determined by an analysis of the local structure tensor of a guide image. The guide image can be either the reference block-sum image (for faster computation) or an original-resolution reconstructed image obtained by running the normal merging algorithm beforehand (for better quality). For a flat region, a larger kernel is used to gather more pixels for denoising. For an edge, the kernel is stretched along the edge to avoid over-smoothing and to mitigate the alignment error around the edges. For a corner or a local window with high variations, a small kernel is chosen to preserve the details (Fig. 6 in the main paper). The exact kernel design is very similar to the heuristics used in [1]. The only difference is that we set an upper bound on the anisotropy factor A, which determines the ratio of the major and minor axes of the elliptical kernel:

    A = 1 + \min\left( \sqrt{\lambda_1 / \lambda_2}, 5 \right),    (4)

where \lambda_1, \lambda_2 are the eigenvalues of the local structure tensor. This prevents the kernel from being elongated too much, which causes artifacts along the edges. The exact parameters we use for the images we capture with SwissSPAD2 cameras are: Ts = [8, 16] depending on light level, k_detail = 0.3, k_denoise = 1, D_th = 0.005, D_tr = 0.5, k_stretch = 1, k_shrink = 1.

Using small block sizes. Compared to original-resolution merging, super-resolution merging requires a larger number of blocks (smaller block size). This is because super-resolution benefits from a larger number of sample points with different sub-pixel offsets, which grows with the number of blocks. As a result, super-resolution merging may suffer from the noise problem due to small block size (see Sec. 6 in the main paper). Our approach to solving this problem is to replace the reference block image S_ref in Eq. 1 with the pre-reconstructed guide image, which has a much higher SNR. The noise estimate \sigma and the scaling factor c need to be adjusted correspondingly.
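To make Eqs. 2-4 concrete, here is a sketch of the per-pixel kernel combination (the structure-tensor estimation and the remaining tuning heuristics of [1] are omitted; the covariance matrix and the sample positions below are assumptions for the example):

import numpy as np

def kernel_weight(x_i, x, omega):
    # Anisotropic Gaussian weight of a sample at x_i for output pixel x (Eq. 3).
    d = np.asarray(x_i, float) - np.asarray(x, float)
    return np.exp(-0.5 * d @ np.linalg.solve(omega, d))

def anisotropy_factor(lam1, lam2):
    # Bounded ratio of the major to minor kernel axis (Eq. 4).
    return 1.0 + min(np.sqrt(lam1 / lam2), 5.0)

def merge_pixel(x, samples, values, omega):
    # Weighted average of neighborhood samples on the SR grid (Eq. 2).
    w = np.array([kernel_weight(s, x, omega) for s in samples])
    return float(w @ values / w.sum())

# Example: isotropic kernel, four nearby warped samples.
omega = np.eye(2) * 0.5
samples = [(0.2, 0.1), (-0.3, 0.4), (0.1, -0.2), (0.5, 0.5)]
values = np.array([3.0, 5.0, 4.0, 6.0])   # photon counts of the samples
v = merge_pixel((0.0, 0.0), samples, values, omega)

In a full implementation, omega would be built per output pixel from the guide image's structure tensor, stretched or shrunk according to the anisotropy factor.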
2 Further Discussion on When to Use Quanta Burst Photography

2.1 Derivation of Eq. 12

Here we give a derivation of Eq. 12 in the main paper, which gives the RMSE of the maximum likelihood estimator for quanta burst photography. Recall that the binary value B at a SPAD pixel follows a Bernoulli distribution:

    P\{B = 0\} = e^{-(\phi \tau \eta + r_q \tau)},  P\{B = 1\} = 1 - e^{-(\phi \tau \eta + r_q \tau)},    (5)

where \phi is the photon flux incident at the pixel, \tau is the exposure time, \eta is the quantum efficiency, and r_q is the dark count rate. The sum image is defined as the sum of all binary images:

    S(x, y) = \sum_{t=1}^{n_q} B_t(x, y).    (6)

Assuming no motion, all photons incident at (x, y) come from the same scene point, which means the B_t(x, y) are i.i.d. Bernoulli variables. Therefore S follows a binomial distribution. The likelihood function for the unknown parameter \phi given an observed number of photons S = s is

    f(\phi \mid s) = \binom{n_q}{s} \left( 1 - e^{-(\phi \tau \eta + r_q \tau)} \right)^{s} \left( e^{-(\phi \tau \eta + r_q \tau)} \right)^{n_q - s}.    (7)

The maximum likelihood estimate (MLE) is given by:

    \hat{\phi} = -\ln(1 - s/n_q) / (\tau \eta) - r_q / \eta.    (8)

The Fisher information can be computed as:

    I(\phi) = -\sum_{s=0}^{n_q} \frac{\partial^2}{\partial \phi^2} \log f(\phi \mid s) \, P\{S = s\}    (9)
            = \frac{n_q \tau_q^2 \eta_q^2}{e^{\phi \tau_q \eta_q + r_q \tau_q} - 1}.    (10)

For a sufficiently long sequence (n_q > 30), the variance of the MLE can be estimated using the Fisher information. Therefore,

    \mathrm{RMSE}(\hat{\phi}_q) = \sqrt{\mathrm{Var}[\hat{\phi}_q]} \approx \frac{1}{\sqrt{I(\phi)}} = \sqrt{\frac{e^{\phi \tau_q \eta_q + r_q \tau_q} - 1}{n_q \tau_q^2 \eta_q^2}},    (11)

which is consistent with the result in [2].
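The estimator and its Fisher-information error bound translate directly into code. A sketch with a Monte-Carlo check (all numerical parameter values are illustrative assumptions):

import numpy as np

def mle_flux(s, n_q, tau, eta, r_q):
    # Maximum likelihood photon flux from s detections in n_q frames (Eq. 8).
    s = np.clip(s, 0, n_q - 1)                  # keep the log finite
    return -np.log(1.0 - s / n_q) / (tau * eta) - r_q / eta

def rmse_mle(phi, n_q, tau, eta, r_q):
    # Approximate RMSE of the MLE, 1/sqrt(Fisher information) (Eq. 11).
    return np.sqrt((np.exp(phi * tau * eta + r_q * tau) - 1.0)
                   / (n_q * tau**2 * eta**2))

# Monte-Carlo check of the bound, using the measurement model of Eq. 5:
rng = np.random.default_rng(0)
phi, tau, eta, r_q, n_q = 500.0, 1e-5, 0.25, 10.0, 10000
p1 = 1.0 - np.exp(-(phi * tau * eta + r_q * tau))
s = rng.binomial(n_q, p1, size=10000)
est = mle_flux(s, n_q, tau, eta, r_q)
empirical = np.sqrt(np.mean((est - phi) ** 2))
predicted = rmse_mle(phi, n_q, tau, eta, r_q)   # should roughly match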
2.2 Auto-Exposure Strategy for Signal-to-Noise Ratio (SNR) Analysis

Here we give the details of the strategy we use to determine the total exposure time and number of frames for the SNR analysis in Sec. 7 of the main paper. We only consider a single pixel for this analysis. For a fair comparison, the total exposure time (the sum of the exposure times of all frames in the sequence) for both systems is assumed to be the same, and is determined by

    T = \min(c_t / \phi, \; m_{max} / v),    (12)

where c_t is a predetermined target count of photons, \phi is the photon flux, m_{max} is the maximum tolerable total motion in pixels, and v is the apparent speed of the pixel in pixels/s (we assume the speed is constant during the exposure). This strategy can be interpreted as attempting to choose an exposure time which allows us to record a target number c_t of photons in the burst, while making sure that the total motion over the exposure does not exceed the set threshold m_{max}. In case the motion is too fast, the exposure time (and the number of photons recorded) is reduced proportionally so as to restrict it to m_{max}. This is because for too large an apparent motion, the perfect alignment assumption usually does not hold, due to brightness change, viewpoint change, or moving beyond the field of view. In practice, even if we take a long burst of images in this case, later images will not contribute to the merging due to matching difficulties.

In general, the number of frames n_c for conventional burst photography is determined by balancing the motion blur and SNR of the resulting image: choosing a larger n_c will mitigate motion blur but also decrease SNR due to read noise. It is hard to compare the SNR of the methods when a single (conventional) frame contains motion blur. Therefore, we choose n_c to be the minimum number that keeps the motion blur for a single frame below a certain threshold m_f (e.g., 1 pixel) and ignore the effects of motion blur when computing SNR:

    n_c = \frac{v T}{m_f},    (13)

where m_f is the maximum tolerable motion per frame. This is also similar to the auto-exposure strategy used in [3]. In the analysis, we choose c_t = 1000, m_{max} = 60 (assuming a 512x256 camera), and m_f = 1. For quanta burst photography, we always choose the maximum reachable frame rate, since increasing the number of frames will not reduce SNR:

    n_q = \frac{T}{\tau_q},    (14)

where \tau_q is the minimum frame time, which is determined by the hardware.

2.3 Dynamic Range Analysis

In this section we give a theoretical analysis of the dynamic range of both conventional and quanta burst photography. We define the dynamic range as the ratio between the maximum measurable photon flux and the minimum measurable photon flux:

    \mathrm{DR} = 20 \log_{10} \frac{\phi_{max}}{\phi_{min}},    (15)

where \phi_{max} is defined as the highest photon flux before saturation. For conventional sensors, this is the case when the expected number of detected photons for each frame is equal to (Full Well Capacity − 1). For SPADs, this corresponds to the detection of n_q − 1 photons in a total of n_q frames, i.e., S = n_q − 1 in Eq. 8. \phi_{min} is defined as the lowest photon flux for which the SNR is above a certain threshold. Here we choose the threshold to be 1 (0 dB), which is consistent with previous works on SPADs [4, 5] and the common definition for conventional sensors.

Fig. Supp-1 shows the dynamic range for conventional and quanta burst photography for different exposure times. The curves are plotted for a few typical frame rates for both image sensors. Quanta burst photography tends to perform worse for a short exposure time due to the low number of frames (low full well capacity), but grows faster than conventional burst photography as the exposure time increases. For example, 100 kfps quanta burst photography performs better than 1 kfps conventional burst photography as long as the exposure time is longer than 0.04 s.
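A sketch of this exposure selection (Eqs. 12-14; the constants match those stated above, and the SPAD frame time is taken as the SwissSPAD2 maximum frame rate for illustration):

import numpy as np

def choose_exposure(phi, v, c_t=1000.0, m_max=60.0, m_f=1.0, tau_q=1 / 96800):
    # Pick the total exposure and frame counts for both modalities.
    T = min(c_t / phi, m_max / v)             # photon target vs. motion limit (Eq. 12)
    n_c = max(1, int(np.ceil(v * T / m_f)))   # conventional: cap per-frame blur (Eq. 13)
    n_q = max(1, int(T / tau_q))              # SPAD: maximum reachable frame rate (Eq. 14)
    return T, n_c, n_q

T, n_c, n_q = choose_exposure(phi=1000.0, v=100.0)   # motion-limited: T = 0.6 s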
[Fig. Supp-1 plot; axes: Exposure Time (s), 0.01 to 100, vs. Dynamic Range (dB), 50 to 140; curves: Conventional 10fps, Conventional 100fps, Conventional 1kfps, Quanta 10kfps, Quanta 25kfps, Quanta 100kfps]
Fig. Supp-1: DR analysis. We plot the theoretical dynamic range of conventional burst photography and quanta burst photography as a function of exposure time. For both methods, we choose three typical frame rates. The dynamic range of quanta burst photography is lower due to the low number of frames but grows very fast as the exposure time increases.

3 Results

3.1 Simulation Results

The simulated images in the main paper and supplementary material are rendered using POV-Ray. Code is adapted from Jaime Vives Piqueres' Lightsys demo (http://www.ignorancia.org/index.php/technical/lightsys/).

Comparison of conventional burst photography and quanta burst photography. Fig. Supp-2 shows the simulation results for three different motion speeds. According to the strategy in Sec. 2, when the apparent speed gets faster, the total exposure time is limited by the maximum tolerable amount of motion to avoid appearance changes due to significant viewpoint change, which results in a smaller number of incoming photons. Similar to Fig. 8 in the main paper, the quality of the conventional burst result degrades faster than the quanta burst result. The results of these two sets of simulations match the theoretical analysis of SNR in Sec. 7 of the main paper: quanta burst photography performs better in low-light and fast-moving scenarios.

[Fig. Supp-2 image; columns: Conventional Single, Conventional Burst, Quanta Burst, Conventional Burst (Denoised), Quanta Burst (Denoised)]
Fig. Supp-2: Simulation results for different camera moving speeds. We simulate the indoor scene with three different camera moving speeds. The camera moves linearly (with perturbations) at different speeds. (Top) When the camera motion is slow, the exposure time is chosen to meet a target number of total collected photons (1000), in which case both methods generate high-quality images. The quanta burst photography gives slightly worse results due to higher dark current noise. (Middle) As the camera speed increases, the total exposure time is limited by the maximum tolerable apparent motion and a smaller number of photons are collected. As a result, the performance of both methods deteriorates. (Bottom) In the extremely fast case, quanta burst photography can still recover the overall structure of the objects, while conventional burst photography is completely dominated by noise.

Performance for different photon detection efficiencies (PDE). Fig. Supp-3 shows the reconstruction results for different assumed PDEs of the single-photon camera, which correspond to the current specification without microlenses, with microlenses, double the fill factor of the current specification, and double fill factor plus double PDP compared to the current specification. PDE is an essential factor that determines the final image quality. We expect the PDE of SPAD cameras to keep improving due to improving fabrication processes.

Comparison of jots and SPADs. Fig. Supp-4 shows the comparison between jots- and SPADs-based quanta burst photography. The proposed quanta burst photography is adapted to single-bit and multi-bit jots. The input to the align process is not binary images but spatially downsampled versions of single-bit and multi-bit images (using a box filter, normalized to 0-1). The rest of the pipeline still works with this data format. After merging, a linear response function is applied to recover the intensities for multi-bit jots [6]. As mentioned in the main paper, because of the limited spatial and temporal resolution, current jots perform worse than SPADs. The projected single-bit and multi-bit jots are not able to remove the motion blur for extremely fast motion. For slow motion, they are able to generate sharper images than SPADs thanks to higher spatial resolution. Multi-bit jots generate slightly blurred images due to their lower frame rate.

Fig. Supp-5 shows the comparison between jots and SPADs in extremely low and high lighting conditions. In the low light condition, single-bit jots images are noisy because the read noise, albeit deep sub-electron, makes the pixels flip between 0 and 1. In such low lighting, most pixels receive 0 photons during the exposure. Therefore, more pixels are flipped from 0 to 1, resulting in a whitened, noisy image.
The result cannot be improved by setting a threshold larger than one photon, since few pixels receive more than 1 photon. Multi-bit jots generate a better result because the signal is stronger relative to the read noise, due to the longer exposure and higher full well capacity. SPADs contain the least noise since there is no read noise. In the high light condition, 4-bit jots saturate more easily than 1-bit jots. This is because single-bit jots have a nonlinear response curve. Increasing the number of bits will decrease the overexposure latitude and result in a more linear response curve [7]. It is not clear whether such nonlinearity can be used to extend the dynamic range for multi-bit reconstruction, which is also likely to be limited by the non-uniformity of the jots. Here we follow the practice in [6] and use a linear response function for reconstructing multi-bit images. Notice that all the analysis above assumes bandwidth is the bottleneck for all types of sensors. In practice, frame rate and spatial resolution may be constrained by other factors in chip design and manufacture. The analysis is based on the current specifications of jots and SPADs. In the future, jots may be able to achieve read noise lower than 0.15 e−, which will result in improved dynamic range.
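The read-noise bit-flip effect described above can be reproduced with a toy pixel model (a sketch with assumed noise levels, not calibrated sensor parameters; dark counts are omitted):

import numpy as np

def jot_bit(mean_photons, read_noise_e=0.25, threshold=0.5,
            rng=np.random.default_rng()):
    # Single-bit jot: Poisson photons plus Gaussian read noise, thresholded.
    e = rng.poisson(mean_photons) + rng.normal(0.0, read_noise_e)
    return int(e > threshold)

def spad_bit(mean_photons, rng=np.random.default_rng()):
    # SPAD: Bernoulli detection with no read noise.
    return int(rng.random() < 1.0 - np.exp(-mean_photons))

# In very low light (0.01 photons/frame), read noise flips many jot bits
# from 0 to 1, while the SPAD's "on" rate tracks the true photon rate.
rng = np.random.default_rng(1)
jot_rate = np.mean([jot_bit(0.01, rng=rng) for _ in range(50000)])
spad_rate = np.mean([spad_bit(0.01, rng=rng) for _ in range(50000)])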
[Fig. Supp-3 image; panels: PDE=4.6%, PDE=23%, PDE=46%, PDE=92%]
Fig. Supp-3: Performance for different PDEs. We show the reconstruction results for the same scene under the same camera motion for different PDEs. The figure titles show the PDE of the green channel, which correspond to the current specification without microlenses, the current specification with microlenses, double fill factor, and double fill factor plus double PDP. PDE is an essential factor of the final image quality, as shown in the close-ups.

3.2 Experimental Results

Performance for different camera moving speeds. Fig. Supp-6 shows the performance of the proposed method for different camera moving speeds. As in Fig. 11 of the main paper, the conventional images are simulated by reconstructing intensities from binary frames and then adding read noise and quantization error. As the camera moves faster, the sensors collect a lower number of photons, and the results for both methods degenerate. In the fastest scenario, the conventional camera captures images with either significant blur or low SNR, while quanta burst photography is able to resolve the motion and achieve an acceptable SNR.

Comparison of single-photon imaging denoising algorithms. In this paper we focus on combining information from all other auxiliary frames in a sequence to help denoise the reference frame. After merging all frames into a single sum image, it is still possible to apply a single-image denoising and reconstruction algorithm, using spatial information to further improve the SNR of the image. Fig. Supp-7 (right) shows the results of applying two denoising algorithms after burst merging: BM3D and total variation (TV). BM3D is applied as a post-processing step after the Anscombe transform, as noted in the main paper. Total variation is formulated as a joint reconstruction-denoising optimization problem [8]:

    \min_\phi \; -\sum_{i \in \Omega} \log f(\phi_i \mid s_i) + \lambda_{tv} \| D\phi \|_1,    (16)

where \phi is a vector representation of the photon flux at each pixel, \Omega is the image domain, f(\phi_i \mid s_i) is the likelihood function defined in Eq. 7, D is the finite difference operator that is used to compute the gradients, and \lambda_{tv} is a parameter used to control the amount of spatial smoothing. The two sequences are temporally subsampled from the original sequences in Fig. 12 in the main paper; they contain only 200 binary images and therefore the results are much noisier. We notice that in general BM3D performs better than TV. In Fig. Supp-7 (Scene 1, Burst), TV is not able to preserve the contrast in the region indicated by the red rectangle. In (Scene 2, Burst), the darker region in the scene is noisier with TV than with BM3D. We also compare the result of the proposed quanta burst photography with directly denoising a simple average of the binary sequence, as is done in previous papers (without compensating for motion). Since the scene is moving, the naive average results either contain heavy motion blur (long sequence) or a lot of noise (short sequence). By comparing the naive result on the short sequence with the burst result, it is clear that using temporal information for denoising helps remove noise while keeping the spatial details of the image.
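The TV objective in Eq. 16 can be written down directly. A sketch using a smoothed TV term and an off-the-shelf optimizer (this is not the solver of [8]; the smoothing epsilon, the parameter values and the tiny problem size are assumptions for the example):

import numpy as np
from scipy.optimize import minimize

def tv_objective(phi_flat, s, n_q, tau, eta, r_q, lam, shape, eps=1e-3):
    # Negative binomial log-likelihood plus (smoothed) total variation (Eq. 16).
    phi = phi_flat.reshape(shape)
    x = phi * tau * eta + r_q * tau                 # per-frame exponent (Eq. 5)
    p1 = 1.0 - np.exp(-x)                           # P(B = 1)
    nll = -(s * np.log(p1 + 1e-12) + (n_q - s) * (-x)).sum()
    gx = np.diff(phi, axis=1)
    gy = np.diff(phi, axis=0)
    tv = np.sqrt(gx**2 + eps).sum() + np.sqrt(gy**2 + eps).sum()
    return nll + lam * tv

# Tiny example: reconstruct an 8x8 flux map from its binary sum image.
rng = np.random.default_rng(2)
n_q, tau, eta, r_q = 2000, 1e-5, 0.25, 10.0
phi_true = np.full((8, 8), 2000.0); phi_true[:, 4:] = 6000.0
p1 = 1.0 - np.exp(-(phi_true * tau * eta + r_q * tau))
s = rng.binomial(n_q, p1)
res = minimize(tv_objective, x0=np.full(64, 4000.0),
               args=(s, n_q, tau, eta, r_q, 1e-3, (8, 8)),
               method="L-BFGS-B", bounds=[(1.0, None)] * 64)
phi_hat = res.x.reshape(8, 8)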
[Fig. Supp-4 image; panels: (a) No Alignment, (b) Single-Bit Jots (Current), (c) Single-Bit Jots (Projected), (d) Multi-Bit Jots (Projected), (e) SPADs; rows: Fast Motion, Slow Motion]
Fig. Supp-4: Comparison of jots and SPADs under different motion speeds. Current implementations of jots perform worse than SPADs for both fast and slow motion. We simulate projected jots which are assumed to have the same bandwidth as SPADs and work in both single-bit and 4-bit mode. For fast motion, temporally-supersampled SPADs are able to resolve the motion blur and give a sharper image. For slow motion, spatially-supersampled jots are able to reconstruct better image details. Multi-bit jots generate a slightly blurred image due to the lower frame rate. We expect SPADs and jots to complement each other and work for different motion ranges.

Effects of frame-level flow interpolation. One of the main technical contributions of the proposed quanta burst photography method is that the alignment is performed on aggregated block sum images and is then interpolated to obtain frame-level patch flow, which is later used for merging. Fig. Supp-8 shows how the frame-level patch flow interpolation helps resolve motion blur. If we divide the 2000-frame sequence into 100-frame blocks ("coarser blocks", 20 blocks in total) and do not interpolate the flow within the block (i.e., merge the block sum image directly), the result contains noticeable motion blur. The motion blur can be resolved by dividing into finer blocks (20 frames per block × 100 blocks), but this results in a noisier image. This is because more noise is preserved when a smaller block size is used, as discussed in Sec. 6 in the main paper. Frame-level flow interpolation (interpolated from the coarse block division) is able to remove the motion blur while not adding extra noise.

Resolving scene motion. Fig. Supp-9 shows another sequence with scene motion: a tennis ball is dropped vertically. Naive averaging of the binary sequences results in either motion blur or lots of shot noise. By properly aligning the image patches that constitute the tennis ball, the proposed method is able to generate a high-SNR image without motion blur.

Indoor scene with natural lighting. Fig. Supp-10 shows another sequence of an indoor scene with natural lighting. The intense camera motion between the frames is correctly resolved and a clear, sharp image is generated.
[Fig. Supp-5 image; panels: (a) No Alignment, (b) Single-Bit Jots (Projected), (c) Multi-Bit Jots (Projected), (d) SPADs; rows: Low Light, High Light]
Fig. Supp-5: Comparison of jots and SPADs under different lighting conditions. In a dark environment, single-bit jots contain significant noise. Multi-bit jots contain less noise due to their longer exposure time and more discrete light levels. SPADs contain the least noise since there is no read noise. In a bright scene, single-bit jots are near saturation. Multi-bit jots are saturated and the image appears washed out. SPADs are able to reconstruct the high flux scene points.

References

[1] B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, "Handheld Multi-Frame Super-Resolution," ACM Transactions on Graphics, vol. 38, pp. 1-18, July 2019.
[2] A. Ingle, A. Velten, and M. Gupta, "High Flux Passive Imaging with Single-Photon Sensors," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6760-6769, 2019.
[3] O. Liba, R. Geiss, S. W. Hasinoff, Y. Pritch, M. Levoy, K. Murthy, Y.-T. Tsai, T. Brooks, T. Xue, N. Karnad, Q. He, J. T. Barron, and D. Sharlet, "Handheld mobile photography in very low light," ACM Transactions on Graphics, vol. 38, pp. 1-16, Nov. 2019.
[4] M. Zarghami, L. Gasparini, M. Perenzoni, and L. Pancheri, "High Dynamic Range Imaging with TDC-Based CMOS SPAD Arrays," Instruments, vol. 3, p. 38, Aug. 2019.
[5] F. Zappa, S. Tisa, A. Tosi, and S. Cova, "Principles and features of single-photon avalanche diode arrays," Sensors and Actuators A: Physical, vol. 140, pp. 103-112, Oct. 2007.
[6] A. Gnanasambandam, O. Elgendy, J. Ma, and S. H. Chan, "Megapixel photon-counting color imaging using quanta image sensor," Optics Express, vol. 27, p. 17298, June 2019.
[7] E. R. Fossum, "Modeling the Performance of Single-Bit and Multi-Bit Quanta Image Sensors," IEEE Journal of the Electron Devices Society, vol. 1, pp. 166-174, Sept. 2013.
[8] S. H. Chan and Y. M. Lu, "Efficient image reconstruction for gigapixel quantum image sensors," in IEEE Global Conference on Signal and Information Processing (GlobalSIP), (Atlanta, GA, USA), pp. 312-316, IEEE, Dec. 2014.

[Fig. Supp-6 image; columns: Conventional Single, Conventional Burst, Quanta Burst; rows: Slow Motion, Medium Motion, Fast Motion]
Fig. Supp-6: Performance for different camera moving speeds. We capture three binary sequences for the same scene with the camera moving at different speeds. For fast motion, conventional cameras generate either a heavily blurred image or an image with significant noise, while quanta burst photography can reconstruct a blur-free image with much lower noise.

[Fig. Supp-7 image; columns: Naive Average (Long Exposure), Naive Average (Short Exposure), Quanta Burst; rows: No Denoising, BM3D, TV; scenes: Scene 1, Scene 2]
Fig. Supp-7: Comparison of denoising algorithms. (Left) Naive average reconstruction without motion compensation on a long sequence (200 images). Results contain severe motion blur. (Center) Naive average reconstruction without motion compensation on a short sequence (20 images). Results are sharp but contain a lot of noise. Applying denoising algorithms helps reduce noise but also removes high-frequency image details. (Right) Burst align and merge results on 200 images. Results are sharp and less noisy. Applying denoising algorithms further reduces noise. BM3D performs better than TV, as TV reduces intensity contrast in bright regions (Scene 1) and does not reduce noise well in dark regions (Scene 2).

[Fig. Supp-8 image; panels: No Alignment, Frame-Level Alignment, Block-Level Alignment (Finer Blocks), Block-Level Alignment (Coarser Blocks)]
Fig. Supp-8: Effects of frame-level flow interpolation. A short sequence that contains 2000 binary frames, which is divided into 100-frame blocks (coarser blocks) and 20-frame blocks (finer blocks). Results from block-level alignment either contain motion blur (coarser blocks) or more noise (finer blocks), while the interpolated frame-level alignment is able to remove motion blur without increasing the amount of noise.
[Fig. Supp-9 panels: Scene; Naive Averaging (Long Sequence); Naive Averaging (Short Sequence); Our Result.]

Fig. Supp-9: Resolving scene motion. We capture a binary image sequence where a tennis ball is dropped vertically. Quanta burst photography is able to align the images to generate a blur-free image with high SNR.

[Fig. Supp-10 panels: Ground Truth (DSLR, Tripod); Naive Averaging; Our Result.]

Fig. Supp-10: Indoor scene under natural lighting. The binary frames are correctly aligned to each other despite intense camera motion, resulting in a clear, sharp image.
ai_researcher
5
Goal_Driven_Discovery_of_Distributional_Differences_via_Language_Descriptions.pdf
arXiv:2302.14233v2 [cs.CL] 25 Oct 2023

Goal Driven Discovery of Distributional Differences via Language Descriptions

Ruiqi Zhong∗, Peter Zhang, Steve Li, Jinwoo Ahn, Dan Klein, Jacob Steinhardt

Abstract

Exploring large corpora can generate useful discoveries but is time-consuming for humans. We formulate a new task, D5, that automatically discovers differences between two large corpora in a goal-driven way. The task input is a problem comprising a user-specified exploration goal ("comparing the side effects of drug A and drug B") and a corpus pair (collections of patients' self-reported reactions after taking each drug). The output is a goal-relevant description (discovery) of how these corpora differ (patients taking drug A "mention feelings of paranoia" more often). We build a D5 system, and to quantitatively evaluate its performance, we 1) build a diagnostic benchmark, SYND5, to test whether it can recover known differences between two synthetic corpora, and 2) contribute a meta-dataset, OPEND5, aggregating 675 open-ended problems ranging across business, social sciences, humanities, machine learning, and health. With both synthetic and real datasets, we confirm that language models can leverage user-specified goals to propose more relevant candidate discoveries, and they sometimes produce discoveries previously unknown to the authors, including demographic differences in discussion topics, political stances in speech, insights in commercial reviews, and error patterns in NLP models. Finally, we discuss the limitations of our D5 system, which discovers correlation rather than causation and potentially reinforces biases when misused; therefore, practitioners should treat the outputs of our system with caution.

1 Introduction

Exploring large corpora and generating discoveries from them can be ad hoc and laborious. For example, to compare the side effects of drug A and drug B, doctors might inspect two large corpora of patients' self-reported reactions after taking each drug; based on ad hoc insights, they hypothesize that patients taking drug A more often "mention feelings of paranoia", and then validate this hypothesis by laboriously inspecting the two corpora.

Since machines can automatically process large amounts of text, we might hope for ML systems to facilitate exploratory analyses like the one above. However, an ML task requires a unified input-output space and evaluation metric so that it can be automated, benchmarked, learned, and analyzed. To this end, we formalize one type of exploratory analysis problem as a natural language generation task: goal driven discovery of differences between text distributions via language descriptions (D5). As shown in Figure 1, the input to the D5 task is a "problem" comprising a description of a user-specified exploration goal (understanding side effects) and a corpus pair (text samples from the distributions of self-reported reactions after taking each drug). The output is a "discovery" represented as a natural language predicate ("mentions feelings of paranoia"). We evaluate a discovery with two criteria (Section 3): (1) validity: it should describe a true difference (Zhong et al., 2022); and (2) relevance to the goal (McGarry, 2005).

∗University of California, Berkeley, EECS Department. Email: [email protected]

37th Conference on Neural Information Processing Systems (NeurIPS 2023).
Figure 1: Each problem in OPEND5 contains 1) a corpus pair, which has ∼17K samples on average and is partitioned into two halves called the "exploration split" and "validation split", and 2) a natural language description of the exploration goal, which also contains information about how the corpus pair was collected. A D5 system takes the goal and the exploration split as inputs and generates valid and relevant discoveries in natural language as outputs. The underlined texts in the exploration goal vary across problems, while the rest are templates.

Since D5 is open-ended and aims at discovering unknowns, the most popular benchmark practice—comparing system-generated outputs with human-written references on a test set—is infeasible. We therefore design two evaluation strategies.

• Diagnostic: we synthesized a dataset of D5 problems with known solutions, SYND5, to diagnose whether a D5 system can recover known differences between two synthetic corpora. This strategy is cheap and automated but might not reflect user utility in real applications.

• Open-ended: we collected a dataset, OPEND5, by aggregating 675 open-ended D5 problems ranging across business, social sciences, humanities, health, and machine learning (Figure 2), comprising 4.4 million text samples in total across problem corpora. We then manually evaluated a subset of the output discoveries. This strategy is subjective and expensive, but useful for obtaining qualitative insights on more realistic applications.

These two strategies allow us to quantitatively evaluate and compare D5 systems. For example, we compared 1) the system from Zhong et al. (2022), designed to describe corpus-level differences without goals, and 2) a goal-conditioned variant that we develop in Section 4. We found language models successfully use the specified goal: the goal-conditioned variant is correct 12% more often on SYND5, and it produces relevant candidate discoveries 31% more often on OPEND5.

We envision OPEND5 to be a growing, diverse repository of open-ended D5 problems. They will not only help us evaluate D5 systems more reliably, but also allow the following operations:

Facilitate exploratory analysis. Every time we build a better D5 system, we can apply it to a repository of open problems and send the discoveries to researchers who posed them. We show this paradigm is plausible by using our system to automatically produce useful discoveries on OPEND5 (Section 6.1), including insights from commercial reviews, temporal and demographic differences in discussion topics, political stances and stereotypes in speeches, differences in lyric styles, and error patterns in NLP models. We anticipate future systems to produce more discoveries.

Analyze the limitations of our evaluation. Using concrete examples from OPEND5, we show that our current evaluation metrics do not encourage diverse findings, do not always produce causal conclusions, and cannot evaluate discoveries involving heavy expert knowledge (Section 6.2). More D5 problems can help us identify more limitations, which inform areas for future improvement.

Train better D5 systems. Like other ML tasks, we can train a system once we have a dataset. We describe a self-supervised learning algorithm that uses a repository of problems (without reference solutions) to train LMs to propose more valid hypotheses (Section 4.3). As a proof-of-concept, we show that it can make LMs better describe the differences between small groups of text samples.
[Figure 1 content: Exploration Goal — "The original dataset includes patient's self-reported reactions after taking a drug. The two corpora are generated based on what drug the patient has taken. Samples from Corpus A include self-reported reactions after taking drug A, while samples from Corpus B include self-reported reactions after taking drug B. I am a doctor. My goal is to understand the side effects of drug A." Corpus Pair — exploration and validation splits of Corpus A (e.g., "Coughing for two months", "Felt sleepy today", "Slowly recovering.") and Corpus B (e.g., "Even little sound drives me crazy", "Feeling too worried to focus", "My family complains I'm too irritated"), each with ~4K more samples omitted for brevity. Output Discovery — Corpus A has more samples that "mention feelings of paranoia".]

To conclude, we show that D5 can be quantitatively evaluated, automated, analyzed, and learned. Like other ML tasks, it would benefit from a more diverse, authentic, and larger dataset. We hope future works can gather feedback from domain experts and curate an ever-larger dataset of D5 problems, thus accelerating exploratory analyses and facilitating scientific discoveries.²

2 Datasets: SYND5 and OPEND5

We first introduce how each input problem is formatted. Then we discuss 1) how we synthesized SYND5, which is used for automatic diagnostic evaluation, and 2) how we collected OPEND5, which is used to investigate the practical value of D5 systems in open-ended applications.

2.1 Task Format

Each D5 problem is represented by a corpus pair (Corpus A/B) and a description of the exploration goal. For example, Corpus A/B might be self-reported reactions after taking drug A/B, and the goal description would be "comparing the side effects of drug A and drug B". The desired output is valid and relevant discoveries in the form of natural language predicates (Figure 1), e.g. Corpus A has more samples that "mentions feelings of paranoia".

2.2 SYND5, a Diagnostic Benchmark with Reference Solutions and an Automatic Metric

To automatically diagnose a D5 system, we synthesized SYND5, a dataset of D5 problems with reference solutions. To synthesize Corpus A and Corpus B for each input problem, we used a language model (LM) to generate two corpora that simultaneously differ on two dimensions, one of which is goal-relevant and one of which is a distractor. For instance, suppose the goal is to "understand how Corpus A differs from Corpus B in terms of topic". Then we would synthesize an example where Corpus A is more sports-related while B is more art-related (goal-relevant: varying topic), while additionally Corpus A is in English while B is in French (distractor: varying language). The reference solution is the difference on the goal-relevant dimension, e.g. "is sports-related".

In more detail, to synthesize an example in SYND5, we first picked one goal-relevant and one distractor dimension from the set {topic, genre, language}, and sampled a value for each corpus and dimension (e.g. Corpus A: [sports, English]; Corpus B: [art, French]). We then synthesized Corpus A/B such that all its samples are in English/French (i.e. completely different on the distractor dimension) while V percent of them are sports-related/art-related, where we varied V from 0.6 to 1. Since the distractor difference is more salient, SYND5 penalizes D5 systems that ignore the goal and output the incorrect distractor difference "is in English".
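The sampling scheme above can be sketched in a few lines of Python; the helper below is hypothetical (the real pipeline, including the LM calls that render each attribute combination into an actual text sample, is detailed in Appendix 9):

import random

def sample_synd5_problem(dimensions, v=0.8, n=10):
    """Sample the attribute labels for one SYND5 problem.

    dimensions: dict mapping each dimension name to its possible values,
        e.g. {"topic": [...], "genre": [...], "language": [...]}.
    Returns per-sample (target, distractor) attribute pairs for both
    corpora plus the reference solution; an LM then renders each pair
    into a text sample.
    """
    target_dim, distractor_dim = random.sample(sorted(dimensions), 2)
    target_a, target_b = random.sample(dimensions[target_dim], 2)
    distractor_a, distractor_b = random.sample(dimensions[distractor_dim], 2)

    def make_corpus(main_value, other_value, distractor_value):
        # A V fraction of samples carry the goal-relevant value, while the
        # distractor value is shared by every sample, so the distractor
        # difference is the more salient (but incorrect) answer.
        k = round(v * n)
        values = [main_value] * k + [other_value] * (n - k)
        random.shuffle(values)
        return [(value, distractor_value) for value in values]

    corpus_a = make_corpus(target_a, target_b, distractor_a)
    corpus_b = make_corpus(target_b, target_a, distractor_b)
    reference = f"is {target_a}-related"  # e.g., "is sports-related"
    return corpus_a, corpus_b, reference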
We synthesized 300 problems in total to create SYND5; see Appendix 9 for a detailed description of the pipeline. To compute a D5 system's accuracy, we prompted Claude-v1.3 (Bai et al., 2022b) to judge how often the output discovery is semantically equivalent to the reference. We construct the prompt by using 6 pairs of predicates with the labels "equivalent", "similar", or "irrelevant" as few-shot examples, and ask Claude-v1.3 to judge whether the output discovery and the reference are "equivalent". As a result, we can automatically diagnose a D5 system. See Appendix 12 for the equivalence-judgement prompt and Appendix 10 for two robustness checks, which (a) consider "similar" discoveries to be correct as well, and (b) use other LMs for equivalence judgement.

² Our code is released at https://github.com/ruiqi-zhong/D5 and our code to download OPEND5 is released at https://github.com/petezh/OpenD5. Given the limitations of our system, practitioners should interpret its outputs with caution and not use it to fully automate scientific discoveries.
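A minimal sketch of this accuracy computation follows; call_lm is a hypothetical stand-in for the LM judge API (Claude-v1.3 in the paper), and the few-shot block abbreviates the actual 6-pair prompt given in Appendix 12:

FEW_SHOT = (
    # The real prompt contains 6 labeled predicate pairs, with labels
    # "equivalent", "similar", or "irrelevant"; one is shown for shape.
    'Predicate 1: "mentions sports". Predicate 2: "is about athletics". '
    "Label: equivalent\n"
)

def judge_equivalent(call_lm, discovery, reference):
    """Ask the LM judge whether the output discovery matches the reference."""
    prompt = (FEW_SHOT +
              f'Predicate 1: "{discovery}". Predicate 2: "{reference}". Label:')
    return call_lm(prompt).strip().lower().startswith("equivalent")

def synd5_accuracy(call_lm, outputs, references):
    """Fraction of SYND5 problems whose output is judged equivalent."""
    hits = sum(judge_equivalent(call_lm, out, ref)
               for out, ref in zip(outputs, references))
    return hits / len(references)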
2.3 OPEND5, a Realistic Open-Ended Dataset without Reference Solutions

To evaluate a D5 system's utility under realistic applications, we also gathered OPEND5, a realistic dataset of 675 open-ended D5 problems. These problems range across business, social sciences, humanities, health, and machine learning; see Figure 2 for a few examples. To build OPEND5, two of the authors performed an extensive literature review on problems that could potentially benefit from our system, e.g., reading survey papers (Nguyen et al., 2020) and courses on computational social sciences, and skimming through the ACL proceedings from the past decade and datasets from Kaggle that have an NLP tag; we then annotated the exploration goals, scraped/generated the corresponding corpora, and post-processed them over nine months (see the complete list of citations in Appendix 23). As shown in Figure 1, each goal describes the original dataset, how the two given corpora are generated, who is using the system, and what property of the two corpora the user wants to understand. Each OPEND5 problem is reviewed by at least two of the authors to reduce grammatical mistakes and ambiguous interpretations of the goal. Each corpus contains around 17K text samples on average, and OPEND5 in total comprises 4.4 million distinct text samples.

We use 50% of each corpus as the "exploration" split and 50% as the "validation" split. The system can only access the exploration split, while the validation split is reserved for the evaluators to validate the discovery. A validation split prevents overfitting the discoveries to the given samples and is analogous to the train-test split in machine learning.

Since we hope to build systems that can tackle challenging open-ended problems, we did not avoid cases where we do not know the ground truth answer. This is different from standard benchmarking practices, where humans can provide a reference solution to evaluate an AI system. However, even though we do not know the ground truth, once a system produces a discovery, we can still evaluate it. We present our evaluation metrics in the next section.

Figure 2: OPEND5 contains 675 problems. See citations in Appendix 23.

[Figure 2 table of example problems:]

Domain (count) | Example dataset | Corpus A | Corpus B
Business (87 problems) | Commercial Reviews: Airline reviews | 1st-class passenger reviews | Economy passenger reviews
 | Product Reviews | Reviews that give 10 stars | Reviews that give 0 stars
 | Finance: YC startups | Successful startup descriptions | Failed startup descriptions
 | News Headlines | Top headlines when S&P rises | Top headlines when S&P falls
Social Sciences (278 problems) | Politics: Administration policy | Admin policy from Trump | Admin policy from Obama
 | News: Reuters headlines | Headlines from 2014 | Headlines from 2015
 | Language: Craigslist Negotiations | Dialogue from successes | Dialogue from failures
 | Diplomacy Dialogues | Lies | Honest statements
 | Sociology: Happy moments | Self-reported happy moments from females | Self-reported happy moments from males
 | Rate My Professor | Reviews of female lecturers | Reviews of male lecturers
Humanities (169 problems) | Arts: Music lyrics | Drake rap lyrics | Kanye rap lyrics
 | Education: Student essays | Essays that received full score | Essays with only partial credit
Health (10 problems) | Doctor's notes | Patients diagnosed with pneumonia | Patients not diagnosed with pneumonia
Machine Learning (131 problems) | NLI — distribution shift | Samples from SNLI | Samples from MNLI
 | QQP — spurious correlation | Individual questions with label "paraphrase" | Individual questions with label "non-paraphrase"
 | LM's output | Generations from one LM | Generations from another LM
 | Inputs — error analysis | Inputs where one model is correct | Inputs where one model is wrong
 | WikiText — clustering | Samples from one cluster | Samples not from a cluster

3 Evaluation Metrics for Open-Ended D5 Problems

For the goal of comparing the side effects of drug A and drug B, how do we evaluate a system-generated discovery that Corpus A "mention feelings of paranoia" more often? First, it needs to be valid, such that indeed more samples from Corpus A satisfy this predicate, which can be evaluated (approximately) objectively. Second, it needs to be relevant to the goal of understanding side effects, which depends on the user's subjective judgement. We define validity and relevance below.

Validity. Similar to Zhong et al. (2022), we require an output discovery h to be a truth predicate on a text sample. For example, if h = "mentions about family", then h is true on the string x1 = "My daughter loves me" and false on the string x2 = "I'm going to school". Define T(h, x) ∈ [0, 1] as "the certainty that h is true on x", e.g., T(h, x1) ≈ 1 and T(h, x2) ≈ 0. We approximate T(h, x) by asking three Turkers how certain they are and averaging their responses (see Appendix 11 for details). Let D_A^{val} and D_B^{val} denote the validation sets for Corpus A and B. We define the "validity" V as

V(h) := E_{x ∼ D_A^{val}}[T(h, x)] − E_{x ∼ D_B^{val}}[T(h, x)].   (1)

Computing V(h) is expensive since it requires human annotations T(h, x) on a set of text samples even to evaluate a single discovery h. In practice, we do not have the budget to compute V(h) on the entire validation split; therefore, we approximate this quantity by randomly sampling from Corpus A and Corpus B. We use these samples to compute an empirical estimate of V, as well as a p-value for the null hypothesis that V ≤ 0 using a one-sided t-test.
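This estimation step can be sketched as follows; an unpaired two-sample t-test is assumed here, which the text does not fully pin down, and the inputs are the Turker-averaged certainties T(h, x) on the sampled validation texts:

import numpy as np
from scipy import stats

def estimate_validity(t_scores_a, t_scores_b):
    """Empirical estimate of V(h) from Eq. (1) plus a one-sided t-test.

    t_scores_a / t_scores_b: rater certainties T(h, x) for random samples
    drawn from the validation splits of Corpus A and Corpus B.
    Returns (v_hat, p), where p tests the null hypothesis that V <= 0.
    """
    a = np.asarray(t_scores_a, dtype=float)
    b = np.asarray(t_scores_b, dtype=float)
    v_hat = a.mean() - b.mean()
    # 'greater' makes the alternative mean(A) > mean(B), i.e. V > 0.
    _, p = stats.ttest_ind(a, b, alternative="greater")
    return v_hat, p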
Relevance. A discovery may be irrelevant even if V = 1. For example, if the goal is to understand the writing style differences between higher-scoring essays (Corpus A) and lower-scoring ones (Corpus B), the discovery that Corpus A "achieves higher scores" has a high validity score by definition but is irrelevant to the goal of understanding stylistic differences. Therefore, we designed a procedure to evaluate relevance, where human or language model evaluators score each discovery with 2⃝/ 1⃝/ 0⃝. The evaluators used the rubric below, which illustrates the meaning of each score with the essay example above:

• 2⃝, relevant; e.g. the discovery "write in first person" is directly related to the writing style.
• 1⃝, indirectly relevant; e.g. the discovery "use the word "I"" is not exactly a writing style, but can still inform the relevant underlying principle of "write in first person".
• 0⃝, irrelevant; e.g. the discovery "argue for abortion" is unrelated to the writing style.

To minimize biases while comparing two systems, the evaluators are blind to which system generates which discoveries. To conclude, an ideal discovery would have a high V value with a small p-value and achieve ratings of 2⃝ in relevance. In the next section, we will build a D5 system that addresses these criteria by first proposing goal-relevant candidate discoveries (hypotheses) and then automatically validating them.

Other metrics. We also explored two other subjective metrics, novelty (how difficult it is to generate the discovery) and significance (how beneficial it is to learn about the discovery). Due to space limits, we present their rubrics and related results in Appendix 14.

4 Methods: Building a D5 System

We describe our D5 system, which maps from a corpus pair and an exploration goal to a set of natural language predicates. Our system is inspired by a two-stage model of how humans discover patterns in data: creatively brainstorming hypotheses and then rigorously validating them on the data (Ludwig & Mullainathan, 2022). Analogously, we first propose hypotheses conditioned on the exploration goal and a subset of samples from the corpus pair (Section 4.1). We then use a language model to approximately compute the validity of each hypothesis, and output the valid ones as the final discoveries (Section 4.2). Our system closely mirrors that of Zhong et al. (2022), except that we leverage the goal to propose more relevant hypotheses. Finally, we present a self-supervised learning algorithm to improve an LM's ability to propose more valid hypotheses (Section 4.3); however, due to API access constraints, we cannot apply it to fine-tune gpt-3, so we provide a proof-of-concept experiment on Flan-T5 (Chung et al., 2022).

4.1 Hypothesis Proposer

We prompt gpt-3 (Ouyang et al., 2022) to propose hypotheses. Denoting the exploration splits of Corpus A/B as D_A^{exp}/D_B^{exp}, we construct the prompt by concatenating a few random samples from D_A^{exp} and D_B^{exp}, the exploration goal, and an instruction to output a list of hypotheses. Figure 3 (left) depicts an example of the resulting prompt, together with a typical language model output. Since the entire corpus pair might not fit into one prompt, we construct multiple prompts with different sets of samples so that gpt-3 can "see" as many different samples as possible in our pipeline. We continue sampling hypotheses with different prompts until obtaining a set of 60 hypotheses, which we call H_init. Appendix 15 includes more details on selecting the sets of samples for different prompts.

4.2 Hypothesis Validator

Many hypotheses in H_init have low validity: they are not more often true on D_A than on D_B (i.e. V(h) ≤ 0). To automatically filter them out, we use a language model T′ to simulate the Turkers' judgement T and hence approximate the validity score V(h) with the function V′(h), defined as

V′(h) := E_{x ∼ D_A^{exp}}[T′(h, x)] − E_{x ∼ D_B^{exp}}[T′(h, x)].   (2)

To compute T′, we ask Flan-T5 whether x satisfies h with the prompt shown in Figure 3 (right). To better simulate Turkers' judgment, we collected additional Turker annotations to fine-tune Flan-T5 (see Appendix 16 for details about the data collection process). We then obtain a significance value p′ by performing a t-test comparing the mean value of T′(h, x) on the exploration split of Corpus A to that of Corpus B, rule out the hypotheses with p′ greater than 0.001, and output the remainder as discoveries. Finally, we obtain additional discoveries by repeating the same process but asking our system to propose and validate hypotheses about Corpus B rather than Corpus A. Appendix Figure 5 visualizes our entire pipeline and Appendix 8 discusses the computational resources we used.

Figure 3: All underlined content in the prompt differs across problems, while the other content in the prompt is templated. Left: the proposer prompt. The generated hypotheses are in blue. All content with a colored background is excluded for brevity. For the baseline of not using the exploration goal, we removed the "exploration goal" block from the prompt. Right: the validator prompt.

[Figure 3 content: a proposer prompt concatenating samples from the two corpora (e.g., Group A: "The Manchester United soccer squad welcomes rising star Juan Silva, …"; Group B: "Egypt's President Abdel Fattah el-Sisi and Saudi Arabia's …"), the exploration goal ("The original dataset includes news summaries. … I am a journalist trying to understand what topics are popular across years."), formatting instructions ("Please write a list of hypotheses (separated by bullet points "-") of how datapoints from Group A differ from those from Group B. …"), and example model outputs (e.g., "mentions a sports team recruiting a new member"); plus two validator prompts ("Check whether the TEXT satisfies a PROPERTY. Respond with Yes or No. When uncertain, output No. …") with Pr[NextWord = "Yes"] = 99% and 5%, respectively.]
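A minimal sketch of the validator loop follows, using the validator prompt shown in Figure 3 (right). Reading T′ off as the renormalized probability of the first answer token being "Yes" is our assumption, as is the use of the off-the-shelf Flan-T5 checkpoint rather than the paper's Turker-fine-tuned one:

import torch
from scipy import stats
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xxl")

TEMPLATE = ("Check whether the TEXT satisfies a PROPERTY. Respond with Yes or No. "
            "When uncertain, output No.\n\nNow complete the following example -\n"
            "input: PROPERTY: {h}\nTEXT: {x}\noutput:")

@torch.no_grad()
def t_prime(h, x):
    """Approximate T'(h, x) as the model's probability of answering 'Yes'."""
    inputs = tokenizer(TEMPLATE.format(h=h, x=x), return_tensors="pt")
    start = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(**inputs, decoder_input_ids=start).logits[0, -1]
    probs = logits.softmax(dim=-1)
    yes_id = tokenizer("Yes", add_special_tokens=False).input_ids[0]
    no_id = tokenizer("No", add_special_tokens=False).input_ids[0]
    return (probs[yes_id] / (probs[yes_id] + probs[no_id])).item()

def keep_hypothesis(h, exploration_a, exploration_b, alpha=1e-3):
    """Keep h only if V'(h) from Eq. (2) is positive with p' < alpha."""
    scores_a = [t_prime(h, x) for x in exploration_a]
    scores_b = [t_prime(h, x) for x in exploration_b]
    v_prime = sum(scores_a) / len(scores_a) - sum(scores_b) / len(scores_b)
    _, p = stats.ttest_ind(scores_a, scores_b, alternative="greater")
    return v_prime, p, bool(p < alpha)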
4.3 Self-Supervised Learning with Open-Ended Problems: A Proof of Concept

Since D5 problems are open-ended, future systems could potentially produce discoveries with higher validity scores than any known discovery. Therefore, we design a self-supervised learning algorithm to improve an LM's ability to propose more valid hypotheses, using the principle that it is easier to validate a discovery than to generate one.

Algorithm. Suppose we are given a set of problems for training and an initial language model m0. Our goal is to automatically generate a set of prompt-completion pairs to fine-tune m0 so that it can propose hypotheses that are more valid. To generate a prompt, we randomly sample a problem and create a proposer prompt following the procedure in Section 4.1. To generate the desired completion given a prompt, we sample multiple hypotheses from m0, approximate their V′ scores on the samples in the proposer prompt with the same language model m0 (Section 4.2), and select the highest-scoring hypothesis. Finally, we use the prompt-completion pairs to fine-tune m0.

A Proof of Concept Experiment. Since we cannot fine-tune text-davinci-003, we can only experiment with Flan-T5-xxl (Chung et al., 2022), an open-sourced instruction-tuned model that might only work well for easier "mini-problems". As a proof of concept, we tested the above self-supervised learning algorithm on the task of describing groups of four samples, where each group comes from a text cluster. We computed both the automated "self-evaluation" validity score V′ and the "true" validity score V according to Turker evaluation. After self-training, V′ improves substantially from 0.22 to 0.37, and V improves from 0.07 to 0.10, with a p-value of 0.02. This result provides preliminary evidence that self-training could be applied to a large set of problems to improve the validity of the hypotheses; we expect future validators to simulate human judgments better, hence decreasing the approximated gap of improvement between V and V′. We discuss more training and evaluation details in Appendix 20.
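The pair-construction step above reduces to a short loop; in this sketch, propose and score are hypothetical stand-ins for sampling hypotheses from the model and for the model-based V′ estimate of Section 4.2 restricted to the samples shown in the prompt:

def make_finetuning_pairs(proposer_prompts, propose, score):
    """Build (prompt, completion) pairs for self-supervised fine-tuning.

    proposer_prompts: iterable of (prompt, prompt_samples) pairs built as
        in Sec. 4.1, where prompt_samples are the texts shown in the prompt.
    propose(prompt) -> list of candidate hypothesis strings.
    score(hypothesis, prompt_samples) -> approximate validity V'.
    """
    pairs = []
    for prompt, prompt_samples in proposer_prompts:
        candidates = propose(prompt)
        # Validating is easier than generating: keep the candidate that the
        # model itself scores as most valid on the in-prompt samples.
        best = max(candidates, key=lambda h: score(h, prompt_samples))
        pairs.append({"prompt": prompt, "completion": best})
    return pairs  # used to fine-tune the initial model m0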
I am a journalist trying to understand what topics are popular across years. Please write a list of hypotheses (separated by bullet points "-") of how datapoints from Group A differ from those from Group B. Each hypothesis should be formaMed as a sentence fragment. Here are three examples. - "talks about poli;cs, such as presiden;al elec;on.” - "contains insul;ng language for immigrants." - "uses double nega;on, i.e., using two nega;ons in a sentence." Based on the two sentence groups (A and B) from the above, more sentences in Group A ... -“menLons a sports team recruiLng a new member” -“menLons about academic relaLons, such as teachers or students” -“menLons aboutSamples from the two corporaExplora;on GoalFormaqng Instruc;onsLanguage Model OutputsCheck whether the TEXT sa;sfies a PROPERTY. Respond with Yes or No. When uncertain, output No. Now complete the following example - input: PROPERTY: menLons a sports team recruiLng a new member TEXT: As Serena Willows joins the UCLA women's tennis roster output:Proposer promptCheck whether the TEXT sa;sfies a PROPERTY. Respond with Yes or No. When uncertain, output No. Now complete the following example - input: PROPERTY: menLons a sports team recruiLng a new member TEXT: Egypt's President Abdel Fa?ah el-Sisi and Saudi Arabia’s. output:Validator promptPr[NextWord = “Yes”] = 99%Pr[NextWord = “Yes”] = 5%// a list of hypotheses not included for brevity// 20 samples not included for brevity // 20 samples not included for brevity text-davinci-003 w/ goal wo/ goal w/ validator wo/ validator 12% 4% 2% 1% gpt-4 w/ validator wo/ validator w/ goal wo/ goal 27% 8% 15% 5% Table 1: The accuracy on SYND5 using different proposers, with/without incorporating goals, and with/without using validators. Using the validator, the goals, and gpt-4 leads to better results. Hypothesis Relevance Using the goal Not using the goal 2⃝ 1⃝ 79% 9% 12% 52% 16% 32% 0⃝ average 1.68 1.20 Table 2: How often the hypotheses proposed by text-davinci-003 are rated by the authors as 2⃝/ 1⃝/ 0⃝ in terms of relevance (Section 3). Overall, using the goal significantly increases relevance. validity of the hypotheses; we expect future validators to simulate human judgments better, hence decreasing the approximated gap of improvement between V and V ′. We discuss more training and evaluation detail in Appendix 20. 5 Quantitative Evaluation on SYND5 and OPEND5 We show that both SYND5 and OPEND5 can be used to quantitatively evaluate D5 systems. Since SYND5 is automatic, we used it to compare a broad range of D5 systems and studied the contributions of three different factors: the quality of the proposer model (gpt-4 vs. text-davinci-003), the use of a validator, and the use of a goal. We then further investigated the effect of using goals under realistic applications through human evaluation on OPEND5. Automatically comparing different variants with SYND5. As mentioned above, we ablated 3 factors, resulting in 23 = 8 variants. We compared 1) using text-davinci-003 vs. gpt-4 as the hypothesis proposer; 2) using the validator to compute V ′ for each hypothesis and outputting the highest-scoring hypothesis, vs. not using the validator and outputting a random hypothesis; and 3) using the goal vs. replacing it with “I want to understand how Corpus A is different from Corpus B.”. We then automatically calculated the accuracy for each variant as described in Section 2.2. We report the results in Table 1. 
We find that using the validator and the goals significantly improve the performance, and gpt-4 outperforms text-davinci-003 with goals and the validator (p < 1% under a t-test). We conducted two additional robustness checks in the Appendix 10: (a) using text-davinci-003 instead of Claude-v1.3 to judge predicate equivalence, and (b) considering discoveries semantically similar to the references also to be correct; our conclusions do not change. Finally, to improve the accessibility of our research, we ran the same experiments using gpt-3.5-turbo and flan-t5-xxl as our proposer, and report the results in Appendix Table 6. To show that our conclusions are general and not only apply to synthetically generated texts, we additionally constructed an extension of SYND5 with human-written texts by adapting the NYT dataset from Wang et al. (2023), where each text sample is a New York Times article with a topic and a location label: the topic dimension has 9 different values (e.g., politics, arts) and the location dimension has 10 different values (French, Italy); we then followed the same procedure described in Section 2.2 to create this extension of SYND5, and report our systems’ performance in Appendix Table 7. In all experiments, using the validator and the goal improves the performance. Investigating whether using goals improves relevance on OPEND5. We then investigated whether text-davinci-003 can leverage the goals to propose more relevant hypotheses on more realistic applications in OPEND5. We sampled 100 problems from OPEND5 with distinct goals and randomly sampled 2 hypotheses from text-davinci-003 with/without using goals (see Figure 3), resulting in 400 hypotheses to evaluate. Three authors then rated their relevance based on the rubric in Section 3, while being blinded about which hypotheses were generated with the goal. Our main paper focuses on presenting the evaluations performed by ourselves, since crowdworkers might be noisy and untrustworthy (Veselovsky et al., 2023; Suhr et al., 2021). We report the results in Table 5. Since this evaluation is subjective, the inter-annotator agreement is only moderate (Kappa=0.56); however, we can still robustly conclude that text-davinci-003 can leverage goals to propose hypotheses with higher average relevance rating, since this conclusion can 7 be independently reproduced by every individual evaluator with p < 10−8. To make sure that the same conclusion can be robustly reproduced by external non-authors, we also evaluated the relevance of the hypotheses with Amazon Mechanical Turks, gpt-3.5-turbo, Claude-v1.3, and gpt-4. We report the results in Appendix Table 8 and found that our conclusion robustly holds under five different types of evaluators, including expert authors, external crowdworkers, and language models from different companies with different levels of capabilities. Finally, we conducted similar experiments for the novelty and significance metrics in Appendix 14 and found that they both benefit from using goals as well. In the next section, we present example discoveries on OPEND5 to qualitatively understand what a D5 system can achieve. 6 Qualitatively Analyzing Discoveries and Limitations with OPEND5 To understand the utility and the limitation of a D5 system, we ran it on OPEND5, a set of realistic D5 problems, and analyze the output discoveries qualitatively. 6.1 Producing Discoveries on OPEND5 and Analyzing Them We ran our D5 system on OPEND5, producing 3296 discoveries in total. 
However, we do not have enough budget to validate every finding, since estimating V is expensive (Section 3). Therefore, from these 3296 discoveries, we manually selected 21 discoveries that 1) achieve a relevance score of 2⃝, 2) are representative of potential use cases, 3) do not require expert knowledge for Turkers to judge, and 4) are likely to achieve a small p-value with fewer than 200 samples from D^val. We then estimated their validity based on the procedure described in Section 3 by using fewer than 200 samples from the validation split and calculated the p-values, which cost us ∼$1500 in total on MTurk. Since we are testing multiple discoveries and each of them can be statistically significant merely due to chance, we keep the 13 discoveries whose V is significantly non-zero with a p-value below 7%, a threshold determined by the Benjamini-Hochberg procedure with a false discovery rate of 10%. In other words, <10% of the discoveries presented are false discoveries in expectation. We detail 5 of the 13 discoveries in this section, with the remainder in Appendix 18. For each discovery, we report its automated validity score V′, the estimated true validity score V, and their respective p-values in Table 3.

Understanding political stances and stereotypes in speeches. When comparing presidential speeches on immigrants from Obama to those from Trump, the former "argues for a path forward to promote the fair and just treatment of immigrants", while the latter more frequently "refers to illegal immigrants as criminals".

Analyzing errors in NLP systems. We fine-tuned a pair of models on two different natural language inference datasets, (a) MNLI and (b) SNLI. To understand their patterns of errors, we defined Corpus A to be the subset of MNLI where a is right and b is wrong, and Corpus B to be where b is right and a is wrong. We found that the latter more often "has an informal tone, such as slang or colloquial speech". One possible explanation is that MNLI contains more different genres and hence more informal speech, causing the former model to perform better on these examples.

Table 3: A subset of the discoveries presented in Section 6.1 and their associated estimated validity score V, the validity score approximated by a model V′, and their respective p-values p (p′) for the null hypothesis that V (V′) < 0 under a t-test. We present the full set of 13 discoveries in Table 10.

Output discovery | V | p | V′ | p′
"argues for a path forward to promote the fair ..." | 0.16 | 1.26e-04 | 0.35 | 2.01e-73
"refers to illegal immigrants as criminals" | 0.09 | 6.17e-03 | 0.19 | 3.17e-38
"has an informal tone, such as slang or colloqu..." | 0.08 | 2.35e-03 | 0.24 | 1.46e-35
"mentions lack of legroom" | 0.16 | 1.15e-03 | 0.38 | 1.34e-45
"mentions children or family" | 0.08 | 1.00e-05 | 0.11 | 8.05e-09
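For reference, the Benjamini-Hochberg selection used above can be sketched as follows (this is the standard procedure, not code from the paper); with 21 tested discoveries at a false discovery rate of 10%, it kept the 13 discoveries whose p-values fall below the resulting ~7% threshold:

def benjamini_hochberg(p_values, fdr=0.10):
    """Return the indices of the discoveries kept at the given FDR."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k whose p-value is at most (k / m) * fdr,
    # then keep the k discoveries with the smallest p-values.
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * fdr:
            k = rank
    return sorted(order[:k])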
Across these discoveries, the approximated validity score V ′ has a 71% spearman rank correlation with human rating V (66% for Pearson correlation), thus providing informative yet unreliable signals to practitioners about their validity. We hope that V ′ can better approximate V values in the future as the quality of the validators improve. Finally, future works can collect more open problems, allowing D5 systems to produce more impactful discoveries. 6.2 Concrete Examples in OPEND5 Inform Limitations of D5 Evaluation We discuss limitations of D5 evaluation in this section using concrete examples from OPEND5. Our metrics do not evaluate diversity. There are often multiple valid and relevant discoveries, and our system ideally should generate all of them. For example, when comparing low-rating and high-rating reviews to understand what stands out to customers, both “mentions the hidden fees and poor customer service at the airport” and “mentions the airline charging extra for carry-on items” could be valid discoveries. Our current evaluation does not reward diverse discoveries, and the current system sometimes repeats a discovery using similar paraphrases, e.g., “mentions the rude and unprofessional attitude of the staff ” and “mentions the staff being rude and unhelpful”. Future evaluation metrics can take diversity into account. Interpreting discoveries requires domain experts. We used Turkers’ judgment when computing T (h, x) to judge the validity of a discovery. However, many discoveries require expert knowledge to interpret properly. For example, it requires medical training to reliably judge whether a self-reported drug-use experience satisfies “mentions psychedelics, such as LSD and shrooms.” Correlation ̸= causation. Our metrics currently do not evaluate whether the discovery is causally related to how the corpus pair was generated. For example, when comparing self-reported happy moments from females and males, even if the former corpus has more samples that “mention children and family”, it does not necessarily imply family plays a more important role in inter-personal relations for females; an alternative hypothesis is that females might mention people in general more often than males do, hence leading to the observation that they mention family more often. Spurious correlations could also sneak into our validity evaluation: for example, if the Turkers implicitly associate female activities as family-related Greenwald & Banaji (1995), then we might falsely make this discovery due to evaluator biases. Future metrics should also consider plausible alternative hypotheses to evaluate causality and control the potential biases from the human evaluators. Additionally, we should treat the discovery from D5 with caution to prevent automating and amplifying societal biases. We discuss other limitations, such as restricting the discovery to be a single predicate, the biases in authors’ qualitative evaluation, and the incomprehensiveness of OPEND5 in Appendix 19. 7 Related Work and Discussion Inductive Reasoning with NLP Models. Recent works show that language models are capable of inductive reasoning under restricted settings, discovering patterns from a set of text data points and describing them with language (Honovich et al., 2022). Yang et al. (2022) use this capability to induce natural language rules with the format of “if . . . then . . . ”. Zhou et al. (2022) and Ye et al. 
(2022) use this capability to improve zero/few-shot accuracy by inferring the most likely instruction using input-output example(s) of the target task. Zhong et al. (2022) and Singh et al. (2022) use this capability to discover patterns in datasets, and we improve on them by building an automatic benchmark and a dataset of open-ended problems and by requiring the discovery to be relevant.

ML models can also perform inductive reasoning in other modalities, such as vision. Hernandez et al. (2021) describe visual features that activate a neuron; Zhu et al. (2022) describe distribution shifts between the training distribution and the test distribution for images; and Eyuboglu et al. (2022) describe errors made by vision models. We hope future models can perform inductive reasoning in other modalities, such as sound (Aghajanyan et al., 2023) or physical senses (Thomason et al., 2016).

Exploratory Analysis and Automated Discovery. It is not new to automatically discover patterns by learning from empirical data. To list a few classical methods: linear regression analyzes the effect of each real-valued feature by interpreting the learned weights (Draper & Smith, 1998); n-gram models can extract discriminative phrases, thus yielding insights about corpus-level differences (Manning & Schutze, 1999); topic models (Blei et al., 2003) can extract major topical variations across documents, where each topic is represented as a distribution over words; small decision trees can extract interpretable if-then statements (Letham et al., 2015); and an entity embedding model learned on existing relations between entities can predict unseen relations (Socher et al., 2013). In comparison, D5 produces discoveries in the form of natural language predicates, which are interpretable and can express abstract concepts; additionally, it is more directed at the goal, while machine learning classifiers like naïve Bayes or linear regression will pick up any discriminative features: Appendix 21 offers a more comprehensive discussion using examples from SYND5. Given the respective strengths of D5 and traditional exploratory methods, we envision D5 serving as a complement to them.

Epistemology. While the process of validating a hypothesis is well-formulated, it is much less well-understood how to automatically generate hypotheses and decide what discoveries are meaningful (Shapere, 1964; Heckman & Singer, 2017). Related works in this area have been sparse, among which McGarry (2005) sketches high-level principles for evaluating knowledge discoveries and Ludwig & Mullainathan (2022) propose to crowd-source hypotheses from MTurk workers. We concur with the perspective of Polanyi et al. (2000) that the meaningfulness of a hypothesis cannot be explicitly verbalized with simple logic but is dependent on implicit community norms; therefore, the process of proposing hypotheses should be learned from empirical data (e.g. pre-training, self-training, or human feedback) rather than deduced from a priori analysis of concepts (Quine, 1969). We hope contributions from other domains can provide more empirical data on what discoveries are meaningful, hence guiding our system to produce more important discoveries.

Acknowledgement

We thank Xiaochuang Han and Sam Bowman for their early discussions on this project. We thank Cathy Chen, Erik Jones, Jessy Lin, Alex Pan, Chenglei Si, Xi Ye, and Tianyi Zhang for their helpful feedback on the paper draft. We thank OpenAI and Anthropic for providing model access.
References

The hewlett foundation: Automated essay scoring, 2012. URL https://kaggle.com/competitions/asap-aes.
The hewlett foundation: Short answer scoring, 2013. URL https://kaggle.com/competitions/asap-sas.
Ad observer. https://adobserver.org/, 2021. Accessed: 2022-12-30.
Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, and Luke Zettlemoyer. Scaling laws for generative mixed-modal language models. arXiv preprint arXiv:2301.03728, 2023.
Roee Aharoni and Yoav Goldberg. Unsupervised domain clusters in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7747–7763, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.692. URL https://aclanthology.org/2020.acl-main.692.
Mohammad Alali, Shaayan Syed, Mohammed Alsayed, Smit Patel, and Hemanth Bodala. Justice: A benchmark dataset for supreme court's judgment prediction. arXiv preprint arXiv:2112.03414, 2021.
Akari Asai, Sara Evensen, Behzad Golshan, Alon Halevy, Vivian Li, Andrei Lopatenko, Daniela Stepanov, Yoshihiko Suhara, Wang-Chiew Tan, and Yinzhan Xu. Happydb: A corpus of 100,000 crowdsourced happy moments. arXiv preprint arXiv:1801.07746, 2018.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022b.
Miriam Barnum and James Lo. Is the npt unraveling? evidence from text analysis of review conference statements. Journal of Peace Research, 57(6):740–751, 2020.
Alexander Baturo, Niheer Dasandi, and Slava J Mikhaylov. Understanding state preferences with text as data: Introducing the un general debate corpus. Research & Politics, 4(2):2053168017712821, 2017.
Akshay Bhalotia. Yc company scraper. https://github.com/akshaybhalotia/yc_company_scraper, 2022.
Steven Bird, Ewan Klein, and Edward Loper. Natural language processing with Python: analyzing text with the natural language toolkit. "O'Reilly Media, Inc.", 2009.
David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022, 2003.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. The snli corpus. 2015.
Divy Bramhecha. Poetry Foundation Poems, 2019. URL https://www.kaggle.com/datasets/tgdivy/poetry-foundation-poems.
Dallas Card, Serina Chang, Chris Becker, Julia Mendelsohn, Rob Voigt, Leah Boustan, Ran Abramitzky, and Dan Jurafsky. Replication code and data for "Computational analysis of 140 years of US political speeches reveals more positive but increasingly polarized framing of immigration" [dataset]. https://github.com/dallascard/us-immigration-speeches/, 2022.
Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. Neural legal judgment prediction in english. arXiv preprint arXiv:1906.02059, 2019.
Sihao Chen, Daniel Khashabi, Wenpeng Yin, Chris Callison-Burch, and Dan Roth. Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims. In Proc.
of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019. URL http://cogcomp.org/papers/CKYCR19.pdf.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019.
Norman R Draper and Harry Smith. Applied regression analysis, volume 326. John Wiley & Sons, 1998.
Sabri Eyuboglu, Maya Varma, Khaled Saab, Jean-Benoit Delbrouck, Christopher Lee-Messer, Jared Dunnmon, James Zou, and Christopher Ré. Domino: Discovering systematic errors with cross-modal embeddings. arXiv preprint arXiv:2203.14960, 2022.
Yujia Gao, Jinu Jang, and Diyi Yang. Understanding the usage of online media for parenting from infancy to preschool at scale. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–12, 2021.
Anthony G Greenwald and Mahzarin R Banaji. Implicit social cognition: attitudes, self-esteem, and stereotypes. Psychological review, 102(1):4, 1995.
Ivan Habernal and Iryna Gurevych. Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1589–1599, Berlin, Germany, 2016. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P16-1150.
Kevin Hartman. Advertisement Transcripts from Various Industries, 2019. URL https://tinyurl.com/5w36dwdx.
He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. Decoupling strategy and generation in negotiation dialogues, 2018.
Jibo He. Big Data Set from RateMyProfessor.com for Professors' Teaching Evaluation, 2020. URL https://data.mendeley.com/datasets/fvtfjyvw7d/2.
Samuel He. Goodbye world: using natural language processing to identify suicidal posts, 2021. URL https://github.com/hesamuel/goodbye_world.
James J Heckman and Burton Singer. Abducting economics. American Economic Review, 107(5):298–302, 2017.
Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, and Jacob Andreas. Natural language descriptions of deep visual features. In International Conference on Learning Representations, 2021.
Or Honovich, Uri Shaham, Samuel R Bowman, and Omer Levy. Instruction induction: From few examples to natural language task descriptions. arXiv preprint arXiv:2205.10782, 2022.
Nabil Hossain, John Krumm, and Michael Gamon. "president vows to cut <taxes> hair": Dataset and analysis of creative text editing for humorous headlines. arXiv preprint arXiv:1906.00274, 2019.
Kaggle. TMDB 5000 Movie Dataset, 2018. URL https://www.kaggle.com/datasets/tmdb/tmdb-movie-metadata.
Rohit Kulkarni. A Million News Headlines, 2018. URL https://doi.org/10.7910/DVN/SYBGZL.
Rohit Kulkarni. The Examiner - Spam Clickbait Catalog, 2020a. URL https://www.kaggle.com/datasets/therohk/examine-the-examiner.
Rohit Kulkarni. Urban Dictionary Words And Definitions, 2020b. URL https://www.kaggle.com/datasets/therohk/urban-dictionary-words-dataset.
Rohit Kulkarni. India News Headlines Dataset, 2022. URL https://www.kaggle.com/datasets/therohk/india-headlines-news-dataset.
Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, and David Madigan. Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model. The Annals of Applied Statistics, 9(3), Sep 2015. ISSN 1932-6157. doi: 10.1214/15-aoas848. URL http://dx.doi.org/10.1214/15-AOAS848.
Derek Lim and Austin R Benson. Expertise and dynamics within crowdsourced musical knowledge curation: A case study of the genius platform. In ICWSM, pp. 373–384, 2021.
Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. Wanli: Worker and ai collaboration for natural language inference dataset creation, January 2022. URL https://arxiv.org/pdf/2201.05955.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Zhi Liu. Reuter_50_50 Data Set, 2011. URL https://archive.ics.uci.edu/ml/datasets/Reuter_50_50.
Jens Ludwig and Sendhil Mullainathan. Algorithmic behavioral science: Machine learning as a tool for scientific discovery. Chicago Booth Research Paper, (22-15), 2022.
Christopher Manning and Hinrich Schutze. Foundations of statistical natural language processing. MIT press, 1999.
Ken McGarry. A survey of interestingness measures for knowledge discovery. The knowledge engineering review, 20(1):39–61, 2005.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models, 2016.
Natan Mish. Federal Reserve Governors Speeches 1996 - 2020, 2020. URL https://tinyurl.com/3j2e79a6.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Cross-task generalization via natural language crowdsourcing instructions. In ACL, 2022.
Rishabh Misra and Prahal Arora. Sarcasm detection using hybrid neural network. arXiv preprint arXiv:1908.07414, 2019.
Rishabh Misra and Jigyasa Grover. Sculpting Data for ML: The first act of Machine Learning. 01 2021. ISBN 9798585463570.
Nuno Moniz and Luís Torgo. Multi-source social feedback of online news feeds. CoRR, [Web Link], 2018.
Mickaël Mouillé. Kickstarter Projects, 2017. URL https://www.kaggle.com/datasets/kemical/kickstarter-projects?select=ks-projects-201612.csv.
Dong Nguyen, Maria Liakata, Simon DeDeo, Jacob Eisenstein, David Mimno, Rebekah Tromble, and Jane Winters. How we do things with words: Analyzing text as social and cultural data. Frontiers in Artificial Intelligence, 3:62, 2020.
Jianmo Ni, Jiacheng Li, and Julian McAuley. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pp. 188–197, 2019.
Elle O'Brien. iterative/aita_dataset: Praw rescrape of entire dataset, February 2020. URL https://doi.org/10.5281/zenodo.3677563.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.
Verónica Pérez-Rosas and Rada Mihalcea. Experiments in open domain deception detection. In Proceedings of the 2015 conference on empirical methods in natural language processing, pp. 1120–1125, 2015.
Verónica Pérez-Rosas, Mohamed Abouelenien, Rada Mihalcea, and Mihai Burzo.
Deception detection using real-life trial data. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 59–66, 2015.
Verónica Pérez-Rosas, Bennett Kleinberg, Alexandra Lefevre, and Rada Mihalcea. Automatic detection of fake news. arXiv preprint arXiv:1708.07104, 2017.
Denis Peskov, Benny Cheng, Ahmed Elgohary, Joe Barrow, Cristian Danescu-Niculescu-Mizil, and Jordan Boyd-Graber. It takes two to lie: One to lie and one to listen. In Association for Computational Linguistics, 2020.
John P. Pestian, Chris Brew, Pawel Matykiewicz, DJ Hovermale, Neil Johnson, K. Bretonnel Cohen, and Wlodzislaw Duch. A shared task involving multi-label classification of clinical free text. In Biological, translational, and clinical language processing, pp. 97–104, Prague, Czech Republic, June 2007. Association for Computational Linguistics. URL https://aclanthology.org/W07-1013.
Michael Polanyi, John Ziman, and Steve Fuller. The republic of science: its political and economic theory minerva, i (1)(1962), 54-73. Minerva, 38(1):1–32, 2000.
Ilan Price, Jordan Gifford-Moore, Jory Fleming, Saul Musker, Maayan Roichman, Guillaume Sylvain, Nithum Thain, Lucas Dixon, and Jeffrey Sorensen. Six attributes of unhealthy conversation. arXiv preprint arXiv:2010.07410, 2020.
Demand Progress. Statements of Administration Policy, 2022. URL https://github.com/unitedstates/statements-of-administration-policy#statements-of-administration-policy.
PromptCloud. U.S. Technology Jobs on Dice.com, 2017. URL https://www.kaggle.com/datasets/PromptCloudHQ/us-technology-jobs-on-dicecom.
WVO Quine. Naturalistic epistemology. Ontological relativity and other essays, pp. 69–90, 1969.
Quora. Quora Question Pairs, 2017. URL https://www.kaggle.com/c/quora-question-pairs.
Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018.
Justin Robischon. Wikipedia Movie Plots, 2019. URL https://www.kaggle.com/datasets/jrobischon/wikipedia-movie-plots.
Allen Roush and Arvind Balaji. DebateSum: A large-scale argument mining and summarization dataset. In Proceedings of the 7th Workshop on Argument Mining, pp. 1–7, Online, December 2020. Association for Computational Linguistics. URL https://aclanthology.org/2020.argmining-1.1.
Dudley Shapere. The structure of scientific revolutions. The Philosophical Review, 73(3):383–394, 1964.
Chandan Singh, John X Morris, Jyoti Aneja, Alexander M Rush, and Jianfeng Gao. Explaining patterns in data with language models via interpretable autoprompting. arXiv preprint arXiv:2210.01848, 2022.
Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. Reasoning with neural tensor networks for knowledge base completion. Advances in neural information processing systems, 26, 2013.
Alane Suhr, Clara Vania, Nikita Nangia, Maarten Sap, Mark Yatskar, Samuel Bowman, and Yoav Artzi. Crowdsourcing beyond annotation: Case studies in benchmark data collection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, pp. 1–6, 2021.
J Sun. Daily News for Stock Market Prediction, 2017. URL https://www.kaggle.com/datasets/aaron7sun/stocknews.
Jesse Thomason, Jivko Sinapov, Maxwell Svetlik, Peter Stone, and Raymond J Mooney. Learning multi-modal grounded linguistic semantics by playing "i spy". In IJCAI, pp. 3477–3483, 2016.
Andrew Thompson. All the News 1.0, 2019. URL https://components.one/datasets/all-the-news-articles-dataset.
Elsbeth Turcan and Kathleen McKeown. Dreaddit: A reddit dataset for stress analysis in social media. arXiv preprint arXiv:1911.00133, 2019.
Udacity. Armenian Online Job Postings, 2017. URL https://www.kaggle.com/datasets/udacity/armenian-online-job-postings.
Veniamin Veselovsky, Manoel Horta Ribeiro, and Robert West. Artificial artificial artificial intelligence: Crowd workers widely use large language models for text production tasks. arXiv preprint arXiv:2306.07899, 2023.
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. URL https://arxiv.org/abs/2204.07705, 2022.
Zihan Wang, Jingbo Shang, and Ruiqi Zhong. Goal-driven explainable clustering via language descriptions. arXiv preprint arXiv:2305.13749, 2023.
Orion Weller and Kevin Seppi. The rJokes dataset: a large scale humor collection. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pp. 6136–6141, Marseille, France, May 2020. European Language Resources Association. ISBN 979-10-95546-34-4. URL https://aclanthology.org/2020.lrec-1.753.
Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426, 2017.
Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, Erik Cambria, Xiaodong Liu, Jianfeng Gao, and Furu Wei. Language models as inductive reasoners. arXiv preprint arXiv:2212.10923, 2022.
Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, and Minjoon Seo. Guess the instruction! making language models stronger zero-shot learners. arXiv preprint arXiv:2210.02969, 2022.
Ruiqi Zhong, Charlie Snell, Dan Klein, and Jacob Steinhardt. Describing differences between text distributions with natural language. In International Conference on Machine Learning, pp. 27099–27116. PMLR, 2022.
Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910, 2022.
Zhiying Zhu, Weixin Liang, and James Zou. Gsclip: A framework for explaining distribution shifts in natural language. arXiv preprint arXiv:2206.15007, 2022.

8 Cost for Running the Experiments

For each problem, we ran the proposer 10 times on average; assuming each prompt to be at most 4000 tokens, we spent around $2.4 per problem on OpenAI APIs when using gpt-4 and text-davinci-003, and the cost decreases to $0.8 when using gpt-3.5-turbo. Note that these estimates are based on the prices as of 05/14/2023, and we expect the price to decrease further in the future. We ran the Flan-T5-based validator for ~2 hours on one 80GB A100 GPU. The total amount of computational resources spent in this research paper is around $2,500 in OpenAI API costs and 3,000 hours of compute on A100 GPUs with 80GB memory.

9 Generation Process of SYND5

The high-level description is in Section 2.2. Here we discuss the procedure that generated SYND5. We consider three dimensions of differences: topic, genre, and language. For each, we generated 14/9/7 values, e.g., "celebrity love stories" and "sports team recruiting athletes" for the topic attribute, "rap lyrics" and "screen play" for the genre attribute, and "French" and "Spanish" for the language attribute.
We then used GPT-4 and the Claude API to synthesize 54K text samples, where for each text sample we sampled a topic, genre, and language at random, e.g., “Write a rap about a sports team recruiting athletes in French”. To synthesize a random SYND5 problem, we randomly sampled a distractor dimension (e.g., language) and a target dimension (e.g., topic), and for each dimension we sampled two random values (e.g., English and French for language, sports and art for topic). For each problem, we sampled 10 texts for corpus A such that all of them satisfy one sampled value of the distractor dimension (e.g., corpus A is entirely in English), and 10 texts for corpus B such that all of them satisfy the other value of the distractor dimension (e.g., corpus B is entirely in French). Then we set a fraction V of corpus A to satisfy the reference value of the target dimension, e.g., “is sports-related”, and a fraction V of corpus B to satisfy the other value of the target dimension (e.g., “is art-related”). We chose V uniformly at random from [0.6, 0.8, 1]. Finally, we provided k example hypotheses drawn from the target dimension, excluding the two target values sampled for Corpus A and Corpus B, and we chose k from [0, 2] uniformly at random. We then sampled 300 D5 problems in total from this distribution.

10 Robustness Checks for Results on SYND5

Table 4 shows the accuracy of different systems when using text-davinci-003 as the judge for semantic equivalence. Table 5 shows the accuracy of different systems if we consider outputs semantically similar to the reference to be correct. Across all setups, we found that the conclusions reached in Section 5 still hold under these robustness checks.

              text-davinci-003                     gpt-4
           w/ validator  wo/ validator   w/ validator  wo/ validator
w/ goal         6%            3%             23%            6%
wo/ goal        1%            0%              9%            2%

Table 4: Same as Table 1, except that we use text-davinci-003 instead of Claude-v1.3 to judge similarity. Using the validator, the goal, and gpt-4 leads to better results.

              text-davinci-003                     gpt-4
           w/ validator  wo/ validator   w/ validator  wo/ validator
w/ goal        46%           24%             53%           24%
wo/ goal       23%           16%             43%           24%

Table 5: Same as Table 1, except that we calculate how often the output is similar, rather than equivalent, to the reference. Using the validator, the goal, and gpt-4 leads to better results.

To improve the accessibility of our research, we ran the same experiments with gpt-3.5-turbo and flan-t5-xxl, and report the results in Appendix Table 6. To show that our conclusions are general and do not only apply to synthetically generated texts, we additionally constructed an extension of SYND5 with human-written texts by adapting the NYT dataset from Wang et al. (2023), where each text sample is a New York Times article with a topic and a location label: the topic dimension has 9 different values (e.g., politics, arts) and the location dimension has 10 different values (e.g., France, Italy); we then followed the same procedure described in Section 2.2 to create this extension of SYND5, and report our systems’ performance in Appendix Table 7. Under all experimental setups, using the validator and the goal improves performance.

                flan-t5-xxl  gpt-3.5-turbo  gpt-4
w/ g, w/ v         0.05          0.27        0.27
w/ g, wo/ v        0.03          0.10        0.15
wo/ g, w/ v        0.01          0.03        0.05
wo/ g, wo/ v       0.02          0.08        0.08

Table 6: Similar to Table 1, we used gpt-3.5-turbo and flan-t5-xxl as the proposer to tackle the SYND5 dataset, and report the performance with/without using the goal (g), and with/without using the validator (v). We find that using the goal and the validator significantly improves the performance, and open-sourced models lag significantly behind. Additionally, gpt-4 does not significantly outperform gpt-3.5-turbo.
                gpt-3.5-turbo  gpt-4
w/ g, w/ v          0.61        0.55
w/ g, wo/ v         0.24        0.28
wo/ g, w/ v         0.10        0.16
wo/ g, wo/ v        0.22        0.22

Table 7: We created an extension of SYND5 by adapting a dataset of New York Times articles with two dimensions, topic and location, with 9 and 10 values respectively. We then used gpt-3.5-turbo and gpt-4 as the proposer, and found the same conclusion: using the goal and a validator improves performance. Additionally, gpt-4 does not significantly outperform gpt-3.5-turbo.

11 Computing Turker Judgement

Scoring. To estimate T(h, x) with Turkers’ ratings, where h is a truth predicate and x a text sample, a Turker reads h and x and then chooses among six options: “Certainly Yes”, “Likely Yes”, “Neutral”, “Likely No”, “Certainly No”, and “Confusing/Cannot be decided”. For each (h, x) pair, we collect responses from three Turkers. To compute the average across them, we collect a list of scores using the following rule: each “Certainly Yes” receives a score of 1.00, “Likely Yes” 0.75, “Neutral” 0.50, “Likely No” 0.25, “Certainly No” 0.00, and “Confusing/Cannot be decided” receives two scores of 0.50. We then take the average over all the scores collected from the Turkers for one h and x. “Confusing/Cannot be decided” receives two scores of 0.50 because we want such a response to drag the average rating towards neutral with a larger effect than choosing “Neutral”.
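A minimal sketch of this aggregation rule (our own illustration of the scheme just described, not the paper’s actual code):

```python
# Map each Turker response to the score(s) it contributes.
RESPONSE_SCORES = {
    "Certainly Yes": [1.00],
    "Likely Yes": [0.75],
    "Neutral": [0.50],
    "Likely No": [0.25],
    "Certainly No": [0.00],
    # "Confusing" contributes two 0.50s, so it drags the average
    # towards neutral more strongly than a single "Neutral" would.
    "Confusing/Cannot be decided": [0.50, 0.50],
}

def estimate_t(responses):
    """Average all contributed scores from the three Turkers for one (h, x)."""
    scores = [s for r in responses for s in RESPONSE_SCORES[r]]
    return sum(scores) / len(scores)

print(estimate_t(["Certainly Yes", "Likely No", "Confusing/Cannot be decided"]))
# (1.00 + 0.25 + 0.50 + 0.50) / 4 = 0.5625
```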
Payment. We adjust the payment for each HIT task based on the number of words the Turkers need to read. We pay approximately $0.001 per word; using the conservative estimate that adults read about 200 words per minute, this comes to around $12 per hour. We spent in total around $5K on this HIT task.

Qualification. We only recruited Turkers located in the U.S. Additionally, we designed a qualification test with 8 questions; the questions are designed to be easy to answer for anyone who has read our instructions below, and we only accepted Turkers who made mistakes on at most one question.

Annotation Instruction. We show our annotation instruction below. We only show examples of choosing “Certainly Yes”, “Certainly No”, and “Confusing” to encourage the Turkers not to choose neutral ratings. Additionally, we explicitly tried to address the halo effect – where the text does not satisfy a predicate h but satisfies a predicate h′ that is highly correlated with h. For example, the text sample x = “Really love the flight!!” does not satisfy the predicate h = “mentions that the breakfast is good on the plane”, even though it satisfies a highly correlated predicate h′ = “likes the flight.”

11.1 Instructions
Below are the same instructions we have shown you during the qualification. Thanks for visiting this page and refreshing your memory about the instructions!

Instruction: In this task, you will check whether a TEXT satisfies a PROPERTY.

Example 1
Property: mentions a natural scene.
Text: I love the way the sun sets in the evening.
• A) Certainly Yes.
• B) Likely Yes.
• C) Neutral.
• D) Likely No.
• E) Certainly No.
• F) Confusing/Cannot be decided.
Answer. A. A sunset is nature-related; if you feel a bit ambivalent, B is also acceptable.

Example 2
Property: writes in a 1st person perspective.
Text: Makima is cute.
• A) Certainly Yes.
• B) Likely Yes.
• C) Neutral.
• D) Likely No.
• E) Certainly No.
• F) Confusing/Cannot be decided.
Answer. E. This text is undoubtedly written in the 3rd person perspective, so E.

Example 3
Property: is better than group B.
Text: I also need to buy a chair.
• A) Certainly Yes.
• B) Likely Yes.
• C) Neutral.
• D) Likely No.
• E) Certainly No.
• F) Confusing/Cannot be decided.
Answer. F. It is unclear what the hypothesis means (e.g., what does group B mean?) and it does not seem related to the text. So F.

Example 4
Property: mentions that the breakfast is good on the airline.
Text: The airline staff was really nice! Enjoyable flight.
• A) Certainly Yes.
• B) Likely Yes.
• C) Neutral.
• D) Likely No.
• E) Certainly No.
• F) Confusing/Cannot be decided.
Answer. E. Although the text appreciates the flight experience, it DOES NOT mention the breakfast. So the answer is E.

Example 5
Property: appreciates the writing style of the author.
Text: The paper absolutely sucks because its underlying logic is wrong. However, the presentation of the paper is clear and the use of language is really impressive.
• A) Certainly Yes.
• B) Likely Yes.
• C) Neutral.
• D) Likely No.
• E) Certainly No.
• F) Confusing/Cannot be decided.
Answer. A. Although the text dislikes the paper, it DOES like the writing style. So the answer is A.

12 Prompt to Judge Predicate Similarity

We prompt Claude v1.3 (Bai et al., 2022b) to judge whether the predicted predicate is similar to the reference. We consider a response that leads to a “yes” to be correct when we require the discovery to be semantically equivalent to the reference, and consider a response that leads to a “yes” or “related” to be correct when we require the discovery to be semantically similar to the reference.

“Is text_a and text_b similar in meaning? respond with yes, related, or no. Here are a few examples.
Example 1:
text_a: has a topic of protecting the environment
text_b: has a topic of environmental protection and sustainability
output: yes
Example 2:
text_a: has a language of German
text_b: has a language of Deutsch
output: yes
Example 3:
text_a: has a topic of the relation between political figures
text_b: has a topic of international diplomacy
output: related
Example 4:
text_a: has a topic of the sports
text_b: has a topic of sports team recruiting new members
output: related
Example 5:
text_a: has a named language of Korean
text_b: uses archaic and poetic diction
output: no
Example 6:
text_a: has a named language of Korean
text_b: has a named language of Japanese
output: no
Target:
text_a: {predicate}
text_b: {reference}
output:”

13 Relevance Rating with External Non-Authors

To make sure that the conclusion that “using the goal in the context can improve hypothesis relevance” can be robustly reproduced by external non-authors, we also evaluated the relevance of the hypotheses with Amazon Mechanical Turk workers, gpt-3.5-turbo, Claude-v1.3, and GPT-4. We report the results in Table 8 and found that the conclusion still robustly holds.

Relevance Rater   w/ goal  w/o goal   p-value     spearmanr
Authors            1.68      1.20     1 × 10−10      1.00
Turkers            1.56      1.44     4 × 10−2       0.10
gpt-3.5-turbo      1.05      0.94     5 × 10−2       0.19
claude-v1.3        1.18      0.92     2 × 10−3       0.30
gpt-4              1.49      1.12     1 × 10−6       0.45

Table 8: We rated the relevance in the same way as Table 5. However, in this table we obtained the ratings not from the authors, but from four different evaluator types: Turkers, gpt-3.5-turbo, claude-v1.3, and gpt-4. For each evaluator type, we calculate (1) the average rating of the candidate discovery when the goal is (not) present in the proposer’s prompt, (2) the p-value that the average rating when the goal is present is higher under a t-test, and (3) the Spearman rank correlation between its rating and the authors’ rating. We find that the p-value is smaller than 0.05 in all cases, indicating that our conclusion is robust; additionally, more capable models have a higher correlation with the authors.
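The per-rater statistics in Table 8 can be computed from raw ratings along these lines; a minimal sketch, assuming `with_goal`/`without_goal` are one rater’s scores under the two conditions, `rater` and `authors` are aligned rating arrays, and a one-sided independent t-test matches our reading of the caption:

```python
from scipy import stats

def rater_statistics(with_goal, without_goal, rater, authors):
    """p-value that with-goal ratings are higher, plus the Spearman
    rank correlation between this rater and the authors."""
    t_res = stats.ttest_ind(with_goal, without_goal, alternative="greater")
    rho, _ = stats.spearmanr(rater, authors)
    return t_res.pvalue, rho

# Hypothetical toy ratings on a 0-2 scale:
p, rho = rater_statistics([2, 2, 1, 2], [1, 0, 1, 1], [2, 1, 0, 2], [2, 2, 0, 1])
print(f"p = {p:.3f}, spearman r = {rho:.2f}")
```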
14 Meaningfulness: Relevance, Novelty, and Significance

Not every valid discovery is meaningful. For example, if the goal is to understand the topical differences between news from 2008 (Corpus A) and news from 2007 (Corpus B), the discovery that Corpus A “contains news from 2008” is completely valid by definition but meaningless, since it provides only trivial information and is irrelevant to the goal of understanding topical differences. McGarry (2005) surveyed a list of desirable properties for discovery, and we condensed them into three submetrics to rate how meaningful a discovery is based on the exploration goal: 1) relevance, 2) novelty, and 3) significance. We evaluate these independently of validity and assume that the discovery is already valid. For example, the discovery that “something can travel faster than light” is meaningful if true, even though it is highly implausible. We rate each submetric with 0, 1, or 2, where higher is better. We show the evaluation instructions below and present our ratings on hypotheses proposed by text-davinci-003.

14.1 Evaluation Instructions
Relevance. How relevant the discovery is to the goal. For example, suppose we were a student comparing essays rated as convincing vs. not convincing to figure out what writing style is convincing. Then:
• The discovery “write in first person” is directly related to the writing style, so we rate it 2.
• The discovery “use the word ‘I’” is not exactly a writing style, but can still inform the relevant underlying principle of “write in first person”, so we rate it 1.
• The discovery “argue for abortion” does not tell us about the underlying writing style, so we rate it 0.

Novelty. The difficulty of generating the discovery, e.g., could we think of the discovery in 5 minutes with the goal but without looking at the corpora? For example, suppose we were an airline manager trying to find improvements to the flight experience, and we were comparing negative reviews vs. positive reviews. Then:
• The discovery “contain more negative language” is almost certain for negative reviews, so we rate it 0.
• The discovery “complain about the crew members” is not entirely novel, but is not tautologically true and hence requires confirmation, so we rate it 1.
• The discovery “mention a language barrier with the crew members” is specific and hard to think of without looking at the data, so we rate it 2.
Note that our evaluation is “blinded to the samples”: we still consider a discovery novel as long as it is hard to think of before looking at the corpora, even if it might be easy to think of after looking at the corpora. For example, the physical law F = ma is easy to observe if we have collected and plotted data on acceleration, mass, and force; however, it might be difficult to think of before seeing any such data, so we consider it novel.

Significance. Given the exploration goal, how beneficial is it to learn the discovery for the first time?
For example, suppose we were an Amazon retailer trying to figure out what customers like and dislike about our product based on negative reviews and positive reviews. Then:
• The discovery “accuses the team of pushing out a bad product” is not significant since it cannot direct the retailer to improve the product, so we rate it 0.
• The discovery “asks for a more durable product” gives some hints about how to improve the product, but isn’t sufficiently helpful on its own, so we rate it 1.
• The discovery “says the wrench is missing” can lead to concrete actions for improvement, so we rate it 2.

14.2 Goal Leads to More Meaningful Hypotheses

                 Relevance    Novelty    Significance
with-goal          1.68        1.24         1.56
no-goal            1.20        0.97         1.05
kappa              0.56        0.37         0.46
spearmanr          0.71        0.50         0.64
p of avg         1 × 10−10   5 × 10−6     2 × 10−10
worst p of ind   1 × 10−8    4 × 10−2     2 × 10−7

Table 9: Top two rows: for each metric, we report the average rating of hypotheses generated with or without the exploration goal, and find that the former performs better. Middle two rows: the inter-annotator agreement averaged across pairs of author evaluators, measured by Kappa and the Spearman rank coefficient; we find substantial correlations between evaluators across all these subjective metrics, with relevance > significance > novelty. Bottom two rows: we compute the p-values for the null hypothesis that “with-goal and no-goal result in the same performance”. The “p of avg” row reports the p-values after we average the ratings from all evaluators, while the “worst p of ind” row takes the max of all p-values based on the ratings of individual evaluators. Overall, the conclusions are statistically significant and can be robustly reproduced across individual evaluators.

Compared to Zhong et al. (2022), we added the exploration goal to our prompt when generating hypotheses. Does this improve the quality of the proposed hypotheses? To investigate this, we sampled 100 problems from OPEND5 with distinct exploration goals and randomly sampled 2 hypotheses from GPT-3 with and without using the exploration goal (see Figure 3), resulting in 400 hypotheses to evaluate. Three authors then rated their meaningfulness based on the three metrics defined in Section 3, while being blinded to which hypotheses were generated with the exploration goal. The results are shown in Table 9. We found that, when prompted with the exploration goal, GPT-3 on average proposes more relevant, novel, and significant hypotheses; additionally, it proposes hypotheses with ratings higher than 0 31%/21%/28% more often in terms of relevance/novelty/significance. Since this is a subjective evaluation, the Kappa inter-annotator agreement is only moderate, ranging from 0.37 to 0.56. However, we can still robustly conclude that the model proposes more meaningful hypotheses when conditioned on the goal: we calculated the p-values for the null hypothesis that with-goal and no-goal have equal performance, and found the p-values to be highly significant and robust across evaluators, for all three submetrics.

15 Full Pipeline of the Proposer

We present the full details of how we generated the hypotheses with the language model. The process roughly contains four stages: 1) obtaining representative samples for each corpus, 2) sampling hypotheses from GPT-3, 3) rewriting hypotheses, and 4) optionally plugging in example hypotheses.

Obtaining representative samples. This step is the same as in Zhong et al. (2022), and we borrow the related text from that paper for the reader’s convenience. Since D^res_A and D^res_B might overlap significantly, random samples from D^res_A and D^res_B might not be representative and informative enough for GPT-3 to notice the differences between the two distributions. Therefore, we choose samples that are representative of their differences. To find those samples, we fine-tune RoBERTa-Large (Liu et al., 2019) to predict whether each sample comes from Corpus A or Corpus B and keep the top-p percentile of samples with the highest confidence. Next, we take samples from the top-p percentile to prompt GPT-3; a sketch of this selection step follows.
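A minimal sketch of the confidence-based selection, assuming a fine-tuned binary classifier that returns P(Corpus A | x); the function names and the commented-out classifier call are hypothetical illustrations, not the paper’s exact code:

```python
import numpy as np

def top_percentile_samples(texts, prob_corpus_a, p=5):
    """Keep samples whose classifier confidence is in the top-p percentile.

    texts: list of text samples from one corpus.
    prob_corpus_a: array of P(Corpus A | x) from the fine-tuned classifier.
    For Corpus A we rank by P(A | x); for Corpus B, pass 1 - prob_corpus_a.
    """
    prob = np.asarray(prob_corpus_a)
    cutoff = np.percentile(prob, 100 - p)  # e.g., the 95th percentile for p=5
    return [t for t, pr in zip(texts, prob) if pr >= cutoff]

# Hypothetical usage:
# probs_a = clf.predict_proba(corpus_a)   # fine-tuned RoBERTa wrapper
# representative_a = top_percentile_samples(corpus_a, probs_a, p=5)
```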
Selecting samples to prompt GPT-3. We randomly select S = 25 samples from the top-5 percentile of Corpus A and Corpus B to prompt GPT-3 to propose the hypotheses, using the template shown in Figure 3 (left). We require the length of the prompt to be at most 3,200 GPT-3 tokens (the max window size for GPT-3 text-davinci-003 is 4,096) and gradually decrease the number of samples S in the prompt until the prompt length is less than 3,200; additionally, we truncate each text sample to at most 256 GPT-3 tokens. Finally, to prevent GPT-3 from proposing hypotheses that reflect simple lexical correlations that can be detected with unigram models, e.g., “uses the word ‘hey’ more often”, we incrementally construct the subset of samples for Corpus A and Corpus B such that at any time during the construction, no single word appears 0.25S times more often in one corpus than in the other. We repeat the same process for the top-20 and top-100 percentiles until we obtain 60 hypotheses.

Rewriting hypotheses with GPT-3. As mentioned in Section 6.2, the hypotheses generated by GPT-3 are frequently statements about the corpus, while the validator requires the hypothesis to be a predicate on individual text samples. For example, when comparing definitions that people like from UrbanDictionary.com to other definitions, the hypothesis that the former “is more likely to include slang or colloquial terms.” is a statement about a collection of text samples, rather than a predicate on an individual sample. T(h, x) is undefined in this case, since it does not make sense to check whether a single text sample is more likely to include slang. Ideally, we want to detect these comparison statements and automatically remove the comparatives, e.g., rewriting the hypothesis above to “includes slang or colloquial terms.” To detect and remove the comparatives from the hypotheses, we tag the part of speech of each word in the hypotheses using the NLTK package (Bird et al., 2009) and check whether any tag is JJR or RBR. If a hypothesis indeed contains these tags, we prompt GPT-3 to rewrite the hypothesis. We show an example rewriting prompt in Figure 4; a minimal sketch of the detection step follows.
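The detection step only needs NLTK’s POS tagger; a minimal sketch, where the rewriting call is a hypothetical placeholder for the GPT-3 prompt in Figure 4:

```python
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def has_comparative(hypothesis: str) -> bool:
    """True if any token is tagged JJR (comparative adjective)
    or RBR (comparative adverb)."""
    tokens = nltk.word_tokenize(hypothesis)
    return any(tag in ("JJR", "RBR") for _, tag in nltk.pos_tag(tokens))

h = "is more likely to include slang or colloquial terms"
if has_comparative(h):
    # In the paper, GPT-3 is prompted (Figure 4) to rewrite h;
    # rewrite_with_gpt3 is a hypothetical placeholder for that call.
    # h = rewrite_with_gpt3(h)
    pass
```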
Plugging in example hypotheses (optionally). We can also add a few problem-specific example hypotheses to the prompt to elicit more relevant hypotheses, and we do so by adding them to the “formatting instruction” part of the prompt used to propose hypotheses (Figure 3). In OPEND5, we provided example hypotheses for each problem to steer our system towards more meaningful discoveries; we produced the example hypotheses by prompting GPT-3 to generate a few hypotheses and selecting the meaningful ones from them. For the reported discoveries in Section 6.1, we confirmed that they are unambiguously different from our provided example hypotheses; otherwise, the system might have produced the discoveries by copying the provided hypotheses. We did not use the example hypotheses in Section 5, in order to test GPT-3’s zero-shot understanding of the goal.

16 Collecting Data to Fine-tune the Validator

Here we provide a high-level description of how the data was collected. For each problem in OPEND5, we used our proposer to produce a list of hypotheses. We automatically judged each hypothesis on a subset of samples from the research split using GPT-3 text-davinci-002 (Ouyang et al., 2022), Flan-T5 (Chung et al., 2022), and a model trained with RLHF from Bai et al. (2022a). We created the input distribution for training by combining and equally weighting the following 3 × 2 = 6 distributions: for each of GPT-3/Flan-T5/“RLHF”, the subset of (h, x) pairs for which that model considers Yes or No to be the most likely answer. We then collected averaged Turker ratings for in total 3,138 (h, x) pairs and used them to fine-tune Flan-T5 to create the validator (Chung et al., 2022). To test the cross-problem generalization capability of our D5 system, whenever we applied our D5 system to a problem in OPEND5 in Section 6.1, we used a validator that was NOT fine-tuned on the (h, x) pairs from that problem. We achieved this by keeping track of which problem each (h, x) pair comes from and splitting all the (h, x) pairs into three folds based on the problems; whenever we applied our D5 system to a problem, we used the validator trained on the two folds that do not contain that problem.

Figure 4: The prompt to remove comparatives from a hypothesis.

Figure 5: A sketch of the baseline method: propose hypotheses based on the problem context and some samples from Corpus A and Corpus B; judge each hypothesis on all individual samples from both corpora (e.g., “hypothesis1 + Sample X from Corpus A” → 100%, “hypothesis1 + Sample Y from Corpus B” → 0%); then compare how often each hypothesis is true on Corpus A versus Corpus B to decide whether it is a sound difference. The description can be seen in Section 4 and the actual prompts can be seen in Figure 3.

17 What Discoveries Did We Choose to Present

Our system in total produced 3,296 discoveries on OPEND5. However, we did not have enough budget to validate every finding, since estimating V is expensive (Section ??). Therefore, from these 3,296 discoveries, we manually selected 21 discoveries that 1) the authors think are relevant enough, 2) are representative of potential use cases, 3) do not require expert knowledge for Turkers to judge, and 4) are likely to achieve a small p-value with fewer than 200 samples from D^val_A and D^val_B. We then estimated their validity based on the procedure described in Section ?? using fewer than 200 samples from the validation split and calculated the p-values.3 Since we are testing multiple discoveries and each of them could be statistically significant merely by chance, we keep the 13 discoveries whose V is significantly non-zero with a p-value below 7%, a threshold determined by the Benjamini-Hochberg procedure with a false discovery rate of 10%. In other words, fewer than 10% of the discoveries presented are false discoveries in expectation.

3 We determined the number of samples s.t. V′ can achieve a p-value of 0.005. Estimating V for these discoveries cost ∼$1,500.
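For concreteness, here is a minimal sketch of the Benjamini-Hochberg selection described above; the p-values in the usage example are illustrative placeholders, not the paper’s actual values:

```python
def benjamini_hochberg(p_values, fdr=0.10):
    """Return indices of discoveries kept at the given false discovery rate.

    Keeps all hypotheses ranked at or below the largest k such that
    p_(k) <= (k / m) * fdr, where p_(1) <= ... <= p_(m) are sorted p-values.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * fdr:
            k_max = rank
    return sorted(order[:k_max])

# Hypothetical p-values of candidate discoveries:
pvals = [0.001, 0.004, 0.019, 0.062, 0.071, 0.31]
kept = benjamini_hochberg(pvals, fdr=0.10)
print(kept)  # indices of discoveries that survive the BH procedure
```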
discovery                                              V       p        V′       p′
argues for a path forward to promote the fair ...    0.16  1.26e-04   0.35   2.01e-73
refers to illegal immigrants as criminals            0.09  6.17e-03   0.19   3.17e-38
has an informal tone, such as slang or colloqu...    0.08  2.35e-03   0.24   1.46e-35
mentions lack of legroom                             0.16  1.15e-03   0.38   1.34e-45
mentions children or family                          0.08  1.00e-05   0.11   8.05e-09
Uses language that is positive or uplifting          0.12  2.12e-03   0.24   4.18e-59
references violence or aggression                    0.06  9.87e-03   0.17   4.25e-26
involves physical activity, such as walking, p...    0.13  4.92e-03   0.37   7.07e-101
contains keywords related to business, finance...    0.08  2.89e-02   0.35   1.45e-95
mention disasters and crimes, such as plane ac...    0.03  7.03e-02   0.09   4.61e-06
discusses coronavirus-related topics                 0.21  1.01e-04   0.27   9.19e-78
references pop culture, such as movies, books,...    0.21  2.67e-04   0.58   2.09e-30
uses vivid imagery and metaphors to convey a f...    0.09  2.47e-02   0.45   5.04e-64

Table 10: The full table of discoveries, along with their V, V′, p, and p′ scores.

18 More Example Discoveries on OPEND5

Analyzing errors in NLP systems. We considered the task of perspectrum classification (Chen et al., 2019), which has the following instruction: “given a perspective and a claim, classify whether the given perspective supports or undermines the claim. If the perspective could possibly convince someone with different view, it is supporting, otherwise it is undermining.” We considered two few-shot learning systems: GPT-3 Instruct Curie (Ouyang et al., 2022) and Tk-Instruct-11B (Wang et al., 2022). We focused on the perspectives where the ground-truth label is undermining, and compared the following two corpora: Corpus A – the set of perspectives that Curie correctly classifies as undermining but Tk-11B does not, and Corpus B – the set where Tk-11B is correct while Curie is wrong. We found that Corpus B more often “uses language that is positive or uplifting” (V ≈ 0.12, AUC-ROC ≈ 0.67). One possible explanation is that Curie made many mistakes by misinterpreting undermining as a label for negative sentiment rather than a logical relation between the claim and the perspective.

Comparing lyrics from different eras. Compared to lyrics from the 70s, those from the 80s more often “reference violence or aggression” (V ≈ 0.06, AUC-ROC ≈ 0.58).

Describing distribution shift. We compared the premises from the SNLI and MNLI datasets; the former more often “involves physical activity, such as walking, playing, climbing, or biking” (V ≈ 0.13, AUC-ROC ≈ 0.64). One possible explanation is that SNLI is based on image captions.

Comparing discussion topics between bots and human users. We compared the topical differences between tweets identified as written by bots vs. human users on Twitter, and our system found that the bots more often “contain keywords related to business, finance, or trading” (V ≈ 0.08, AUC-ROC ≈ 0.61). One possible explanation is that bots are frequently used to generate finance-related scams.

Identifying temporal differences in news headlines. We compared headlines published by ABC News across different years. Compared to 2014, headlines from 2010 more often “mention disasters and crimes, such as plane accidents and assaults” (V ≈ 0.03, AUC-ROC ≈ 0.53). Compared to 2019, headlines from 2020 more often “discuss coronavirus-related topics” (V ≈ 0.21, AUC-ROC ≈ 0.65).

Describing text clusters. We present two example descriptions of text clusters. One from Wikipedia: “references pop culture, such as movies, books, and television shows” (V ≈ 0.21, AUC-ROC ≈ 0.73); one from PoetryFoundation.com: “uses vivid imagery and metaphors to convey a feeling” (V ≈ 0.09, AUC-ROC ≈ 0.65).
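The V and AUC-ROC numbers above can be computed from per-sample judgments; a minimal sketch, assuming `scores_a` and `scores_b` hold T(h, x) estimates (in [0, 1]) for Corpus A and Corpus B, and assuming V is the difference in how often the hypothesis holds on the two corpora (our reading of the paper’s validity score):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def validity_and_auc(scores_a, scores_b):
    """V: mean T(h, x) on Corpus A minus mean T(h, x) on Corpus B.
    AUC-ROC: how well T(h, x) separates Corpus A from Corpus B."""
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    v = scores_a.mean() - scores_b.mean()
    labels = np.concatenate([np.ones_like(scores_a), np.zeros_like(scores_b)])
    auc = roc_auc_score(labels, np.concatenate([scores_a, scores_b]))
    return v, auc

# Hypothetical averaged Turker ratings per sample:
v, auc = validity_and_auc([1.0, 0.75, 0.5, 1.0], [0.25, 0.5, 0.0, 0.25])
print(f"V = {v:.2f}, AUC-ROC = {auc:.2f}")
```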
19 Limitations and Future Work

We still face many challenges in building a broadly useful system. We describe technical challenges that machine learning researchers can tackle in Appendix 19.1 and organizational challenges that require domain experts in Appendix 19.2.

19.1 Engineering Challenges
Hypotheses about the corpora might not be appropriate predicates on individual samples. When comparing highly rated definitions from UrbanDictionary.com to others, our system generates the hypothesis that the former “is more likely to include slang or colloquial terms.” This is a statement about a collection of text samples, but the validator requires the hypothesis h to be a predicate on individual text samples x. To address this, we used GPT-3 to automatically remove comparatives from the hypotheses, e.g., rewriting the hypothesis above to “includes slang or colloquial terms.” However, some versions of this problem were harder to remove. For example, when comparing reviews of American Airlines (AA) flights and Delta Airlines flights to understand which aspects of each airline are doing better/worse, the proposer generated the hypothesis “mentions American Airlines’ staff being unfriendly and unhelpful”. Interpreted literally, this hypothesis can only be true on the corpus of AA reviews, since it presupposes the review to be about AA. The correct predicate for use on individual samples should instead be “mentions staff being unfriendly and unhelpful” (without the words “American Airlines”). Therefore, future systems should explicitly convert corpus-level statements to their corresponding correct predicates, and the metrics should evaluate whether the validity of the predicates implies the corpus-level statements.

Beyond truth predicates. Our work requires the discovery to be a truth predicate that maps a text sample to a truth value. However, scientific discoveries can be arbitrary natural language expressions; extending to more flexible expressions requires a significant redesign of our system and evaluation framework. Some more feasible near-term extensions include 1) allowing natural language expressions that map text samples to real values, e.g., “how polite the sentence is compared to other samples from the corpora”, or 2) using additional logical forms to combine individual truth predicates, e.g., learning a shallow and interpretable decision tree where each split point is a natural language predicate.

Beyond corpus-level differences. Our work focuses on describing corpus-level differences and validates a discovery by comparing how often it is true on each corpus. Future work can consider other ways to validate a discovery: for example, if each text sample is associated with a continuous target variable, we can validate whether a discovery is more likely to be true when the target variable is large.

Investigating sensitivity to prompt format. In this paper we hand-crafted the prompt for the proposer and manually annotated the exploration goals on our own for OPEND5. However, due to budget limitations, we have not investigated how sensitive our D5 system is to prompt formatting and paraphrasing, or whether the performance could have been improved with better prompts. Future work can investigate this research direction further.

Clarifying a discovery. Some discoveries seem to have clear meanings on the surface, but they become ambiguous when we judge them on individual text samples.
For example, judging whether a text sample satisfies h = “mentions people” seems like an unambiguous task a priori; however, it is unclear whether it is true of the sample x = “I woke up this morning.”, since the “people” in h is a plural form, while x mentions only one person, “I”. Future work can use a language model to automatically clarify the meaning of a hypothesis and make it more specific, e.g., rewriting h as “mentions one or more humans.”

Correlation ≠ causation. Like other tools that rely on correlations to analyze patterns in data (e.g., linear regression), our system cannot establish causal relations either. For example, when comparing self-reported happy moments from females and males, even if the former corpus has more samples that “mention children and family”, it does not necessarily imply that family plays a more important role in interpersonal relations for females; an alternative hypothesis is that females might mention any other people more often than males, hence leading to the observation that they mention family more often. Future work can use language models to propose which control hypotheses to test.

Decreasing the cost of validation. As alluded to in Section 3, estimating V is extremely expensive as it requires a lot of human labor. Future work can consider an importance sampling procedure that uses T̂ as a proposer to improve the sample efficiency of estimating V.

Training a better proposer. We developed a self-supervised learning algorithm to propose more valid hypotheses. However, it does not take into account the meaningfulness metric, and it is unclear how to manage the trade-offs with validity if they exist. We look forward to future work that can train a better proposer with as little supervision as possible.

Combining Meaningfulness and Validity Metrics. To simplify evaluation, we assumed meaningfulness to be independent of the magnitude of the validity V. Such an assumption allows us to directly evaluate hypotheses that are not necessarily valid, but it is also limiting for evaluating the final discoveries: for example, for the discovery that news from 2008 “discusses the economy” more often than news from 2007, the discovery would be far more significant if V = 0.99 than if V = 0.0000001. Future work can propose better metrics that do not assume that validity and meaningfulness are independent.

Extending to Non-English Languages. OPEND5 is currently annotated with English goals and most of the corpora are in English. Future work can consider extending this to other languages.

19.2 Organizational Challenges
As discussed in Polanyi et al. (2000), deciding what counts as a good research result requires implicit community norms rather than explicit deductive logic; to guide our system to produce truly important discoveries, our system needs feedback from researchers who work in the domain of interest. However, except for machine learning, the authors do not have research expertise in most of the domains listed in Figure 2. We look forward to future contributions from other domains and list concrete directions below.

What problems to solve? We generated the problems in OPEND5 by reading relevant papers and guessing what domain experts might care about. However, our guesses can be inaccurate. Future work can directly gather problems from domain experts to reflect the actual usage of our system.

How to interpret a discovery? We asked for Turkers’ judgments to compute T(h, x). However, many hypotheses require expert knowledge to interpret properly.
For example, only law experts can reliably judge whether a contract x satisfies the predicate h = “contains a license grant that is irrevocable.” Domain experts are needed to evaluate the validity of a discovery and supervise the validator.

What discoveries are meaningful? We developed evaluation instructions to approximately evaluate which hypotheses are meaningful. However, just as no one becomes an outstanding peer reviewer simply by reading the review guidelines, we do not consider it feasible to provide a gold evaluation simply by reading our instructions. Whether a discovery is meaningful depends heavily on implicit community norms, and we hope domain experts can provide better evaluation and training signals for our system.

20 Self-Supervised Learning with Open-Ended Problems: A Proof of Concept

Since the problems in OPEND5 are open-ended, our system could potentially produce discoveries with higher validity scores than our current system does. Therefore, we designed a self-supervised learning algorithm to improve an LM’s ability to propose more valid hypotheses, using the principle that it is easier to validate a discovery than to generate one.

Algorithm. Suppose we are given a set of problems for training and an initial language model m_init. Our goal is to automatically generate a set of prompt-completion pairs to fine-tune m_init so that it proposes hypotheses that are more valid. To generate a prompt, we randomly sample a problem and create a proposer prompt following the procedure in Section 4.1. To generate the desired completion given a prompt, we sample multiple hypotheses from m_init, approximate their V′ scores on the samples in the proposer prompt with the same language model m_init (Section 4.2), and select the highest-scoring hypothesis, as sketched below. Finally, we use the prompt-completion pairs to fine-tune m_init.
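A minimal sketch of this best-of-n data-generation loop; `make_proposer_prompt`, `sample_hypotheses`, and `approximate_v_prime` are hypothetical stand-ins for the prompt construction of Section 4.1, sampling from m_init, and the V′ approximation of Section 4.2:

```python
def build_finetuning_pairs(problems, n_hypotheses=8):
    """For each problem, keep the hypothesis that the model itself
    scores as most valid, and use it as the fine-tuning target."""
    pairs = []
    for problem in problems:
        prompt = make_proposer_prompt(problem)                 # Section 4.1
        candidates = sample_hypotheses(prompt, n=n_hypotheses, temperature=0.8)
        scored = [(approximate_v_prime(h, problem), h) for h in candidates]
        _, best_h = max(scored)                                # best-of-n selection
        pairs.append({"prompt": prompt, "completion": best_h})
    return pairs
```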
The automated “self-evaluation” validity score V ′ improves substantially from 0.22 to 0.37, and the “true” validity score V according to Turker evaluation improves from 0.07 to 0.10, with a p-value of 0.02. This result provides preliminary evidence that our algorithm (or similar variants) could be applied to a large set of problems to improve the validity of the hypotheses; we expect future validators to simulate human judgments better, hence decreasing the approximated gap of improvement between V and V ′. 21 Comparing D5 to Naïve Bayes We qualitatively compare the discovery generated by our D5 system to the top-5 unigram features extracted by Naive Bayes, a traditional exploratory analysis method. The Naive Bayes method is effective when the target difference can be saliently reflected by individual words. For example, “yo” implies a rap genre, “die” implies a language of Deutsch, and [“rank”, “higher”, “univeristy”] hints at the topic of “college ranking changes”. Additionally, compared to black-box neural networks, such a method is fully interpretable. In comparison, D5 can directly generate a semantically coherent description for the target difference, saving users’ time to guess the underlying correlation by inspecting the top unigram features. Addi- tionally, it can capture differences that are hard to detect at a word level; for example, “the genre of biblical scripture” is mainly reflected in its sentence structure rather than individual words. Finally, D5 only describes goal-related differences, while Naïve Bayes picks up on any discriminative feature; for example, when identifying the topical differences between a English and a Deutsch corpus, Naïve Bayes fails catastrophically and only picks up common determiners such as “the” or “die” instead of topic words, since they are the most useful feature at telling which sample comes from which corpus. Given the respective strength of D5 and traditional exploratory methods, we envision D5 to serve as a complementary method to traditional methods. 22 Annotation Interface to Collect Human-Generated Hypotheses (This section describes an interesting research direction we did not have time to fully pursue.) Task. To fine-tune the language model to propose better hypotheses and perform validation more accurately, we also designed an interface to collect human annotations earlier in the project. In this annotation task, the annotators see five text samples from each of the two corpora; they then write one or many natural language predicate(s) that describe how samples from the two groups are different and choose which text samples satisfy each predicate the annotator has written. Since it is challenging for humans to identify systematic differences between even groups of five sentences, we made the task easier for them by • we chose the representative samples from each corpus to form the two groups of samples, similar to the process in Section 15, and • we highlighted subspan of the text samples that are informative for how the two corpora differ. For example, if Corpus A is sports related while Corpus B is entertainment related, we hope to highlight sports-related words like “basketball”. To automatically identify the text spans to highlight, we fine-tuned RoBERTa to classify whether a sample comes from Corpus A and Corpus B, used the SHAP library to calculate how much each text span influences the classifier’s decision, and highlighted the text spans based on the influence. A screenshot of the annotation interface can be seen in Figure 6. 
Preliminary Results. We performed initial experiments on text clusters formed on the wikitext-2 dataset (Merity et al., 2016). We asked the authors to write hypotheses for 30-50 samples and then compared the results with GPT-3-generated hypotheses. We found that human annotators were able to write 2-4 valid hypotheses per pair of text groups, while GPT-3 text-davinci-003 was able to generate 4-6. Of the valid generated hypotheses, approximately a third were variations on another valid hypothesis. Humans were able to write a hypothesis that GPT-3 could not generate in around a third of the cases, while GPT-3 was able to generate a novel hypothesis that humans had not thought of in nearly every text corpus. Given that GPT-3 is close to our authors’ ability to write hypotheses, we estimated that we would not be able to fine-tune T5 to propose better hypotheses with human annotations, and hence gave up on this research direction.

Figure 6: A detailed screenshot of our annotation interface. (Panels: 1) write hypotheses; 2) select the most representative samples for a written hypothesis; 3) commit hypotheses and show SHAP highlights (SHAP highlight view); 4) write any additional hypotheses after seeing the highlights.)

23 Datasets

Many of our datasets come from the following sources: the Computational Models of Social Meaning class from Columbia University (http://www1.cs.columbia.edu/~smara/teaching/S18/), the ACL Anthology (https://aclanthology.org), and Kaggle datasets with an NLP tag (https://www.kaggle.com).

abc-headlines. We collect headlines published by ABC News, an American news company, from Kulkarni (2018). ABC headlines are directly downloaded from Harvard Dataverse. The year is extracted from the publication date field. Samples are constructed from the headline text. The data is downloadable from https://doi.org/10.7910/DVN/SYBGZL with license CC0 1.0.

ad-transcripts. We collect ad scripts from a variety of industries from Hartman (2019). Ad transcripts are directly downloaded from Kaggle. The top eight industries by frequency are selected. Newlines are replaced with spaces. The dataset is downloadable from https://www.kaggle.com/datasets/kevinhartman0/advertisement-transcripts-from-various-industries with license CC0 Public Domain.

admin-statements. We collect statements of administration policy from American presidents from Progress (2022). Administration statements are extracted from a collection hosted on GitHub. Extraneous symbols are removed and samples are split by paragraph. The dataset is downloadable from https://github.com/unitedstates/statements-of-administration-policy#statements-of-administration-policy and the original files have a Creative Commons Attribution 3.0 License.

ai2-natural-instruction. We collect a learning-from-instructions dataset released by the Allen Institute for AI from Mishra et al. (2022). Natural instruction tasks are directly downloaded without modification. The dataset is released under an Apache-2.0 license.

airline-reviews. We collect reviews of airlines from the review website Skytrax. Airline reviews for airlines, airports, and seats are downloaded from a public GitHub repository. Names of aircraft, airlines, countries, and traveler types are standardized. Ratings of 1, 4, or 5 on a scale of 5, and 1, 5, 8, or 10 on a scale of 10 are kept. This dataset can be downloaded via https://github.com/quankiquanki/skytrax-reviews-dataset.
aita. We collect posts on the “Am I the Asshole” subreddit, an online forum where people ask others whether they were in the wrong, from O’Brien (2020). Posts from r/AmITheAsshole are downloaded from a PRAW scrape of Reddit. Topic areas are chosen based on common themes in posts and coarsely defined based on manual keywords. Each post can belong to multiple topic areas. The dataset can be downloaded at https://doi.org/10.5281/zenodo.3677563.

all-the-news. We collect news articles from various outlets between 2015 and 2017 from Thompson (2019). News articles are downloaded directly from the Components website. The titles are used as text samples. The dataset can be downloaded at https://components.one/datasets/all-the-news-articles-dataset.

amazon-reviews. We collect Amazon reviews from various product categories from Ni et al. (2019). Amazon reviews are downloaded from a 2018 crawl of the website. The first 100,000 review texts are treated as the text samples. The dataset can be downloaded at https://nijianmo.github.io/amazon/index.html.

armenian-jobs. We collect job postings in Armenia from Udacity (2017). The Armenian job postings dataset is downloaded from a snapshot on GitHub. Different IT jobs are manually coded, and time intervals are defined in order to balance sample availability. The dataset can be downloaded at https://www.kaggle.com/datasets/udacity/armenian-online-job-postings.

boolq. We collect a reading comprehension dataset of yes/no questions from Clark et al. (2019). Boolean questions are downloaded directly as is. The dataset can be downloaded at https://github.com/google-research-datasets/boolean-questions with license CC-SA-3.0.

clickbait-headlines. We collect headlines across time from the Examiner, a clickbait news site, from Kulkarni (2020a). The Examiner headlines are directly downloaded from Kaggle. The year is extracted from the publication date field. Samples are constructed from the headline text. The dataset can be downloaded at https://www.kaggle.com/datasets/therohk/examine-the-examiner, with license CC0 Public Domain.

convincing-arguments. We collect arguments on a variety of topics annotated for convincingness from Habernal & Gurevych (2016). Annotated arguments are downloaded from the GitHub repository. Arguments are sorted by rank. The bottom 400 are treated as “unconvincing”, the top 200 are treated as “convincing”, and the next 200 are treated as “somewhat convincing”. The dataset can be downloaded at https://github.com/UKPLab/acl2016-convincing-arguments, with license CC-BY 4.0.

craigslist-negotiations. We collect dialogue from Craigslist negotiations, an online seller platform, from He et al. (2018). Craigslist negotiations are downloaded from Huggingface. Sequences which contained a “quit” or “reject” intention are categorized as failures; those which contained an “accept” intention are categorized as successes. The mid-price is defined as the mean price of the items sold. Within each category, the items are sorted by mid-price. The top half is treated as high-price and the bottom half as low-price. This dataset can be downloaded at https://huggingface.co/datasets/craigslist_bargains.

debate. We collect evidence compiled for American competitive policy debate, published online by debate camps, from Roush & Balaji (2020). The train split is downloaded from Huggingface. For each sample, we use the abstract as the text.
Arguments are categorized by type, debate camp of origin, and topic/specific argument. For topics, we use domain knowledge to list relevant keywords for each topic and include any sample whose file name includes any keyword. A single sample can belong to multiple topics. This dataset can be downloaded at https://huggingface.co/datasets/Hellisotherpeople/DebateSum with MIT license.

dice-jobs. We collect American technology job postings on dice.com from PromptCloud (2017). Job postings are downloaded from Kaggle. Posts from the six most popular companies are categorized by company. We remove miscellaneous characters and blank descriptions. We additionally apply our splitting procedure to reduce description length. This dataset can be downloaded at https://www.kaggle.com/datasets/PromptCloudHQ/us-technology-jobs-on-dicecom under CC BY-SA 4.0.

diplomacy-deception. We collect dialogue from games of Diplomacy, which involve deception, from Peskov et al. (2020). Diplomacy dialogues are downloaded from GitHub (all splits). The data are ASCII-encoded and newlines are removed. Each message and label is treated as a sample. This dataset can be downloaded at https://huggingface.co/datasets/diplomacy_detection under an unknown license.

echr-decisions. We collect facts of cases heard before the European Court of Human Rights from Chalkidis et al. (2019). Decisions are downloaded from a public archive. A random sample of 500 decisions is selected from the files. Samples with any violated articles are categorized as “violation”, while the rest are categorized as “no violation”. This dataset can be downloaded at https://paperswithcode.com/dataset/echr under an unknown license.

essay-scoring. We collect essays from students from ess (2012). Essays are downloaded from a GitHub repository. Only essays from set 5 are considered. Essays with a score of at least 3 are categorized as good essays, while essays with a score of less than 3 are categorized as bad essays. This dataset can be downloaded at https://www.kaggle.com/c/asap-aes under an unknown license.

fake-news. We collect fake and legitimate news from Pérez-Rosas et al. (2017). Fake news articles are downloaded from the authors’ website. Full articles are treated as text snippets. This dataset can be downloaded at http://web.eecs.umich.edu/~mihalcea/downloads.html#FakeNews under CC-BY-4.0.

fomc-speeches. We collect Federal Open Market Committee (FOMC) speeches from 1996-2020, which describe Federal Reserve policy, from Mish (2020). Fed speeches are downloaded from Kaggle. The macro indicator data are merged in by year and month. Full speech text is split by paragraph and categorized by speaker, year, and macroeconomic indicator. This dataset can be downloaded at https://www.kaggle.com/datasets/natanm/federal-reserve-governors-speeches-1996-2020 under an unknown license.

genius-lyrics. We collect lyrics from Genius.com before 2020 from Lim & Benson (2021). Genius lyrics are downloaded from Google Drive. The lyrics are merged with song metadata and treated as samples. We categorize lyrics by hand-selecting popular artists, common genres, time periods, and view counts (over 1M views is high, 500k-1M is medium). This dataset can be downloaded at https://www.cs.cornell.edu/~arb/data/genius-expertise/ under an unknown license.

happy-moments. We collect self-reported happy moments and demographic characteristics from Asai et al. (2018). The HappyDB dataset is downloaded from the official GitHub repository. Demographic data is cleaned and merged into the happy moments. Happy moment descriptions are treated as samples and are categorized by type of happy moment, country of origin, and other demographic features. This dataset can be downloaded at https://github.com/megagonlabs/HappyDB under an unknown license.

huff-post-headlines. We collect headlines from the news outlet Huffington Post from Misra & Arora (2019) and Misra & Grover (2021). Huffington Post headlines are downloaded from Kaggle. The
Demographic data is cleaned and merged into happy moments. Happy moment descriptions are treated as samples and are categorized by type of happy moment, country of origin, and other demographic features. This dataset can be downloaded at https://github.com/megagonlabs/HappyDB under unknown license. huff-post-headlines. We collect headlines from the news outlet Huffington Post from Misra & Arora (2019) and Misra & Grover (2021). Huffington Post headlines are downloaded from Kaggle. The 30 short description of each article is treated as a sample and tokenized at the sentence level. This dataset can be downloaded at https://rishabhmisra.github.io/publications/ under CC-BY-4.0. immigration-speeches. We collect congressional and presidential speeches that mention immigration from 1880 to the present from Card et al. (2022). Immigration speeches are downloaded from the replication package. The speech text is preprocessed to remove extraneous spaces. We engineer features corresponding to time periods, well-known speakers, other significant time periods, the racial group under discussion, and the geographic area within the United States. This dataset can be downloaded at https://github.com/dallascard/us-immigration-speeches/releases. kickstarter. We collect names of startups on kickstarter.com from Mouillé (2017). We down- load a 2018 crawl from Kickstarter from Kaggle. The project name is treated as the text sample. This dataset can be downloaded at https://www.kaggle.com/datasets/kemical/ kickstarter-projects?select=ks-projects-201612.csv under CC BY-NC-SA 4.0. microedit-humor. We collect funny sentences generated by making one-word edits to normal statements from Hossain et al. (2019). The Microedit dataset is downloaded from the author’s website. We make the relevant edit to each text sample and treat the edited text sample as the data point. We bin the mean annotator grade into 4 and denote each as unfunny, neutral, funny, and very funny, respectively. This dataset can be downloaded at https://paperswithcode.com/dataset/ humicroedit. mnli. We collect a collection of sentence pairs annotated with textual entailment information from a range of genres from Williams et al. (2017). The MNLI corpus is downloaded from the official website. We treat the premise and hypothesis as text samples. This dataset can be downloaded from https://cims.nyu.edu/~sbowman/multinli/, most of which are under the OANC license. monster-jobs. We collect American job postings on monster.com. Jobs on Monster.com are down- loaded from Kaggle. Job descriptions are treated as samples and split at the paragraph and sentence level. We keep and categorize jobs from seventeen large cities. This dataset can be downloaded from https://www.kaggle.com/datasets/PromptCloudHQ/us-jobs-on-monstercom under CC BY-SA 4.0 . movie-tmdb. We collect movie plot summaries from TMDB from Kaggle (2018). TMDB movie overviews are downloaded from Kaggle. We keep only English movies and bin popularity by deciles. The top decile is considered “hits,” the 70-80th percentiles are considered “average,” and the 30-40th percentiles are considered “bad.” This dataset can be downloaded from https://www.kaggle. com/datasets/tmdb/tmdb-movie-metadata21. movie-wiki. We collect movie plot summaries collected from Wikipedia from Robischon (2019). Wikipedia movie summaries are downloaded from Kaggle. This dataset can be downloaded from https://www.kaggle.com/datasets/jrobischon/wikipedia-movie-plots under CC BY- SA 4.0. news-popularity. 
We collect news headlines posted on social media platforms from Moniz & Torgo (2018). Headlines are downloaded from a reproduction package. The headline and title text are cleaned, and the title is treated as the text sample. The 100 most positive and nega- tive or popular and unpopular articles on each topic are used as distributions. This dataset can be downloaded from https://archive.ics.uci.edu/ml/datasets/News+Popularity+in+ Multiple+Social+Media+Platforms. nli-benchmarks. We collect training examples from various natural language inference (NLI) datasets from Liu et al. (2022). NLI benchmarks are downloaded from a public collection on Google Drive. We examine the premise and hypothesis separately as samples. This dataset can be downloaded from https://github.com/alisawuffles/wanli. npt-conferences. We collect Non-Proliferation of Nuclear Weapons (NPT) conference transcripts from Barnum & Lo (2020). NPT conference notes are extracted from the accompanying replication package. Text is split by paragraph, and only paragraphs longer than 50 characters are preserved. Text is split into three time ranges: pre-2008, 2008-2012, and post-2012. This dataset can be downloaded from https://journals.sagepub.com/doi/full/10.1177/0022343320960523. open-deception. We collect arbitrary lies and truths from any domain generated by crowdworkers from Pérez-Rosas & Mihalcea (2015). Open domain lies are downloaded from the public dataset 31 and lie texts are split into lies and truths. This dataset can be downloaded from https://web.eecs. umich.edu/~mihalcea/downloads.html#OpenDeception. open-review. We collect submissions to ICLR, a machine learning conference from 2018 to 2021. Open review abstracts are accessed via the openreview API. We query for abstracts from the 2018- 2021 ICLR blind submissions. Abstracts are classified based on rating: >= 7 (“great”), 5-6 (“good”), and <= 4 (“bad”). This dataset can be downloaded from https://openreview.net/. parenting-subreddits. We collect posts from various parenting-related subreddits, which are text- based forums on the site Reddit from Gao et al. (2021). Posts from various subreddits are downloaded from the paper’s GitHub repository. We clean the text and split the posts according to the topic(s) each post is tagged with. This dataset can be downloaded from https://github.com/SALT-NLP/ Parenting_OnlineUsage. poetry. We collect poems from PoetryFoundation.com from Bramhecha (2019). Poems are downloaded from a 2019 scrape of the PoetryFoundation website from Kaggle. The text is cleaned and split according to subject tags and authorship. This dataset can be downloaded from https://www.kaggle.com/datasets/tgdivy/poetry-foundation-poems under GNU Affero General Public License. political-ads. We collect political ads observed by Facebook users from pol (2021). Ads are downloaded from the Ad Observer website, which maintains an aggregate of all collected ads. We extract targeting metadata from the targeting field and define splits according to age, gender, location, interests, time, and political lean. This dataset can be downloaded from https://adobserver. org/ad-database/. qqp. We collect questions from Quora.com from Quora (2017). rate-my-prof. We collect reviews of lecturers from RateMyProfessor.com from He (2020). We download a sample of RateMyProfessor.com reviews from an online repo. We clean the text and guess the gender of the reviewed lecturer from the first name using the gender-guesser package. 
Due to data availability, we consider only male and female names. To improve the quality of the classification, we remove any posts which use pronouns from the opposing sex (e.g. "him"). This dataset can be downloaded from https://data.mendeley.com/datasets/fvtfjyvw7d/2 under CC BY 4.0.

radiology-diagnosis. We collect impressions and medical histories of radiology patients from Pestian et al. (2007). Radiology diagnoses are downloaded from a GitHub copy of the original task dataset. We parse the metadata to retrieve the diagnostic code, decision type, impression, and patient history. Referencing the associated ICD codes, we convert codes to colloquial diagnoses (e.g. 786.2 denotes cough). We treat the histories and impressions as samples and split them according to diagnosis and level of consensus.

reddit-humor. We collect jokes posted on the Reddit forum r/Jokes, a message board for sharing jokes, from Weller & Seppi (2020). Jokes are downloaded from the dev and test splits of the dataset. We clean the text and split the dataset according to whether they are labeled as funny. This dataset can be downloaded from https://github.com/orionw/rJokesData under the Reddit License and Terms of Service; users must follow the Reddit User Agreement and Privacy Policy, as well as remove any posts if asked to by the original user.

reddit-stress. We collect stress-related posts on Reddit from Turcan & McKeown (2019). We split the post text based on which subreddit they are posted on (related to PTSD, anxiety, or stress generally). Reddit posts are downloaded from https://github.com/gillian850413/Insight_Stress_Analysis, and we recommend following the Reddit User Agreement and Privacy Policy, as well as removing any posts if asked to by the original user.

reuters-authorship. We collect articles from various Reuters authors from Liu (2011). The articles are split according to the author. Reuters articles are downloaded from the UCI repository https://archive.ics.uci.edu/ml/datasets/Reuter_50_50.

riddles. We generated several riddles. The 3000 most common English words are manually copied from a website. Words with between 5 and 8 characters are kept. We create two popular riddles. First, we split words based on whether they have a duplicate character. We exclude any words with multiple "doubles" or more than 2 of any character. Second, we split words based on whether they have the letter T.

scotus-cases. We collect facts from cases heard by the Supreme Court of the United States (SCOTUS) from Alali et al. (2021). Supreme Court cases are downloaded from a GitHub repository. We identify state/federal parties by manually defining keywords. We split based on the winning party, the identity of each party, and the type of decision. We then define several time periods and relevant political eras and split decisions accordingly. Finally, we split according to the ruling's policy area and how it changes over time. The dataset can be downloaded from https://paperswithcode.com/paper/justice-a-benchmark-dataset-for-supreme-court under CC-BY-SA.

short-answer-scoring. We collect short answers from students from sho (2013). Short answers are downloaded from a GitHub mirror of the dataset. We consider only responses to essay set 1. The two scores are averaged and binned into good (>= 2.5), medium (1.5-2.5), and bad (< 1.5). The dataset can be downloaded from https://www.kaggle.com/c/asap-sas.
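The riddles entry above is the one dataset in this list that is generated rather than collected, so a short sketch may help make it concrete. This is our reading of that procedure, not the authors' code; the word-list file name is a placeholder, and the exclusion rules are inferred from the description.

from collections import Counter

with open("common_words.txt") as f:                   # the 3000 most common English words
    words = [w.strip().lower() for w in f if 5 <= len(w.strip()) <= 8]

def has_one_double(word):
    # Exactly one letter appears exactly twice, and no letter more than twice.
    counts = Counter(word).values()
    return max(counts) == 2 and sum(1 for c in counts if c == 2) == 1

# Riddle 1: one duplicated character vs. all-distinct letters.
doubles    = [w for w in words if has_one_double(w)]
no_doubles = [w for w in words if max(Counter(w).values()) == 1]

# Riddle 2: words containing the letter T vs. words without it.
with_t    = [w for w in words if "t" in w]
without_t = [w for w in words if "t" not in w]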
snli. We collect a collection of sentence pairs annotated with textual entailment information from images from Bowman et al. (2015). The dataset can be downloaded from https://nlp.stanford.edu/projects/snli/ under CC BY-SA 4.0.

squad-v2. We collect reading comprehension questions crowdsourced from Wikipedia articles from Rajpurkar et al. (2018). The dataset can be downloaded from https://rajpurkar.github.io/SQuAD-explorer/ under CC BY-SA 4.0.

stock-news. We collect top news headlines on Reddit, an online message board, from Sun (2017). Headlines are downloaded from a GitHub mirror. We clean the text and divide the samples based on whether the DOW rose or fell that day. The dataset can be downloaded from https://github.com/ShravanChintha/Stock-Market-prediction-using-daily-news-headlines under the Reddit License and Terms of Service; users must follow the Reddit User Agreement and Privacy Policy, as well as remove any posts if asked to by the original user.

suicide-notes. We collect posts from r/SuicideWatch and r/depression, two forums on Reddit, from He (2021). The post title and body are combined to form the text samples. Samples are split based on whether they were posted in a suicide-related subreddit. The dataset can be downloaded from GitHub: https://github.com/hesamuel/goodbye_world, under the Reddit License and Terms of Service; users must follow the Reddit User Agreement and Privacy Policy, as well as remove any posts if asked to by the original user.

times-india-headlines. We collect headlines from Times of India news from Kulkarni (2022). Headlines are downloaded from a Dataverse mirror. We use the first 1000 headlines in each year as samples. The dataset can be downloaded from https://www.kaggle.com/datasets/therohk/india-headlines-news-dataset under CC0 Public Domain.

trial-deception. We collect testimonies from witnesses in real trials from Pérez-Rosas et al. (2015). Trial testimonies are downloaded from the author's website. The testimonies are divided based on whether they are considered truthful. The dataset can be downloaded from https://web.eecs.umich.edu/~mihalcea/downloads.html#RealLifeDeception.

un-debates. We collect speeches from debates at the United Nations from Baturo et al. (2017). Debate transcripts are downloaded from the Dataverse reproduction package. Samples are divided based on the country and year of the snippet. First, we isolate samples from Russia, China, and the United States and specify 3 time periods of interest. Next, we divide all samples by the decade. Finally, we create distributions for 19 countries of interest. The dataset can be downloaded from https://doi.org/10.7910/DVN/0TJX8Y under CC0 1.0.

unhealthy-conversations. We collect expert-annotated unhealthy conversations from Price et al. (2020). Conversation transcripts are downloaded from the official GitHub repository. For each annotated attribute, we split the dataset based on whether that form of unhealthy conversation is present in the sample. The dataset can be downloaded from https://github.com/conversationai/unhealthy-conversations under CC BY-NC-SA 4.0.

urban-dictionary. We collect definitions from UrbanDictionary.com, a crowdsourced English dictionary, from Kulkarni (2020b). Urban Dictionary entries are downloaded from Kaggle. Definitions are split into groups representing the top 1, 5, and 10 percent of definitions ranked by both upvotes and downvotes; we sample 10,000 from each and create a control distribution by randomly sampling 10,000 definitions from all entries. The dataset can be downloaded from https://www.kaggle.com/therohk/urban-dictionary-words-dataset under CC0 Public Domain.
wikitext. We collect text snippets from Wikipedia from Merity et al. (2016). The Wikipedia snippets are loaded from HuggingFace. We remove any samples that are empty or start with '=' (which represent headings); samples are tokenized at the sentence level and used for clustering. The dataset can be downloaded from https://huggingface.co/datasets/wikitext under CC BY-SA 3.0.

yc-startups. We collect descriptions of companies that were part of the Y Combinator startup incubator from Bhalotia (2022). YCombinator company descriptions are downloaded from a 2022 scrape on GitHub. Only companies with long descriptions are preserved. Companies are split according to founder characteristics, year, "top company" designation, operating status, and location. The dataset can be downloaded from https://www.kaggle.com/datasets/benhamner/y-combinator-companies.
ai_researcher
2
Design_Ideation_with_AI_-_Sketching_Thinking_and_Talking_with_Generative_Machine_Learning_Models.pdf
arXiv:2403.11164v1 [cs.HC] 17 Mar 2024

The Effects of Generative AI on Design Fixation and Divergent Thinking

Samangi Wadinambiarachchi, The University of Melbourne, Melbourne, Australia, [email protected]
Ryan M. Kelly, RMIT University, Melbourne, Australia, [email protected]
Saumya Pareek, The University of Melbourne, Melbourne, Australia, [email protected]
Qiushi Zhou, The University of Melbourne, Melbourne, Australia, [email protected]
Eduardo Velloso, The University of Melbourne, Melbourne, Australia, [email protected]

ABSTRACT
Generative AI systems have been heralded as tools for augmenting human creativity and inspiring divergent thinking, though with little empirical evidence for these claims. This paper explores the effects of exposure to AI-generated images on measures of design fixation and divergent thinking in a visual ideation task. Through a between-participants experiment (N=60), we found that support from an AI image generator during ideation leads to higher fixation on an initial example. Participants who used AI produced fewer ideas, with less variety and lower originality compared to a baseline. Our qualitative analysis suggests that the effectiveness of co-ideation with AI rests on participants' chosen approach to prompt creation and on the strategies used by participants to generate ideas in response to the AI's suggestions. We discuss opportunities for designing generative AI systems for ideation support and incorporating these AI tools into ideation workflows.

CCS CONCEPTS
• Human-centered computing → Empirical studies in HCI.

KEYWORDS
Design fixation, Generative-AI, Creativity support tools

ACM Reference Format: Samangi Wadinambiarachchi, Ryan M. Kelly, Saumya Pareek, Qiushi Zhou, and Eduardo Velloso. 2024. The Effects of Generative AI on Design Fixation and Divergent Thinking. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), May 11–16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA, 18 pages. https://doi.org/10.1145/3613904.3642919

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). CHI '24, May 11–16, 2024, Honolulu, HI, USA. © 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0330-0/24/05. https://doi.org/10.1145/3613904.3642919

1 INTRODUCTION
Consider a team of designers discussing ideas for environmentally friendly transport solutions for a city. One team member kicks off the discussion with a suggestion about electric buses. The rest of the team then spends an hour discussing variations on this idea, all involving electric vehicles, until an intern who arrived late asks "have you considered bicycles?". Until the intern's suggestion, the ideas were anchored on a salient characteristic of the first proposal—an electric motor. The design literature dubs this phenomenon design fixation—the "blind adherence to a set of ideas or concepts limiting the output of conceptual design" [30, p. 1]. This is a common experience in any creative task, from art to engineering, and happens when exposure to one idea anchors and biases subsequent ideas, restricting exploration of the design space.
Fixation happens both consciously and unconsciously, regardless of the level of experience of the practitioner [30, 76] and in all areas of creative work. The severe negative impact that design fixation has on the creative process makes it a key concern in design studies.

In the initial stages of the design process, it is common for designers to conduct precedence studies and create mood boards by compiling external stimuli as sources of inspiration to broaden their ideation space [41]. However, the exposure to previous solutions during this process can potentially be a source of design fixation. Previous studies have shown that exposure to examples of similar design solutions has mixed effects on creativity [75]. It tends to drive designers towards the example, narrowing the explored solution space [30, 38]. Further, variations in the modality [64, 68], the fidelity [13, 64], the quality [64], the diversity and novelty of the exposed stimuli, the time of exposure, and its proximity to the design problem [64] can vary the intensity of design fixation [63].

Recent developments in generative artificial intelligence (GenAI) have been heralded as the harbinger of a new paradigm of creative work, often under the guise of augmenting human creativity [22]. Publicly available AI image generators such as DALL·E (https://openai.com/dall-e-2), Artbreeder (https://www.artbreeder.com), Stable Diffusion (https://stablediffusionweb.com), and Midjourney (https://www.midjourney.com) have made it possible for designers to turn their thoughts into high-quality visuals quickly and at a low cost. The ability of these tools to generate "original" images based on user prompts potentially offers a rich source of inspiration. For example, Chiou et al. [14] have shown that when used in co-ideation tasks, AI can open up a broader conceptual space quickly and effortlessly, promoting divergent thinking [14].

However, there is still a lack of empirical evidence for the effect of generative AI as a source of inspiration during design tasks. Though the specific outputs generated by these tools are novel, they are trained on existing work, blurring the lines between what is original and derivative. Further, designers could still be fixated during the ideation process despite any potential inspiration from AI.

In this paper, we aim to understand the effects of AI-generated imagery as a source of inspiration in an ideation task. We conducted a between-participants experiment in which designers took part in a visual ideation task that involved sketching ideas for a chatbot avatar. We manipulated participants' access to sources of inspiration: none, access to Google Image Search, or access to Midjourney (an image generation AI tool). Through our study, we sought answers to the following questions:

• RQ 1: How does the exposure to AI-generated images affect design fixation and divergent thinking during ideation, compared to using commonly used sources of inspiration and no inspiration support?
• RQ 2: How do different ways of interacting with AI image generators impact participants' effectiveness in an ideation task?

We evaluated the effect of inspiration sources on participants' ideation output (the sketches). In doing so, we used four divergent thinking measures from prior literature (design fixation score, fluency, variety, and originality [30, 53, 62, 76]) to assess different facets of their creative output.
We found that exposure to AI- generated images induced higher design fixation in participants than in other conditions. Moreover, fluency, variety, and originality were lower in the AI-supported group compared to the baseline condition. Through our qualitative analysis, we suggest that fixa- tion arises when creating prompts and when ideating in response to AI images. In addition, we demonstrate that using AI can result in fixation displacement, where the focus of fixation shifts from an exemplar onto the AI’s outputs. Our study provides an empirical contribution to the AI-powered creativity support literature by illustrating how AI-generated im- ages influence design fixation and divergent thinking measures. It further elaborates on AI’s role in providing inspiration during visual design tasks. Further, we demonstrate the importance of focusing on factors that might induce design fixation while acquir- ing inspiration from AI tools and propose potential strategies and directions to explore in mitigating design fixation. 2 RELATED WORK Our research builds on studies of design fixation and on the role AI can play in supporting design ideation. 2.1 Design Fixation Among the factors that hinder designers’ creativity, “design fixation” is one of the most well-studied phenomena in creativity and design research. It is identified as the unconscious adherence to a set of pre- known ideas or knowledge that restricts the ideation space [30, 76]. When a person experiences design fixation, they tend to adhere to pre-conceived ideas and concepts, limiting exploration of the design space during ideation [30, 40, 47]. Design fixation narrows designers’ ability to explore the creative space between abstract ideas and potential solutions [30, 64]. Previous findings show that this is reflected heavily in their design outcomes and restrains designers from maximising their creative potential, resulting in unoriginal outputs [30]. Design fixation has been studied extensively across different fields [61], including cognitive science [9], design [5, 34], educa- tion [28], mechanical engineering [67, 74], and psychology [4, 57]. These studies have collectively shown that design fixation is more likely to occur when designers are exposed to example solutions for design tasks [30]. It has also been demonstrated that the modality, degree of abstraction of the inspiration (i.e. the fidelity), and the designer’s level of expertise [1] can affect fixation intensity when exposed to external stimuli. Fixation has typically been studied through quantitative experi- mental approaches in which participants are asked to solve a design problem, either with or without an example (external stimuli) [64]. For instance, Jansson and Smith’s classic design fixation work [30] reported four experiments. These experiments divided participants into two groups: a treatment group (fixation group), who were given a design problem along with an example solution, and a con- trol group, who were given the same problem with no examples to work from. They hypothesised that showing an example design would restrict the ideas of the treatment group because it would make the participants fixate on the given example. Jansson and Smith [30] found that even though both groups produced a similar number of designs, ideas in the fixation groups were more similar to the example. 
In a subsequent experiment, the researchers found that the flexibility and originality of the designs were limited in the fixation group and concluded that creative performance may be inhibited when an example induces design fixation. Since then, several studies have been conducted replicating or amending the method and examples [38, 64].

When looking at design fixation, it is important to distinguish different types of fixation effects [18]. Youmans and Arciszewski [76] identify three such effects. The first is unconscious adherence [76] to past designs without realizing. An example of this is copying the features of an example (even if the features are inappropriate to the task) [18, 30]. The second is conscious blocking [76], where new ideas are actively but perhaps momentarily dismissed. In this situation, a designer is aware of alternative creative paths but chooses to disregard them, perhaps due to a commitment to a current project's direction or a bias towards familiar solutions. The third type of fixation effect is intentional resistance, a deliberate decision against exploring new concepts. For instance, design companies engaged in research and development often prefer to explore solutions that fall within their well-established expertise, a tendency known as local search bias [18, 50].

Apart from trying to understand its causes, researchers have explored various strategies to overcome design fixation [64]. Such strategies include incorporating physical prototyping in the ideation activities [67], triggering frequent reminders for participants to consider all available options in a timely manner during an ideation task [42, 76], utilising design thinking and lateral thinking methods [6] such as de Bono's six thinking hats [2, 20], having short breaks or "incubation periods" during the task [58, 73], using computer-aided design and intelligent agents [19, 27], and incorporating design by analogy [12].

Even though there is a large body of work on design fixation exploring the effects of external stimuli on creative tasks [1], studies centred around design fixation are limited within the field of HCI. Among these few studies, HCI researchers have started to examine the potential of using AI image generators as tools for supporting creativity [27, 39]. Thus, in this study, we adapt experimental methods from mechanical engineering and design research, where design fixation is framed as unconscious adherence and is measured by the degree to which participants directly copy features from an example stimulus. We aim to understand the influence of AI image generators on design fixation and divergent thinking, adding new empirical evidence to the HCI literature.

2.2 The Emergent Role of AI in Creativity Support
Since the early 1990s, designers have envisioned a future with intelligent design and creative aids [24]. With recent advances in Generative AI, this vision is becoming a reality. Generative AI systems can create new, plausible media [49] to aid individuals in creative tasks [31]. Generative AI models are trained on large data sets and can enable people to generate content such as images, text, audio, or video quickly and easily [35]. Currently, Generative AI tools enable users to create diverse artefacts by providing instructions in natural language called "prompts". Generative AI systems can also synthesise diverse concepts and generate unpredictable ideas.
In the case of AI image generators — the focus of our study — the output comes from the latent space of a deep learning model, arising from an iterated diffusion process that involves the model ar- ranging pixels into a composition that makes sense to humans [66]. Because of process randomness, different results can be obtained based on the same prompt, with entirely new images each time. This differs from conventional image searches, where the search is performed by entering a query into a database to retrieve images that the search engine considers relevant. Another difference is that whereas long and specific queries might be too restrictive for an image search engine, they can benefit AI image generators. Previous works have explored the roles that generative AI can play in the creative process [29]. For instance, AI can generate content entirely by itself with instructions from the user, or it can act as a creativity support tool, augmenting the user’s creativity [43]. AI text generators can be used as a tool to define specific problems to solve and promote convergent and divergent thinking [72] and have the potential to be used as a co-creative assistant for a designer [19, 54]. Professionals in creative industries claim that AI could be a promising tool to gather inspiration [3]. With the growing interest in AI, HCI researchers have also started to explore ways of using AI as a creativity support tool [16, 31]. Among these explorations, a growing stream of literature fo- cuses on using generative AI to access inspiration and mitigate design fixation. Researchers speculate that generative AI will be- come a potential solution for inspiring designers [37, 51, 55] due to the ability of AI generators to create abstract and diverse stim- uli [32]. One of the early examples in HCI for incorporating AI to mitigate design fixation was the Creative Sketching Partner (CSP) [19, 32], an AI-based creative assistant that generates inspiration for cre- ative tasks. Through multiple studies, Davis et al. [19] suggest that the CSP helped participants in ideation and in overcoming design fixation. Hoggenmueller et al. have also explored how gen- erative text-to-image tools can support overcoming design fixation experienced in the field of Human-Robot Interaction [27]. They conducted a first-person design exploration and reflection using “CreativeAI Postcards” inspired by Lupi and Posavec’s “Dear Data book” method to ideate and visualize robotic artefacts. They noted that AI-generated images have the potential to inspire new robot aesthetics and functionality and also claimed that the designer’s AI-co-creativity can help to eliminate biases and expand limited imagination. In a different case, Lewis [39] reflects that a digital as- sistance tool like “ChatGPT” helped her by acting as an art teacher and providing instructions. Lewis points out that it is challeng- ing to distinguish between inspiration and copying when utilizing generative AI and reflects on concerns such as “transparency of attribution”, “ethical considerations”, and the clarity of the “creation process”. Rafner et al. [48] conducted an in-the-wild study to exam- ine the effects of AI-assisted image generation on creative problem- solving tasks, aiming to investigate the effects of generative AI on problem identification and problem construction. They developed a human-AI co-creative technology that combines a GAN and sta- ble diffusion model to support AI-assisted image generation. 
They found that this intervention enabled participants to facilitate idea expansion and prompt engineering, suggesting that AI can “aid users in generating new ideas and refining their initial problem representations” [48]. As the domain of AI-powered creativity support is still in its infancy, the available literature provides only a nascent understand- ing of the effect of AI on creativity and design fixation. Our work extends the literature by using established techniques from design fixation research to better understand how AI image generators affect design fixation during a visual design task. 3 METHOD We conducted a between-participants experiment to understand how AI-generated imagery affects designers’ divergent thinking during visual ideation after being exposed to an example design. We compared this scenario to the use of online image search and to no inspiration support. The independent variable was the Inspiration Stimulus: none (Baseline), Google Image Search (Image search), or Generative AI (GenAI). The dependent variables were the Design Fixation score (the number of features in each sketch in common with the example), Fluency (the number of sketches produced), Variety (the number of different types of sketches produced), and Originality (how infrequently other participants devised the same type of sketch). We conducted the experiment in a controlled labo- ratory setting following a mixed-method approach. All participants gave informed written consent to participate after reading a plain language statement describing the procedure. The study received ethics approval from our university. CHI ’24, May 11–16, 2024, Honolulu, HI, USA Wadinambiarachchi et al. Figure 1: The example with the 14 salient features we monitored. Note: The example was given to the participants without the callouts. 3.1 Study Design and Materials The experimental task consisted of a visual ideation activity in which participants were asked to devise as many ideas as possible for a new chatbot avatar by sketching them on paper. The written design brief given to participants was: “Your task is to design a character we plan to use as an avatar for a chatbot. This chatbot is kind, loving, caring, and intelligent. It can assist you in solving your problems and is always there for you to talk to whenever you need to. So, imagine that you are conversing with this chatbot in real life and then come up with as many sketches as possible. Remember, you can annotate the sketch if you need to explain more about your design. And please always number each sketch you draw in the order you come up with them.” This written design brief included an example of an avatar with the figure caption "Example chatbot avatar (for reference only)". The example avatar is shown in Figure 1. Further, we provided verbal instructions for the participants, asking them to produce as many different ideas as they could during the experiment. For participants in the Image search and Gen AI conditions, we additionally informed them that they could use the digital tool (either Google Image Search or Midjourney, depending on the condition) to gather inspiration for their work. The full study protocol can be found in supplementary material. Similar to previous work [30], we started the task by showing participants an example avatar to induce design fixation. We drew inspiration from Ward’s creature invention task [36, 70], which asked participants to imagine and create animals that lived on a different planet. 
The authors of this paper created the example chatbot avatar after several design iterations. We created the avatar so that it had 14 salient features, which we used to quantitatively assess design fixation (see Figure 1). We considered the presence of these features in participants' ideas to be evidence of design fixation, following standard practice in the literature [30].

In the experimental task, participants were given 20 minutes to sketch their ideas for addressing the brief. We chose this time limit because it is the median time given to participants in previous design fixation studies [64] and because we aimed to cap each experimental session at one hour to avoid fatigue. We provided participants with pencils, pens, felt pens, and coloured pencils, along with blank A4 sheets to sketch their ideas. A timer was placed outside their peripheral view for them to keep track of time.

The experiment included a single between-participants independent variable—the Inspiration Stimulus available during the task—with three levels:

• Baseline: no inspiration support.
• Image Search: Participants had access to Google Images (https://images.google.com/) during the task, accessed through a web browser in incognito mode to avoid the browser history influencing results.
• GenAI: Participants had access to the paid version of Midjourney V4, an AI image generation tool, through a private Discord server running the Midjourney bot (which was required to enter prompts and view outputs from the model). Midjourney V4 (https://www.midjourney.com) was the default model when our study was conducted (May 2023). Participants interacted with Midjourney through textual prompts that the model used to generate sets of four images per prompt.

[Figure 1 callouts: Antenna, Ears, Microphone, Gripper, Eyes, Square-shaped face, Mouth, Neck, Arm, Display heart, Square-shaped body, Legs, Feet]

Figure 2: Examples of sketches created by participants in each experimental condition. (A) No support condition, (B) Image search condition, (C) GenAI condition.

We assessed participants' creative output using four standard measures from the design fixation literature: design fixation, fluency, variety, and originality, which we describe as follows:

Design fixation is the unintentional conformity towards existing ideas or concepts that limits exploration of the ideation space [30, 76]. Researchers use the degree of copying as a method to quantify design fixation [30, 45]. Therefore, we operationalise design fixation as an objective property of each sketch based on the presence or absence of features available in the example. Following the approach used in design fixation literature [45], two raters blind to the experiment's aims counted the presence of features from the example avatar in the sketches created by the participants. We validated the ratings by computing the inter-rater reliability and computed the design fixation score (DFS) as follows:

Design fixation score = (Number of features repeated from the example) / (Number of fixating features in the example)    (1)

Fluency refers to the number of ideas produced by the participants [25, 62]. We operationalise it by counting the number of sketches produced by each participant within the available time (20 minutes).

Fluency = Number of sketches produced by the participant    (2)

Variety measures the coverage of the solution space explored during the idea-generation process [53]. It aims to capture the extent of the design space covered during ideation. If the majority of ideas are similar, it indicates less variety. To compute variety, we assigned a numerical identifier to all the sketches (N=277), imported them into Miro (https://miro.com), an online collaborative whiteboard, and displayed them in randomised order. Two raters (blind to the conditions) iteratively and inductively grouped similar sketches into mutually exclusive clusters. This activity considered several factors: appearance, embodiment, appendages, shape, and accessories. The process resulted in 83 clusters. Each participant received a Variety score based on the number of clusters their sketches were classified into. We subtract 1 from the number of clusters so that if all of a participant's sketches belong to the same cluster, their score is 0, and if they have sketches in every cluster, their score is 1.
It aims to capture the ex- tent of the design space covered during ideation. If the majority of ideas are similar, it indicates less variety. To compute variety, we assigned a numerical identifier to all the sketches (N=277), im- ported them into a Miro7 (an online collaborative whiteboard), and displayed them in randomised order. Two raters (blind to the con- ditions) iteratively and inductively grouped similar sketches into mutually exclusive clusters. This activity considered several factors: appearance, embodiment, appendages, shape, and accessories. The process resulted in 83 clusters. Each participant received a Variety score based on the number of clusters their sketches were classified into. We subtract 1 from the number of clusters so that if all of a 7miro.com participant’s sketches belong to the same cluster, their score is 0, and if they have sketches in every cluster, their score is 1. Variety = Number of clusters that a participant’s sketches belong to - 1 Number of clusters - 1 (3) Originality (also called Novelty [23, 53]) refers to the unique- ness of a particular sketch within the total pool of sketches made by participants [25, 30]. It measures how unusual and unexpected a given idea is. Intuitively, the more people have the same idea, the less original it is. We computed an idea’s originality by counting the number of other participants who had an idea belonging to the same cluster, dividing it by the total number of other participants, and computing its complement to 1 (to normalise the value between 0 and 1). In other words, it is the proportion of other participants who did not have the same idea. This score is 0 when every participant had an idea in the same cluster and 1 if only a single participant had an idea in that cluster. Originality = 1 − Number of other participants with ideas in the cluster Number of other participants (4) 3.2 Participants We recruited 60 participants through digital student notice boards, mailing lists of university student clubs, and word of mouth. Par- ticipants expressed their interest through a digital signup form. Participants self-described their prior experience in visual design (measured in years/months). We did not specify this experience should only be professional design experience. We screened partic- ipants based on our eligibility criteria and invited those who were 18 years or older and had experience in visual design via email. Fur- ther, to avoid dependent relationships, we ensured that none of the participants had a direct connection with the primary researcher running the study. Participants had a mean age of 25.8 years (18–49, SD = 5.4). They included undergraduate, master’s and PhD students from diverse domains such as arts, business, computer science & IT, design, engineering, and science. Each condition had an equal num- ber of participants and was gender-balanced, with 10 women and 10 men per condition (gender was self-described by participants). (C) GenAI (B) Image search (A) No Support CHI ’24, May 11–16, 2024, Honolulu, HI, USA Wadinambiarachchi et al. Figure 3: The overall experiment flow 1: Initial briefing and participant consent, 2: Pre-study questionnaire, 3-7: Main experi- mental task, 8: Post-study questionnaire, 9: Semi-structured interview and debriefing 3.3 Procedure Participants booked a time to participate individually based on their availability. The study was carried out in a quiet research labora- tory. 
Upon arrival, participants read a plain language statement describing the study and consented to participate (Figure 3-1). The experiment had four stages: pre-study questionnaire, main experimental task, post-study questionnaire, and semi-structured interview. Each session lasted 45–60 min in total. In the pre-study questionnaire, we collected participants’ basic demographic in- formation, their experience with similar design tasks (measured in years/months), and their familiarity with AI image generators (a yes/no question, and participants were asked to list any systems they had used if they answered yes). (Figure 3-2). The main objec- tive of this questionnaire was to understand and control for any variables that might confound the results. After completing the questionnaire, the participants were randomly assigned to one of the three conditions and were assigned a unique ID generated by the computer (3 random digits) (Figure 3-3). In the main experimental task (Figure 3, steps 3-7), partici- pants in all conditions received the same design brief, which asked them to design an avatar for a chatbot in 20 minutes, as described in Section 3.1. We started by allowing participants to familiarise themselves with the available materials. Then, the participants as- signed to the image search and AI-supported groups received an introduction to the tool they would use during the design task (Fig- ure 3-5). These tools were available for them to use on an Apple MacBook M1 Pro laptop. The tool introduction included a video tutorial created by the research team. This video tutorial explained how to use the tool. After the video tutorial, we allowed participants to ask questions and clarify any doubts. We provided task instructions to participants both verbally and as a written brief. The written brief included an example of a chatbot avatar, which served as a stimulus to induce design fixation (Figure 3-6). Participants were given 20 minutes to complete the design task (Figure 3-7). We limited the design task to 20 minutes to minimise the possibility of fatigue and because previous studies considered it an ideal duration for maintaining focus for producing ideas with both quality and quantity [64, 74]. Once participants indicated they were ready to start, the researcher started the screen recording with participants’ consent (in Image Search and GenAI conditions), switched on the timer and left them alone to work in the room, allowing them to work independently. After the design task, the researcher entered the room and asked the participant to fill in the post-study questionnaire (Figure 3-8). As the post-study questionnaire, we administered the NASA- TLX [26] to ensure that all conditions induced an equivalent work- load. To analyze the NASA-TLX, we used a one-way ANOVA; the effect of the independent variable "condition" on the NASA-TLX overall score was not statistically significant (F(2, 57) = 1, p = 0.37). Therefore, we did not conduct post-hoc tests. Then, the researcher conducted a semi-structured interview. Each semi-structured interview lasted 15–20 min. Through the semi- structured interview, we aimed to get insights into the participant’s background and their past experience in creating logos and avatars. We also probed for possible feelings of design fixation during the experiment and how it was affected by their previous knowledge, experience and process. In addition, we asked questions to un- derstand how the stimuli (or lack thereof) affected their ideation process. 
To conclude the study, we debriefed the participants about the purpose of the research. We thanked each participant with a $20 gift voucher.

Figure 4: An example of a visual sequence board. (A): Participant information and metadata, (B): AI image generation sequence, (B1): Image generation number, (C): AI-generated images in the order 1-2-3-4, (D): Prompt used for each generation, (E): Participant sketch sequence.

3.4 Data Preparation
We scanned all the sketches participants created and assigned them a unique identifier. Two independent evaluators rated the sketches to compute the design fixation score, variety, and originality measures. These evaluators were researchers from the human-computer interaction domain with experience in teaching and evaluating design. We extracted all prompts and images from the Midjourney gallery where the logs were saved (not visible to the participants), compiled the sketches and arranged them in the sequence in which they were created as a visual sequence board. Underneath the AI-generated images, we added the sketches of the participants (Figure 4).

3.5 Data Analysis
We used a mixed-method approach for our analysis. For quantitative analysis of design fixation and divergent thinking, we built Bayesian statistical models to quantify relationships between our dependent and independent variables (see subsection 4.1). We employ Bayesian statistical methods to analyze our results, opting for this approach due to its added flexibility, capability to quantify uncertainty, better handling of small samples, and greater potential for future extensibility. For a comprehensive rationale advocating the use of Bayesian methods over traditional frequentist statistics in the field of Human-Computer Interaction (HCI), see Kay et al. [33]. Readers who may not be familiar with these methods can find a beginner-friendly introduction in McElreath [44] and can see examples of their practical application in HCI in Schmettow [52]. In this manner, we shift the focus away from p-values and dichotomous significance testing, directing our discussion towards causal modelling and parameter estimation.

For qualitative analysis of participants' interview data, we used Braun and Clarke's 6-phase approach to reflexive thematic analysis [7, 60]. The analysis was inductive, i.e. data-driven, based on transcripts of the interviews. Each phase of the analysis was progressed using NVivo12 (https://lumivero.com/products/nvivo) for coding procedures, theme development and naming. The analysis aimed to understand potential causes of design fixation during the experiment and participants' approaches to creating sketches in each condition. In this paper's findings, we use interview quotes to illustrate participants' approaches to prompt creation and their stated approaches to ideation based on AI images. This enables us to probe plausible explanations for observed differences between experimental conditions and explore why particular kinds of sketches were created in response to AI-generated images.
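Before turning to the results, it may help to see the four measures from Section 3.1 in executable form. The sketch below is a minimal illustration under our own assumptions (field names are invented, and it presumes the rater-assigned cluster IDs and copied-feature counts are already available); it is not the authors' analysis code.

from dataclasses import dataclass

N_FEATURES = 14        # salient features in the example avatar
N_CLUSTERS = 83        # clusters produced by the raters
N_PARTICIPANTS = 60

@dataclass
class Sketch:
    participant: str
    cluster: int            # inductive cluster assigned by the two raters
    copied_features: int    # 0..14, features copied from the example

def design_fixation_score(s):                            # Equation (1)
    return s.copied_features / N_FEATURES

def fluency(sketches, participant):                      # Equation (2)
    return sum(1 for s in sketches if s.participant == participant)

def variety(sketches, participant):                      # Equation (3)
    clusters = {s.cluster for s in sketches if s.participant == participant}
    return (len(clusters) - 1) / (N_CLUSTERS - 1)

def originality(sketches, s):                            # Equation (4)
    others = {x.participant for x in sketches
              if x.cluster == s.cluster and x.participant != s.participant}
    return 1 - len(others) / (N_PARTICIPANTS - 1)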
4 RESULTS

4.1 Statistical Analysis

Figure 5: Theorised causal directed acyclic graph. [Nodes: Inspiration Stimulus, Time on Task, Design Fixation, Fluency, Variety, Originality]

We summarise our theoretical claims as a directed acyclical graph (DAG) in Figure 5. We argue that the Inspiration Stimulus affects users' Design fixation, Fluency, Variety, and Originality. The choice of inspiration stimulus affects how much time is spent on the sketching task (as opposed to seeking inspiration), which, in turn, affects the number of sketches produced (fluency). Higher fluency is also likely to lead to higher variety—as producing more sketches also increases the likelihood that they will cover more ground during ideation. A greater variety of sketches, in turn, will likely lead to more original ideas.

We used the brms package [8] to fit our models. This package facilitates the implementation of Bayesian multilevel models in R, leveraging the Stan probabilistic programming language [11]. To ensure the reliability of our Bayesian Markov Chain Monte Carlo (MCMC) sampling process, we assessed convergence and stability through two metrics: R-hat, which ideally should be less than 1.01 [65], and the Effective Sample Size (ESS), which should ideally exceed 1000 [8]. All of our model estimates met these criteria. We built our models based on the original count data in the direct measurements but report normalised values as described in Section 3.1 in our plots for easier comparisons with future work.

In our reporting of model results, we present the posterior means of parameter estimates, their corresponding standard deviations, and the boundaries of the 89% compatibility interval, often referred to as the credible interval. The choice of an 89% compatibility interval aligns with the recommendation by McElreath [44] to mitigate potential confusion with the frequentist 95% confidence interval, as the two intervals have distinct interpretations. The compatibility interval specifies the range of values within which there is an 89% probability that the true value lies. We report hypothesis test results using Bayes Factors, which compare the likelihood of the observed data under the proposed model over the null. We interpret these values following Wagenmakers et al. [69], considering values above one as supporting a given hypothesis, values under 3 offering anecdotal evidence; under 10, substantial evidence; under 30, strong evidence; under 100, very strong evidence; and above 100, extreme evidence. We note that p-values are not used in Bayesian statistics, and no claims about "statistical significance" should be derived from our results.

4.2 Design Fixation

To model Design Fixation, we consider the number of salient features in participants' sketches also found in the example avatar provided at the beginning of the experiment. We model this data as a binomial distribution with N = 14 (the maximum number of features) and a probit link. We use weakly informative, regularising priors for the model parameters (drawn from a normal distribution with mean zero and standard deviation of 2). We model the random effects of participants and images as being drawn from a normal distribution with mean zero and standard deviation computed from the data through partial pooling.
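The paper fits this model with brms/Stan in R. As a rough, non-authoritative illustration of the same structure, the binomial-probit model with partially pooled participant and image intercepts could be sketched in Python with PyMC as follows (variable names are ours; note this uses cell-means coding for the three conditions, whereas brms would use an intercept plus treatment contrasts).

import pymc as pm

def design_fixation_model(copied, stimulus, pid, iid, n_participants, n_images):
    # copied: observed feature counts per sketch (0..14)
    # stimulus: 0 = baseline, 1 = image search, 2 = GenAI
    with pm.Model() as model:
        beta = pm.Normal("beta", mu=0, sigma=2, shape=3)      # Normal(0, 2) priors, as in the paper
        sd_p = pm.HalfNormal("sd_participant", sigma=1)       # pooling scales learned from data
        sd_i = pm.HalfNormal("sd_image", sigma=1)
        a_p = pm.Normal("a_participant", mu=0, sigma=sd_p, shape=n_participants)
        a_i = pm.Normal("a_image", mu=0, sigma=sd_i, shape=n_images)
        eta = beta[stimulus] + a_p[pid] + a_i[iid]            # linear predictor
        p = pm.math.invprobit(eta)                            # probit link
        pm.Binomial("dfs", n=14, p=p, observed=copied)
    return model

# pm.sample(model=design_fixation_model(...)); arviz summaries then report the
# R-hat and ESS convergence diagnostics described in Section 4.1.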
The model suggests that the effect of GenAI has a 100% probability of leading to higher Design Fixation (mean = .32, 89% CI [.11, .55]), and a Bayes Factor of 124 suggests extreme support for the hypothesis that inspiration from GenAI leads to higher Design Fixation than the baseline. The effect of Image Search on Design Fixation was also detrimental, but less so (mean = .27, 89% CI [.05, .49]), with a 98% probability of this effect leading to higher Design Fixation. A Bayes Factor of 42.48 suggests very strong evidence for the hypothesis of a higher Design Fixation than the No support baseline. In summary, our model suggests that, on average, both stimuli led to more features in common with the example avatar, and GenAI led to even more design fixation than Image Search.

Table 1: Summary of the binomial model for design fixation: DFS|trials(14) ∼ Stimulus + (1|Participant ID) + (1|Image ID). We provide the posterior means of parameter estimates (Est.), posterior standard deviations of these estimates (SD), and the bounds of their 89% compatibility interval. We note that this is not the same as the frequentist confidence interval but a percentile of the posterior distribution. All parameter estimates converged with an ESS well above 1000 and an R-hat of 1.00.

Parameter    | Est. (SD)  | 89% CI
Intercept    | -.71 (.10) | [-.86, -.56]
Image Search | .27 (.14)  | [.05, .48]
GenAI        | .32 (.13)  | [.11, .54]

Figure 6: Model posterior predictions for Design Fixation scores. Error bars represent the standard error of the estimates. Scores correspond to the percentage of salient features in the example found in participants' sketches (higher is worse).

4.3 Fluency

To model Fluency, we consider the number of sketches produced by each participant. Our causal model considers two effects of the stimulus on Fluency: a direct effect and an effect mediated by Time on task (ToT). We model these effects through two models, with and without Offset(log(ToT)) as a covariate. This approach was taken to account for varying time on task, as participants in both GenAI and Image search utilised different times to sketch. In both cases, we model the expected value for each response as a negative binomial distribution with a log link. This models the sketch count based on the mean and the shape parameter, both of which depend on the inspiration stimulus. We opted for this model instead of a Poisson model due to its ability to model overdispersion. We use weakly informative, regularising priors for the model parameters (drawn from a normal distribution with mean zero and standard deviation of 2).

Table 2: Summary of the negative binomial models for Fluency—Direct effects: Fluency ∼ Stimulus + offset(log(Time on Task)) and Total Effects: Fluency ∼ Stimulus. We provide the posterior means of parameter estimates (Est.), posterior standard deviations of these estimates (SD), and the bounds of their 89% compatibility interval. We note that this is not the same as the frequentist confidence interval but a percentile of the posterior distribution. All parameter estimates converged with an ESS well above 1000 and an R-hat of 1.00.

Parameter            | Direct Est. (SD) | Direct 89% CI  | Total Est. (SD) | Total 89% CI
Intercept            | -1.48 (.19)      | [-1.79, -1.17] | 1.50 (.19)      | [1.20, 1.80]
Image Search         | -.22 (.25)       | [-.63, .17]    | -.49 (.25)      | [-.88, -.10]
GenAI                | .10 (.31)        | [-.39, .58]    | -.21 (.30)      | [-.66, .26]
Intercept (shape)    | .75 (.45)        | [.04, 1.47]    | .77 (.46)       | [.04, 1.49]
Image Search (shape) | 1.58 (1.15)      | [-.01, 3.61]   | 1.66 (1.14)     | [.06, 3.74]
GenAI (shape)        | -.51 (.61)       | [-1.49, .47]   | -.40 (.65)      | [-1.41, .66]
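For the Fluency models in Table 2, the key ingredient is the exposure offset: with a log link, adding log(Time on Task) to the linear predictor turns the model into a comparison of sketching rates rather than raw counts. The following is a hedged PyMC sketch of the direct-effects specification, again our rendering rather than the authors' brms code; as described above, both the mean and the shape vary by condition.

import pymc as pm

def fluency_direct_model(n_sketches, stimulus, time_on_task):
    with pm.Model() as model:
        beta = pm.Normal("beta", mu=0, sigma=2, shape=3)            # per-condition rate coefficients
        log_shape = pm.Normal("log_shape", mu=0, sigma=2, shape=3)  # per-condition shape (log scale)
        # log(mu) = beta[condition] + log(time on task)  <=>  mu = exp(beta) * ToT
        mu = pm.math.exp(beta[stimulus]) * time_on_task
        pm.NegativeBinomial("fluency", mu=mu,
                            alpha=pm.math.exp(log_shape[stimulus]),
                            observed=n_sketches)
    return model

# Dropping the time_on_task factor (the offset) recovers the total-effects
# model Fluency ~ Stimulus.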
In the total effects model, both Image Search and GenAI demonstrate detrimental effects on Fluency. Specifically, for GenAI, there is a notable negative impact on Fluency, with an estimate of -.21 (89% CI [-.7, .28]) and a Bayes factor of 3.20, suggesting a 76% posterior probability of a negative effect. This is a substantial indication of its negative total effect on Fluency. The effect of Image Search is more pronounced (mean = -.49, 89% CI [-.9, -.10]), with a Bayes factor of 46.62 indicating a 98% posterior probability of a negative effect, strongly supporting its detrimental influence on Fluency.

In the direct effects model, which accounts for the time-on-task, the impact of Image Search on Fluency is minimal (mean = -.22, 89% CI [-.64, .19]), with a Bayes factor of 4.20 indicating an 81% posterior probability of a negative effect. In contrast, GenAI shows a relatively neutral direct effect on Fluency (mean = .10, 89% CI [-.41, .60]), with a Bayes factor of .61 implying only a 38% posterior probability of a negative effect.

In summary, neither Image Search nor GenAI enhanced Fluency compared to the baseline, with both generally resulting in lower Fluency. The effect of Image Search on Fluency is minimal when controlling for total output time, indicating a less direct impact. However, GenAI does not exhibit a considerable direct negative effect on Fluency, highlighting that its influence is not strongly dependent on the total time available for sketching.

Figure 7: Model posterior predictions for Fluency (number of sketches generated by each participant). Error bars represent the standard error of the estimates.

4.4 Variety

To model Variety, we consider the number of clusters a participant's sketches belong to minus one, to account for the fact that variety only begins with the second sketch. Our causal model considers two effects of the stimulus on Variety: a direct effect and an effect mediated by Fluency. We model these effects through two models, with and without Fluency as a covariate. In both cases, the expected value for each response is based on a negative binomial model with a log link. We use weakly informative, regularising priors for the model parameters (drawn from a normal distribution with mean zero and standard deviation of 2 for coefficients and a gamma distribution with parameters set to 0.01 for the shape).

Table 3: Summary of the negative binomial models for Variety—Direct effects: Variety ∼ Stimulus + Fluency and Total Effects: Variety ∼ Stimulus. We provide the posterior means of parameter estimates (Est.), posterior standard deviations of these estimates (SD), and the bounds of their 89% compatibility interval. We note that this is not the same as the frequentist confidence interval but a percentile of the posterior distribution. All parameter estimates converged with an ESS well above 1000 and an R-hat of 1.00.

Parameter    | Direct Est. (SD) | Direct 89% CI | Total Est. (SD) | Total 89% CI
Intercept    | .00 (.23)        | [-.38, .36]   | 1.01 (.19)      | [.71, 1.30]
Image Search | .02 (.25)        | [-.38, .42]   | -.39 (.28)      | [-.84, -.06]
GenAI        | -.15 (.24)       | [-.53, .23]   | -.29 (.28)      | [-.74, .17]
Fluency      | .14 (.02)        | [.11, .18]    | -               | -

Figure 8: Model posterior predictions for Variety (percentage of clusters in which the participant has sketches). Error bars represent the standard error of the estimates.
The total effects model shows a detrimental effect of both stimuli on Variety. The model suggests that the effect of GenAI has only an 86% probability of being negative (mean = -.29, 89% CI [-.75, .16]), and a Bayes Factor of 5.93 provides substantial evidence for the hypothesis that it yields a negative total effect on the variety of output. The effect of Image Search was even more negative (mean = -.40, [-.87, -.07]), with a Bayes Factor of 11.38 providing strong support for the hypothesis that it has a negative effect.

The model including Fluency as a covariate models the direct effect of the stimulus on the Variety of the output. Comparing the two models, we see that after accounting for the number of sketches that the participant produced, Google Image Search did not have much of an effect on Variety (mean = .02, [-.39, .43]). However, GenAI still had a negative effect (mean = -.15, [-.55, .24]), with a Bayes Factor of 2.64 suggesting anecdotal evidence for this effect being negative. In summary, neither Image Search nor GenAI provided meaningful support over the baseline in terms of enhancing the variety of the output, yielding, on average, lower variety than the baseline. The effect of Image Search was fully mediated by Fluency, but GenAI also had an additional negative direct effect on Variety.

4.5 Originality

To model Originality, we consider the number of other participants with sketches in the same cluster as each sketch. As in the case of Variety, our causal model considers two effects of the stimulus on Originality: a direct effect and an effect mediated by Variety. We model these effects through two models, with and without Variety as a covariate.

The expected value for each response is based on a linear regression model. This models the originality score based on the inspiration stimulus (and the variety score in the direct effects model), as well as a random effect of the participant. We use weakly informative, regularising priors for the model parameters (drawn from a normal distribution with mean zero and standard deviation of 2 for coefficients). We modelled our random effects as being drawn from a normal distribution with mean zero and standard deviation computed from the data through partial pooling.

Table 4: Summary of the linear regression model for Originality—Direct Effects: Originality ∼ Stimulus + Variety + (1|Participant ID) and Total Effects: Originality ∼ Stimulus + (1|Participant ID). We provide the posterior means of parameter estimates (Est.), posterior standard deviations of these estimates (SD), and the bounds of their 89% compatibility interval. We note that this is not the same as the frequentist confidence interval but a percentile of the posterior distribution. All parameter estimates converged with an ESS well above 1000 and an R-hat of 1.00.

Parameter    | Direct Est. (SD) | Direct 89% CI | Total Est. (SD) | Total 89% CI
Intercept    | .83 (.02)        | [.81, .86]    | .86 (.01)       | [.84, .88]
Image Search | -.01 (.02)       | [-.03, .02]   | -.01 (.02)      | [-.04, .01]
GenAI        | -.03 (.02)       | [-.05, -.01]  | -.03 (.02)      | [-.06, -.01]
Variety      | .01 (<.01)       | [.00, .01]    | -               | -

Figure 9: Model posterior predictions for originality (percentage of other participants who did not have an idea in the same cluster, averaged per participant). Error bars represent the standard error of the estimates.
The total effects model suggests that both stimuli had small negative effects on Originality. The model suggests that the effect of GenAI has a 97% probability of being negative (mean = -.03, 89% CI [-.06, .00]), and a Bayes Factor of 35 provides extreme evidence for the hypothesis that it yields a negative effect on the originality of the output. The effect of Image Search was slightly less negative and also rather small (mean = -0.01, [-.04, .01]), and a Bayes Factor of 3.9 provides only anecdotal evidence against the hypothesis that it has a positive effect. Adding Variety did not change the model in any meaningful way, suggesting that Variety does not mediate an effect on Originality. In summary, neither Image Search nor GenAI provided a considerable aid in terms of developing the Originality of the output, offering, on average, lower originality than the baseline, but these effects were negligible.

4.6 Why did ideating with Generative AI cause design fixation?
The results from our statistical models suggest that support from Generative AI led to higher design fixation. To understand why this occurred, this section draws on our interview data, the prompts created by participants, the AI-generated images, and the participants' sketches. We first explore the content of participants' prompts as one potential cause of design fixation. This encapsulates how participants claimed to develop the prompts and how they were influenced by the design brief or the example design. Next, we quantitatively explore the similarity between participants' sketches and the AI images used to inform each sketch in terms of design fixation. We then discuss the types of AI-generated images returned by participants' prompts and the sketches created based on them, using a case-study-based approach to illustrate our claims.

Overall, our analysis indicates that participants frequently relied on prompts containing keywords copied directly from the design brief or used prompts inspired by the example design. These prompts resulted in AI-generated images that were conceptually similar to the example design in 44% of cases and which frequently contained fixating features that were present in the example design. Further, while not all sketches exhibit high similarity to the example we provided, ideating based on AI images can lead to fixation displacement, where participants simply fixate on the images generated by the AI and copy what they see. This can occur irrespective of whether the participant imitates the example design or whether they attempt to explore other areas of the conceptual space.

Table 5: Frequency of words used in the prompts by the participants in the GenAI condition.

Word          Length   Count   Weighted percentage   Similar words    Included in the brief
robot         5        38      10.80%                robot, robots    No
kind          4        27      7.67%                 kind             Yes
chatbot       7        24      6.82%                 chatbot          Yes
intelligent   11       20      5.68%                 intelligent      Yes
cute          4        20      5.68%                 cute             No
caring        6        19      5.40%                 caring           Yes
loving        6        17      4.83%                 love, loving     Yes

4.6.1 "I just took the words from the brief": Fixation from prompts based on the brief and example design. One plausible source of design fixation in our experiment is the prompts that participants used to generate AI images. That is, if prompts include terms that are closely related to the example design or which draw from the design brief, then they might conceivably give rise to AI-generated images that are similar.
To investigate this possibility, we first analysed the prompts that participants used for generating images. Participants created a total of 117 prompts, with a mean of 5.85 prompts per participant (median = 5.5, range = 2–15). The length of each prompt ranged from 1 to 26 words (mean = 3.5 words). To explore the content of the prompts, we conducted a simple word frequency analysis using the automated word counting feature in NVivo 12. This feature enabled us to identify the total number and frequency of unique words that appear in the prompts. Table 5 shows a summary of the most frequent words appearing in participants' prompts.

This analysis revealed that participants frequently created prompts by using keywords copied from the design brief. In total, 52 prompts (44%) contained at least one word that appears in the brief. Examples included kind, which appeared in 27 different prompts; chatbot, which appeared in 24 prompts; intelligent, which appeared in 20 prompts; and caring, which appeared in 19 prompts. During the interviews, participants reported using this approach due to a feeling of being 'stuck' when trying to develop a prompt. Others attempted to generate ideas that met the requirements of the design brief. GenAI-P253, for example, described adapting content from the brief into his prompt and told us that the process he followed was to "read the brief, take the descriptions that they had, and make sure that I was meeting those descriptions."

A second approach involved participants using keywords that were themed around the example design. In total, 57% (67/117) of the prompts contained the word robot, chatbot or chatbox (a homophone of chatbots). This suggests that participants often translated what they saw into a prompt before ideating based on the results. In addition, 78 prompts (66.6%) included terms related to robots alongside words from the design brief. For example, P437's very first prompt was 'kind loving caring robot', whereas GenAI-P253 entered 'cute kind chatbox character'.

Data from the interviews also supports the notion that participants created prompts which were fixated on the example design. GenAI-P605, for example, described their process as starting with 'robots' and then trying to factor in other aspects of the design brief. He said that he "searched up intelligent robots. But all those robots that I saw in the [AI-generated images], they looked intelligent, but they didn't look kind or caring. [I thought], how can I make it both caring and intelligent?". This participant created three distinct prompts: Intelligent robot, Caring robot, and Baymax (referring to an inflatable computerised robot from a Disney movie) in an attempt to come up with alternative ideas.

However, it is worth noting that participants created 39 prompts that did not contain words from the brief or phrases related to robots. These prompts evince participants' attempts to explore different possibilities within the conceptual space of a 'kind and loving' character. GenAI-P166, for example, recounted how they started the task by reading the brief and thinking about what to draw. This led them to the idea of 'family', which they then translated into three distinct but related prompts: family, mom, and Mom - young. They subsequently drew a sketch of a woman's face as their only design after seeing the images Midjourney returned from these prompts. Taken together, these cases illustrate how creating prompts based on the brief and the example design may be an initial stimulus for fixation.
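The prompt analysis described above (word frequencies plus overlap with the brief and with robot-related terms) can be reproduced with a few lines of standard text processing. The word lists and prompts below are illustrative stand-ins, not the study's data, and the simple tokenisation ignores NVivo's grouping of similar words (e.g. robot/robots).

```python
import re
from collections import Counter

# Hypothetical stand-ins for the brief keywords, example-related terms,
# and a handful of participant prompts mentioned in the text.
brief_terms = {"kind", "loving", "caring", "intelligent", "chatbot"}
example_terms = {"robot", "robots", "chatbot", "chatbox"}
prompts = ["kind loving caring robot", "cute kind chatbox character", "goddess"]

# One token set per prompt, then a global frequency count.
tokens_per_prompt = [set(re.findall(r"[a-z]+", p.lower())) for p in prompts]
counts = Counter(w for toks in tokens_per_prompt for w in toks)

uses_brief = sum(bool(t & brief_terms) for t in tokens_per_prompt)
uses_example = sum(bool(t & example_terms) for t in tokens_per_prompt)
print(counts.most_common(5))
print(f"{uses_brief}/{len(prompts)} prompts reuse brief words; "
      f"{uses_example}/{len(prompts)} reference the example design")
```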
A successful strategy to overcome this problem was to try to think 'beyond' the brief and the example. This latter quality is what may be needed from AI systems that truly support designers in avoiding fixation.

4.6.2 "I would just kind of copy it and then tweak": AI-generated images as a cause of fixation. A second putative cause of design fixation in our experiment is the AI imagery that participants saw. That is, if the AI images were not meaningfully different to the example design, then this may have encouraged fixation because participants did not consider (or simply were not exposed to) other possible alternatives. This explanation is plausible given that 66.6% of all prompts contained terms related to robots or words copied from the design brief. Prompts containing these terms might be expected to produce images similar to the example design, in turn leading to fixated sketches.

To explore the relationship between fixation in participants' sketches and the AI images, we first computed the correlation between the design fixation score of participants' sketches (previously calculated by two independent raters, see Section 3.1) and the design fixation score of the most recent set of AI-generated images immediately preceding each sketch. We selected Spearman's rank correlation (a non-parametric test) as the data did not satisfy normality assumptions.

For this analysis, we began with the total set of sketches produced by participants in the AI condition (92 in total). We found that 10 of these sketches were created prior to entering any prompts into Midjourney; we therefore removed these sketches from consideration, as there are no equivalent AI images to compare them against. This left us with 82 sketches, which we plotted against the relevant AI-generated images seen immediately before drawing the sketch.

Figure 10: Scatterplot illustrating the correlation between the design fixation score of GenAI images appearing immediately before a participant's sketch and the design fixation score of the sketch associated with the same set of GenAI images. DFS = design fixation score.

Figure 10 illustrates the correlation. We observed a moderate positive correlation between the design fixation score of each sketch and the design fixation score of the AI images immediately preceding that sketch (ρ = 0.56). This provides quantitative support for the idea that AI-generated images that contained features of the example avatar led to sketches with higher design fixation scores.
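A minimal version of this correlation analysis, using SciPy's Spearman implementation on synthetic stand-in scores (the real scores came from the two independent raters), looks as follows:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical DFS values in [0, 1]; the study had 82 sketch/image pairs.
rng = np.random.default_rng(1)
image_dfs = rng.uniform(0, 1, size=82)   # DFS of the preceding AI images
sketch_dfs = np.clip(image_dfs + rng.normal(0, 0.25, size=82), 0, 1)

rho, p_value = spearmanr(image_dfs, sketch_dfs)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
```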
Next, we qualitatively investigated what kinds of images were generated by Midjourney in response to participants' prompts and whether the resulting sketches were fixated on these images. We began with a simple visual inspection of the AI images to probe whether Midjourney's outputs were meaningfully different to the example design.

This inspection revealed that 44% (206/468) of the AI-generated images portrayed humanoid robots that were conceptually similar to the example avatar. In turn, these images were qualitatively similar to the sketches participants made in response to them, indicating a tendency among participants to imitate, or even directly copy, what they saw.

Figure 11 shows an example of this phenomenon. The figure shows the AI images seen by one participant in Midjourney over time. It then positions the participant's sketches according to the most recently issued group of AI images before the sketch was drawn. In this example, it is immediately evident that the majority of sketches are superficially similar to the images returned by Midjourney. Likewise, these sketches are similar to the example design we provided (i.e. a cutesy robot) and typically contain the same salient features (legs, arms, and so on). The presence of these features and their inclusion in the subsequent sketches is one plausible explanation for why the AI support did not encourage participants to 'break free' of fixation. It appears to have merely reinforced the existing problem.

This phenomenon arose irrespective of whether participants ideated on the fly or considered multiple ideas before creating a sketch. Figure 12 illustrates a second case where the participant is once again fixated on the idea of a robot. In this instance, the participant delays sketching until after issuing multiple prompts and seeing several rounds of AI images. It can be seen that the single sketch the participant created is of a robot-type character, evidencing fixation. The sequence also highlights how the prompt plays a role in this effect, with the participant attempting to vary their initial 'chatbot' prompt by adding keywords such as 'intelligent' or 'kind', but receiving thematically similar returns each time. Cases such as these are illustrative of how fixation may have occurred, with participants repeatedly generating ideas that were similar to the example design and imitating the ideas within them.

The interview data supported this latter idea. When asked about their approach to ideation, participants described the AI as a "source of inspiration" but openly admitted they sometimes copied what they observed. For example, GenAI-P253 claimed that using AI "helped a lot of the inspiration for a lot of the designs that very much I just put down what I wanted it to give me, and I would just kind of copy it and then tweak it a little bit for the designs." Overall, these cases highlight the risk of AI simply reinforcing the phenomenon of fixation on an initial example. In turn, they raise the question of how AI systems might be usefully designed to encourage shifts away from this effect.

4.6.3 The notion of "Fixation Displacement". In addition to investigating how fixated sketches resulted from fixated images, our inspection of the sketches in relation to the AI images revealed an additional phenomenon not well captured by the correlational analysis. That is, there is evidence of what we describe as fixation displacement, where the participant creates sketches with little relation to the original design but which are very clearly fixated on the AI imagery. Here, the sketches produced are both objectively and subjectively different to the example design but demonstrate a high degree of fixation with the AI images.

Figure 13 illustrates an example of fixation displacement in action. Here, the participant entered the prompt 'goddess' as a way of beginning their ideation. This prompt has little connection to the design brief or the idea of a robot avatar. The participant then produced a sketch of a woman's face, which shares a small number of features with the example robot avatar (eyes, mouth, ears) but which is qualitatively different. Then, they proceeded to iterate on this idea, resulting in three sketches that are similar in appearance and which bear a close resemblance to what the participant is seeing in Midjourney.
Figure 11: An example of a participant producing sketches based on AI images that are similar to the example design, evidencing fixation when co-ideating with AI. [Prompts shown in the figure: "Cute robot chatbot avatar", "Cartoon chatbot design", "Cute line art robot design", "Cartoon line art robot design", "Kind loving caring and intelligent looking chatbot", "Chatbot in line drawing format", "Simple chatbot icon design", "Simple chatbox icon design".]

Figure 12: A second example of a fixated sketch created after several rounds of prompting and image generation. In this case, the prompt is also fixated on the idea of a chatbot, creating similar returns from Midjourney each time. [Prompts shown in the figure: "Chatbot", "A caring chatbot", "A kind chatbot", "A kind chatbot", "An intelligent chatbot".]

Figure 13: An example of fixation displacement: the participant has shifted their sketches away from the example robot avatar but has now become fixated on the idea of a woman's face via the AI images. [Prompts shown in the figure: "Goddess", "Goddess of knowledge", "Goddess of love", "Loving face".]

Crucially, this phenomenon is not captured in our earlier scatterplot (Figure 10) because the images and sketches have only a few features in common with the example design. This means they would be rated as quantitatively 'low' on design fixation. In our experiment, fixation is operationalised in terms of similarity to the example design, where similarity is assessed by the presence or absence of features from the robot avatar. Conceptually, however, fixation refers to "blind adherence to a set of ideas or concepts" [30]. This general phenomenon is clearly depicted by the images and sketches in Figure 13, highlighting a new and novel risk of employing AI in ideation. That is, design fixation towards an initial example may not be overcome by using AI but may simply be displaced onto the examples that the AI provides. If one operationalises fixation in terms of deviation from an initial example, one might argue that such an outcome is apposite or even desired. But if one operationalises fixation in terms of blind adherence to an idea, then this outcome is questionable.

What one may wish to see from AI-based ideation might be more akin to the process seen in Figure 14. Here, the participant generated 8 groups of images from Midjourney, beginning with prompts (such as 'dragon') that have no relationship with the example design but which might inspire useful ideas and further exploration of the conceptual space. By the fourth prompt, the participant latched onto the idea of intelligence, which is then used to produce an Einstein-themed robot after prompt 6. However, the participant abandoned this idea and moved to an abstract design which bears no resemblance to the exemplar. While this still evidences some degree of fixation displacement, given the sketches the participant produced, it represents a significant conceptual shift from the example design. That is, the participant has considered a range of alternatives and has produced a seemingly useful design that bears no resemblance to the given example. This is perhaps more indicative of what we would consider to be effective AI-supported ideation.

Figure 14: An example of a participant progressing through different ideas and arriving at a final sketch that bears no resemblance to the example robot avatar. [Prompts shown in the figure, in order: 1 "Dragon", 2 "Mother love", 3 "Nurture", 4 "Intelligent", 5 "Einstein bot", 6 "Cute einstein", 7 "Singularity", 8 "Conciousness".]
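For concreteness, the design fixation score used throughout this section can be read as a simple feature-overlap ratio. A minimal sketch follows, with a hypothetical feature checklist standing in for the raters' actual coding scheme:

```python
# Hypothetical checklist of the example avatar's salient features; the
# study's two raters coded a fixed, pre-agreed set of such features.
EXAMPLE_FEATURES = {"legs", "arms", "antenna", "screen_face", "rounded_body"}

def design_fixation_score(observed: set[str]) -> float:
    """Fraction of the example's salient features replicated in a sketch
    (or, analogously, in a set of AI-generated images)."""
    return len(observed & EXAMPLE_FEATURES) / len(EXAMPLE_FEATURES)

# A sketch replicating two of five features scores 0.4.
print(design_fixation_score({"legs", "arms", "wheels"}))
```

Note how such a score is low for the sketches in Figure 13 even though they closely track the AI imagery, which is exactly why fixation displacement escapes the correlational analysis.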
5 DISCUSSION
This study aimed to identify the effects of using an AI image generator as inspiration support for an ideation task. Our quantitative analysis revealed that using AI-generated images had a detrimental effect on participants' ideation performance. Therefore, we aimed to uncover the cause of this effect through our qualitative analysis. We identified that AI caused more design fixation in participants and hindered the variety, originality and fluency of ideas compared to the baseline condition. Further, we observed a moderate positive correlation between the design fixation score of participants' sketches and the design fixation score of AI-generated images, which suggests that AI has a potential influence in determining the outcome of the ideas. Further, we observed that AI induced a fixation displacement in participants where, even if they shifted their focus away from the initial example, they became fixated on the AI-generated images instead. In this section, we reflect on our learnings, discuss potential opportunities for developing generative AI to better facilitate ideation tasks, and propose strategies for improving divergent thinking during ideation. In doing so, we look at the different phases of the ideation task performed by participants in detail (see Figure 15).

Figure 15: The overall ideation workflow of participants in the AI-supported condition. (A1): Written brief to the prompt; (A2): Initial example to the prompt; (B): Initial example to the sketch; (C): Prompt to AI-generated images; (D): AI-generated images to the sketch.

5.1 How to avoid fixation on the design brief when determining prompts for Generative AI?
Through our study, we identified that the prompt acted as a potential source of design fixation. While participants used different strategies to devise their prompts (Figure 15-A), the results suggest that most of them used keywords from the brief (Figure 15-A1) and built upon the idea of a 'robot' from the initial example (Figure 15-A2) when devising their prompts. Participants claimed that they tried to avoid copying the initial example later during sketching (Figure 15-B), but this was not reflected in the data, given that the high design fixation score was calculated based on the ratio of replicated features from the initial example. Therefore, we assume that poor prompt design led to the generated images sharing similar features with the example. This suggests that design fixation happened when the participants created the prompts (Figure 15-A). Participants tended to produce prompts that were semantically similar to the words given in the design brief and examples, while exhibiting strategies of repeating the same steps when creating prompts. Prior work indicates that participants can become fixated on exposed examples [10, 30]. Further, studies have shown that participants tend to fixate on self-generated ideas and concepts compared to initial examples [38]. Our study aligns with these findings, suggesting that some participants were fixated on the design brief, the example avatar, or self-generated ideas when determining keywords for the prompt.
Thus, paying attention to different prompting strategies might help mitigate this first potential occurrence of fixation when co-ideating with AI. Youmans and Arciszewski [76] summarise different strategies to mitigate design fixation based on the cause of occurrence. One strategy to snap participants out of fixation is to have timely warnings to consider alternative options [76]. Therefore, creativity support systems based on generative AI could not only turn prompts into images but also scaffold users' abilities to craft better prompts for ideation. AI systems could support users in creatively interpreting the brief, push users' thinking into alternative directions, or mix arbitrary ideas into the prompts. This functionality could be enabled through other generative AI techniques, such as large language models.

Cheng et al. [13] found that showing low-fidelity, abstract, and partially completed ideas led participants to become more divergent in their thinking and reduced fixation. Users of AI systems should consider using prompts that generate low-fidelity, abstract, or partial images when interacting with AI, because images with these qualities might alleviate design fixation and encourage divergent thinking in an ideation task. Having a predetermined prompt structure or template that describes ways to make the images more abstract and less refined might lower the risk of fixation.

5.2 How can AI generate images that better support ideation?
The images generated by the AI system in this study were high in fidelity, visual detail, and quality; they appeared to be rich in shape, form, texture, colour, composition, and visual expressiveness (see Figures 11–14). Though this showcases impressive functionality, it might have amplified conformity towards the generated image, causing fixation displacement. This aligns with prior studies, which have shown that complete and strong examples carry the potential to cause fixation [10, 13, 15, 59]. Previous work has found that introducing some incubation time can help dissolve concrete exemplars into more abstract concepts, lowering fixation [58, 73] and supporting the emergence of novel ideas [21, 56]. Though this was not possible in our study, given the short time available for the task, it is a process that users of AI systems can incorporate into their ideation process. Further, when developing generative AI to support ideation, it may be useful to introduce mechanisms to lower the fidelity and the richness in detail of the output. Another direction is to show partially completed or blurred outputs, which might be beneficial for introducing ambiguity and pushing ideas in new directions [13]. Recent works by Davis et al. [19] and Williford et al. [71] provide initial evidence suggesting that these mechanisms might be plausible approaches to embed in generative AI.

5.3 How to translate AI images into design ideas?
Through our visual comparisons, we observed similarities between the sketches produced by participants and the AI-generated images, suggesting that participants had imitated and, in some instances, directly copied elements from the images generated by Midjourney. Further, we identified that regardless of whether participants ideated on the fly or considered multiple ideas before creating a sketch, they gravitated towards features of the images which Midjourney generated, leading to fixation.
Copying elements from an example is the easiest way to produce fixated outputs [10, 30], and our findings were consistent with this. To successfully act as sources of inspiration, generative AI tools must encourage strategies that are more effective than copying. Previous work has shown that techniques like visual analogy (identifying abstract correspondences between the images being generated and the solution being sought) can improve the ideation effectiveness of designers of all levels, including novices [12]. However, Casakin and Goldschmidt highlight that even though novices already have an inherent understanding of how visual analogy works, they must be shown how to do it well and how it can support problem-solving in design activities [12]. Scaffolding these skills is a promising role for AI-based creativity support tools.

We observed lower fluency in the GenAI and Image Search conditions. Because the time given to complete the task was the same in all conditions, there was an inherent trade-off between spending time producing ideas vs. seeking inspiration. Interacting with both the AI image generator and the web image search led to less time spent sketching. These results tally with the findings of Viswanathan and Linsey [67], who found that though physical prototyping techniques that required more effort led to higher quality ideas in an engineering design task, they also increased design fixation and lowered fluency. They hypothesise that this is due to a "sunk-cost effect": the higher the effort spent in a given direction, the harder it is to move into a different one. Participants who spent more time refining prompts and interacting with the AI also had worse ideation performance. Users of generative AI systems should be careful and deliberate in their approach when seeking inspiration from external stimuli like AI image generators, to mitigate the risk of design fixation. Crilly [17] suggests that empowering designers to recognise and reflect upon fixating episodes might be beneficial in developing a less fixating co-ideation workflow with AI. Further, Neroni and Crilly [46] state that uncovering participants' fixation tendencies, which they call "demonstrated vulnerability", is an effective approach that can further strengthen participants' ability to overcome fixation. In summary, when developing generative AI for co-ideation tasks, there is a rich opportunity for designing interactions with intelligent agents that not only generate stimuli but also encourage better ideation behaviours. Triggering timely reminders, suggesting new idea directions, preempting fixation, varying the abstraction of the visual outcomes, and facilitating visual analogical reasoning are all promising directions for future work.

5.4 Limitations
We acknowledge several limitations in our study. First, at the time of writing, generative AI tools are still nascent technologies. Interaction paradigms are emerging, and users are still learning to leverage their potential. As such, our results describe a picture of somewhat naïve use of these tools. It will be interesting to see how these results evolve as users become accustomed to generative AI tools and incorporate them into their practice. Next, in this study, we gave all groups of participants the same amount of time to complete the task, but we observed lower fluency in the GenAI condition.
We note that when the experiment took place, the AI system did not produce results instantaneously, which potentially delayed participants in that condition. However, we also note that participants in the Image Search condition, who did get their results instantaneously, also exhibited lower fluency than the baseline. This could be due to the exposure to a large number of images with endless scrolling, which added another layer of decision-making to pick the ideas that suited them. Therefore, in a real-world setting, it is important to consider the trade-off between spending time on the task (e.g. by sketching) vs. seeking inspiration (e.g. by interacting with a creativity-support tool).

We acknowledge that because we limited the task time to 20 minutes based on previous work, we restrict the scope of our insights to short-term usage of these tools in a rapid ideation task. In the real world, people may spend longer reflecting on the outputs of AI, and incubation time, along with iteration on sketched ideas, may produce results different from those of our experiment. Further, we screened participants for prior skills in visual design, but few had professional industry experience. Though our sample was balanced across conditions, we make no claims about how these effects might be affected by expertise. Therefore, with this study, we can only provide initial insights into how a novice designer might approach a design task, and to generalise these claims, we need further investigation.

In this study, we operationalised design fixation by looking for a restricted set of salient features from the example in participants' sketches. Though the choice of focusing on denotative elements of the design aims to facilitate operationalisation, we acknowledge that there are also connotative aspects that were left outside the scope of our analysis, including art style, emotional expression, and cultural references. Finally, our study only evaluated the potential of generative AI tools for ideation support through the specific example of image generators in a visual ideation task. It remains to be seen how these effects translate to other modalities, such as text, video, audio, and music generation.

6 CONCLUSION
Through this study, we contribute empirical evidence to the discussion of the potential of generative AI to augment human creativity. Our study revealed that using an AI image generator as a source of inspiration by novice designers led to higher design fixation on an initial example and lower fluency, variety, and originality of ideas compared to using a conventional image search or no inspiration support. We suggest that fixation can happen in how the brief and the example influence the prompt given to the AI system, how the system translates it into images, and how the images inspire participants' ideas. All of these offer rich opportunities for re-design. Our work suggests that, at least in the current context of AI tool usage, given a fixed amount of time for a visual ideation task, this time is better spent sketching than seeking inspiration through AI. Our work also suggests that generative AI tools aimed at supporting co-ideation should not only focus on generating stimuli but also on encouraging more effective ideation behaviours.
We believe that incorporating well-thought-out methods and strategies into user practices and developing generative AI tools that can reduce common obstacles, such as design fixation and other creativity blockers, can maximise their potential to speed up the creative process and improve the quality of innovative design output.

ACKNOWLEDGMENTS
This research is supported by the Rowden White Scholarship and the Melbourne Research Scholarship offered by the University of Melbourne. We also would like to thank Christian Davey at the Melbourne Statistical Consulting Platform for their support.

REFERENCES
[1] Leyla Alipour, Mohsen Faizi, Asghar Mohammad Moradi, and Gholamreza Akrami. 2018. A review of design fixation: research directions and key factors. International Journal of Design Creativity and Innovation 6, 2 (2018), 22–35. https://doi.org/10.1080/21650349.2017.1320232
[2] Carina Andersson, Yvonne Eriksson, Lasse Frank, and Bill Nicholl. 2012. Design fixations among information design students: What has been seen cannot be unseen. In DS 74: Proceedings of the 14th International Conference on Engineering & Product Design Education (E&PDE12) Design Education for Future Wellbeing. Design Society, Antwerp, Belgium. https://www.designsociety.org/download-publication/33183/design_fixations_among_information_design_students_what_has_been_seen_cannot_be_unseen
[3] Audi MediaCenter. 2022. Reinventing the wheel? "FelGAN" inspires new rim designs with AI | Audi MediaCenter. https://www.audi-mediacenter.com/en/press-releases/reinventing-the-wheel-felgan-inspires-new-rim-designs-with-ai-15097
[4] B.G. Bellows, J.F. Higgins, M.A. Smith, and R.J. Youmans. 2012. The Effects of Individual Differences in Working Memory Capacity and Design Environment on Design Fixation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 56 (9 2012), 1977–1981. https://doi.org/10.1177/1071181312561293
[5] B.G. Bellows, J.F. Higgins, and R.J. Youmans. 2013. An individual differences approach to design fixation: Comparing laboratory and field research. In Design, User Experience, and Usability. Design Philosophy, Methods, and Tools. DUXU 2013. Lecture Notes in Computer Science. Springer, Berlin Heidelberg, 13–21. https://doi.org/10.1007/978-3-642-39229-0
[6] Iouri Belski and Ianina Belski. 2015. Application of TRIZ in improving the creativity of engineering experts. Procedia Engineering 131 (2015), 792–797. https://doi.org/10.1016/j.proeng.2015.12.379
[7] Virginia Braun and Victoria Clarke. 2022. Thematic analysis: a practical guide. SAGE Publications Inc., Thousand Oaks, California, United States.
[8] Paul-Christian Bürkner. 2017. brms: An R Package for Bayesian Multilevel Models Using Stan. Journal of Statistical Software 80, 1 (2017), 1–28. https://doi.org/10.18637/jss.v080.i01
[9] J. Cao, W. Zhao, and X. Guo. 2021. Utilizing EEG to Explore Design Fixation during Creative Idea Generation. Computational Intelligence and Neuroscience 2021 (2021). https://doi.org/10.1155/2021/6619598
[10] C. Cardoso, P. Badke-Schaub, and A. Luz. 2009. Design fixation on non-verbal stimuli: The influence of simple vs rich pictorial information on design problem-solving. In Proceedings of the ASME Design Engineering Technical Conference. ASME, San Diego, California, USA, 995–1002. https://doi.org/10.1115/DETC2009-86826
[11] Bob Carpenter, Andrew Gelman, Matthew D Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus A Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2017. Stan: A probabilistic programming language. Journal of Statistical Software 76 (2017).
[12] Hernan Casakin and Gabriela Goldschmidt. 1999. Expertise and the use of visual analogy: implications for design education. Design Studies 20, 2 (3 1999), 153–175. https://doi.org/10.1016/S0142-694X(98)00032-5
[13] Peiyao Cheng, Ruth Mugge, and Jan P.L. Schoormans. 2014. A new strategy to reduce design fixation: Presenting partial photographs to designers. Design Studies 35, 4 (2014), 374–391. https://doi.org/10.1016/J.DESTUD.2014.02.004
[14] Li-Yuan Chiou, Peng-Kai Hung, Rung-Huei Liang, and Chun-Teng Wang. 2023. Designing with AI: An Exploration of Co-Ideation with Image Generators. In Proceedings of the 2023 ACM Designing Interactive Systems Conference. ACM, New York, NY, USA, 1941–1954. https://doi.org/10.1145/3563657.3596001
[15] Evangelia G Chrysikou and Robert W Weisberg. 2005. Following the Wrong Footsteps: Fixation Effects of Pictorial Examples in a Design Problem-Solving Task. Journal of Experimental Psychology: Learning, Memory, and Cognition 31, 5 (2005), 1134–1148.
[16] John Joon Young Chung. 2022. Artistic User Expressions in AI-powered Creativity Support Tools. In UIST 2022 Adjunct - Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. Association for Computing Machinery, Inc, New York, NY, USA, 1–4. https://doi.org/10.1145/3526114.3558531
[17] Nathan Crilly. 2015. Fixation and creativity in concept development: The attitudes and practices of expert designers. Design Studies 38 (5 2015), 54–91. https://doi.org/10.1016/J.DESTUD.2015.01.002
[18] Nathan Crilly and Carlos Cardoso. 2017. Where next for research on fixation, inspiration and creativity in design? Design Studies 50 (5 2017), 1–38. https://doi.org/10.1016/J.DESTUD.2017.02.001
[19] N. Davis, S. Siddiqui, P. Karimi, M.L. Maher, and K. Grace. 2019. Creative sketching partner: A co-creative sketching tool to inspire design creativity. In Proceedings of the 10th International Conference on Computational Creativity, ICCC 2019. Association for Computational Creativity, North Carolina, 358–359.
[20] Edward de Bono. 2008. Six Thinking Hats (revised edition ed.). Penguin, United Kingdom.
[21] Saurabh Deo, Aimane Blej, Senni Kirjavainen, and Katja Holtta-Otto. 2021. Idea Generation Mechanisms: Comparing the Influence of Classification, Combination, Building on Others, and Stimulation Mechanisms on Ideation Effectiveness. Journal of Mechanical Design, Transactions of the ASME 143, 12 (12 2021), 1–46. https://doi.org/10.1115/1.4051239/1109505
[22] Tojin T. Eapen, Daniel J. Finkenstadt, Josh Folk, and Lokesh Venkataswamy. 2023. How Generative AI Can Augment Human Creativity. https://hbr.org/2023/07/how-generative-ai-can-augment-human-creativity
[23] Lorenzo Fiorineschi and Federico Rotini. 2023. Uses of the novelty metrics proposed by Shah et al.: what emerges from the literature? Design Science 9 (5 2023), e11. https://doi.org/10.1017/DSJ.2023.9
[24] John Gero, A T Purcell, J S Gero, H M Edwards, and E Matka. 1994. Design fixation and intelligent design aids. In Artificial Intelligence in Design '94. Springer, Dordrecht, 483–495. https://doi.org/10.1007/978-94-011-0928-4
[25] Joy Paul Guilford. 1956. The structure of intellect. Psychological Bulletin 53, 4 (1956), 267–293. https://doi.org/10.1037/h0040755
[26] Sandra G. Hart and Lowell E. Staveland. 1988. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. Advances in Psychology 52, C (1 1988), 139–183. https://doi.org/10.1016/S0166-4115(08)62386-9
[27] Marius Hoggenmueller, Maria Luce Lupetti, and Willem Van Der Maden. 2023. Creative AI for HRI Design Explorations. In HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. Association for Computing Machinery, New York, NY, USA, 40–50. https://doi.org/10.1145/3568294.3580035
[28] Thomas Howard, Anja Maier, Balder Onarheim, and Morten Friis-Olivarius. 2013. Overcoming design fixation through education and creativity methods. In Proceedings of the International Conference on Engineering Design, ICED, Vol. 7 DS75-07. The Design Society, Seoul, Korea, 139–148. https://www.designsociety.org/download-publication/34578/overcoming_design_fixation_through_education_and_creativity_methods
[29] Angel Hsing Chi Hwang. 2022. Too Late to be Creative? AI-Empowered Tools in Creative Processes. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–9. https://doi.org/10.1145/3491101.3503549
[30] David G. Jansson and Steven M. Smith. 1991. Design fixation. Design Studies 12, 1 (1 1991), 3–11. https://doi.org/10.1016/0142-694X(91)90003-F
[31] John Joon Young Chung, Shiqing He, and Eytan Adar. 2021. The Intersection of Users, Roles, Interactions, and Technologies in Creativity Support Tools. In Designing Interactive Systems Conference 2021. ACM, New York, NY, USA, 1817–1833. https://doi.org/10.1145/3461778
[32] P. Karimi, J. Rezwana, S. Siddiqui, M.L. Maher, and N. Dehbozorgi. 2020. Creative sketching partner: An analysis of human-AI co-creativity. In International Conference on Intelligent User Interfaces, Proceedings IUI. Association for Computing Machinery, New York, NY, USA, 221–230. https://doi.org/10.1145/3377325.3377522
[33] Matthew Kay, Gregory L. Nelson, and Eric B. Hekler. 2016. Researcher-Centered Design of Statistics: Why Bayesian Statistics Better Fit the Culture and Incentives of HCI. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI '16). Association for Computing Machinery, New York, NY, USA, 4521–4532. https://doi.org/10.1145/2858036.2858465
[34] Jieun Kim, Hokyoung Ryu, and Hyeonah Kim. 2013. To Be Biased or Not to Be: Choosing between Design Fixation and Design Intentionality. In CHI '13 Extended Abstracts on Human Factors in Computing Systems (CHI EA '13). Association for Computing Machinery, New York, NY, USA, 349–354. https://doi.org/10.1145/2468356.2468418
[35] Janin Koch, Nicolas Taffin, Michel Beaudouin-Lafon, Markku Laine, Andrés Lucero, and Wendy E. MacKay. 2020. ImageSense: An Intelligent Collaborative Ideation Tool to Support Diverse Human-Computer Partnerships. Proceedings of the ACM on Human-Computer Interaction 4, CSCW1 (5 2020), 27. https://doi.org/10.1145/3392850
[36] Aaron Kozbelt and Yana Durmysheva. 2007. Understanding Creativity Judgments of Invented Alien Creatures: The Roles of Invariants and Other Predictors. The Journal of Creative Behavior 41, 4 (12 2007), 223–248. https://doi.org/10.1002/J.2162-6057.2007.TB01072.X
[37] Bart Lamiroy and Emmanuelle Potier. 2022. Lamuse: Leveraging Artificial Intelligence for Sparking Inspiration. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 13221 LNCS (2022), 148–161. https://doi.org/10.1007/978-3-031-03789-4_10
[38] Keelin Leahy, Shanna R. Daly, Seda McKilligan, and Colleen M. Seifert. 2020. Design fixation from initial examples: Provided versus self-generated ideas. Journal of Mechanical Design, Transactions of the ASME 142, 10 (10 2020), 101402. https://doi.org/10.1115/1.4046446/1074761
[39] Makayla Lewis. 2023. AIxArtist: A First-Person Tale of Interacting with Artificial Intelligence to Escape Creative Block.
[40] J S Linsey, I Tseng, K Fu, J Cagan, K L Wood, and C Schunn. 2010. A Study of Design Fixation, Its Mitigation and Perception in Engineering Design Faculty. Journal of Mechanical Design (JMD) 132, 4 (4 2010), 041003. https://doi.org/10.1115/1.4001110
[41] Andrés Lucero. 2012. Framing, Aligning, Paradoxing, Abstracting, and Directing: How Design Mood Boards Work. In Proceedings of the Designing Interactive Systems Conference. Association for Computing Machinery, New York, NY, USA, 438–447.
[42] Abraham S. Luchins. 1942. Mechanization in problem solving: The effect of Einstellung. Psychological Monographs 54, 6 (1942), i–95. https://doi.org/10.1037/h0093502
[43] Marian Mazzone and Ahmed Elgammal. 2019. Art, Creativity, and the Potential of Artificial Intelligence. Arts 8, 1 (2 2019), 26. https://doi.org/10.3390/ARTS8010026
[44] Richard McElreath. 2020. Statistical rethinking: A Bayesian course with examples in R and Stan (2e). Chapman and Hall/CRC.
[45] Diana P. Moreno, Luciënne T. Blessing, Maria C. Yang, Alberto A. Hernández, and Kristin L. Wood. 2016. Overcoming design fixation: Design by analogy studies and nonintuitive findings. AI EDAM 30, 2 (5 2016), 185–199. https://doi.org/10.1017/S0890060416000068
[46] Maria Adriana Neroni and Nathan Crilly. 2021. How to Guard Against Fixation? Demonstrating Individual Vulnerability is More Effective Than Warning of General Risk. The Journal of Creative Behavior 55, 2 (6 2021), 447–463. https://doi.org/10.1002/JOCB.465
[47] A Terry Purcell and John S Gero. 1996. Design and other types of fixation. Design Studies 17 (1996), 363–383. https://doi.org/10.1016/S0142-694X(96)00023-3
[48] Janet Rafner, Blanka Zana, Peter Dalsgaard, Michael Mose Biskjaer, and Jacob Sherson. 2023. Picture This: AI-Assisted Image Generation as a Resource for Problem Construction in Creative Problem-Solving. In Proceedings of the 15th Conference on Creativity and Cognition. Association for Computing Machinery (ACM), New York, NY, USA, 262–268. https://doi.org/10.1145/3591196.3596823
[49] Christian Remy, Lindsay Macdonald Vermeulen, Jonas Frich, Michael Mose Biskjaer, and Peter Dalsgaard. 2020. Evaluating creativity support tools in HCI research. In DIS 2020 - Proceedings of the 2020 ACM Designing Interactive Systems Conference. Association for Computing Machinery, Inc, New York, NY, USA, 457–476. https://doi.org/10.1145/3357236.3395474
[50] Lori Rosenkopf and Atul Nerkar. 2001. Beyond local search: boundary-spanning, exploration, and impact in the optical disk industry. Strategic Management Journal 22, 4 (4 2001), 287–306. https://doi.org/10.1002/SMJ.160
[51] Othman Sbai, Mohamed Elhoseiny, Antoine Bordes, Yann LeCun, and Camille Couprie. 2019. DesIGN: Design inspiration from generative networks. Computer Vision – ECCV 2018 Workshops. ECCV 2018. Lecture Notes in Computer Science 11131 (2019). https://doi.org/10.1007/978-3-030-11015-4
[52] Martin Schmettow. 2021. New statistics for design researchers. Springer.
[53] Jami J. Shah, Noe Vargas-Hernandez, and Steve M. Smith. 2003. Metrics for measuring ideation effectiveness. Design Studies 24, 2 (3 2003), 111–134. https://doi.org/10.1016/S0142-694X(02)00034-0
[54] Joon Gi Shin, Janin Koch, Andrés Lucero, Peter Dalsgaard, and Wendy E. MacKay. 2023. Integrating AI in Human-Human Collaborative Ideation. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery, New York, NY, USA, 1–5. https://doi.org/10.1145/3544549.3573802
[55] Dilpreet Singh, Nina Rajcic, Simon Colton, and Jon McCormack. 2019. Camera obscurer: Generative art for design inspiration. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 11453 LNCS (2019), 51–68. https://doi.org/10.1007/978-3-030-16667-0_4
[56] Ut Na Sio and Thomas C. Ormerod. 2009. Does Incubation Enhance Problem Solving? A Meta-Analytic Review. Psychological Bulletin 135, 1 (1 2009), 94–120. https://doi.org/10.1037/A0014212
[57] Melissa A.B. Smith, Robert J. Youmans, Brooke G. Bellows, and Matthew S. Peterson. 2013. Shifting the focus: An objective look at design fixation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 8012 LNCS, PART 1 (2013), 144–151. https://doi.org/10.1007/978-3-642-39229-0_17
[58] Steven M. Smith and Julie Linsey. 2011. A Three-Pronged Approach for Overcoming Design Fixation. The Journal of Creative Behavior 45, 2 (6 2011), 83–91. https://doi.org/10.1002/J.2162-6057.2011.TB01087.X
[59] Steven M Smith, Thomas B Ward, and Jay S Schumacher. 1993. Constraining effects of examples in a creative generation task. Memory & Cognition 21, 6 (1993), 837–845.
[60] Gareth Terry and Nikki Hayfield. 2021. Essentials of Thematic Analysis. American Psychological Association, Washington, DC, USA. https://uwe-repository.worktribe.com/output/7240960
[61] L.A. Vasconcelos, M.A. Neroni, C. Cardoso, and N. Crilly. 2018. Idea representation and elaboration in design inspiration and fixation experiments. International Journal of Design Creativity and Innovation 6, 1-2 (2018), 93–113. https://doi.org/10.1080/21650349.2017.1362360
[62] L.A. Vasconcelos, M.A. Neroni, and N. Crilly. 2016. Fluency results in design fixation experiments: An additional explanation. In 4th International Conference on Design Creativity, ICDC 2016. The Design Society, Atlanta, GA, USA, 1–8.
[63] Luis A Vasconcelos, Carlos C Cardoso, Chih-Chun Chen, and Nathan Crilly. 2017. Inspiration and Fixation: The Influences of Example Designs and System Properties in Idea Generation. Journal of Mechanical Design 139, 3 (2017), 031101. https://doi.org/10.1115/1.4035540
[64] Luis A. Vasconcelos and Nathan Crilly. 2016. Inspiration and fixation: Questions, methods, findings, and challenges. Design Studies 42 (1 2016), 1–32. https://doi.org/10.1016/J.DESTUD.2015.11.001
[65] Aki Vehtari, Andrew Gelman, Daniel Simpson, Bob Carpenter, and Paul-Christian Bürkner. 2021. Rank-Normalization, Folding, and Localization: An Improved R̂ for Assessing Convergence of MCMC (with Discussion). Bayesian Analysis 16, 2 (2021), 667–718. https://doi.org/10.1214/20-BA1221
[66] Mathias Peter Verheijden and Mathias Funk. 2023. Collaborative Diffusion: Boosting Designerly Co-Creation with Generative AI. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery, New York, NY, USA, 1–8. https://doi.org/10.1145/3544549.3585680
[67] Vimal Viswanathan and Julie Linsey. 2012. Design Fixation in Physical Modeling: An Investigation on the Role of Sunk Cost. Proceedings of the ASME Design Engineering Technical Conference 9 (6 2012), 119–130. https://doi.org/10.1115/DETC2011-47862
[68] V. Viswanathan, M. Tomko, and J. Linsey. 2016. A study on the effects of example familiarity and modality on design fixation. Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM 30, 2 (2016), 171–184. https://doi.org/10.1017/S0890060416000056
[69] Eric-Jan Wagenmakers, Ruud Wetzels, Denny Borsboom, and Han LJ Van Der Maas. 2011. Why psychologists must change the way they analyze their data: the case of psi: comment on Bem (2011).
[70] T. B. Ward. 1994. Structured Imagination: the Role of Category Structure in Exemplar Generation. Cognitive Psychology 27, 1 (8 1994), 1–40. https://doi.org/10.1006/COGP.1994.1010
[71] Blake Williford, Samantha Ray, Jung In Koh, Josh Cherian, Paul Taele, and Tracy Hammond. 2023. Exploring Creativity Support for Concept Art Ideation. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–7. https://doi.org/10.1145/3544549.3585684
[72] Roosa Wingström, Johanna Hautala, and Riina Lundman. 2022. Redefining Creativity in the Era of AI? Perspectives of Computer Scientists and New Media Artists. Creativity Research Journal (2022), 1–17. https://doi.org/10.1080/10400419.2022.2107850
[73] Robert J. Youmans. 2011. Design Fixation in the Wild: Design Environments and Their Influence on Fixation. The Journal of Creative Behavior 45, 2 (6 2011), 101–107. https://doi.org/10.1002/J.2162-6057.2011.TB01089.X
[74] Robert J. Youmans. 2011. The effects of physical prototyping and group work on the reduction of design fixation. Design Studies 32, 2 (3 2011), 115–138. https://doi.org/10.1016/J.DESTUD.2010.08.001
[75] Robert J. Youmans and Tomasz Arciszewski. 2014. Design Fixation: A Cloak of Many Colors. In Design Computing and Cognition '12. Springer, Dordrecht, 115–129. https://doi.org/10.1007/978-94-017-9112-0
[76] Robert J. Youmans and Tomasz Arciszewski. 2014. Design fixation: Classifications and modern methods of prevention. AI EDAM 28, 2 (2014), 129–137. https://doi.org/10.1017/S0890060414000043

APPENDIX A: THE MEAN SCORES AND STANDARD ERRORS OF THE NASA TASK LOAD INDEX (NASA-TLX) SCALES

Table A.1: The mean score and standard error for each NASA-TLX scale (mental demand, physical demand, temporal demand, performance, effort, frustration) in the three conditions: No Support, Image Search and GenAI. Values are mean (standard error).

NASA TLX          No Support   Image Search   GenAI
Mental Demand     4.1 (0.3)    4.0 (0.4)      4.2 (0.4)
Physical Demand   2.2 (0.3)    2.6 (0.3)      2.6 (0.3)
Temporal Demand   4.3 (0.4)    3.7 (0.4)      4.4 (0.4)
Performance       4.1 (0.3)    5.2 (0.3)      4.7 (0.3)
Effort            4.5 (0.3)    4.0 (0.4)      4.0 (0.2)
Frustration       3.1 (0.4)    2.0 (0.3)      2.2 (0.3)
ai_researcher
1
[Research_progress_on_identification_and_quality_control_of_Cervi_Cornu_Pantotrichum].pdf
Constrained Dynamics Simulation: More With Less

Ajay Suresha Sathya

arXiv:2405.20820v1 [cs.RO] 31 May 2024

I. INTRODUCTION
Efficient robot dynamics simulation is a fundamental problem, key for robot control, identification, design and analysis. Efficient simulation is a crucial enabler of model predictive control (MPC) [54] and learning-based control [44], arguably two of the most successful and currently the most actively researched advanced robot control techniques. It can improve the optimality and safety properties of MPC by enabling a longer prediction horizon [54], enabling highly dynamic robot behaviour¹ close to system limits. Efficient simulation can shorten training times of learning-based controllers, such as a reinforcement learning (RL) [62, 9] based controller, where the learning phase often relies heavily on simulation [44] due to the prohibitive cost and safety concerns involved with training on real robots. Faster and physically accurate simulators can democratize robotics research by potentially not requiring dedicated expensive resources like high-end GPUs and physical robots for training and validating controllers. Accurate simulation is also key to task and motion planning (TAMP) [67, 31], which will be essential for intelligent robots executing long-horizon tasks robustly. Finally, simulation profoundly impacts several adjacent fields like biomechanics [20] and computer graphics [18].

Current simulator efficiency is insufficient. Simulation is often the bottleneck, particularly in MPC, where evaluating dynamics and derivatives can take up to 80% of the total computation cost [5]. Consequently, achieving whole-body MPC for high degree-of-freedom (DoF) systems like humanoids remains an open problem. Moreover, industrial applications of solving minimum-time optimal control problems with full dynamics models, to maximize economic objectives like bin picks-per-hour [56], are also limited, perhaps due to the long solution times [34] offsetting the performance gains. Moreover, simulation inefficiencies force roboticists to ignore submechanisms like gears [37] or joint flexibility [4], which causes significant simulation errors [17]. Inspired by the success of large language models (LLMs) [3, 68], early works have trained large multimodal models targeting robotics [21, 11]. Simulation is attractive [23] for generating a sufficient amount of diverse data for training these data-hungry models. Considering the scale of the typical datasets [11], reducing the carbon footprint of generating training data and evaluating trained models in simulation assumes key importance.

Drawbacks of existing simulators. Solving the inner equality-constrained dynamics problem [28] is often the most computationally expensive aspect of simulating contact dynamics. To solve these inner problems, most existing simulators (RBDL [29], RAISIM [35], MUJOCO [66, 65], PINOCCHIO [14], DRAKE [64], DART [43], BULLET [18] and PhysX² to name a few) use Featherstone's branching-induced-sparsity-exploiting LTL algorithm [26, 27], which however has an expensive computational complexity of O(nd² + md² + m²d + m³), where n, d and m are the robot's DoF, the kinematic tree's depth, and the constraint dimensionality respectively. This worst-case cubic complexity renders the LTL algorithm inefficient for high-DoF robots.
Other recent simulators like BRAX [30] adopt a computationally even worse approach of Gauss-Seidel iterations to solve the KKT system arising from the inefficient [28] maximal-coordinate formulation of Baraff [7], but they mitigate their algorithmic inefficiency to an extent by exploiting the compute power of GPUs or TPUs. However, there exist more efficient recursive algorithms [53, 70, 6, 51] in the dynamics literature, with linear complexity in n, that have been forgotten or ignored.

These low-complexity recursive algorithms offer a unique and untapped opportunity to accelerate existing simulators. However, they require a revival and possibly an improvement. The low-complexity algorithms require non-trivial extensions to efficiently support closed-loop mechanisms. An efficient C++ implementation is also essential to fully exploit an efficient algorithm. A simulator for robotics should be physically realistic and not ignore complementarity constraints [65] or linearize the nonlinear complementarity problem (NCP) [18] associated with the frictional contact problem, as this realism can potentially require less domain randomization or control feedback to deal with simulator error. Finally, the simulator should also be differentiable with smoothing [60], without sacrificing efficiency, to enable trajectory optimization and learning for contact-rich tasks. None of the existing simulators satisfy all the requirements listed in this paragraph, and building one is my current research problem.

¹ https://www.youtube.com/watch?v=tF4DML7FIWk
² https://developer.nvidia.com/physx-sdk

II. CURRENT RESEARCH
Linear complexity constrained dynamics algorithms: I derived a family of efficient constrained dynamics algorithms (CDAs) [57] for kinematic trees by solving an equivalent discrete-time linear quadratic regulator (LQR) problem [54] arising from Gauss' principle of least constraint [32, 69, 12]. I first revisited and revived the efficient, but largely unknown, Popov-Vereshchagin (PV) CDA [53, 70] from the 1970s, with a complexity of O(n + m²d + m³). Since the original paper [70] has no derivation, I provided an expository derivation in [57] by adapting the textbook dynamic programming [8] approach to solve the corresponding LQR problem [54].
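The dynamic-programming solution of a discrete-time LQR problem, which the PV derivation adapts, is the textbook backward Riccati recursion. The generic NumPy sketch below, with a toy double-integrator system as an assumed example, shows only that backward sweep; it is not the PV algorithm's tree-structured version.

```python
import numpy as np

def lqr_backward_pass(A, B, Q, R, QN, N):
    """Backward Riccati recursion for a discrete-time LQR: returns the
    feedback gains K_k such that u_k = -K_k x_k is optimal. The PV solver
    performs an analogous backward sweep over the kinematic tree."""
    P = QN
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)   # S K = B^T P A
        P = Q + A.T @ P @ (A - B @ K)          # cost-to-go update
        gains.append(K)
    return gains[::-1]

# Toy double integrator, purely for illustration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = lqr_backward_pass(A, B, np.eye(2), np.eye(1), 10 * np.eye(2), N=50)
print(K[0])
```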
These proximal algorithms relax the constraint linear independence assumption in the PV solvers, are numerically robust in singular cases, and automatically return a least-squares solution even for infeasible motion constraints [16, 33]. They generalize existing algorithms, namely PV-soft and the CDAs in MUJOCO and DRAKE, allowing a trade-off between compliance and rigidity in contact through proximal iterations. ConstrainedABA is particularly simple and easily implemented, requiring only a few additional lines of code compared to Featherstone's articulated-body algorithm [25, 28], facilitating its implementation in existing simulators.

Lowest complexity Delassus matrix algorithms: The Delassus matrix [19, 22], or the inverse operational space inertia matrix (OSIM) [40], is key to efficiently differentiating [14, 50] CDAs and solving frictional contact problems [22, 36, 45]. I discovered that the PV algorithm computes the Delassus matrix as an intermediate quantity, thereby providing a new Delassus matrix algorithm (PV-OSIM), with a computational complexity of O(n + m²d + m²). I further accelerated PV-OSIM for floating-base robots with branching at the base using the matrix inversion lemma [59]. PV-OSIM was found to be significantly faster for most realistic robots (up to 2x for humanoids) than the existing state-of-the-art recursive algorithms, KJR [41] and EFPA [71], which have complexities of O(n + m²d + m²) and O(n + md + m²) respectively. In a follow-up work [58], I exploited the compositionality of the extended force and motion propagators [46, 15] in the PV-OSIM computation to obtain the PV-OSIMr algorithm, with an optimal complexity of O(n + m²). However, computing constraint forces requires factorizing the Delassus matrix, which incurs an additional O(m³) operations for all the above algorithms. The combination of proximal algorithms and the matrix inversion lemma in my recent work [55] yields a new algorithm, cABA-OSIM, that can compute even the damped Delassus inverse matrix in just O(n + m²) operations. cABA-OSIM was found to be over 3x faster for humanoids than Featherstone's widely used LTL-OSIM algorithm [27, 14].

Implementation: All the low-complexity dynamics algorithms mentioned above have been recently implemented in C++ within the high-quality open-source dynamics library PINOCCHIO [14] and will shortly be released to the community. PINOCCHIO was chosen because its efficient implementation alone provides a several-times speed-up compared to other simulators [18, 43] also written in C++.

III. FUTURE RESEARCH

Short term: There exist several pressing extensions and clear short-term future research directions. Mechanisms with internal closed loops [38] are inadequately addressed in most existing simulators, and recent robot designs increasingly use internal loops [2, 1] for mechanical reasons. We have made initial progress on extending the proximal dynamics algorithms [55] to closed-loop mechanisms. Next, these low-complexity algorithms will be used to solve inequality constraints and frictional contact problems, thereby delivering a fully-fledged efficient simulator using our low-complexity algorithms. The lessons learned in [45] will be used to ensure the physical realism of the frictional contact simulator. Making our low-complexity simulator differentiable is the natural next step. I expect the derivative computation to be significantly accelerated due to my cABA-OSIM algorithm and through a C++ implementation adopting the implicit function approach [13].
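The implicit function approach mentioned above can be sketched on a generic parameterized linear system standing in for the KKT conditions of the constrained dynamics problem; the function names and the scalar parameterization are assumptions for illustration.

```python
import numpy as np

def solve_lin(K, r):
    return np.linalg.solve(K, r)

def ift_gradient(K, r, dK, dr):
    """Implicit-function-theorem derivative of z(theta) defined by
    K(theta) z = r(theta): differentiating gives K dz = dr - dK z,
    i.e. one extra back-substitution with the already-factorized system."""
    z = solve_lin(K, r)
    dz = np.linalg.solve(K, dr - dK @ z)
    return z, dz

# Finite-difference check on a toy parameterized linear system.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
K0, K1 = B @ B.T + 5.0 * np.eye(4), rng.standard_normal((4, 4))
r0, r1 = rng.standard_normal(4), rng.standard_normal(4)
t, h = 0.3, 1e-6
z, dz = ift_gradient(K0 + t * K1, r0 + t * r1, K1, r1)
fd = (solve_lin(K0 + (t + h) * K1, r0 + (t + h) * r1) - z) / h
assert np.allclose(dz, fd, atol=1e-4)
```

Because the extra solve reuses the factorization from the forward pass, derivative computation is cheap once the forward dynamics have been solved, which is why a fast OSIM/Delassus algorithm also accelerates differentiation.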
Note that using proximal algorithms makes differentiation through our proximal CDAs well-defined (over a finite number of iterations) even for singular/infeasible cases, due to the differentiability of the proximal operator. To obtain informative gradients despite the non-smoothness inherent in frictional contact problems, I plan to explore and support the randomized smoothing methods [61, 60, 42] with our low-complexity algorithms.

Medium and long term: In the medium and long term, my work will explore applications and opportunities enabled by the fast and differentiable simulator. Armed with the cumulative speed-ups we expect due to algorithmic improvements as well as efficient implementation, a medium-term direction is to push towards whole-body MPC for humanoid-sized robots. For the long term, leveraging the gradient information from the simulator to reduce the sample complexity of RL or imitation learning, where the current models and learning methods remain data-hungry [39], is another potentially interesting direction. This can be critical for manipulation tasks, where there is arguably a higher diversity of challenges compared to locomotion tasks, making data generation challenging. Another exciting longer-term direction is enabling long-horizon planning [49] for sets of tasks requiring fast and dynamic tool use [67] using physically realistic simulators. Finally, through efficient algorithms and implementations, I hope to democratize robotics research by enabling prototyping and deployment of advanced control techniques like MPC and RL even on resource-constrained computational platforms.

REFERENCES
[1] Digit robot. URL https://agilityrobotics.com/robots.
[2] Kangaroo robot. URL https://pal-robotics.com/robots/kangaroo/.
[3] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[4] Alin Albu-Schäffer, Christian Ott, and Gerd Hirzinger. A unified passivity-based control framework for position, torque and impedance control of flexible joint robots. The International Journal of Robotics Research, 26(1):23–39, 2007.
[5] Alejandro Astudillo, Justin Carpentier, Joris Gillis, Goele Pipeleers, and Jan Swevers. Mixed use of analytical derivatives and algorithmic differentiation for NMPC of robot manipulators. IFAC-PapersOnLine, 54(20):78–83, 2021.
[6] Dae-Sung Bae and Edward J Haug. A recursive formulation for constrained mechanical system dynamics: Part II. Closed loop systems. Journal of Structural Mechanics, 15(4):481–506, 1987.
[7] David Baraff. Linear-time dynamics using Lagrange multipliers. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pages 137–146, 1996.
[8] Richard Bellman. Dynamic programming. Science, 153(3731):34–37, 1966.
[9] Dimitri Bertsekas. Reinforcement Learning and Optimal Control. Athena Scientific, 2019.
[10] Helmut Brandl, Rainer Johanni, and Martin Otter. A very efficient algorithm for the simulation of robots and similar multibody systems without inversion of the mass matrix. IFAC Proceedings Volumes, 19(14):95–100, 1986.
[11] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al.
RT-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023.
[12] Herman Bruyninckx and Oussama Khatib. Gauss' principle and the dynamics of redundant and constrained manipulators. In Proc. IEEE Int. Conf. Robot. Autom., volume 3, pages 2563–2568. IEEE, 2000.
[13] Justin Carpentier and Nicolas Mansard. Analytical derivatives of rigid body dynamics algorithms. In Proc. Robot., Sci. Syst., 2018.
[14] Justin Carpentier, Guilhem Saurel, Gabriele Buondonno, Joseph Mirabel, Florent Lamiraux, Olivier Stasse, and Nicolas Mansard. The Pinocchio C++ library: A fast and flexible implementation of rigid body dynamics algorithms and their analytical derivatives. In 2019 IEEE/SICE International Symposium on System Integration (SII), pages 614–619. IEEE, 2019.
[15] Kyong-Sok Chang and Oussama Khatib. Efficient recursive algorithm for the operational space inertia matrix of branching mechanisms. Advanced Robotics, 14(8):703–715, 2001.
[16] Alice Chiche and Jean Charles Gilbert. How the augmented Lagrangian algorithm can deal with an infeasible convex quadratic optimization problem. Journal of Convex Analysis, 23(2), 2016.
[17] Matthew Chignoli, Nicholas Adrian, Sangbae Kim, and Patrick M Wensing. Recursive rigid-body dynamics algorithms for systems with kinematic loops. arXiv preprint arXiv:2311.13732, 2023.
[18] Erwin Coumans and Yunfei Bai. PyBullet, a Python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016–2021.
[19] Étienne Delassus. Mémoire sur la théorie des liaisons finies unilatérales. In Annales scientifiques de l'École normale supérieure, volume 34, pages 95–179, 1917.
[20] Scott L Delp, Frank C Anderson, Allison S Arnold, Peter Loan, Ayman Habib, Chand T John, Eran Guendelman, and Darryl G Thelen. OpenSim: open-source software to create and analyze dynamic simulations of movement. IEEE Transactions on Biomedical Engineering, 54(11):1940–1950, 2007.
[21] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. PaLM-E: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023.
[22] Christian Duriez, Frederic Dubois, Abderrahmane Kheddar, and Claude Andriot. Realistic haptic rendering of interacting deformable objects in virtual environments. IEEE Transactions on Visualization and Computer Graphics, 12(1):36–47, 2005.
[23] Clemens Eppner, Arsalan Mousavian, and Dieter Fox. ACRONYM: A large-scale grasp dataset based on simulation. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6222–6227. IEEE, 2021.
[24] Farbod Farshidian, Michael Neunert, Alexander W Winkler, Gonzalo Rey, and Jonas Buchli. An efficient optimal planning and control framework for quadrupedal locomotion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 93–100. IEEE, 2017.
[25] Roy Featherstone. The calculation of robot dynamics using articulated-body inertias. Int. J. Robot. Res., 2(1):13–30, 1983.
[26] Roy Featherstone. Efficient factorization of the joint-space inertia matrix for branched kinematic trees. Int. J. Robot. Res., 24(6):487–500, 2005.
[27] Roy Featherstone. Exploiting sparsity in operational-space dynamics. Int. J. Robot. Res., 29(10):1353–1368, 2010.
[28] Roy Featherstone. Rigid Body Dynamics Algorithms. Springer, 2014.
[29] Martin L Felis. RBDL: an efficient rigid-body dynamics library using recursive algorithms. Autonomous Robots, 41(2):495–511, 2017.
[30] C Daniel Freeman, Erik Frey, Anton Raichuk, Sertan Girgin, Igor Mordatch, and Olivier Bachem. Brax – a differentiable physics engine for large scale rigid body simulation. arXiv preprint arXiv:2106.13281, 2021.
[31] Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Tom Silver, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. Integrated task and motion planning. Annual Review of Control, Robotics, and Autonomous Systems, 4:265–293, 2021.
[32] Carl Friedrich Gauß. Über ein neues allgemeines Grundgesetz der Mechanik. 1829.
[33] Osman Güler. On the convergence of the proximal point algorithm for convex minimization. SIAM Journal on Control and Optimization, 29(2):403–419, 1991.
[34] Taylor A Howell, Brian E Jackson, and Zachary Manchester. ALTRO: A fast solver for constrained trajectory optimization. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 7674–7679. IEEE, 2019.
[35] Jemin Hwangbo, Joonho Lee, and Marco Hutter. Per-contact iteration method for solving contact dynamics. IEEE Robotics and Automation Letters, 3(2):895–902, 2018.
[36] Jemin Hwangbo, Joonho Lee, and Marco Hutter. Per-contact iteration method for solving contact dynamics. IEEE Robot. Autom. Lett., 3(2):895–902, 2018. URL www.raisim.com.
[37] A Jain and G Rodriguez. Recursive dynamics for geared robot manipulators. In 29th IEEE Conference on Decision and Control, pages 1983–1988. IEEE, 1990.
[38] Abhinandan Jain. Robot and Multibody Dynamics: Analysis and Algorithms. Springer Science & Business Media, 2010.
[39] Fabian Jenelten, Junzhe He, Farbod Farshidian, and Marco Hutter. DTC: Deep tracking control. Science Robotics, 9(86):eadh5401, 2024.
[40] Oussama Khatib. A unified approach for motion and force control of robot manipulators: The operational-space formulation. IEEE Journal on Robotics and Automation, 3(1):43–53, 1987.
[41] Kenneth Kreutz-Delgado, Abhinandan Jain, and Guillermo Rodriguez. Recursive formulation of operational-space control. Int. J. Robot. Res., 11(4):320–328, 1992.
[42] Quentin Le Lidec, Fabian Schramm, Louis Montaut, Cordelia Schmid, Ivan Laptev, and Justin Carpentier. Leveraging randomized smoothing for optimal control of nonsmooth dynamical systems. Nonlinear Analysis: Hybrid Systems, 52:101468, 2024.
[43] Jeongseok Lee, Michael X. Grey, Sehoon Ha, Tobias Kunz, Sumit Jain, Yuting Ye, Siddhartha S. Srinivasa, Mike Stilman, and C Karen Liu. DART: Dynamic animation and robotics toolkit. The Journal of Open Source Software, 3(22):500, 2018.
[44] Joonho Lee, Jemin Hwangbo, Lorenz Wellhausen, Vladlen Koltun, and Marco Hutter. Learning quadrupedal locomotion over challenging terrain. Science Robotics, 5(47):eabc5986, 2020.
[45] Quentin Le Lidec, Wilson Jallet, Louis Montaut, Ivan Laptev, Cordelia Schmid, and Justin Carpentier. Contact models in robotics: a comparative analysis. arXiv preprint arXiv:2304.06372, 2023.
[46] Kathryn Weed Lilly. Efficient Dynamic Simulation of Multiple Chain Robotic Systems. The Ohio State University, 1989.
[47] Carlos Mastalli, Rohan Budhiraja, Wolfgang Merkt, Guilhem Saurel, Bilal Hammoud, Maximilien Naveau, Justin Carpentier, Ludovic Righetti, Sethu Vijayakumar, and Nicolas Mansard. Crocoddyl: An efficient and versatile framework for multi-contact optimal control. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 2536–2542. IEEE, 2020.
[48] Scott McMillan and David E Orin. Efficient computation of articulated-body inertias using successive axial screws.
IEEE Transactions on Robotics and Automation, 11(4):606–611, 1995.
[49] Vahid Mokhtari, Ajay Suresha Sathya, Nikolaos Tsiogkas, and Wilm Decré. Safe-Planner: A single-outcome replanner for computing strong cyclic policies in fully observable non-deterministic domains. In 2021 20th International Conference on Advanced Robotics (ICAR), pages 974–981. IEEE, 2021.
[50] John N Nganga and Patrick M Wensing. Accelerating hybrid systems differential dynamic programming. ASME Letters in Dynamic Systems and Control, 3(1):011002, 2023.
[51] M Otter, H Brandl, and R Johanni. An algorithm for the simulation of multibody systems with kinematic loops. In Proceedings of the 7th World Congress on Theory of Machines and Mechanisms, IFToMM, Sevilla, Spain, 1987.
[52] Neal Parikh, Stephen Boyd, et al. Proximal algorithms. Foundations and Trends® in Optimization, 1(3):127–239, 2014.
[53] Je P Popov, Anatolij Fedorovič Vereshchagin, and Stanislav Leonidovič Zenkevič. Manipuljacionnyje roboty: Dinamika i algoritmy. Nauka, 1978.
[54] James Blake Rawlings, David Q Mayne, and Moritz Diehl. Model Predictive Control: Theory, Computation, and Design, volume 2. Nob Hill Publishing, Madison, 2017.
[55] Ajay Sathya and Justin Carpentier. Constrained articulated body dynamics algorithms. Conditionally accepted to IEEE Transactions on Robotics, 2024.
[56] Ajay Suresha Sathya, Alejandro Astudillo, Joris Gillis, Wilm Decré, Goele Pipeleers, and Jan Swevers. Tasho: A Python toolbox for rapid prototyping and deployment of optimal control problem-based complex robot motion skills. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 9700–9707. IEEE, 2022.
[57] Ajay Suresha Sathya, Herman Bruyninckx, Wilm Decré, and Goele Pipeleers. Efficient constrained dynamics algorithms based on an equivalent LQR formulation using Gauss' principle of least constraint. IEEE Transactions on Robotics, 40, 2024.
[58] Ajay Suresha Sathya, Wilm Decré, and Jan Swevers. PV-OSIMr: A lowest order complexity algorithm for computing the Delassus matrix. arXiv preprint arXiv:2310.03676, 2024. In revision for IEEE RA-L.
[59] Jack Sherman and Winifred J Morrison. Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. The Annals of Mathematical Statistics, 21(1):124–127, 1950.
[60] Hyung Ju Suh, Max Simchowitz, Kaiqing Zhang, and Russ Tedrake. Do differentiable simulators give better policy gradients? In International Conference on Machine Learning, pages 20668–20696. PMLR, 2022.
[61] Hyung Ju Terry Suh, Tao Pang, and Russ Tedrake. Bundled gradients through contact via randomized smoothing. IEEE Robotics and Automation Letters, 7(2):4000–4007, 2022.
[62] Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
[63] Yuval Tassa, Nicolas Mansard, and Emo Todorov. Control-limited differential dynamic programming. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 1168–1175. IEEE, 2014.
[64] Russ Tedrake and the Drake Development Team. Drake: Model-based design and verification for robotics, 2019. URL https://drake.mit.edu.
[65] Emanuel Todorov. Convex and analytically-invertible dynamics with contacts and constraints: Theory and implementation in MuJoCo. In Proc. IEEE Int. Conf. Robot. Autom., pages 6054–6061. IEEE, 2014.
[66] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In Proc. IEEE/RSJ Int. Conf. Int. Robots. Syst., pages 5026–5033. IEEE, 2012.
[67] Marc Toussaint, Kelsey R Allen, Kevin A Smith, and Joshua B Tenenbaum. Differentiable physics and stable modes for tool-use and manipulation planning. In Proc. Robot., Sci. Syst., 2018.
[68] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[69] Firdaus E Udwadia and Robert E Kalaba. Analytical Dynamics: A New Approach. Cambridge University Press, Cambridge, 1996. ISBN 0-521-48217-8.
[70] Anatolii Fedorovich Vereshchagin. Modeling and control of motion of manipulational robots. Soviet Journal of Computer and Systems Sciences, 27(5):29–38, 1989.
[71] Patrick Wensing, Roy Featherstone, and David E Orin. A reduced-order recursive algorithm for the computation of the operational-space inertia matrix. In Proc. IEEE Int. Conf. Robot. Autom., pages 4911–4917. IEEE, 2012.
ai_researcher
2
Practical_Considerations_for_Agentic_LLM_Systems.pdf
Practical Considerations for Agentic LLM Systems
Chris Sypherd, University of Edinburgh, Edinburgh, United Kingdom, [email protected]
Vaishak Belle, University of Edinburgh, Edinburgh, United Kingdom, [email protected]

Abstract
As the strength of Large Language Models (LLMs) has grown over recent years, so too has interest in their use as the underlying models for autonomous agents. Although LLMs demonstrate emergent abilities and broad expertise across natural language domains, their inherent unpredictability makes the implementation of LLM agents challenging, resulting in a gap between related research and the real-world implementation of such systems. To bridge this gap, this paper frames actionable insights and considerations from the research community in the context of established application paradigms to enable the construction and facilitate the informed deployment of robust LLM agents. Namely, we position relevant research findings into four broad categories—Planning, Memory, Tools, and Control Flow—based on common practices in application-focused literature and highlight practical considerations to make when designing agentic LLMs for real-world applications, such as handling stochasticity and managing resources efficiently. While we do not conduct empirical evaluations, we do provide the necessary background for discussing critical aspects of agentic LLM designs, both in academia and industry.

Keywords
Large Language Models (LLMs), LLM Agents, Agentic LLMs, Applied LLM Systems

1 Introduction
In academia, the concept of "agents" has been well-defined for decades (e.g., [94]), and thus the proposition of agents based on LLMs comes with predefined criteria and expectations. As such, agentic LLMs in the research community have come to be defined as autonomous systems with capabilities of beliefs [47, 62, 71], reasoning [105], planning [36, 72], and control [72]. Under this definition, the ability to plan, reason, and interact with an environment have emerged as the key considerations for success [97]. For LLM agents in industry and real-world deployment, the history and breadth of agents has been condensed to a definition along the lines of "a system that can use an LLM to reason through a problem, create a plan to solve the problem, and execute the plan with the help of a set of tools" [84]. Most industry discussions follow this form, introducing the LLM as the central reasoning engine and adding planning, memory, and tools as three necessary modules (e.g., [12, 13, 16, 84, 93]). Indeed, most industry resources focusing on deployable agentic LLM systems are accompanied by a diagram similar to Figure 1, focusing largely on single agents. While this description is helpful for the most basic of LLM agents, it glosses over some of the more nuanced considerations that must be made for the informed construction of robust agentic LLM systems.

The prevailing view of LLM agents in industry brings to light the disparity between (1) research into LLMs and agents and (2) the application of agentic LLM systems in real-world scenarios. To bridge this gap, we propose framing relevant findings from the research community in the common industry view of LLM agents. To that end, we organize this work into four main sections—Planning, Memory, Tools, and Control Flow—that correspond to, respectively, the planning, memory, tools, and central reasoning engine components referenced above.
We tailor the contents of this paper to black-box LLM-based single-agent systems typical of that industry perspective. By doing so, we hope to create an actionable and approachable survey that enables information exchange between academia and industry within a manageable scope.

Figure 1: A typical application-focused depiction of LLM agents.

2 Related Work
Many surveys discussing LLM-based agents focus on multi-agent frameworks and related ideas [32, 105]. While similar, the challenges facing multi-agent systems are distinct from the real-world deployment of a single LLM agent. Here, we focus more on deliberately crafting a robust agent rather than the orchestration of many.

Another approach, taken by [100], focuses on methods for improving agentic LLM performance starting at the underlying model, looking into data composition and training methodologies. We focus on implementation considerations that improve agentic LLM system performance from a black-box perspective, which lends itself more to real-world deployment.

Others focus on creating unified taxonomies [49, 97] or target a single component of LLM agents, such as planning [36]. The most similar work is [87], providing a comprehensive survey of works relating to LLMs as agents as well as reviewing aspects of their design, application, and evaluation. While [87] develops a valuable unified framework based on extant research, we leverage research findings to provide practical application-focused insights and frame our review in the context of the LLM agent paradigm that has developed organically in industry.

To the best of our knowledge, this is the first work that coalesces research relevant to LLM agents through the lens of common industry practices. We expand that contribution by not just presenting existing research but by extrapolating actionable insights and best practices from it.

3 Applied Scenario
To help illustrate some of the following points, we propose the example outlined in Figure 2 of applying an LLM agent as a pescetarian1 meal assistant. We will refer to this as the primary example2 throughout this work for consistency, citing specifics from it by the codes assigned in Figure 2 (e.g., 2.R1 to refer to "Pescetarian recipe book").

1Someone that does not eat meat, aside from fish and other seafood.
2While we attempt to select a simple example with some relevance to the real world, it is still a contrived example to demonstrate the points outlined in this work and may not fully reflect the complexities of the real world.

Figure 2: An example scenario featuring a pescetarian meal assistant LLM agent.

4 Glossary
This glossary serves to briefly introduce the following terms that will be used across subsequent sections. Later sections will provide additional contextualization and examples of their utility but not necessarily explicit definitions.

Persona. A persona (also referred to as a "role" or a "profile") is the identity assigned to the LLM, often as part of the system prompt. The persona is the lens through which the LLM will interpret and respond to prompts. The persona (e.g., Figure 2.Pe2) can be defined and refined by an occupation (e.g., "professional chef"), level and domain of expertise (e.g., "specializing in pescetarian dishes"), and personality traits (e.g., "friendly and understanding") but can be further customized by adding details such as age, race, gender, and nationality [17, 87].

Tool. Tools are the means by which an LLM can interact with its environment (beyond basic textual exchange) and access external resources [66]. Retrieval Augmented Generation (RAG) is
commonly used as a tool (Figure 2.T1) but is limited in its utility as it exclusively retrieves information about the environment. The true power of tools is realized when they are used to perform actions in the environment, such as the example in Figure 2.T2 that would allow the pescetarian meal assistant to not only recommend recipes but also order the ingredients to prepare them. Other examples of tools include ground-truth verification methods (e.g., code execution and calculator usage) and real-time environment querying (e.g., requesting trending recipes) [91].

Hyperparameters. This section includes a brief overview of the hyperparameters we will reference but does not explore their technical details3.

• Seed. Some LLM interfaces will have a "seed" parameter that should, provided all other parameters remain constant, produce the same output.
• Temperature. Temperature corresponds to the degree to which randomness will be employed in selecting the output tokens. This usually plays out in higher-temperature responses being more creative and rambling while lower-temperature responses are more predictable and straight-to-the-point. Thus, for more consistent results, a lower temperature (e.g., 0.0 to 0.5) can be used.
• Top-p. Top-p (also known as "nucleus sampling;" introduced in [34]) corresponds to the probability threshold for selecting tokens that can form part of the output, bounded 0.0 to 1.0. Lower top-p values restrict the pool of tokens the LLM can choose from, resulting in more reproducible outputs.

3See [1, 6, 9] for common commercial API support for these hyperparameters.

5 Planning
Planning has long been a core component of agent research [59, 94]; it allows more complex tasks to be handled in smaller, more manageable steps. Planning can also enhance the interpretability of an LLM agent, as the steps of the plan and the stopping criteria will be defined in an interpretable format.

5.1 LLMs and Planning
Despite anecdotal applications showing signs of successful LLM planning [77], more holistic reviews suggest that LLMs make poor planners [25, 38, 52, 82, 83]. As such, if an LLM agent is to be deployed in an environment with a consistent task, manually curating a plan can alleviate the pains of poor LLM planning as well as provide an opportunity to manually craft relevant roles and prompts. Another option is to augment the LLM agent with an external planning tool, which has shown promise [25, 52]. Because LLM planning remains an open area of research, the example in Figure 2.P1 simply assumes planning capabilities without subscribing to a specific approach, for illustrative purposes.

To describe current approaches to planning in LLM agents, we categorize them into implicit and explicit planning. For implicit planning, some agents will rely on the LLM to iteratively determine the immediate next step until the task is complete, without ever eliciting a plan [33, 104]. This approach relies on the idea that, when provided with an end goal, the LLM can maintain an internal plan whose steps are revealed iteratively without any explicit plan formalization. This approach can be viable when the environment is dynamic or only partially observable, such as interacting with a webpage [33].
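A minimal sketch of this first, iterative style of implicit planning is shown below. The `llm.chat` client, the JSON action format, and the tool registry are stand-ins rather than any specific vendor API, and the low temperature and fixed seed follow the reproducibility notes in the Glossary.

```python
import json

def run_implicit_planner(llm, task, tools, max_steps=10):
    """Iteratively ask the model for the single next action until it signals
    completion; no explicit plan is ever formalized."""
    history = [
        {"role": "system",
         "content": ('Decide the single next action toward the goal. Reply with '
                     'JSON: {"action": <tool name>, "args": {...}} or '
                     '{"action": "DONE", "result": <final answer>}.')},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = llm.chat(messages=history, temperature=0.2, seed=42)
        step = json.loads(reply)            # may itself need retries (Section 8.2)
        if step["action"] == "DONE":
            return step["result"]
        observation = tools[step["action"]](**step.get("args", {}))
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": f"Observation: {observation}"})
    return None  # step budget exhausted without the agent stopping itself
```

The explicit `max_steps` budget guards against the agent never deciding to stop, a failure mode revisited in Section 8.3.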
The other form of implicit planning is the creation and execution of a plan in a single inference, as seen in prompting strategies such as Plan-and-Solve [88] and, to a degree, zero-shot Chain-of-Thought [42]. These approaches rely on conditioning subsequent token generation (i.e., the "execution") on a plan by first generating said plan. Due to the single-hop nature of this approach, it is not recommended for complex tasks, particularly those that would benefit from feedback during execution.

Explicit planning is characterized by the explicit formalization of a multi-step plan, typically executed in a multi-hop fashion. The most basic form is to simply request the formulation of a plan and then execute it, as demonstrated by the Least-to-Most prompting strategy [107]. More advanced approaches will first develop a plan and then iteratively refine the plan as steps are executed [55]. Both of these require long-term planning, which is where LLMs tend to demonstrate lackluster performance [38, 82, 83].

5.2 Task Decomposition
It is important to understand the limitations of an LLM before formulating a plan for it to execute. Agentic LLM systems are often applied to problems that a single LLM call cannot resolve but a sequence of calls can. Tasks can typically be decomposed into smaller pieces that, when solved individually, can be reconstructed to produce the final solution [107].

Returning to Figure 2, the request made in Figure 2.I1 is composed of multiple subtasks, namely: (1) retrieve recipes that contain rice, beans, and tomato that the user will like and (2) order any missing ingredients. It is also reasonable to decompose (1) further, into a retrieval of recipes that contain the required ingredients and, separately, a request to select the one that best fits the user's tastes.

If decomposing a well-defined task manually, iteratively decomposing the task into subtasks and testing an LLM on them can provide valuable insight into what the LLM can consistently handle. Breaking down the problem logically is simple enough, but ascertaining which tasks an LLM can perform well and which require further decomposition can be challenging, particularly when dealing with stochasticity and prompt changes. It is recommended to evaluate the LLM agent frequently and systematically during this process, as discussed in Section 9.2. It may be easier to start at the most basic building blocks of the tasks and combine them than to find the minimum number of viable tasks to start.

While it may be intuitive to assume that the more atomic the task the better, this is not always the case. It has been shown that LLMs not only possess the ability to solve multiple distinct tasks in a single query [44, 76, 98] but that composing multiple tasks into a single prompt can increase performance on all constituent tasks, as well as decreasing overall context usage [76]. However, the degree to which tasks may be combined should be the subject of rigorous experimentation for the specific task and environment in which it is considered.

5.3 Plan Adherence
One of the responsibilities of the LLM agent is to oversee the application of the plan. It should decide if a step needs to be repeated (e.g., for Error Handling) or skipped for a given input (e.g., to iterate on the plan [55]). One of the major concerns of LLMs as planners is their inability to identify whether or not they can complete a given task [38].
As such, it is often impossible for an LLM agent to know if a step will be successful until it has been attempted. Thus, it follows logically that an evaluation of the success of each step should take place following execution. Similarly, the overall success of the plan should be evaluated upon completion of all steps. If unsuccessful, the LLM agent may need to adjust or rerun the plan, based on the results of each step and the overall plan (see Section 8.2 for a discussion on incorporating feedback).

6 Memory
6.1 Retrieval Augmented Generation
Retrieval augmented generation (RAG) (introduced in [46]) has emerged as a staple of agentic LLM applications in industry [31]. The basis is simple: a system that can provide external context relevant to a natural language input. Typically, an incoming input will be compared against a ground-truth data store and the most relevant piece(s) of information will be provided to the LLM as context upon which it will base its response. This can be done either implicitly, where a user's input is always used for retrieval for a given LLM call, or explicitly, where the LLM uses RAG as a tool. This has a number of benefits for LLM systems:

• Grounding. Rather than relying on the LLM "remembering" relevant context from its training data correctly, we can provide the LLM with accurate relevant information. Providing grounded text as context significantly reduces LLM hallucinations and fills knowledge gaps in the training data [30, 46, 74].
• Explainability. Rather than relying on an LLM opaquely referencing information it has been trained on, adherence to context supplied as part of RAG provides insight into exactly where an LLM is getting its information [31].
• Timeliness. While LLMs can reference information from their static training data, the LLM will be subject to a hard information cutoff (the latest date training data was scraped) and a soft information cutoff (events close to its hard information cutoff that have limited coverage). Rather than turning to the infeasible prospect of retraining with updated data, we can provide updated information that is relevant to the query as context [31].
• Outsourcing. Depending on the content, quality, and reliability of the RAG database, aspects of the query can be implicitly outsourced to the context returned, such as reasoning and decision-making.
• Alignment. The vast amount of training data used for LLMs is the source of their natural language understanding but should not necessarily be relied on for unbiased, trustworthy, and safe generation. Typically, aligning LLM outputs with human preferences is seen as a data collection and training problem [19, 90] but can also be addressed post hoc with RAG. By augmenting an LLM's natural language capabilities and tendencies with context derived from a more refined dataset that adheres to a desired set of human preferences, its output can be guided to conform to a desired set of content and attitudes. This requires careful curation of the data store but is a viable method for black-box alignment.

To exemplify the points above, consider the RAG sources referenced in Figure 2.R1 and 2.R2. Figure 2.R1 can be quoted to avoid recipe hallucinations (grounding) and be updated with new recipes (timeliness). Figure 2.R2 could be useful in responding to Figure 2.I2; where there is no general consensus, we can supply our own ground truth rather than require the LLM to answer a potentially moral question (outsourcing).
The tone and terminology of both Figure 2.R1 and 2.R2 will guide the ideals, content, and terminology used by the LLM (alignment).

There are two main approaches to RAG: knowledge graphs [29] and vector databases [20, 31], with the latter seeing far greater adoption due to its simplicity. For a discussion on implementing RAG and the extant commercial and open-source offerings, see [31].

6.2 Long-Term Memory
Sometimes, key information is gained during a conversation that may be helpful across all contexts, such as a useful piece of external knowledge or information about a user or task. In those instances, it may be advantageous to store that information in a way accessible to the agent so that its impact is not limited to the current context. This is commonly referred to as "long-term memory" [64, 87, 106]4. We want to be selective with the information that is stored in long-term memory so that it is generally useful and not excessively large. Some common approaches are to store prior solutions to queries [64], global summaries and insights [106], and acquired tools [86].

Long-term memory can be enhanced with reflection, consolidation, forgetting, revision, and other mechanisms designed to mimic long-term memory in humans (see [106] for a discussion on advanced long-term memory implementation). For simplicity, we focus on a simple version of long-term memory, where information is simply stored and retrieved, and any edits are manual. For this simple variation, we derive the following three criteria from existing literature on long-term memory in LLM agents [14, 64, 86, 87, 106] to use as a litmus test for what information should be stored:

• Independent. The information should not have any implicit dependencies, such as input values.
• Relevant to a consistency. The information should be relevant to consistencies in the agentic LLM system, which may include a task, user, or environment.
• Applicable long-term. The information should consistently be applicable to contexts to which the LLM agent may be exposed.

See Table 1 for the above criteria applied to examples drawn from Figure 2.

4A real-world implementation of long-term memory is OpenAI's "Memory" [14]. During conversations with ChatGPT, the LLM will save information that it deems particularly useful to its memory. That memory is then made available to the LLM in future conversations. A clear benefit of this is that it reduces repetition on the part of the user and allows the LLM to better fulfill its objective of providing relevant responses.

Table 1: Analysis of information from Figure 2 in context of the three criteria for storing in-context information.

Information | Independent | Relevant to Consistency | Applicable Long-Term | Store the Info
E1 Corn is no longer available | Yes | Yes | Yes | Yes
E2 Poultry is no longer available | Yes | No | No | No
U2 Allergic to nuts | Yes | Yes | Yes | Yes
I1 I have rice, beans, and tomatoes... | No | Yes | No | No
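As one way to operationalize the criteria in Table 1, they can be encoded directly in a moderator prompt. This is only a sketch: the `llm.chat` client is a stand-in, and the "an empty list is acceptable" phrasing anticipates the framing-bias concern discussed in the next subsection.

```python
import json

MODERATOR_PROMPT = """Review the conversation below. Extract only facts that are
(1) independent of any particular input, (2) relevant to a consistent task,
user, or environment, and (3) applicable long-term. An empty list is an
acceptable answer. Reply with a JSON list of strings.

Conversation:
{conversation}"""

def extract_long_term_memories(llm, conversation):
    """Stand-in moderator call applying the three storage criteria."""
    reply = llm.chat(
        messages=[{"role": "user",
                   "content": MODERATOR_PROMPT.format(conversation=conversation)}],
        temperature=0.0)
    return json.loads(reply)
```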
If the three criteria above are ensured, then the gathered in-context information can be a useful starting place for prompt improvements. It will be information that has been identified as generally and consistently useful to the LLM agent's environment and may be appropriately suited to permanent inclusion in user or system prompts. It can also be valuable to review long-term memory when making prompt, task, persona, hyperparameter, or model updates; reordering LLM calls; or adjusting tool functionality, as such changes may impact the validity of the three criteria.

6.2.1 Extracting Information for Long-Term Memory. A common approach to extracting information that belongs in long-term memory is to leverage an external conversation moderator [73, 106]. The external moderator (e.g., an LLM with a separate role) reviews conversations (either whole or in pieces) and can be tasked with extracting information it deems compliant with the three criteria above. This is an instance where care must be taken with phrasing, as the subjectivity of the task may make the LLM prone to framing bias in its response (e.g., if we ask if there is anything useful to pull out, the LLM will likely pull out some information) [28].

6.2.2 Storing Long-Term Memory. Once a piece of information has been deemed worthy of long-term memory, it should be stored. Some approaches include embedding and storing the information in a vector database (similar to RAG) [51, 106] and natural language storage, although interacting with the latter quickly becomes unwieldy as the amount of long-term memory increases. The structure of the vector database allows us to easily query relevant information [51, 86, 106].

6.2.3 Utilizing Long-Term Memory. Once information is stored in long-term memory, we must decide when to expose it to the LLM agent. It is key to understand what information is relevant in the current scope. LLM agents may be composed of many LLM calls with different purposes and contexts; not all information from long-term memory will apply to every LLM call. For example, if the user from Figure 2 decides to plan meals once a week (per Figure 2.I5), that would be a valuable long-term memory for Figure 2.Pe1 but not necessarily 2.Pe2, which is mainly used for its culinary expertise. In such instances, the relevance afforded by retrieval from a vector database is valuable [51, 86, 106]. Once relevant information has been retrieved from long-term memory, it can be shared in an LLM call via the user or system prompt.

7 Tools
7.1 Using Tools
To enable the LLM to use tools, tool descriptions and methods of invocation need to be exposed to the LLM (similar to traditional software engineering documentation). If the number of tools in use is small, they can be introduced in natural language. The method for invoking a tool should be clear and easily parsable. A common way to do this is by defining JSON schemas or function signatures, although the latter has been shown to be better for LLM agents [68, 89].

Tools can be called either explicitly or implicitly, with the former being the de facto approach in practice. Explicit usage simply entails the invocation of a tool as part of the LLM agent's output [67, 70]. Once the tools are defined and passed as context, the agent will have the means to perform such an invocation in the specified parsable format. Tools can also be implicitly invoked by the implementor in response to an LLM agent's action or inaction. For example, if a transition between personas occurs, it may be the case that the system will always benefit from a summarization of preceding dialogue. Rather than rely on the LLM agent to invoke a summarization at every persona change, every such transition can trigger a summarization behind the scenes. See Section 9.3 for a discussion on incorporating implicit tool calling.

7.2 Managing Multiplicity
As the number of tools grows, defining tools in natural language quickly becomes unwieldy and a structured approach is necessary.
To do so, we can leverage LLMs' convenient understanding of code by creating more concise tool definitions using JSON schemas or function signatures in conjunction with condensed natural language descriptions5.

Often, distinct tools can be placed into distinct groups based on similar core functionality (i.e., if they can reasonably be seen as inheriting from the same base class). These groups can be called "toolsets" or "toolkits"6 and are helpful for determining if tools can be combined behind a single interface or introduced together in the prompt. For example, the tools Figure 2.T1 and 2.T2 introduced in the example would not belong in the same toolset but 2.T2 and 2.T3 would.

7.3 Adding Tools Dynamically
Sometimes the tools that are available in the environment in which an agentic LLM system will be deployed are not known beforehand. In this case, we can add "tool identification" as a task for the system [70, 86]. A compelling example and implementation of this can be observed in the Voyager paper, where an LLM-based agent autonomously traverses the world of Minecraft7 and dynamically assembles a set of tools based on interactions with the environment, which are then stored in long-term memory [86].

5See https://python.langchain.com/docs/concepts/#tools for a discussion on tool definition and [2] for an implementation.
6See https://python.langchain.com/docs/concepts/#toolkits.
7https://www.minecraft.net

8 Control Flow
In the context of LLM agents, control flow refers to the ability to determine what needs to be done in order to respond to a query. Tasking an LLM with control flow is what enables LLM-based agents to accomplish complex tasks that elude the capacity of a single inference. This endows the LLM agent with the autonomy to incorporate advanced techniques such as planning, tool usage, and multi-step reasoning as it sees fit [72, 87]. In practice, this may look like the LLM agent receiving user input (i.e., observing the environment) and selecting the immediate next action. The agent continues to take actions until it decides to stop. For this to be possible, the LLM agent needs to be aware of the action space [102], such as the stopping criteria, available tools (e.g., Figure 2.T1, 2.T2, and 2.T3), available planning options (2.P1), the ability to take a turn to think out loud [102], and utilizing other personas (e.g., 2.Pe2).

Consider an LLM agent receiving Figure 2.I1. Rather than simply providing an output, the agent can opt to leverage 2.P1, the planning module, to decompose the complex task and generate a multi-step plan that it can then administer. Once the plan is complete, the agent can decide if it has enough information to provide the final output to 2.I1 or if it needs to take additional actions.

Here, we present practical considerations for ensuring the LLM agent can interact with its environment smoothly and without interruption.

8.1 Output Processing
When chaining together multiple LLM inputs and outputs, it is often advantageous to process the text before handing it off to the next step. Although natural language is human-readable, it is advisable to use a more structured format (such as JSON or executable code) that is easily parsable [89]. While weaker models may struggle with instruction following, most commercial models have been optimized to adhere to desired output formats specified in the user or system prompt8.

8Output processing documentation for common commercial models: Anthropic [3]; OpenAI [5]; Google [10].
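Bringing the tool-definition and output-processing advice together, the following sketch exposes a hypothetical Figure 2.T2-style tool as a function signature and parses a structured invocation. The tool body, the JSON call format, and all names here are assumptions for illustration, not a prescribed interface.

```python
import inspect
import json

def order_groceries(items: list, delivery_slot: str = "asap") -> str:
    """Order the given ingredients for delivery (in the spirit of Figure 2.T2)."""
    return f"ordered {items} for {delivery_slot}"

TOOLS = {"order_groceries": order_groceries}

def render_tool_docs(tools):
    # Function signatures plus docstrings, which LLM agents have been shown
    # to handle better than raw JSON schemas [68, 89].
    return "\n".join(f"{name}{inspect.signature(fn)}: {fn.__doc__}"
                     for name, fn in tools.items())

def dispatch(raw_output: str):
    # Expects e.g. {"tool": "order_groceries", "args": {"items": ["onion"]}}.
    call = json.loads(raw_output)
    return TOOLS[call["tool"]](**call.get("args", {}))

print(render_tool_docs(TOOLS))
print(dispatch('{"tool": "order_groceries", "args": {"items": ["onion"]}}'))
```

A failed `json.loads` or an unknown tool name in `dispatch` is exactly the kind of verifiable failure that the retry strategies in the next subsection are designed to handle.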
Because we approach LLMs from a black-box perspective, we do not discuss the underlying approaches to constraining LLMs to output a specific format. However, it is important to note that the reasoning capabilities demonstrated by an LLM may be negatively (and inadvertently) impacted by constrained generation, depending on the implementation [22, 79]. Because of this, it has been shown that requiring code outputs instead of a specific structure can yield better agents [89].

8.2 Error Handling
Error handling is one of the most important yet elusive parts of building a robust agentic LLM system. Because LLMs are inherently stochastic, chaining several LLM calls together compounds the risk of failure to the point of near inevitability for long sequences. As such, every LLM call in an agentic LLM system should be treated as a potential point of failure and supported by appropriate error handling. We provide Figure 3, an example of an erroneous tool call, to demonstrate several approaches to error handling, where specific responses will be referenced by the codes assigned in the figure (e.g., 3.UI to refer to "Order me an onion."). The system prompt, containing role and tool information, is excluded for simplicity.

Figure 3: An example of an erroneous tool call, following the scenario presented in Figure 2.

8.2.1 Static Retry. The simplest approach to handling a problematic output is to retry the LLM call with the same prompt. While other hyperparameters may stay the same, the seed should always change between static retries to avoid completely duplicate calls. If using a low temperature or a high top-p, then it may also make sense to adjust those values appropriately so as to receive a different output. In the context of Figure 3, this might look like 3.UI simply being rerun with a different seed. For low-context calls that yield output that is easily verifiable (e.g., parsing the output into a JSON object), it is a simple yet valuable addition to attempt a few static retries in case verification fails. For outputs that are more difficult to verify, such as natural language instructions that are interpreted downstream, static retries are less helpful as the cost of verification increases.

8.2.2 Informed Retry. A more informed approach is to append the LLM's output to the history, add another user message indicating that the output was unsuccessful, and try again. This should be supplemented with specific error messages or additional directions [39, 81]. In the Figure 3 example, an informed retry might look like sending the following list of messages: 3.UI, 3.AO, 3.TR, and "Attempting the above code yielded the provided error. Please provide an updated output that achieves the initial instruction.".

8.2.3 External Retry. Rather than asking for an informed retry from the same context, we can pull out pieces of the history and provide them to an LLM in a separate context to either fix the previous output or generate a new one. This will likely require significant context from the original call but can be supplemented and differentiated by using a different role, different instructions, and error information. Often, shifting roles from, for example, a software engineer to a code reviewer can provide the impetus the LLM needs to fix or generate the correct output.
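The first two strategies compose naturally into an escalating retry ladder, sketched below. The generic `llm.chat` client and the caller-supplied `validate` function (returning an ok flag and either the parsed result or an error description) are assumptions for this sketch, not a specific vendor API.

```python
def call_with_retries(llm, messages, validate, max_static=2, max_informed=2):
    """Escalating retry ladder: static retries first, then informed retries.
    A failed informed loop would escalate further to an external retry in a
    fresh context (Section 8.2.3)."""
    history = list(messages)
    # Static retries: identical prompt, different seed (Section 8.2.1).
    for seed in range(max_static):
        out = llm.chat(messages=history, temperature=0.3, seed=seed)
        ok, result = validate(out)
        if ok:
            return result
    # Informed retries: feed the failure back into the context (Section 8.2.2).
    for seed in range(max_informed):
        history.append({"role": "assistant", "content": out})
        history.append({"role": "user",
                        "content": (f"Attempting the above yielded: {result}. "
                                    "Please provide a corrected output.")})
        out = llm.chat(messages=history, temperature=0.3, seed=100 + seed)
        ok, result = validate(out)
        if ok:
            return result
    raise RuntimeError("Exhausted retries; escalate to an external retry.")
```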
While it has been shown that the explanations from external LLM-based error systems are frequently unreliable and sensitive to prompt changes [39], having access to task-specific roles, detailed error information (e.g., the error raised by a piece of generated Python code), and background context helps mitigate those issues [81]. In the Figure 3 example, an external retry might look like sending the following list of messages: "You are an expert debugger. You have access to {tool information}." and "When attempting to fulfill the request, '{3.UI}', a helper tried to run the code '{3.AO}', which yielded '{3.TR}'. Please provide an updated output that achieves the initial instruction.".

It should be noted that LLMs struggle to locate errors [39, 81] but demonstrate strong error correction capabilities if provided sufficient context, specifically error location [81]. As such, when employing Informed Retry or External Retry, care should be taken to include error information that pinpoints the source, such as diagnostic error messages and tracebacks from APIs and runtime environments.

8.3 Stopping
As the control flow of the agentic LLM system is controlled by an LLM, a clear stopping method needs to be defined. This will likely take the form of a predetermined stop token or phrase inserted into the system prompt, such as "TERMINATE" [95]. It should be a token or phrase that is easily parsable and not otherwise likely to occur, to avoid accidental stopping.

8.4 Multiple Personas
Often, the role that an LLM is assigned has a significant impact on its performance on a given task. This has been observed in LLM literature generally, becoming a key ingredient of effective prompting [41, 43], and in recent LLM multi-agent research, emerging as a necessary component for agent multiplicity in many such architectures [32, 35, 50, 62, 92, 95]. For example, while the Figure 2.Pe1 role is good for answering most of the user's queries, the Figure 2.Pe2 role may be better at answering Figure 2.I4 because it requires specialist culinary knowledge.

Because there are likely to be many distinct tasks that form part of an agentic LLM system, there is usually room for multiple roles to be used. An overview of approaches to defining personas for LLMs, or "profiling" them, is detailed in [87], categorizing them as handcrafted (e.g., [62, 64]), LLM-generated (e.g., [92, 99]), or dataset-aligned (i.e., derived from a pertinent dataset). The roles should be informed by the task that the call is handling. This is dependent on the overall context of the agentic LLM system but can largely be addressed in the following ways:

• If the tasks are well-defined, handcraft specialist roles for each task (e.g., Figure 2.Pe1 and 2.Pe2).
• If the tasks are not well-defined but generally correspond to a single topic, use the most specific handcrafted role for that topic (e.g., the catch-all Figure 2.Pe1).
• If the tasks are truly undefined to start (e.g., an assistant that helps with anything) or the topic is very broad:
– Define several distinct roles to which the LLM agent can route subsequent calls as it sees fit [75]. Once the agent is in use, a more informed set of personas can be defined according to the most frequently used ones. This may also be thought of as the dataset alignment approach [87], where the dataset is constructed in the environment under an interim set of personas.
– Leverage an LLM to create the role that it deems would be best able to respond to the prompt [92, 99].
This is more expensive, as generating the role requires LLM usage, but is certainly more robust to unforeseen scenarios. This approach may be used in conjunction with the above point (e.g., if no suitable predefined role is found, create one).

8.5 Managing Relevant Context
Managing the context that is sent to an LLM is an effective method of increasing the efficiency (speed and cost) and performance of an LLM system, as inference time is dependent on the number of input tokens [63, 85] and LLMs perform worse in long-context scenarios, particularly for complex tasks [48, 53]. Additionally, careful context management is a necessity given that LLMs have limited context windows9. Even for "long-context" LLMs (>100k token limit), many tasks quickly become unwieldy if not properly managed (e.g., working with HTML, where single webpages can be hundreds of thousands of tokens). This is a key consideration to make during task decomposition; the more specific the task, the more extraneous context (e.g., prior messages) can be trimmed [65]. As such, the context that a specific LLM call receives should be tailored to the task as much as possible. Even if an LLM call requires past messages, it is often possible to strip out certain pieces of context or summarize them, leaving the parts the subsequent call relies on intact and maintaining the overall meaning. Significant adjustments can be made to the context between calls to decrease the overall token count and remove extraneous context, thus reducing LLM confusion and increasing performance for the LLM call [65].

9 Additional Considerations
9.1 Model Size
The size of the model to use is typically driven by three concerns: cost, speed, and performance. Usually, the bigger the model, the higher the cost, the lower the speed, and the better the performance (although this is not a hard-and-fast rule). It can be tempting to build an agentic LLM system around the weakest model that will adequately do the job so that all three conditions are optimized from the start. However, attempting to build out a functional system from a smaller model first will likely be more time-consuming and expensive than starting at the strongest model possible and downgrading the models used for specific calls once the LLM agent has demonstrated competence in the environment. Due to the influence one call can have on subsequent ones, it is infeasible to understand what is possible for a given use case if not all the pieces are working optimally. By starting with stronger models, there will be a gold-standard baseline to compare against so the performance impact of downgrading a model for a specific call can be measured10. It is recommended that the correct model is selected on a per-task basis
When creating a dataset for evaluating an LLM agent, the most important consideration is that it accurately resembles the envi- ronment in which is will be deployed. There are many LLM agent benchmarks available targeting specific domains [26, 54, 101, 103] as well as general purpose application [58, 78, 96], but many agentic LLM systems applied to a specific task will be too niche to benefit from a broader benchmark. However, insomuch as an established benchmark fits the application of the LLM system, it can be a strong starting point for evaluation and refinement. Whether an existing benchmark is used or not, it is advisable to collect informative agent interactions (e.g., long sequences, short sequences, incorrect outputs, correct outputs, etc.) and related metadata (e.g., hyperpa- rameters) in the deployment environment. Doing so will allow the creation of a dataset, comprised of reproducible input and output pairs, that is derived from the environment. Even a dataset with a few samples will provide a baseline to compare against to ensure prompt engineering addresses failed executions, identify the effects of model and prompt changes, and avoid regression in the system11. While traditional metrics (e.g., precision, recall, etc.) are useful to track, metrics specific to the agent can help reveal changes in the system that higher-level metrics fail to reflect [24, 40]. For example, an LLM agent that arrives at the same answer when presented with two different prompts is superficially consistent but a difference in the number of intermediate steps to reach that conclusion may indi- cate that the system is overly sensitive to prompt changes. Building from [40, 52, 57] that suggest types of alternative evaluation, we provide sample metrics below to use as a starting place, although useful metrics should be chosen in accordance with the design of the LLM agent and the environment in which it is implemented12. 9.2.1 Holistic. No matter how well an agentic LLM system might do along the way or what emergent capabilities it might demon- strate, the final output will determine whether the system is accom- plishing its task or not. It is impossible to tell how a composition of LLM calls will perform without running them end-to-end; thus, evaluating an LLM agent should primarily rely on holistic metrics to determine if it is performing as expected. Sample Metrics. • Across X distinct prompts, how many correct answers does the agent produce? 11See [7] for an industry approach to evaluating deployed LLM systems. 12Note that the following are focused primarily on evaluating agentic LLM systems but that external components should also be evaluated, such as the RAG system (e.g., the quality of retrievals and the fidelity of embedded documents) [30, 69] and tools (e.g., reliability and consistency of their output). • For input X across N trials, how many distinct answers does the agent produce? • For input X across N trials, what is the average number of steps executed by the agent? • For input X across N trials, what is the average number of tools used by the agent? • For input X (that requires LLM planning) across N trials, what is the average number of steps in each plan? • For input X across N trials, what is the average cost/time? 9.2.2 Piecemeal. Measuring the performance of a single or a subset of LLM calls that completes a definable task is a viable method of diagnosing problems in or making changes to the system. 
9.2.2 Piecemeal. Measuring the performance of a single LLM call, or of a subset of LLM calls that completes a definable task, is a viable method of diagnosing problems in the system or making changes to it. However, due to the influence a single LLM call can have downstream in an LLM agent, isolated piecemeal evaluation of an agentic LLM system should never be considered a substitute for Holistic measures.
Sample Metrics.
• For call X with N trials, how many distinct answers are produced?
• For N synonymous versions of input A to call X, how many distinct top-K documents are provided by RAG from each embedded version of A?
• For call X with tool access across N trials, how many distinct tools are used?
• For call X across N trials, what is the average cost/time?

9.3 Integration with Traditional Engineering
Because LLMs are inherently stochastic, it is often easier to offload as much of the agent's responsibility onto traditional engineering as possible. This allows outsourcing the parts of the system that require determinism to methods that can be deterministic. By crafting an LLM agent according to software engineering best practices, we can ensure that key components that are necessary for a given task are always completed or included, rather than relying on the agent to make a request or execute an action. This can take the form of automatically managing context between calls, output processing, combining tools into toolsets (e.g., putting Figure 2.T2 and 2.T3 behind a "delivery" interface), incorporating information from long-term memory permanently into the prompts (e.g., Figure 2.E1), setting callbacks on certain transitions and calls (e.g., to generate a summary of the most recent conversation to use as context when transitioning from Figure 2.Pe1 to 2.Pe2), and adding an evaluation after each step of a plan (see Plan Adherence). (The last two can be thought of as implicit tool usage; see Section 7.1.) However, care should be taken not to limit the autonomy of the agent in doing so. One way to return autonomy to the agent while still leveraging the benefits of traditional engineering is to allow the agent to short-circuit.
9.3.1 Short-Circuiting. Short-circuiting (from the world of software engineering: the idea of evaluating an expression only so far as is needed to guarantee a single answer) is an integral technique for agentic LLM systems. This can be as simple as including stopping criteria in the LLM agent's instructions (see Section 8.3 for examples) or allowing the LLM agent to produce a final output in a single turn. If an agentic LLM system does not short-circuit when it obviously should, the system may have an overreliance on external engineering (i.e., the flow (or parts of the flow) of the agent being hard-coded)13.
13 A recent example of this is OpenAI's GPT-o1 [60]. The initial implementation has no short-circuiting, meaning even simple queries that a much weaker model can handle, or that require no significant output, still incur a full traversal of the agentic LLM system. For example, asking GPT-o1 to "Do nothing" will still pass through the planning, thinking, and alignment stages of the system.
As an example, the query presented in Figure 2.I3 demonstrates an instance when an LLM agent may want to short-circuit. The query poses a simple question-answering scenario that most current models could satisfactorily respond to. Allowing the agent the autonomy to determine what step to take next (as opposed to, for example, implicitly calling Figure 2.P1 for every input) would permit it to simply provide an answer, thus short-circuiting any other components.
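The following sketch illustrates one way to grant that autonomy. The FINAL/CONTINUE convention and the `llm` and `run_full_pipeline` helpers are illustrative assumptions, not a specific framework's API.

```python
# Minimal sketch of short-circuiting: give the model one chance to answer
# directly before the full planning/tool pipeline is engaged.

SHORT_CIRCUIT_INSTRUCTIONS = (
    "If you can completely and correctly answer the user's request now, "
    "reply with 'FINAL: <answer>'. Otherwise reply with exactly 'CONTINUE'."
)

def llm(system_prompt: str, user_prompt: str) -> str:
    raise NotImplementedError  # placeholder: a single LLM call

def run_full_pipeline(query: str) -> str:
    raise NotImplementedError  # placeholder: planning, tools, memory, etc.

def handle(query: str) -> str:
    first_pass = llm(SHORT_CIRCUIT_INSTRUCTIONS, query)
    if first_pass.startswith("FINAL:"):
        return first_pass[len("FINAL:"):].strip()  # short-circuit here
    return run_full_pipeline(query)                # otherwise, full traversal
```

Under this pattern, simple queries such as the one in Figure 2.I3 exit at the first branch, while anything the model declines to answer falls through to the full system.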
10 Limitations
Although we present some practical methods for the evaluation of deployed systems, we do not explore human-in-the-loop evaluation, as human-computer interaction represents a rich field of study that exceeds the scope of this work. An important follow-up to evaluation is how to compare and respond to changes in a deployed agentic LLM system, such as prompt, model, and environment changes. These considerations remain largely underexplored in the current literature and represent some of the key challenges to deploying real-world LLM agents. We do not discuss these considerations, as agent maintenance does not fall within the scope of this work, but suggest that they are prominent directions for future work.
We explore one aspect of cost for agentic LLM systems, model size, but leave other considerations (such as whether to use an out-of-the-box model or to finetune one on a specific task [23, 45], whether to leverage increasingly strong open-source models [18, 27, 37] or to rely on aligned commercial models [11, 61, 80], and, similarly, whether to self-host or to use a third-party provider) for future work, as the cost and feasibility of proposed agent architectures warrant a review of their own. See [40] for a discussion on the need for cost-informed LLM agent research.
While we approach the agent's underlying LLM from a black-box perspective for simplicity and relevance to many industry applications, approaching it as a white box opens up additional complexities and opportunities. We deem that considering model specifics exceeds the scope of this review but recognize the value of future work highlighting practical considerations for the real-world deployment of white-box LLM agents.

11 Conclusion
In this review, we present relevant research into LLM agents and derive actionable insights from it that can be utilized when implementing and deploying agentic LLM systems in the real world. We ascribe relevant research and insights to the four main components of LLM agents from application-focused literature—Planning, Memory, Tools, and Control Flow—to provide a review that is mutually accessible to both industry and academia. Namely, for Planning, we explore how poor LLM planning capabilities hinder current LLM agent applications and the practical benefits to be derived from task decomposition; for Memory, we explore the benefits of and practical considerations for leveraging RAG and long-term memory in an LLM agent; for Tools, we discuss how to present and manage tools for an LLM agent; for Control Flow, we provide practical insights for promoting an uninterrupted LLM agent execution and managing agent internals, such as personas and context usage; and, lastly, we suggest additional considerations, such as model size, evaluation, and integrating an LLM agent with traditional engineering.

Acknowledgments
We would like to acknowledge Sergei Petrov and Sonny George, whose input and feedback were instrumental in shaping the foundations of this work.

References
[1] OpenAI [n.d.]. API Reference. OpenAI. Retrieved October 16, 2024 from https://platform.openai.com/docs/api-reference
[2] Pinecone [n.d.]. Building Custom Tools for LLM Agents. Pinecone. Retrieved October 6, 2024 from https://www.pinecone.io/learn/series/langchain/langchain-tools/
[3] Anthropic [n.d.]. Increase output consistency (JSON mode). Anthropic. Retrieved October 16, 2024 from https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/increase-consistency
[4] OpenAI [n.d.]. Models. OpenAI. Retrieved October 16, 2024 from https://platform.openai.com/docs/models
[5] OpenAI [n.d.]. Structured Outputs. OpenAI. Retrieved October 16, 2024 from https://platform.openai.com/docs/guides/structured-outputs
[6] Anthropic 2024. API Reference. Anthropic. Retrieved October 16, 2024 from https://docs.anthropic.com/en/api
[7] LangChain 2024. Evaluate your LLM application. LangChain. Retrieved October 16, 2024 from https://docs.smith.langchain.com/tutorials/Developers/evaluation
[8] Google 2024. Gemini models. Google. Retrieved October 16, 2024 from https://ai.google.dev/gemini-api/docs/models/gemini
[9] Google 2024. Generate content with the Gemini API. Google. Retrieved October 16, 2024 from https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference
[10] Google 2024. Generate structured output with the Gemini API. Google. Retrieved October 16, 2024 from https://ai.google.dev/gemini-api/docs/structured-output
[11] Anthropic 2024. Introducing the next generation of Claude. Anthropic. Retrieved October 8, 2024 from https://www.anthropic.com/news/claude-3-family
[12] DAIR.AI 2024. LLM Agents. DAIR.AI. Retrieved October 8, 2024 from https://www.promptingguide.ai/research/llm-agents
[13] SuperAnnotate 2024. LLM agents: The ultimate guide. SuperAnnotate. Retrieved October 7, 2024 from https://www.superannotate.com/blog/llm-agents
[14] OpenAI 2024. Memory and new controls for ChatGPT. OpenAI. Retrieved October 8, 2024 from https://openai.com/index/memory-and-new-controls-for-chatgpt/
[15] Anthropic 2024. Models. Anthropic. Retrieved October 16, 2024 from https://docs.anthropic.com/en/docs/about-claude/models
[16] truefoundry 2024. What are LLM Agents? truefoundry. Retrieved October 7, 2024 from https://www.truefoundry.com/blog/llm-agents
[17] Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua R. Gubler, Christopher Rytting, and David Wingate. 2023. Out of One, Many: Using Language Models to Simulate Human Samples. Political Analysis 31, 3 (Feb. 2023), 337–351. https://doi.org/10.1017/pan.2023.2
[18] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023. Qwen Technical Report. arXiv:2309.16609 [cs.CL] https://arxiv.org/abs/2309.16609
[19] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. 2022. Constitutional AI: Harmlessness from AI Feedback. arXiv:2212.08073 [cs.CL] https://arxiv.org/abs/2212.08073
[20] Ryan C. Barron, Ves Grantcharov, Selma Wanna, Maksim E. Eren, Manish Bhattarai, Nicholas Solovyev, George Tompkins, Charles Nicholas, Kim Ø. Rasmussen, Cynthia Matuszek, and Boian S. Alexandrov. 2024. Domain-Specific Retrieval-Augmented Generation Using Vector Stores, Knowledge Graphs, and Tensor Factorization. arXiv:2410.02721 [cs.CL] https://arxiv.org/abs/2410.02721
[21] Gad Benram. 2024. Understanding the cost of Large Language Models (LLMs). Retrieved October 16, 2024 from https://www.tensorops.ai/post/understanding-the-cost-of-large-language-models-llms
[22] Luca Beurer-Kellner, Marc Fischer, and Martin Vechev. 2024. Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation. In Proceedings of the 41st International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 235), Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp (Eds.). PMLR, 3658–3673. https://proceedings.mlr.press/v235/beurer-kellner24a.html
[23] Martin Juan José Bucher and Marco Martini. 2024. Fine-Tuned 'Small' LLMs (Still) Significantly Outperform Zero-Shot Generative AI Models in Text Classification. arXiv:2406.08660 [cs.CL] https://arxiv.org/abs/2406.08660
[24] Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie. 2024. A Survey on Evaluation of Large Language Models. ACM Trans. Intell. Syst. Technol. 15, 3, Article 39 (March 2024), 45 pages. https://doi.org/10.1145/3641289
[25] Gautier Dagan, Frank Keller, and Alex Lascarides. 2024. Dynamic planning with an LLM. In Proceedings of the Language Gamification Workshop 2024 at NeurIPS. Neural Information Processing Systems Foundation (NeurIPS), 1–14. https://doi.org/10.48550/arXiv.2308.06391
[26] Shihan Deng, Weikai Xu, Hongda Sun, Wei Liu, Tao Tan, Liujianfeng Liujianfeng, Ang Li, Jian Luan, Bin Wang, Rui Yan, and Shuo Shang. 2024. Mobile-Bench: An Evaluation Benchmark for LLM-based Mobile Agents. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Lun-Wei Ku, Andre Martins, and Vivek Srikumar (Eds.). Association for Computational Linguistics, Bangkok, Thailand, 8813–8831.
https://doi.org/ 10.18653/v1/2024.acl-long.478 [27] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sra- vankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Ishan Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasu- den Alwala, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini, Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Lauren Rantala-Yeary, Laurens van der Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo, Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Mannat Singh, Manohar Paluri, Marcin Kardas, Mathew Oldham, Mathieu Rita, Maya Pavlova, Melanie Kambadur, Mike Lewis, Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal, Narjes Torabi, Nikolay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Olivier Duchenne, Onur Çelebi, Patrick Alrassy, Pengchuan Zhang, Pengwei Li, Petar Vasic, Peter Weng, Prajjwal Bhargava, Pratik Dubal, Praveen Krishnan, Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohit Girdhar, Rohit Patel, Romain Sauvestre, Ronnie Polidoro, Roshan Sumbaly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini, Sahana Chennabasappa, Sanjay Singh, Sean Bell, Seohyun Sonia Kim, Sergey Edunov, Shaoliang Nie, Sharan Narang, Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Vandenhende, Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Sydney Borodinsky, Tamar Herman, Tara Fowler, Tarek Sheasha, Thomas Georgiou, Thomas Scialom, Tobias Speckbacher, Todor Mihaylov, Tong Xiao, Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta, Vignesh Ramanathan, Viktor Kerkez, Vincent Gonguet, Virginie Do, Vish Vogeti, Vladan Petrovic, Weiwei Chu, Wenhan Xiong, Wenyin Fu, Whitney Meers, Xavier Martinet, Xiaodong Wang, Xiao- qing Ellen Tan, Xinfeng Xie, Xuchao Jia, Xuewei Wang, Yaelle Goldschlag, Yashesh Gaur, Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen Zhang, Yue Li, Yuning Mao, Zacharie Delpierre Coudert, Zheng Yan, Zhengxing Chen, Zoe Papakipos, Aaditya Singh, Aaron Grattafiori, Abha Jain, Adam Kelsey, Adam Shajnfeld, Adithya Gangidi, Adolfo Victoria, Ahuva Goldstand, Ajay 
Menon, Ajay Sharma, Alex Boesenberg, Alex Vaughan, Alexei Baevski, Allie Feinstein, Amanda Kallet, Amit Sangani, Anam Yunus, Andrei Lupu, Andres Alvarado, Andrew Caples, Andrew Gu, Andrew Ho, Andrew Poulton, Andrew Ryan, Ankit Ramchandani, Annie Franco, Aparajita Saraf, Arkabandhu Chowdhury, Ash- ley Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh Yazdan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Hancock, Bram Wasti, Brandon Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, Carly Burton, Catalina Mejia, Changhan Wang, Changkyu Kim, Chao Zhou, Chester Hu, Ching-Hsiang Chu, Chris Cai, Chris Tindal, Christoph Feichten- hofer, Damon Civin, Dana Beaty, Daniel Kreymer, Daniel Li, Danny Wyatt, David Adkins, David Xu, Davide Testuggine, Delia David, Devi Parikh, Diana Liskovich, Didem Foss, Dingkang Wang, Duc Le, Dustin Holland, Edward Dowl- ing, Eissa Jamil, Elaine Montgomery, Eleonora Presani, Emily Hahn, Emily Wood, Erik Brinkman, Esteban Arcaute, Evan Dunbar, Evan Smothers, Fei Sun, Felix Kreuk, Feng Tian, Firat Ozgenel, Francesco Caggioni, Francisco Guzmán, Frank Kanayet, Frank Seide, Gabriela Medina Florez, Gabriella Schwarz, Gada Badeer, Georgia Swee, Gil Halpern, Govind Thattai, Grant Herman, Grigory Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hamid Shojanazeri, Han Zou, Hannah Wang, Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen Suk, Henry Aspegren, Hunter Goldman, Ibrahim Damlaj, Igor Molybog, Igor Tu- fanov, Irina-Elena Veliche, Itai Gat, Jake Weissman, James Geboski, James Kohli, Japhet Asher, Jean-Baptiste Gaya, Jeff Marcus, Jeff Tang, Jennifer Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul, Jessica Zhong, Jian Jin, Jingyi Yang, Joe Cummings, Jon Carvill, Jon Shepard, Jonathan McPhie, Jonathan Torres, Josh Ginsburg, Junjie Wang, Kai Wu, Kam Hou U, Karan Saxena, Karthik Prasad, Kartikay Khandelwal, Katayoun Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly Michelena, Keqian Li, Kun Huang, Kunal Chawla, Kushal Lakhotia, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva, Lee Bell, Lei Zhang, Liangpeng Guo, Licheng Yu, Liron Moshkovich, Luca Wehrstedt, Madian Khabsa, Manav Avalani, Manish Bhatt, Maria Tsimpoukelli, Martynas Mankus, Matan Hasson, Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Nau- mov, Maya Lathi, Meghan Keneally, Michael L. 
Seltzer, Michal Valko, Michelle Restrepo, Mihir Patel, Mik Vyatskov, Mikayel Samvelyan, Mike Clark, Mike Macey, Mike Wang, Miquel Jubert Hermoso, Mo Metanat, Mohammad Raste- gari, Munish Bansal, Nandhini Santhanam, Natascha Parks, Natasha White, Navyata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier, Nikolay Pavlovich Laptev, Ning Dong, Ning Zhang, Norman Cheng, Oleg Chernoguz, Olivia Hart, Omkar Salpekar, Ozlem Kalinli, Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pedro Rittner, Philip Bontrager, Pierre Roux, Piotr Dollar, Polina Zvyag- ina, Prashant Ratanchandani, Pritish Yuvraj, Qian Liang, Rachad Alao, Rachel Rodriguez, Rafi Ayub, Raghotham Murthy, Raghu Nayani, Rahul Mitra, Ray- mond Li, Rebekkah Hogan, Robin Battey, Rocky Wang, Rohan Maheswari, Russ Howes, Ruty Rinott, Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Satadru Pan, Saurabh Verma, Seiji Yamamoto, Sharadh Ramaswamy, Shaun Lindsay, Shaun Lindsay, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha, Shiva Shankar, Shuqiang Zhang, Shuqiang Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe, Soumith Chintala, Stephanie Max, Stephen Chen, Steve Kehoe, Steve Satterfield, Sudarshan Govindaprasad, Sumit Gupta, Sungmin Cho, Sunny Virk, Suraj Subramanian, Sy Choudhury, Sydney Goldman, Tal Remez, Tamar Glaser, Tamara Best, Thilo Kohler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Timothy Chou, Tzook Shaked, Varun Vontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vítor Albiero, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wenwen Jiang, Wes Bouaziz, Will Constable, Xiaocheng Tang, Xiaofang Wang, Xiaojian Wu, Xiaolan Wang, Xide Xia, Xilun Wu, Xinbo Gao, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi, Youngjin Nam, Yu, Wang, Yuchen Hao, Yundi Qian, Yuzi He, Zach Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, and Zhiwei Zhao. 2024. The Llama 3 Herd of Models. arXiv:2407.21783 [cs.AI] https://arxiv.org/abs/2407.21783 Jessica Maria Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, and Zexue He. 2024. Cognitive Bias in Decision-Making with LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2024, Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (Eds.). Association for Computational Linguistics, Miami, Florida, USA, 12640–12653. https://doi.org/10.18653/v1/2024.findings- emnlp.739 [28] [29] Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, and Jonathan Larson. 2024. From Local to Global: A Graph RAG Approach to Query-Focused Summarization. arXiv:2404.16130 [cs.CL] https://arxiv.org/abs/2404.16130 [30] Shahul Es, Jithin James, Luis Espinosa Anke, and Steven Schockaert. 2024. RAGAs: Automated Evaluation of Retrieval Augmented Generation. In Pro- ceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, Nikolaos Aletras and Orphee De Clercq (Eds.). Association for Computational Linguistics, St. Julians, Malta, 150–158. https://aclanthology.org/2024.eacl-demo.16 [31] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng Wang, and Haofen Wang. 2024. Retrieval-Augmented Generation for Large Language Models: A Survey. arXiv:2312.10997 [cs.CL] https://arxiv.org/abs/2312.10997 [32] Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. 
Chawla, Olaf Wiest, and Xiangliang Zhang. 2024. Large Language Model Based Multi-agents: A Survey of Progress and Challenges. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, Kate Larson (Ed.). International Joint Conferences on Artificial Intelligence Organization, 8048–8057. https://doi.org/10.24963/ijcai.2024/890 Survey Track.
[33] Izzeddin Gur, Hiroki Furuta, Austin V Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. 2024. A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis. In The Twelfth International Conference on Learning Representations. https://openreview.net/forum?id=9JQtrumvg8
[34] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The Curious Case of Neural Text Degeneration. In International Conference on Learning Representations. https://openreview.net/forum?id=rygGQyrFvH
[35] Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, and Jürgen Schmidhuber. 2024. MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework. In The Twelfth International Conference on Learning Representations. https://openreview.net/forum?id=VtmBAGCN7o
[36] Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, and Enhong Chen. 2024. Understanding the planning of LLM agents: A survey. arXiv:2402.02716 [cs.AI] https://arxiv.org/abs/2402.02716
[37] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. arXiv:2310.06825 [cs.CL] https://arxiv.org/abs/2310.06825
[38] Subbarao Kambhampati. 2024. Can large language models reason and plan? Annals of the New York Academy of Sciences 1534, 1 (March 2024), 15–18. https://doi.org/10.1111/nyas.15125
[39] Ryo Kamoi, Sarkar Snigdha Sarathi Das, Renze Lou, Jihyun Janice Ahn, Yilun Zhao, Xiaoxin Lu, Nan Zhang, Yusen Zhang, Haoran Ranran Zhang, Sujeeth Reddy Vummanthala, Salika Dave, Shaobo Qin, Arman Cohan, Wenpeng Yin, and Rui Zhang. 2024. Evaluating LLMs at Detecting Errors in LLM Responses. In First Conference on Language Modeling. https://openreview.net/forum?id=dnwRScljXr
[40] Sayash Kapoor, Benedikt Stroebl, Zachary S. Siegel, Nitya Nadgir, and Arvind Narayanan. 2024. AI Agents That Matter. arXiv:2407.01502 [cs.LG] https://arxiv.org/abs/2407.01502
[41] Shubhra Kanti Karmaker Santu and Dongji Feng. 2023. TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks. In Findings of the Association for Computational Linguistics: EMNLP 2023, Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Computational Linguistics, Singapore, 14197–14203. https://doi.org/10.18653/v1/2023.findings-emnlp.946
[42] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems 35 (2022), 22199–22213.
[43] Aobo Kong, Shiwan Zhao, Hao Chen, Qicheng Li, Yong Qin, Ruiqi Sun, Xin Zhou, Enzhi Wang, and Xiaohang Dong. 2024. Better Zero-Shot Reasoning with Role-Play Prompting.
In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Kevin Duh, Helena Gomez, and Steven Bethard (Eds.). Association for Computational Linguistics, Mexico City, Mexico, 4099–4113. https://doi.org/10.18653/v1/2024.naacl-long.228
[44] Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, Md Amran Hossen Bhuiyan, Shafiq Joty, and Jimmy Huang. 2023. A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets. In Findings of the Association for Computational Linguistics: ACL 2023, Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (Eds.). Association for Computational Linguistics, Toronto, Canada, 431–469. https://doi.org/10.18653/v1/2023.findings-acl.29
[45] Eric Lehman, Evan Hernandez, Diwakar Mahajan, Jonas Wulff, Micah J Smith, Zachary Ziegler, Daniel Nadler, Peter Szolovits, Alistair Johnson, and Emily Alsentzer. 2023. Do We Still Need Clinical Language Models?. In Proceedings of the Conference on Health, Inference, and Learning (Proceedings of Machine Learning Research, Vol. 209), Bobak J. Mortazavi, Tasmie Sarker, Andrew Beam, and Joyce C. Ho (Eds.). PMLR, 578–597. https://proceedings.mlr.press/v209/eric23a.html
[46] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Proceedings of the 34th International Conference on Neural Information Processing Systems (Vancouver, BC, Canada) (NIPS '20). Curran Associates Inc., Red Hook, NY, USA, Article 793, 16 pages.
[47] Huao Li, Yu Chong, Simon Stepputtis, Joseph Campbell, Dana Hughes, Charles Lewis, and Katia Sycara. 2023. Theory of Mind for Multi-Agent Collaboration via Large Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Computational Linguistics, Singapore, 180–192. https://doi.org/10.18653/v1/2023.emnlp-main.13
[48] Tianle Li, Ge Zhang, Quy Duc Do, Xiang Yue, and Wenhu Chen. 2024. Long-context LLMs Struggle with Long In-context Learning. CoRR abs/2404.02060 (2024). https://doi.org/10.48550/arXiv.2404.02060
[49] Xinzhe Li. 2024. A Review of Prominent Paradigms for LLM-Based Agents: Tool Use (Including RAG), Planning, and Feedback Learning. arXiv:2406.05804 [cs.AI] https://arxiv.org/abs/2406.05804
[50] Yuan Li, Yixuan Zhang, and Lichao Sun. 2023. MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents. ArXiv abs/2310.06500 (2023). https://api.semanticscholar.org/CorpusID:263829557
[51] Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, and Qin Chen. 2023. AgentSims: An Open-Source Sandbox for Large Language Model Evaluation. arXiv:2308.04026 [cs.AI] https://arxiv.org/abs/2308.04026
[52] Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. 2023. LLM+P: Empowering Large Language Models with Optimal Planning Proficiency. arXiv:2304.11477 [cs.AI] https://arxiv.org/abs/2304.11477
[53] Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024. Lost in the Middle: How Language Models Use Long Contexts. Transactions of the Association for Computational Linguistics 12 (2024), 157–173. https://doi.org/10.1162/tacl_a_00638
[54] Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. 2024. AgentBench: Evaluating LLMs as Agents. In The Twelfth International Conference on Learning Representations. https://openreview.net/forum?id=zAdUB0aCTQ
[55] Zhihan Liu, Hao Hu, Shenao Zhang, Hongyi Guo, Shuqi Ke, Boyi Liu, and Zhaoran Wang. 2024. Reason for Future, Act for Now: A Principled Framework for Autonomous LLM Agents with Provable Sample Efficiency. arXiv:2309.17382 [cs.AI] https://arxiv.org/abs/2309.17382
[56] ljunkai. 2023. How to find the optimal model size for Large Language Models to optimize effectiveness and cost. Retrieved October 16, 2024 from https://repost.aws/articles/ARv5lSlUnnSkanRxSD2EFz5w/how-to-find-the-optimal-model-size-for-large-language-models-to-optimize-effectiveness-and-cost
[57] Nikhil Mehta, Milagro Teruel, Xin Deng, Sergio Figueroa Sanz, Ahmed Awadallah, and Julia Kiseleva. 2024. Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback. In Findings of the Association for Computational Linguistics: EACL 2024, Yvette Graham and Matthew Purver (Eds.). Association for Computational Linguistics, St. Julian's, Malta, 1306–1321. https://aclanthology.org/2024.findings-eacl.87
[58] Grégoire Mialon, Clémentine Fourrier, Thomas Wolf, Yann LeCun, and Thomas Scialom. 2024. GAIA: a benchmark for General AI Assistants. In The Twelfth International Conference on Learning Representations. https://openreview.net/forum?id=fibxvahvs3
[59] Dana Nau, Malik Ghallab, and Paolo Traverso. 2004. Automated Planning: Theory & Practice. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
[60] OpenAI. 2024. Learning to Reason with LLMs. Technical Report.
Retrieved October 16, 2024 from https://openai.com/index/learning-to-reason-with-llms/ [61] OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bog- donoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, An- drew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kon- draciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, An- drew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr H. 
Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine B. Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. 2024. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL] https://arxiv.org/abs/2303.08774
[62] Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative Agents: Interactive Simulacra of Human Behavior (UIST '23). Association for Computing Machinery, New York, NY, USA, Article 2, 22 pages. https://doi.org/10.1145/3586183.3606763
[63] Reiner Pope, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James Bradbury, Jonathan Heek, Kefan Xiao, Shivani Agrawal, and Jeff Dean. 2023. Efficiently Scaling Transformer Inference. In MLSys. https://proceedings.mlsys.org/paper_files/paper/2023/hash/c4be71ab8d24cdfb45e3d06dbfca2780-Abstract-mlsys2023.html
[64] Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, Juyuan Xu, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2024. ChatDev: Communicative Agents for Software Development. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Lun-Wei Ku, Andre Martins, and Vivek Srikumar (Eds.). Association for Computational Linguistics, Bangkok, Thailand, 15174–15186. https://doi.org/10.18653/v1/2024.acl-long.810
[65] Hongjin Qian, Zheng Liu, Peitian Zhang, Kelong Mao, Yujia Zhou, Xu Chen, and Zhicheng Dou. 2024. Are Long-LLMs A Necessity For Long-Context Tasks? arXiv:2405.15318 [cs.CL] https://arxiv.org/abs/2405.15318
[66] Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Xuanhe Zhou, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Guoliang Li, Zhiyuan Liu, and Maosong Sun. 2024. Tool Learning with Foundation Models. ACM Comput. Surv. (Nov. 2024).
https://doi.org/10.1145/3704435 [67] Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, dahai li, Zhiyuan Liu, and Maosong Sun. 2024. ToolLLM: Facilitating Large Language Models to Master 16000+ Real- world APIs. In The Twelfth International Conference on Learning Representations. https://openreview.net/forum?id=dHng2O0Jjr [68] Aymeric Roucher and Sergei Petrov. 2024. Our Transformers Code Agent beats the GAIA benchmark! Hugging Face. Retrieved October 2, 2024 from https: //huggingface.co/blog/beating-gaia [69] Alireza Salemi and Hamed Zamani. 2024. Evaluating Retrieval Quality in Retrieval-Augmented Generation. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (Wash- ington DC, USA) (SIGIR ’24). Association for Computing Machinery, New York, NY, USA, 2395–2400. https://doi.org/10.1145/3626772.3657957 [70] Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language Models Can Teach Themselves to Use Tools. In Thirty-seventh Conference on Neural Information Processing Systems. https: //openreview.net/forum?id=Yacmpz84TH [71] Melanie Sclar, Sachin Kumar, Peter West, Alane Suhr, Yejin Choi, and Yulia Tsvetkov. 2023. Minding Language Models’ (Lack of) Theory of Mind: A Plug- and-Play Multi-Character Belief Tracker. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (Eds.). Association for Computational Linguistics, Toronto, Canada, 13960–13980. https://doi.org/10. 18653/v1/2023.acl-long.780 [72] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face. In Thirty-seventh Conference on Neural Information Processing Systems. https://openreview.net/forum?id=yHdTscY6Ci [73] Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2024. Reflexion: language agents with verbal reinforcement learn- ing. In Proceedings of the 37th International Conference on Neural Information Processing Systems (New Orleans, LA, USA) (NIPS ’23). Curran Associates Inc., Red Hook, NY, USA, Article 377, 19 pages. [74] Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval Augmentation Reduces Hallucination in Conversation. In Findings of the Association for Computational Linguistics: EMNLP 2021, Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (Eds.). Association for Computational Linguistics, Punta Cana, Dominican Republic, 3784–3803. https://doi.org/10.18653/v1/2021.findings-emnlp.320 [75] Chenglei Si, Weijia Shi, Chen Zhao, Luke Zettlemoyer, and Jordan Lee Boyd- Graber. 2023. Getting MoRE out of Mixture of Language Model Reasoning Experts. In The 2023 Conference on Empirical Methods in Natural Language Processing. https://openreview.net/forum?id=UMywlqrW3n [76] Guijin Son, SangWon Baek, Sangdae Nam, Ilgyun Jeong, and Seungone Kim. 2024. Multi-Task Inference: Can Large Language Models Follow Multiple In- structions at Once?. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Lun-Wei Ku, Andre Mar- tins, and Vivek Srikumar (Eds.). 
Association for Computational Linguistics, Bangkok, Thailand, 5606–5627. https://doi.org/10.18653/v1/2024.acl-long.304 [77] C. Song, B. M. Sadler, J. Wu, W. Chao, C. Washington, and Y. Su. 2023. LLM- Planner: Few-Shot Grounded Planning for Embodied Agents with Large Lan- guage Models. In 2023 IEEE/CVF International Conference on Computer Vi- sion (ICCV). IEEE Computer Society, Los Alamitos, CA, USA, 2986–2997. https://doi.org/10.1109/ICCV51070.2023.00280 [78] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Johan Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew Lampinen, Andy Zou, An- gela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, An- tonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Cather- ine Stinson, Cedrick Argueta, Cesar Ferri, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Christopher Waites, Christian Voigt, Christopher D Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, C. 
Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodolà, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Gior- gio Mariani, Gloria Xinyue Wang, Gonzalo Jaimovitch-Lopez, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Francis Anthony Shevlin, Hinrich Schuetze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fer- nández Fisac, James B Simon, James Koppel, James Zheng, James Zou, Jan Kocon, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Fro- hberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh Dhole, Kevin Gimpel, Kevin Omondi, Kory Wallace Math- ewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lu- cas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros-Colón, Luke Metz, Lütfi Kerem Senel, Maarten Bosma, Maarten Sap, Maartje Ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramirez-Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L Leavitt, Matthias Ha- gen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael Andrew Yee, Michael Cohen, Michael Gu, Michael Ivanit- skiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilac- qua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan Andrew Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. 
Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo An- tonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter W Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Mil- lière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Ray- maekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan Le Bras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Russ Salakhutdinov, Ryan An- drew Chi, Seungjae Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel Stern Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepi- deh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shub- ham Toshniwal, Shyam Upadhyay, Shyamolima Shammie Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Ste- fan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven Piantadosi, Stuart Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsunori Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Venkatesh Ramasesh, vinay uday prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. 2023. Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research (2023). https://openreview.net/forum?id=uyTL5Bvosj [79] Zhi Rui Tam, Cheng-Kuang Wu, Yi-Lin Tsai, Chieh-Yen Lin, Hung-yi Lee, and Yun-Nung Chen. 2024. Let Me Speak Freely? A Study On The Impact Of Format Restrictions On Large Language Model Performance.. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, Franck Dernoncourt, Daniel Preoţiuc-Pietro, and Anastasia Shimorina (Eds.). Association for Computational Linguistics, Miami, Florida, US, 1218–1236. https://doi.org/10.18653/v1/2024.emnlp-industry.91 [80] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Melvin Johnson, Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler, Timothy Lillicrap, Angeliki Lazaridou, Orhan Firat, James Molloy, Michael Isard, Paul R. 
Barham, Tom Hennigan, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, Ryan Doherty, Eli Collins, Clemens Meyer, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, Jack Krawczyk, Cosmo Du, Ed Chi, Heng-Tze Cheng, Eric Ni, Purvi Shah, Patrick Kane, Betty Chan, Manaal Faruqui, Aliaksei Severyn, Hanzhao Lin, YaGuang Li, Yong Cheng, Abe Ittycheriah, Mahdis Mahdieh, Mia Chen, Pei Sun, Dustin Tran, Sumit Bagri, Balaji Lakshminarayanan, Jeremiah Liu, Andras Orban, Fabian Güra, Hao Zhou, Xinying Song, Aurelien Boffy, Harish Ganapathy, Steven Zheng, HyunJeong Choe, Ágoston Weisz, Tao Zhu, Yifeng Lu, Siddharth Gopal, Jarrod Kahn, Maciej Kula, Jeff Pitman, Rushin Shah, Emanuel Taropa, Majd Al Merey, Martin Baeuml, Zhifeng Chen, Laurent El Shafey, Yujing Zhang, Olcan Sercinoglu, George Tucker, Enrique Piqueras, Maxim Krikun, Iain Barr, Niko- lay Savinov, Ivo Danihelka, Becca Roelofs, Anaïs White, Anders Andreassen, Tamara von Glehn, Lakshman Yagati, Mehran Kazemi, Lucas Gonzalez, Misha Khalman, Jakub Sygnowski, Alexandre Frechette, Charlotte Smith, Laura Culp, Lev Proleev, Yi Luan, Xi Chen, James Lottes, Nathan Schucher, Federico Lebron, Alban Rrustemi, Natalie Clay, Phil Crone, Tomas Kocisky, Jeffrey Zhao, Bartek Perz, Dian Yu, Heidi Howard, Adam Bloniarz, Jack W. Rae, Han Lu, Laurent Sifre, Marcello Maggioni, Fred Alcober, Dan Garrette, Megan Barnes, Shantanu Thakoor, Jacob Austin, Gabriel Barth-Maron, William Wong, Rishabh Joshi, Rahma Chaabouni, Deeni Fatiha, Arun Ahuja, Gaurav Singh Tomar, Evan Sen- ter, Martin Chadwick, Ilya Kornakov, Nithya Attaluri, Iñaki Iturrate, Ruibo Liu, Yunxuan Li, Sarah Cogan, Jeremy Chen, Chao Jia, Chenjie Gu, Qiao Zhang, Jordan Grimstad, Ale Jakse Hartman, Xavier Garcia, Thanumalayan Sankara- narayana Pillai, Jacob Devlin, Michael Laskin, Diego de Las Casas, Dasha Valter, Connie Tao, Lorenzo Blanco, Adrià Puigdomènech Badia, David Reitter, Mi- anna Chen, Jenny Brennan, Clara Rivera, Sergey Brin, Shariq Iqbal, Gabriela Surita, Jane Labanowski, Abhi Rao, Stephanie Winkler, Emilio Parisotto, Yim- ing Gu, Kate Olszewska, Ravi Addanki, Antoine Miech, Annie Louis, Denis Teplyashin, Geoff Brown, Elliot Catt, Jan Balaguer, Jackie Xiang, Pidong Wang, Zoe Ashwood, Anton Briukhov, Albert Webson, Sanjay Ganapathy, Smit Sang- havi, Ajay Kannan, Ming-Wei Chang, Axel Stjerngren, Josip Djolonga, Yuting Sun, Ankur Bapna, Matthew Aitchison, Pedram Pejman, Henryk Michalewski, Tianhe Yu, Cindy Wang, Juliette Love, Junwhan Ahn, Dawn Bloxwich, Kehang Han, Peter Humphreys, Thibault Sellam, James Bradbury, Varun Godbole, Sina Samangooei, Bogdan Damoc, Alex Kaskasoli, Sébastien M. R. 
Unpacking Cultural Perceptions of Future Elder Care through Design Fiction

Ng, Tse Pei*a; Lee, Jung-Joo a; Wu, Yiyingb
a National University of Singapore, Singapore, Singapore
b The Hong Kong Polytechnic University, Hung Hom, Hong Kong
* [email protected]

We present a case using Design Fiction to unpack cultural perceptions of future elder care rooted in the Asian context of Singapore. We created two design fictions addressing the tension between filial piety and automated care and the controversy of integrating elder care facilities into residential communities. The design fictions took the visual forms of a shopping web page and a petition site, and the public were invited to make fictional decisions. From the 109 responses received, we identify key tensions and value conflicts and illustrate them through visual narratives. Further, we propose an Asian perspective of positioning relationships as the protagonist in creating elder care design fiction.

Keywords: Design fiction; elder care; culture; filial piety; robot; ageing-in-place

1 Introduction

Singapore faces a rapidly ageing population: half of its population is foreseen to be 65 years old and above by 2050 (United Nations, Department of Economic and Social Affairs, Population Division 2017). While traditional nursing homes and elder care models have followed clinical typologies (Wong, Pang & Yap 2014), the Singapore government is exploring future models of elder care, focusing on enhancing care efficiency through automated technologies and on integrating elder care facilities into residential communities.

The background of this paper is a government-funded project, Designing Future-ready and Sustainable Nursing Homes for Person-Centric Care Models in Communities, conducted from 2017 to 2020. The project explored future typologies of nursing homes in Singapore that aim to be people-centred, sustainable and future-ready. Around the project, several conflicts were observed concerning the innovation of the care process through automated technologies and the community integration of elder care. These are critical tensions between the Confucian value of filial piety and automated care, and between the residential community and elder care facilities.

To unpack these underlying tensions and value conflicts, deeply rooted in Singapore's culture, we created two pieces of Design Fiction as a method to engage the public in discussion, helping them articulate opinions and debate what kinds of future they desire in elder care. We developed the design fictions by tweaking the original design proposals made for the government agencies in the project, then published them to the local public to provoke reflections and responses. In this paper, we introduce the development of the two Design Fictions on the topic of future elder care in Singapore and a thematic analysis of the community responses. The themes reveal cultural perceptions and value conflicts around future elder care in Singapore.

2 Design fiction

2.1 Engaging in Design Fiction

Design fiction is gaining popularity as a design and research tool for technological innovation in the Human-Computer Interaction community.
Researchers often trace design fiction practice to the speculative and critical tradition in Dunne and Raby's work (Dunne & Raby 2013), to Sterling's definition of 'the deliberate use of diegetic prototypes to suspend disbelief about change' (Sterling 2013), or to Bleecker's influential book, which especially elaborated the key concept of the 'diegetic prototype' (Bleecker 2009). With its two elements of narrative and speculation about the yet-to-exist, design fiction is a valuable design approach for investigating the social and ethical implications of future technology. The value of the method thus goes beyond generating imagination and technological novelty; it is attentive to provoking discussion on sensitive and conflicting issues around emerging technology. This is the 'speculative turn' in contemporary design practice described by Hales (2013): 'it creates a discursive space within which new forms of cultural artefact (futures) might emerge'. We also see a 'discursive turn' (Lindley & Coulton 2015), in which design fiction is used as a research tool to ask better questions rather than to provide technological solutions.

In design, the process of creating and using design fiction is often participatory (Lyckvi, Roto, Buie, & Wu 2018). Design fiction is carefully presented to engage participants and to provoke discussion, opinions, or even further speculation and imagination. Accordingly, design fiction takes a wide range of material and experiential formats. Markussen & Knutz (2013) describe a variety of multimedia as 'packaging' for design fiction stories. Examples include well-crafted exhibit objects and multiple media in museums (Auger 2013), functional artefacts brought into real everyday field settings (Pierce 2019; Søndergaard & Hansen 2018), collages and cardboard mock-ups used in co-design workshops (Hanna & Ashby 2016; Huusko, Wu, & Roto 2018), and experiential events and performances (Candy & Dunagan 2017; Elsden et al. 2017). Designers also borrow widely used everyday formats such as advertisement posters (Bleecker 2014; Blythe, Steane, Roe, & Oliver 2015), commercial product catalogues (Brown et al. 2016), and products sold in a 0.99-dollar grocery store (Montgomery & Woebken 2016). All these design cases illustrate that design, with its constructive tradition, has the competence to bring invisible, intangible futures into living material or experiential forms for people to see, experience, comment on and interact with (Candy & Kornet 2019). At the same time, such engagement allows researchers to study the emergent phenomena ethnographically for further enquiry (Lindley, Sharma, & Potts 2015; Smith et al. 2016).

2.2 Design Fiction in Elder Care

In the context of designing for elder care, smart and IoT technologies for assistance and monitoring are being introduced. More people realise that design is not only about applying smart technology to ensure safety and health, but also about translating technology into the values that matter to older adults and their social environment (Leong & Robertson 2016). How would older adults' intimate experiences, psychological emotions, cultural values, and social connections be mediated by future technologies? To address this question, design fiction has been applied to domain topics such as positive ageing in care centres (Blythe et al. 2015), assisted dying (Tsekleves et al. 2017), volunteer services (Blythe & Wright 2006), and dementia (Darby & Tsekleves 2018; Noortman et al. 2019).
Carrying this discursive stance, design fiction is often used collaboratively with older adult participants in workshops. Older adults are either invited to co-compose and co-develop fictions to express their desires or fears (Ambe, Brereton, Soro, Buys, & Roe 2019), or to comment on written fictions that researchers or experts carefully crafted with provocative design concepts and plots (Ahmadpour, Pedell, Mayasari, & Beh 2019; Tsekleves et al. 2017). A recent study takes a more performative and interventionist approach (Noortman, Schulte, Marshall, Bakker, & Cox 2019): the researchers made a probing prototype of a remote care device and invited participants to play the role of caregiver for Annie, a fictional character living with dementia.

From those discussions, the recurring themes mainly concern the conflicting relations between the control imposed by technology and the craving for individual autonomy and independence (Soro, Ambe & Brereton 2017). Ambe and her colleagues (2019) reported that their older adult participants expressed a strong desire to break free from the 'invisible power of the watchman' and seek adventure regardless of the dangers. Schulte (2016) portrayed 'a stubborn father' who refuses to wear monitoring technology. In the short video 'Uninvited Guests', a 70-year-old man gets frustrated by the monitoring ecosystem and plays tricks to cheat the system (Superflux 2015). They are all strong-minded individuals who succeed in gaining (back) power and control in the end, and who value things like home comforts and a sense of autonomy. These characters imply cautious and reflective attitudes towards the 'temptations of technology', as IJsselsteijn et al. (2020) call them. For instance, the hypothesis that makes most such design projects fundable is the perception of technology as the solution to most problems of elder care, yet we should ask whether the technology is actually relevant to the problem space at all. On the solution side, sensors are installed everywhere, unquestioned, to track, monitor, and respond. Who is then actually benefiting from these monitoring systems: the one doing the monitoring, or the older adult?

3 Crafting futures of elder care in Singapore

The design fictions in this study are a spin-off from a three-year government-funded project that explored future nursing home typologies in Singapore. The research project included a design exploration component in which the team, comprising architecture and design researchers, ideated and proposed future concepts of care. The future proposals centred on two themes of particular interest to the funding agencies - care automation using robots and integrating eldercare services into public housing - for their potential to alleviate staffing challenges in the eldercare sector and to strengthen the integration of nursing homes with the community.

While the potential of robots to provide utility seems promising, having a robot at home is far from the everyday reality of older Singaporeans, who grew up in an era of low technology proliferation. Robots' social acceptability in caregiving has also yet to be thoroughly explored, especially in the home setting, where these technologies interface with family values and dynamics informed by culture. Moreover, locating nursing homes within public housing estates has created tensions with the residential community in the past, which could prove a barrier to future integration.
In the two design fictions presented here, we projected these nuanced issues into a future world five to ten years ahead, in which the proposals in the future concepts have become 'true' or widely practised. The design fictions were hosted on a free website. The sites featured mock-ups of real-world websites - Singapore's major newspaper ('The Straits Times') and the online shopping site ('Lazada') - to create a make-believe setting. The links to the mock-ups were distributed through social media, where participants were informed of the purpose of our research and our intent to gather their thoughts on the fictional material presented via the links. Upon landing on the site, participants had to acknowledge an automatic pop-up disclaiming the fictional nature of the materials before they could continue to view the site.

3.1 Design Fiction 1: Automating Care at Home - "Give your parents the gift of care this Chinese New Year!"

This fiction explores possible tensions between two observed phenomena. The first is the government's push towards automation and smart technologies to cope with the projected rise in demand for eldercare services: robots for caregiving have been one of the key development areas driven by the Singapore government (Singapore is turning to artificial intelligence for elder care 2017; Assistive Technology and Robotics in Healthcare n.d.), followed by local research institutes (About CHART n.d.). The second is the voice of family members visiting their parents in nursing homes, whom we interviewed in the project; they carry feelings of guilt from not fulfilling their responsibilities as children. This observation led us to question: when adult children feel guilty about placing their parents in a nursing home, how would they feel about entrusting their parents' care to automated technology such as a robot? How would it affect family dynamics in a culture where family ties and filial piety are stressed?

To delve into these questions, we imagined and crafted a near-future world in which robot caregivers are a commonplace option for caring for seniors at home, taken as just another household electrical appliance. The fiction is presented through two fictional news articles conveying the government's push, through subsidies, for citizens to adopt automated technology to care for their ageing parents (see Figure 1). The news article site shows an advertisement banner leading to a fictional product, in this case an elder care robot, listed on an online shopping site (Figure 2). The fiction was framed around the practice of gift-giving to immerse the audience in the role of an adult child and provoke them to consider how they would feel buying a robot to care for their parents. Participants see a pop-up advertisement positioning the fictional robot as a "gift of care" to parents during Chinese New Year, a season when adult children give red packets to their parents as a symbol of their love and filial piety.

The fictional robot was created based on a technology analysis of various robot cases conducted over the course of the future nursing home project. Among those referenced are robots assisting caregivers' physical work, such as Robear (Dredge 2015) and Care-O-bot (Care-O-bot ® 3 n.d.), and social robots providing emotional care to older adults, such as PARO (PARO Robots 2014) and Dinsow (Kishimoto 2017). The visuals were created by modifying and combining existing images to create a believable fictitious product.
After exploring the online shopping page, participants were guided to a form to key in their fictional decisions. Participants were first asked, "Would you like to buy Rumii for your parents in their old age?", and were then guided to share their thoughts on two questions - "Why do you want / not want to buy Rumii for your parents in their old age?" and "How do you think it may affect your relationship with your parents?".

Figure 1. News articles from design fiction 1: "Give your parents the gift of care this Chinese New Year!"

Figure 2. Online shopping page with fictional customer reviews from design fiction 1: "Give your parents the gift of care this Chinese New Year!"

3.2 Design Fiction 2: Community-based care - "Join the Petition to bring integrated care services to our residence!"

In the past, the announcement of plans to build a nursing home elicited resistance from residents of the neighbourhood. This "not-in-my-backyard" (NIMBY) attitude towards nursing homes received significant media attention back in 2012, with one resident who opposed the idea even commenting that "the old folk will be groaning right into my home" (Seow 2017). In our project, we proposed a future model of care in which care services are integrated into the public housing estates built by the HDB (Housing and Development Board), where more than 80% of Singaporeans currently live. In this scenario, community members are leveraged to provide social support. This future concept proposal was targeted at the government's current push towards ageing-in-place. Given the history of negative public perception of eldercare facilities, however, we wanted to explore people's responses by inviting them into this future scenario. We imagined a potential future in which such integrated care neighbourhoods, called "Care Corridors", are being piloted by the government and have garnered enough popularity that neighbourhoods are petitioning to have Care Corridors built in their estates - a reversal of the NIMBY situation: "please-in-my-front-yard" (PIMFY).

In the Care Corridors fiction, we explored the dilemma of balancing autonomy and safety, which we had observed in local and overseas nursing homes, especially for residents with dementia. Currently, care staff grapple with balancing the two, sometimes using restrictive strategies to monitor residents and minimise safety risks, such as keeping residents seated in a visible area or using restraints. In a community setting, neighbours in the estate can be leveraged to provide a collective watchful eye and safety net, but they are likely to face a similar dilemma. How will the community respond?

Participants are presented with a fictional petition site inviting the public to vote for the government to develop integrated eldercare services at their public housing estate. The petition presents visual and written details of the amenities and programmes available, with a particular focus on the community's contribution to care, such as intergenerational co-housing and crowdsourced 'safety monitoring' for older adults with dementia, as key selling points. At the end of the webpage, participants were guided to a form where they were invited to consider how they felt about such an arrangement through a series of questions.
Participants were asked to make a fictional decision on whether to support or oppose the petition, and were then guided to share their thoughts on two questions - "Why do you support/oppose this petition?" and "How do you think the Care Corridor will affect life in your neighbourhood?".

Both fictions present fictional comments by citizens living in each future world, as if they had responded to 'buy or not' on the online shopping page and 'vote or not' on the petition website. The comments were crafted from insights gleaned during field research in the nursing home project regarding the lived experiences of persons with dementia in care homes, the struggles faced by caregivers caring for a loved one with dementia, and the community's perception of eldercare facilities being built in their neighbourhood. The presentation of these comments was designed to provide varied and contrasting perspectives on complex needs and issues.

Figure 3. Design Fiction 2: Community-based care - "Join the Petition to bring integrated care services to our residence!"

4 Putting the Fictions Out There

We published the two design fictions on online websites and allowed the links to circulate organically on social media channels such as Facebook, WhatsApp, and Telegram interest groups, inviting the public to experience and interact with the fictions. We collected the participants' responses through online commenting, as if they were deciding whether to buy or to support in the two future settings, as explained above. In the online commenting, they were asked to submit their age and gender, with an option not to submit, and were informed that the data was collected for research purposes only.

Table 1. Respondent breakdown according to age range and gender for Design Fiction 1 (DF1) and Design Fiction 2 (DF2). "No submit" denotes respondents who did not submit their gender.

                18-24   25-34   35-44   45-54   55-64   65 and up   Total
DF1 Female          7      28       3       2       4           0      44
DF1 Male            1       8       1       0       1           0      11
DF1 No submit       0       1       0       0       0           0       1
DF1 Total           8      37       4       2       5           0      56
DF2 Female          4      26       3       1       6           1      41
DF2 Male            1       8       1       0       0           0      10
DF2 No submit       1       1       0       0       0           0       2
DF2 Total           6      35       4       1       6           1      53
Total              14      72       8       3      11           1     109

In total, we collected 56 responses to the first fiction and 53 to the second. While participants' ages vary from the early 20s to over 65 years old, most fall within the 25-34 age group (66% of the total 109 responses), followed by the 18-24 age group (13%) and the 55-64 age group (10%); 85 of the 109 responses (78%) came from females (a short sketch reproducing these figures from Table 1 follows at the end of this section). The bias towards the younger age groups may be due to the limitations of our chosen medium of online websites and fictional stories: participants in the younger age groups, being more accustomed to interacting with content on social media, are more likely to go through the entire fiction and leave their responses. In addition, this age group will likely be the one needing to buy care robots for their older adult parents in the timeline pictured in the design fictions. Significantly more female than male participants commented in response to the fictions. The authors speculate that this may be due to women experiencing higher levels of relatability to the topic and thus being more motivated to comment, reflecting the cultural expectation for women to bear the burden of caregiving: 60% of informal caregivers in Singapore are female (Chan 2010).

We conducted a workshop to interpret the written responses based on a grounded theory approach (Glaser & Strauss 1967) and identified several themes depicting the tensions. Based on the themes, we tweaked the design fiction visuals presented in this paper to represent our interpretation of the participants' contributions. This was intended to make the entire design fiction process reciprocal and dialogical between the researchers and the public, which can inform further fiction creation and issue investigation.
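As referenced above, the reported shares can be reproduced directly from Table 1. The minimal Python sketch below is ours rather than part of the study's materials; it simply transcribes the table's totals and prints the percentages quoted in the text.

```python
# Minimal sketch (ours, not the study's analysis code): reproduce the
# percentages quoted in Section 4 from the counts in Table 1.

# Bottom "Total" row of Table 1: combined DF1 + DF2 responses per age group.
total_by_age = {
    "18-24": 14, "25-34": 72, "35-44": 8,
    "45-54": 3, "55-64": 11, "65 and up": 1,
}
female_total = 44 + 41          # DF1 Female + DF2 Female row totals

n = sum(total_by_age.values())  # 109 responses in total
for group, count in total_by_age.items():
    # e.g. 25-34 -> 66%, 18-24 -> 13%, 55-64 -> 10%
    print(f"{group}: {count / n:.0%}")
print(f"female share: {female_total / n:.0%}")  # -> 78%
```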
Based on the themes, we tweaked the design fiction visuals presented in this paper to represent our interpretation of the participants’ contributions. This was intended to make the entire design fiction process reciprocal and dialogical between the researchers and the public, which can imply further fiction creation and issue investigation. 10 4.1 Reactions to Design Fiction 1 4.1.1 Negotiation of Labour Division between Robot and Adult Children in Caregiving Figure 4. The figure illustrates the participants’ perceived division of labour between robot and human caregivers, with the robot serving a utilitarian and practical function, while they themselves fulfil the emotional and relational needs of their parents which the robot cannot replace. (Illustration by the first author) The participants’ comments reveal the negotiation of the care work between the care robot and the adult children, where the care robot’s function is delegated for practical chores while humans are responsible for providing the emotional touch. Participants preferred the care robot to fulfil practical care needs. They conceived that the robot would relieve the stress of performing practical caregiving tasks and help them achieve the ideal intimate and caring relationship with their parents, which was to spend quality time together socially and engage emotionally. “Our conversations may be less naggy/trivial to more interesting conversations if rumii handles their basic needs so that I don’t have to attend to those.” (Female, 18-24) “It may take off the stress and leave me time to spend relaxing and having fun with my parents.” (Female, 25-34) The comments also reveal an underlying competitive dynamic between the adult children and the care robot. The roles played by the two should complement, not overlap, to maintain a good balance in the care relationship with their parents. “...it’d be better if Rumii was less personal - to clearly demarcate the difference between a utilitarian robot vs one that comes off as too empathetic and appears to be a replacement for human touch.” (Female 25-34) 11 “I think it depends on how my parents take it and how well I can create a good dynamic between the robot’s role and I (that it doesn’t feel like the robot takes over me but the robot is used to make life better for them)” (Female, 25-34) “I only need Rumii to manage daily schedule and routine, and light engagement. While for deeper engagement, my family and I can fulfil that.” (Male, 35-44) 4.1.2 Social Pressure for Both Children and Parents Figure 5. The figure illustrates an imagined scenario where relatives visit the participants’ home during Chinese New Year, where the care robot can be seen tending to their parents’ needs. The participant’s persona sits in the centre worried, as though wondering “what will our relatives think of us?”. (Illustration by the first author) We found that participants’ comments valuing providing care for their parents not only manifests their personal will but also their concern for their social image to their relatives. Interestingly, their concern on their social image appears twofold. First, they worried about their social image that might be portrayed as irresponsible children to their elder parents. Secondly, they also worried that their parents’ social image would get negatively affected as well as they brought up irresponsible children. 
"I feel like it will give a negative impression to the other relatives if I buy Rumii for my parents." (Female, 25-34)

"I also think they may not want relatives to know that they rely on a bot for companionship. Dunno, for me somehow if it is a robot to check their vitals, maybe it gives my parents a high-tech image, but if it is for social companionship then it seems embarrassing" (Female, 25-34)

4.1.3 Tension between Pragmatism and Traditional Chinese Values

Figure 6. The "Value Calculator" illustrates the weighing of practical benefits and emotional trade-offs that participants grapple with when considering purchasing the robot caregiver. (Illustration by the first author)

Many responses projected the worry that buying a care robot for their parents might give the parents the impression that the children wanted to replace their care responsibility with the robot. The comments reflected concerns about disappointing their parents' expectations, feelings of guilt, a negative social image, and an over-reliance on technology resulting in less interaction with their parents. Some comments were clearly against the idea of robot care, as their parents "would be insulted" (Female, 25-34). Other comments did consider the robot's practical benefits, such as health monitoring, but still placed more priority on pursuing the ideal image of responsible children.

"There'll definitely be some form of disappointment. This is especially because of the expectations Asian parents have on their children, that they have to look after them when they are old. However if the use of the robot has been normalised and they see that many of their friends use it, it might be more acceptable." (Female, 18-24)
Mainly, they developed the type of future projected empathy that if they treat the older adult well now, when they get old, they will get 14 treated well equally. One particularly mentioned that he did not want to be isolated when he gets old, therefore, it is not nice to isolate older adults with dementia. “We will be old one day, and we would not want to be isolated in our small and dark room.” (Male, 35-44) “It helps to foster a stronger and healthier community spirit by helping one another, especially since we would all be old someday.” (Female, 18-24) “We have to take some responsibility in caring for the older generation as we will be them in the future and will face the same problem.” (undisclosed gender, 18-24) 4.2.2 Prioritising the Role of Good Citizens Figure 8. The participants respond from the perspective of idealistic good citizens, thus viewing the situation through “rose- tinted glasses”. (Illustration by the first author) When commenting on the petition, participants seemed to take a role as a good citizen championing the ideal of a good and inclusive society, where community members take care of one another. In their comments, participants appeared taking a distance from what might likely happen, instead of thinking of themselves as a community member providing care, as reflected in one comment, “I’m kinda just curious to see what it would be like haha, and how it would affect the community spirit/vibe in my neighbourhood” (Female, 24-35). Potential tensions that may arise from being a community member contributing to the care of seniors in the neighbourhood were barely reflected in the responses. “It’s a more humane way to support the growing older adult population, especially those who are able enough to live outside of the nursing home system but are still looking for some form of care support.” (Female, 25-34) “It’s a brilliant idea! I think it’s a wonderful way to foster an inter-generational and inclusive community.” (Female 25-34) “Sounds good to have more people watching over older adult…” (Female, 25-34) 15 5 Discussion 5.1 Unpacking Tensions in Cultural Perceptions in the Asian Context We unpacked key conflicting values of cultural perceptions from the creation process of the design fiction and responses from public participants. The first relates to the first fiction where care robots introduce tension into existing social dynamics in Singapore built upon the Confucian value of filial piety. Participants tried to negotiate the new relationship between themselves as the supposed caregivers, the robot as the new caregiver and their elder parents. They chose to uphold the ideal of being a filial child over the care burden the robot might reduce. Also, the more the robot takes on a social and empathetic role, the more it is viewed as a threat to the ideal filial child-parent relationship. This ideal is also constructed from social pressure, as reflected in the comments worrying about their social image getting damaged by using the robot. The second value conflict lies in the coexistence of differing values in the current Singaporean society, which are pragmatism like cost calculation and the traditional Confucian values such as righteousness, loyalty, propriety, and filial piety (Kuah 1990). Pragmatism has been a national ideology instilled in the everyday life of Singaporeans, but also interacted with the Chinese Confucian values (Tupas 2015). 
As a result, in contemporary Chinese Singapore families, the values have found new meanings into pragmatic actions towards older adult such as efforts maintaining family harmony and social image, giving physical and financial care, spending time together and others (Mehta & Ko 2004), (Mehta & Leng 2006). Thirdly, more evident in the second fiction, we see the dynamic relationship between personal concerns and benefits and the expectation of an ideal society. When deciding to support the fictional petition, young participants put the hard work of daily care for older adult residents with dementia aside. Instead, they prioritised the vision of a good society and took the role of good citizens supporting the values of respecting seniors, inclusivity, and multi-generational integration. 5.2 Social Relationship as the Protagonist in Elder Care Fictions In investigating cultural perceptions of elder care in the Singaporean society, our work adds an Asian perspective to the overall future speculation of elder care. Most of design fictions on elder care and related discussions have been conducted in the western context. The clear pattern shared in those works is that the protagonist is the individual user who struggles with the tension with technological devices in use, implied by the typical User-Centred Design sense (e.g., see (Ambe et al. 2019)). However, in our work rooted in the Asian context with the dominant values of family and community, the protagonist in our fiction is the social relationship, either between the adult child and older adult parents or between younger and older adult residents. The being of the older adult is rather mutually constituted by the relationship with their adult children. And the losing value or worry caused by new technology is not only about the derived autonomy of individual user, but also the potentially damaged social image of the whole family caused by dismissing filial piety. Such plots of our design fictions reveal the social features of the Asian society drawing on our ethnographic observation from the future nursing homes project, revealing family members’ feeling of guilt in not fulfilling their responsibilities as children by placing their older adult parents in nursing homes, and the general negative social stigma around the issue (Seow 2017). In terms of engaging people with the fiction, we constructed the fictional roles of the older adult’s adult children and neighbours instead of the older adult themselves. Although some Design fiction work considered relational and social aspects in creating fiction (Blythe et al., 2015; Schulte et al., 16 2016; Superflux, 2015), the social relationship is just one of the many qualities of older adult wellbeing instead of the focus. Especially when crafting the friction and conflicts, the focus is on the negotiation between the older user and technological machines. Similar technique can be found in Noortman and her colleagues’ work who gave the research probe to the caregiver of the older adult patient (Noortman et al. 2019). However, we explored the social layer of the caregiving act which is not bounded by payment or unemployment but by kinship or neighbourhood. Another differentiation from related work is our study intentionally downplayed the friction with the machines. For instance, the one customer review from an older adult in the first Design fiction mourned the loss of connection and quality time with his/her son while at the same time praising the excellent value and performance of the care bot. 
Here, the conflict underlying social relationship was highlighted as elder was very dissatisfied regardless of the satisfaction with the technology. However, it also indicates the limitation of our work that issues and matters of concern could have been the interplaying result of the two social and technological aspects. Overall, we would argue that this study might provide the fundamentally distinguished setting and structure for fiction making on elder care in the global scene. Relational quality at the aspects of family relationship and community structure has been always one key element in designing for elder care with dementia (Morrissey et al. 2017; Sabat & Lee 2012; Tsekleves et al. 2017; Vreugdenhil 2014). And there are substantive design practices that use social relationship as the locus of design brief or objectives (Houben et al. 2020; Muñoz et al. 2019; Wintermans et al. 2017). We would like to suggest further work in Design fiction to put more effort in portraying tension related to social relationships or adopting the strategy of making a piece of relation as the protagonist instead of any individual. This increasing recognition on the social might benefit the investigation of designing for future elder care as care is a fundamentally relational and responsive act which is beyond the discourse of autonomy and human-machine interaction. 5.3 Different Entry Points to Interacting with the Fictional World We find participant responses from the first fiction appear more diverse with uncertainty and decision-making dilemmas while responses from the second appear more consistently supportive. The difference might be related to the different role-takings of participants according to the invitation formats. The first fiction invited the roles of consumer and the child of older adult parents, whereas the second invited the role of citizens supporting or opposing the petition. Participants used different references while taking different roles in both cases. As a ‘child’, they used mundane materials from real-life experiences relating to personal parent-child relationships. As a ‘citizen’, they used the moral value contributing to the community’s common good than actual concerns or dilemmas relating to day-to-day care work. We speculate that the discussion or responses might be different in the second fiction of ‘Care Corridors’ if the invite asks for the role of caregivers, like ‘Would you sign up for the volunteering work as a caregiver, and in which way’? This observation suggests a careful consideration when designing for the interactions between participants and design fictions. The props and act related to consumption is a common engagement technique in Design fiction (e.g., Brown et al. 2016; Montgomery & Woebken 2016). Our study again has illustrated that the commercial purchase page is a valuable tool to get people at the present quickly immersed with the fictiveness, easily understand the content, and express their opinions based on their mundane lived experience. Moreover, our study took a step further that we used the opinions collected from the 17 participants to continue the craft process of Design fiction. We developed more fictional scenes, for instance, the scene of relative visiting (in Figure 4), as a new entry point added to the world building and discursive space. 5.4 Challenges of Crafting Design Fiction and Suggestions for Future Work We faced challenges balancing between proposing ideas and provoking with ideas. 
Our initial storytelling approach proved overly prescriptive and emotionally laden, telling audiences what they should feel about the issue. We then moved to a 'world building' approach that focused on providing space for participants to construct their own views (Coulton et al. 2017). With this approach, we created everyday artefacts to give verisimilitude to the future concepts and to provide participants a role to embody as they formed opinions on the concepts presented. After collecting and interpreting participants' comments, we made visual tweaks to the design fiction materials to project the cultural perceptions and value conflicts arising from their responses. The tweaked fiction materials are our responses to their opinions, which aim to build a continuous dialogue. As future work, the participants could be invited to a workshop where the tweaked fiction materials are shared and give rise to further discussion. The continuous discussion could also involve project stakeholders, especially government agencies.

6 ACKNOWLEDGMENTS

We thank all participants who took the time to share their thoughts on the design fictions.

7 REFERENCES

About CHART n.d., Centre for Healthcare Assistive & Robotics Technology, viewed 19 February 2021, <https://www.cgh.com.sg/chart/about-chart>
Ahmadpour, N, Pedell, S, Mayasari, A & Beh, J 2019, 'Co-creating and assessing future wellbeing technology using design fiction', She Ji: The Journal of Design, Economics, and Innovation, vol. 5, pp. 209-230.
Ambe, AH, Brereton, M, Soro, A, Buys, L & Roe, P 2019, 'The adventures of older authors: Exploring futures through co-design fictions', in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems.
Assistive Technology and Robotics in Healthcare n.d., viewed 19 February 2021, <https://www.smartnation.gov.sg/what-is-smart-nation/initiatives/Health/assistive-technology-and-robotics-in-healthcare>
Auger, J 2013, 'Speculative design: crafting the speculation', Digital Creativity, vol. 24, pp. 11-35.
Bleecker, J 2009, Design Fiction: A Short Essay on Design, Science, Fact and Fiction, Near Future Laboratory.
Bleecker, J 2014, TBD Catalog, Near Future Laboratory LLC.
Blythe, M, Steane, J, Roe, J & Oliver, C 2015, 'Solutionism, the game: design fictions for positive aging', in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems.
Blythe, M & Wright, P 2006, 'Pastiche scenarios: Fiction as a resource for user centred design', Interacting with Computers, vol. 18, no. 5, pp. 1139-1164.
Brown, B, Bleecker, J, D'Adamo, M, Ferreira, P, Formo, J, Glöss, M, Holm, M, Höök, K, Johnson, E-CB & Kaburuan, E 2016, 'The IKEA Catalogue: Design fiction in academic and industrial collaborations', in GROUP 2016, pp. 335-344.
Candy, S & Dunagan, J 2017, 'Designing an experiential scenario: The People Who Vanished', Futures, vol. 86, pp. 136-153.
Candy, S & Kornet, K 2019, 'Turning foresight inside out: An introduction to ethnographic experiential futures', Journal of Futures Studies, vol. 23, pp. 3-22.
Care-O-bot 3 n.d., viewed 19 February 2021, <https://www.care-o-bot.de/en/care-o-bot-3.html>
Chan, A, Østbye, T, Malhotra, R & Hu, AJ 2010, The Survey on Informal Caregiving, Duke-NUS, pp. 1-35.
Coulton, P, Lindley, JG, Sturdee, M & Stead, M 2017, 'Design fiction as world building', in Proceedings of the 3rd Biennial Research Through Design Conference, Edinburgh.
18 Darby, AG & Tsekleves, E 2018, ‘Mentian: Developing design fiction for dementia policy.’ Dredge, S 2015, ‘Robear: the bear-shaped nursing robot who’ll look after you when you get old’, The Guardian, 27 Feb, viewed 19 February 2021, <https://www.theguardian.com/technology/2015/feb/27/robear-bear- shaped-nursing-care-robot> Dunne, A, & Raby, F 2013, Speculative everything: Design, fiction, and social dreaming, The MIT Press, Cambridge, MA Elsden, C., Chatting, D., Durrant, A. C., Garbett, A., Nissen, B., Vines, J. & Kirk, D. S. On Speculative Enactments. Proceedings of the CHI Conference on Human Factors in Computing Systems, 2017. ACM, 5386-5399. Glaser, B, & Strauss, A 1967, The Discovery of Grounded Theory: Strategies for Qualitative Research, Sociology Press, Mill Valley, CA. Hales, D 2013, ‘Design fictions an introduction and provisional taxonomy’, Digital Creativity, vol. 24, no.1, pp. 1-10. Hanna, J. R. & Ashby, S. R. From Design Fiction To Future Models Of Community Building And Civic Engagement. Proceedings of the 9th Nordic Conference on Human-Computer Interaction, 2016. ACM, 77. Houben, M., Brankaert, R., Bakker, S., Kenning, G., Bongers, I. & Eggen, B. The Role Of Everyday Sounds In Advanced dementia care. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020. 1-14. Huusko, M., Wu, Y. & Roto, V. Structuring And Engaging: The Roles Of Design Fictions In A Co-Design Workshop. Proceedings Of The 6th Australian Conference on Human-Computer Interaction, 2018 Melbourne, Australia. IJsselsteijn, W, Tummers-Heemels, A, & Brankaert, R 2020, ‘Warm Technology: A Novel Perspective on Design for and with People Living with Dementia’, in R Brankaert & G Kenning (eds.), HCI and Design in the Context of Dementia, Springer, pp. 33-47 Kishimoto, M 2017. ‘Thai startup ready for mass production of elder-care robot’, Nikkei Asia, 24 June, viewed 19 February 2021, <https://asia.nikkei.com/Business/Biotechnology/Thai-startup-ready-for-mass-production- of-elder-care-robot> Kuah, KE 1990, ‘Confucian ideology and social engineering in Singapore’, Journal of Contemporary Asia, vol. 20 no. 3, pp. 371-383, doi: 10.1080/00472339080000381 Leong, TW, & Robertson, T 2016, ‘Voicing values: laying foundations for ageing people to participate in design’, in Proceedings of the 14th Participatory Design Conference: Full papers-Volume 1. Lindley, J, & Coulton, P 2015, ‘Back to the future: 10 years of design fiction’, in Proceedings of the 2015 British HCI Conference. Lindley, J., Sharma, D. & Potts, R. Operationalizing design fiction with anticipatory ethnography. Ethnographic Praxis in Industry Conference Proceedings, 2015. Wiley Online Library, 58-71. Lyckvi, S., Roto, V., Buie, E. & Wu, Y. The role of design fiction in participatory design processes. Proceedings of the 10th Nordic Conference on Human-Computer Interaction, 2018. 976-979. Noortman, R, Schulte, BF, Marshall, P, Bakker, S, & Cox, AL 2019, ‘HawkEye-Deploying a Design Fiction Probe’, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Markussen, T. & Knutz, E. The poetics of design fiction. Proceedings of the 6th International Conference on Designing Pleasurable Products and Interfaces, 2013 Newcastle upon Tyne, United Kingdom. 2513531: ACM, 231-240. Mehta, KK, & Ko, H 2004, ‘Filial piety revisited in the context of modernizing Asian societies’, Geriatrics & Gerontology International, vol 4 no. 
s1, pp S77–S78, https://doi.org/https://doi.org/10.1111/j.1447- 0594.2004.00157.x Mehta, KK, & Leng, TL 2006, ‘Interdependence in Asian Families’, Journal of Intergenerational Relationships, vol. 4, no. 1, pp. 117–125, https://doi.org/10.1300/J194v04n01_13 Montgomery, E. P. & Woebken, C. 2016. Extrapolation Factory Operator's Manual, Extrapolationfactory. com. Morrissey, K., Garbett, A., Wright, P., Olivier, P., Jenkins, E. I. & Brittain, K. Care and Connect: exploring dementia-friendliness through an online community commissioning platform. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017. 2163-2174. Muñoz, D., Ploderer, B. & Brereton, M. Position Exchange Workshops: A Method To Design For each other in families. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019. 1-14. PARO Robots U.S. 2014, PARO Therapeutic Robot, viewed on 19 February, 2021, http://www.parorobots.com/ Pierce, J. Smart Home Security Cameras And Shifting Lines Of Creepiness: A Design-Led Inquiry. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019. 1-14. 19 Sabat, S. R. & Lee, J. M. 2012. Relatedness Among People Diagnosed With Dementia: Social Cognition and the possibility of friendship. Dementia, 11, 315-327. Schulte, B. F., Marshall, P. & Cox, A. L. Homes For Life: A Design Fiction Probe. Proceedings Of the 9th Nordic Conference on Human-Computer Interaction, 2016. ACM, 80. Schulte, BF 2016, ‘Using design fiction to reflect on autonomy in smart technology for people living with dementia’, in the Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. Seow, BY 2017, ‘A little less Nimby’, The Straits Times, 19 February, viewed 23 February 2021, <https://www.straitstimes.com/singapore/housing/a-little-less-nimby> ‘Singapore is turning to artificial intelligence for elder care 2017, Health Care Asia Magazine, viewed 23 February 2021, <https://healthcareasiamagazine.com/healthcare/in-focus/singapore-turning-artificial- intelligence-older adult-care> Smith, R. C., Van Heeswijk, J., Kjærsgaard, M., Otto, T., Halse, J. & Binder, T. (Eds.) 2016. Design anthropological futures, London: Bloomsbury Publishing. Søndergaard, M. L. J. & Hansen, L. K. Intimate Futures: Staying With The Trouble Of Digital Personal Assistants through Design Fiction. Proceedings of the 2018 Designing Interactive Systems Conference, 2018. 869-880. Soro, A, Ambe, AH, & Brereton, M 2017, ‘Minding the gap: Reconciling human and technical perspectives on the IoT for healthy ageing’ in Wireless Communications and Mobile Computing, 2017. Sterling, B 2013, ‘Patently untrue: Fleshy defibrillators and synchronised baseball are changing the future’ Wired UK, 11 October. Superflux 2015, Uninvited guests, online video, <https://superflux.in/index.php/work/uninvited-guests/#.> Tsekleves, E, Darby, A, Whicher, A, & Swiatek, P.2017, ‘Co-designing design fictions: a new approach for debating and priming future healthcare technologies and services’, Archives of Design Research, vol 30, no.2, pp. 5-21. Tupas, R 2015, ‘Pragmatism, Mandarin and political culture in Singapore: recent reprises of an ideology’, Journal of World Languages, vol 2, no. 2–3, pp 94–105. https://doi.org/10.1080/21698252.2016.1183269 Vreugdenhil, A. 2014. ‘Ageing-In-Place’: Frontline Experiences Of Intergenerational Family Carers Of people with dementia. Health Sociology Review, 23, 43-52. Wintermans, M., Brankaert, R. & Lu, Y. 
Together We Do Not Forget: Co-Designing With People living with dementia towards a design for social inclusion. Proceedings of the design management academy 2017. International Conference, Hong Kong, 2017. 767-782. Wong, GH, Pang, WS & Yap, P 2014, ‘A Paradigm Shift in Regulating and Running Nursing Homes in Singapore’, Journal of the American Medical Directors Association, vol 15, no. 6, pp. 440-444. United Nations, Department of Economic and Social Affairs, Population Division 2017, World Population Ageing 2017, viewed 23 February 2021, <https://www.un.org/en/development/desa/population/publications/pdf/ageing/WPA2017_Report.pdf> 20
Directed Diversity: Leveraging Language Embedding Distances for Collective Creativity in Crowd Ideation

Samuel Rhys Cox † National University of Singapore, Singapore, [email protected]
Yunlong Wang † National University of Singapore, Singapore, [email protected]
Ashraf Abdul National University of Singapore, Singapore, [email protected]
Christian von der Weth National University of Singapore, Singapore, [email protected]
Brian Y. Lim * National University of Singapore, Singapore, [email protected]

ABSTRACT
Crowdsourcing can collect many diverse ideas by prompting ideators individually, but this can generate redundant ideas. Prior methods reduce redundancy by presenting peers' ideas or peer-proposed prompts, but these require much human coordination. We introduce Directed Diversity, an automatic prompt selection approach that leverages language model embedding distances to maximize diversity. Ideators can be directed towards diverse prompts and away from prior ideas, thus improving their collective creativity. Since there are diverse metrics of diversity, we present a Diversity Prompting Evaluation Framework consolidating metrics from several research disciplines to analyze along the ideation chain: prompt selection, prompt creativity, prompt-ideation mediation, and ideation creativity. Using this framework, we evaluated Directed Diversity in a series of experiments comprising a simulation study and four user studies, for the use case of crowdsourcing motivational messages to encourage physical activity. We show that automated diverse prompting can variously improve collective creativity across many nuanced metrics of diversity.

CCS CONCEPTS
• Human-centered computing • Collaborative and social computing • Collaborative and social computing theory, concepts and paradigms • Computer supported cooperative work

KEYWORDS
Diversity, Collective Creativity, Crowdsourcing, Ideation, Motivational messaging, Collective Intelligence, Creativity Support Tool.

ACM Reference Format: Samuel R. Cox, Yunlong Wang, Ashraf Abdul, Christian von der Weth, Brian Y. Lim. 2021. Directed Diversity: Leveraging Language Embedding Distances for Collective Creativity in Crowd Ideation. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI'21).

† Co-first authors, ordered alphabetically. * Corresponding author.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author. CHI '21, May 8-13, 2021, Yokohama, Japan. © 2021 Copyright is held by the owner/author(s). ACM ISBN 978-1-4503-8096-6/21/05. https://doi.org/10.1145/3411764.3445782

1 Introduction

Crowdsourcing has been used to harness the power of human creativity at scale to perform creative work such as text editing [7,21,78], iterating designs [27], information synthesis [54], and motivational messaging [4,50,95]. In such tasks, empowering crowd workers to ideate effectively and creatively is key to achieving high-quality results. Different prompting techniques have been proposed to stimulate creativity and improve the diversity of ideas [2,27,50,95], but they suffer from ideation redundancy, where multiple users express identical or similar ideas [10,48,76,80].
Current efforts to avoid redundancy include iterative or adaptive task workflows [99], constructing a taxonomy of the idea space [40], and visualizing a concept map of peer ideas [80], but these require much manual effort and are not scalable. Instead, we propose an automatic prompt selection mechanism, Directed Diversity, to scale crowd ideation diversity. Directed Diversity composes prompts of one or more phrases to stimulate ideation. It helps to direct workers towards new ideas and away from existing ideas with the following workflow: 1) extract phrases from text corpuses in a target domain, 2) embed phrases into a vector embedding, and 3) automatically select phrases for maximum diversity. These phrases are then shown as prompts to ideators to stimulate ideation. The phrase embedding uses the Universal Sentence Encoder (USE) [14] to position phrases within an embedding vector space. Using the embedding vectors, we calculated distances between phrases to optimally select phrases that are farthest apart from one another; this maximizes the diversity of the selected phrases. Hence, Directed Diversity guides ideators towards under-utilized phrases or away from existing or undesirable phrases. The embedding space provides a basis to calculate quantitative, distance-based metrics to estimate diversity in selected phrases and prompts, and subsequently in ideated messages. These metrics can complement empirical measurements from user studies to evaluate prompts and ideations. We curate multiple measures and evaluation techniques and propose a Diversity Prompting Evaluation Framework to evaluate the perceived, subjective creativity as well as the objective, computed creativity and diversity of crowd ideations. We demonstrate the framework with experiments on Directed Diversity to 1) evaluate its efficacy to select diverse prompts in a simulation study, 2) measure the perceived diversity of selected prompts and the effort to generate ideas in an ideation study, and 3) evaluate the creativity and diversity of generated ideas in validation studies using quantitative and qualitative analyses. The experiments were conducted with the application use case of writing motivational messages to encourage physical activity [2,3,50,95], though we discuss how Directed Diversity can apply to other crowd ideation tasks. In summary, our contributions are:

1. We present Directed Diversity, a corpus-driven, automatic approach that leverages embedding distances based on a language model to select diverse phrases by maximizing a diversity metric. Using these constrained prompts, crowdworkers are directed to generate more diverse ideas. This results in improved collective creativity and reduced redundancy.
2. A Diversity Prompting Evaluation Framework to evaluate the efficacy of diversity prompting along an ideation chain. This draws constructs from the creativity and diversity literature and metrics computed from a language model embedding, and is validated with statistical and qualitative analyses.
3. We applied the evaluation framework in a series of four experiments to evaluate Directed Diversity for prompt selection, and found that it can improve ideation diversity without compromising ideation quality, but at a cost of higher user effort.

2 Background and Related Work

We discuss related research on supporting crowd ideation: the cognitive basis for creative ideation, how creativity support tools help crowd ideation, and how artificial intelligence can help collective intelligence.
2.1 Cognitive Psychology of Creative Ideation

Different cognitive models of creativity have been proposed to explain how ideation works. Memory-based explanation models describe how people retrieve information relevant to a cue (prompt) from long-term memory and process it to generate ideas [1,26,53,67,68]. Since retrieval is dependent on prompts, the prompts need to be sufficiently diverse to stimulate diverse ideation [68]; otherwise people may fixate on a few narrow ideas [42]. Ideation-based models [64] explain how individuals can generate many ideas through complex thinking processes, including analogical reasoning [36,46,61], problem constraining [84], and vertical or lateral thinking [35]. We focus on prompting to promote memory-based retrieval rather than these other reasoning processes. Besides cue-based retrieval and thinking strategies, other factors influence ideation creativity, such as personal traits, motivation to perform the task, and domain-relevant skills that can affect individual creativity [90]. We provide technological support to improve the creative mental process, rather than to select creative personalities, recruit domain experts, or improve task motivation. Next, we discuss how different cognitive factors have been leveraged at scale to support creative ideation with the crowd.

2.2 Creativity Support Tools for Crowd Ideation

Creativity Support Tools have been widely studied in HCI to enable crowdworkers to ideate more effectively and at scale [31,32]. Showing workers ideas from their peers has been very popular [18,33,79,81], but can have limited benefit to creativity if peer ideas are too distant from the ideators' own ideas [18]. Other approaches include employing contextual framing to prompt ideators to imagine playing a role for the task [69] or using avatars for virtual interactions while brainstorming [57]. While these methods focus on augmenting individual creativity, they do not coordinate the crowd, so multiple new ideations may be redundant. More recent approaches provide more explicit guidance to workers. IdeaHound [80] visualizes an idea map to encourage workers to focus on gaps between peer ideas, but does not inform what ideas or topics would fill the gaps. BlueSky [40] and de Vries et al. [95] use crowd or expert annotators to construct taxonomies to constrain the sub-topics for ideation, but these taxonomies require significant manual effort to construct and are difficult to scale. Chan et al. [16] employed latent Dirichlet allocation (LDA) to automatically identify topics, but this still requires much manual curation, which does not scale to many topics. With Directed Diversity, we automatically extract a phrase corpus, embed the phrases as vectors, and select diverse phrases for focused prompting. We employ a pre-trained language model to provide crowd ideation support; thus we next discuss how artificial intelligence can support collective intelligence.

2.3 Supporting Collective Intelligence with Artificial Intelligence

Collective Intelligence is defined as groups of individuals (the collective) working together exhibiting characteristics such as learning, judgement, and problem solving (intelligence) [56]. Crowdsourcing is a form of collective intelligence exhibited when crowdworkers work towards a task mediated by the crowdsourcing platform. However, managing crowdwork to ensure data quality and maximize efficiency is difficult because of the nature and volume of the tasks, and the varying abilities and skills of workers [97].
HCI research has contributed much towards this with interfaces to improve crowdworker efficiency, incentive designs for workers, and workflows to validate work quality [9,38,62,89,97]. Furthermore, recent developments in artificial intelligence (AI) provide opportunities to complement human intelligence to improve the quality and efficiency of crowd work [47,97], optimize task allocation [22,28], adhere to budget constraints [45], and dynamically control quality [11]. With Directed Diversity, we used AI to optimize ideation diversity by shepherding the crowd towards more desired and diverse ideation with diverse prompt selection.

3 Technical Approach

We aim to improve the collective diversity of crowdsourced ideas by presenting crowdworker ideators with carefully selected prompts that direct them towards newer ideas and away from existing ones. Each prompt presented to the ideators consists of one or more phrases that represent ideas which are distinct and different from prior ideas. As a running example throughout the technical discussion and experiments, we apply our approach to the application of motivational messages for healthy physical activity, where it is important to collect diverse motivational messages [50,95]. Figure 1 shows the three-step overall approach to extract, embed, and select phrases. We next describe each of these steps in detail.

Figure 1: Pipeline of the overall technical approach to extract, embed, and select phrases to generate diverse prompts. a) Phrase extraction by collecting phrases from online articles and discussion forums (shown as pages), filtering phrases to select a clean subset (shown as the black dash for each phrase); b) Phrase embedding using the Universal Sentence Encoder [14] to compute the embedding vector of each phrase (shown as scatter plot); c) Phrase selection by constructing the minimal spanning tree to select optimally spaced phrases (see Figure 2 for more details).
After extracting the phrases, we filtered phrases for length, quality, and relevance. We kept phrases that were 3 to 5 words long, since short phrases may not sufficiently stimulate creativity and long prompts may restrict creativity. Since user posts often contain typographical errors, slang, or other stylistic devices (e.g., emoticons), we kept phrases that only contain words from a dictionary3 of American and British words. To reduce repetition of phrases, we removed shorter phrases that overlapped with longer phrases (e.g., excluded “federal exercise recommendations”, kept “federal exercise recommendations and guidelines”). The final corpus contained clean 3,666 phrases. We next describe the construction of the multi-dimensional idea space to characterize how the phrases are separated or similar. 3.2 Phrase Embedding The corpus of extracted phrases provides a large set of potential phrases for prompting, but we seek to select phrases that are least similar to one another. For each phrase, we obtain a multi-dimensional vector representation, called an embedding, so that the phrase is a data point in an idea space. Similar work by Siangliulue et al. [79] obtained embeddings of 𝑁 = 52 ideas by training a Crowd Kernel model [91] from 2,818 triplet annotations is not scalable to our corpus of 𝑁 = 3,666 phrases, since that would need 𝑁(𝑁 − 1)(𝑁 − 2)/3 = 16.4 million triplets. Instead, similar to Chan et al.’s [18] use of GloVE [71], we use pre-trained language models based on deep learning to encode each word or sentence as a vector representation. Specifically, we use the more recent Universal Sentence Encoder (USE) [14] to obtain embeddings for phrases in our corpus, compute their pairwise distances, and selected a maximally diverse subset of phrases. Our approach is generalizable to other language embedding techniques [98]. Table 1: Demonstration of pairwise embedding angular distances between an example text items (first data row) and neighboring text items. Text items with semantically similar words have smaller distances. For interpretability, we highlighted words to indicate darker color with higher cosine similarity to the first phrase. a) Example extracted Phrases b) Example Ideations from Ideation User Study Phrase Distance to first Phrase Ideated Message Distance to first Ideation app with yoga poses yoga really taking off 0 (self) 0.284 popular form of yoga today 0.304 yoga pants or sweats of handstand push-ups on the road to diabetes 0.351 0.406 0.475 Exercise will release endorphins and you will feel good for a while after doing it. 0 (self) Exercise releases endorphins and makes you feel better! Exercise relieves stress in both the mind and the body. It’s the best way to get your mental health in check. 0.171 0.301 We are the leading country in obesity. Do you want to be part of? 0.509 1 Source of three authoritative websites on health: www.health.harvard.edu, www.medicinenet.com, www.webmd.com. 2 Source of 20 subreddits from www.reddit.com: 90daysgoal, advancedfitness, advancedrunning, bodyweightfitness, c25k, crossfit ,fitness, gainit, getmotivated, ketogains, kettlebells, leangains, loseit, motivation, powerlifting, running, selfimprovement, swimming, weightroom, xxfitness. 3 Debian Wordlist pagkage. packages.debian.org/es/sid/wordlist 4 To obtain the phrase embedding presentation, we use a pre-trained USE model4 to obtain embedding vectors for each phrase. 
3.2 Phrase Embedding

The corpus of extracted phrases provides a large set of potential phrases for prompting, but we seek to select phrases that are least similar to one another. For each phrase, we obtain a multi-dimensional vector representation, called an embedding, so that the phrase is a data point in an idea space. Similar work by Siangliulue et al. [79] obtained embeddings of $N = 52$ ideas by training a Crowd Kernel model [91] from 2,818 triplet annotations; this is not scalable to our corpus of $N = 3{,}666$ phrases, since that would need $N(N-1)(N-2)/3 = 16.4$ million triplets. Instead, similar to Chan et al.'s [18] use of GloVe [71], we use pre-trained language models based on deep learning to encode each word or sentence as a vector representation. Specifically, we use the more recent Universal Sentence Encoder (USE) [14] to obtain embeddings for phrases in our corpus, compute their pairwise distances, and select a maximally diverse subset of phrases. Our approach is generalizable to other language embedding techniques [98].

To obtain the phrase embedding representation, we use a pre-trained USE model4 to obtain embedding vectors for each phrase. With USE, all embeddings are 512-dimensional vectors located on the unit hypersphere, i.e., all vectors are unit length, and only their angles differ. Hence, the dissimilarity between two phrase embeddings $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ is calculated as the angular distance $\arccos(\boldsymbol{x}_i \cdot \boldsymbol{x}_j)$, which is between 0 and $\pi$. For our phrase corpus, the pairwise distance between phrases ranged from Min=0.06 to Max=0.58, Median=0.40, inter-quartile range 0.39 to 0.46, SD=0.043; see Appendix Figure 10. We use the same USE model to compute embeddings and distances for ideated messages. For a dataset of 500 motivational messages ideated in a pilot study with no prompting, the pairwise distance between ideations ranged from Min=0.169 to Max=0.549, Median=0.405, inter-quartile range 0.376 to 0.432, SD=0.043; see Appendix Figure 11. Table 1 shows example phrases and messages and their corresponding pairwise dissimilarity distances. With the embedding vectors and pairwise distances for all phrases, the next step selects diverse phrases with which to prompt ideators.

Table 1: Demonstration of pairwise embedding angular distances between an example text item (first data row) and neighboring text items. Text items with semantically similar words have smaller distances. (In the original, words were highlighted, with darker color indicating higher cosine similarity to the first phrase.)

a) Example extracted phrases (distance to first phrase):
app with yoga poses: 0 (self)
yoga really taking off: 0.284
popular form of yoga today: 0.304
yoga pants or sweats: 0.351
of handstand push-ups: 0.406
on the road to diabetes: 0.475

b) Example ideations from the Ideation User Study (distance to first ideation):
"Exercise will release endorphins and you will feel good for a while after doing it.": 0 (self)
"Exercise releases endorphins and makes you feel better!": 0.171
"Exercise relieves stress in both the mind and the body. It's the best way to get your mental health in check.": 0.301
"We are the leading country in obesity. Do you want to be part of?": 0.509

4 Pre-trained Universal Sentence Encoder model (https://tfhub.dev/google/universal-sentence-encoder/4), which was trained using both unsupervised learning on Wikipedia, web news, web question-answer pages, and discussion forums, and supervised learning on the Stanford Natural Language Inference (SNLI) corpus.
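A sketch of the embedding and distance computation, assuming TensorFlow Hub's pre-trained USE model; the helper names embed and angular_distance are illustrative.

```python
import numpy as np
import tensorflow_hub as hub

use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def embed(texts):
    vecs = np.asarray(use(texts))  # (n, 512) embedding vectors
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit length

def angular_distance(x, y):
    # arccos of the dot product of unit vectors, in [0, pi];
    # clipping guards against floating-point values just outside [-1, 1]
    return np.arccos(np.clip(x @ y, -1.0, 1.0))

x = embed(["app with yoga poses", "yoga really taking off"])
print(angular_distance(x[0], x[1]))  # roughly 0.28, as in Table 1
```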
3.3 Phrase Selection

Given the embeddings of the curated phrases, we want to select the subset of phrases with maximum diversity. Mathematically, this is the dispersion problem, or diversity maximization problem, of "arranging a set of points as far away from one another as possible". Among several diversity formulations [20], we choose the Remote-MST diversity formulation [37] (also called Remote-tree [20] or functional diversity [72]), which defines diversity as the sum of edge weights of a minimum spanning tree (MST) over a set of vertices. It is robust against non-uniformly distributed data points (e.g., with multiple clusters; see Table 4). We construct the minimum spanning tree by performing agglomerative hierarchical clustering on the data points with single linkage [82]. Next, we describe how we select phrases as prompts to direct ideators towards diverse phrases, or away from prior ideas. Figure 2 illustrates the technical approach.

Figure 2: Procedure to direct ideation towards diverse phrases (top) and away from prior or redundant ideas (bottom). To attract ideation with diverse prompts: a) start with embeddings of corpus-extracted phrases; b) construct the minimum spanning tree (MST); c) traverse the tree to select distant prompts from clusters (most distant points shown as green dots, clustered phrases as green ellipses); d) the selected prompts are the most diverse. To repel ideation from prior ideas: e) compute embeddings of prior ideas (red hollow dots); f) compute prompt-ideation pairwise distances of all prompts from each prior ideation, exclude phrases (dotted black circles) with pairwise distance less than a user-defined threshold (red bubble), and construct the MST with the remaining phrases; g) traverse the MST to select a user-defined number of prompts; h) the selected prompts are diverse, yet avoid prior ideas.

3.3.1 Directing towards Diverse Phrases

For phrase selection, we aim to select a fixed number of points $n$ from the corpus with maximum diversity. This is equivalent to finding a maximal edge-weighted clique in a fully connected weighted graph, which is known to be NP-hard [39]. Hence, we propose a scalable greedy approach that uses the dendrogram representation of the MST resulting from the hierarchical clustering. Starting from the root, we set the number of clusters to the desired number of phrases $n$. For each cluster $C_r$, we select the phrase that is most distant from other points, i.e., with the largest minimum pairwise distance from all points outside the cluster:

$\boldsymbol{x}_r = \operatorname{argmax}_{i \in C_r} \left( \min_{j \notin C_r} d(\boldsymbol{x}_i, \boldsymbol{x}_j) \right)$

where $\boldsymbol{x}_r$ is the diverse phrase selected in cluster $C_r$, $\boldsymbol{x}_i$ is a point in cluster $C_r$, $\boldsymbol{x}_j$ is a point in the corpus not in $C_r$, and $d$ is the pairwise distance between $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$. This method has $O(n^2)$ time complexity and runs in less than one second on a desktop PC for 3.6k phrases; it is generalizable and can be substituted with other approximate algorithms to select the most diverse points [20,41]. Figure 2 (top row) illustrates the phrase selection method to direct towards areas without ideations:

a) Start with all phrases in the corpus represented as USE embedding points.
b) Construct a dendrogram (MST) from all points, using single-linkage hierarchical clustering.
c) Set the number of clusters equal to the desired number of diverse phrases. For each cluster, find the most distant phrase.
d) The selected phrases are approximately the most diverse from the corpus, for the desired number of phrases.

3.3.2 Directing Away from Prior Ideas

Other than directing ideators towards new ideas with diverse prompts, it is important to help them avoid prior ideas written by peers. We further propose a method to remove corpus phrases that are close to prior ideas, so that ideators do not get prompted to write ideas similar to prior ones. The method, illustrated in Figure 2 (bottom row), is similar to before, but with some changes:

e) Add the embedding points of prior ideas to the corpus.
f) Calculate the phrase-ideation distance $d(\boldsymbol{x}_i^P, \boldsymbol{x}_j^I)$ for each phrase $\boldsymbol{x}_i^P$ and ideation $\boldsymbol{x}_j^I$, and exclude phrases too close to the ideas, i.e., $d < \delta$, where $\delta$ is an application-dependent threshold ($\delta = 0.29$ in our case).
g) Same as step (c), but with different clusters, since fewer points are clustered.
h) Same as step (d), but different prompts would be selected, even if the number of phrases is the same.
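Under stated assumptions — vecs is the (N, 512) array of unit phrase embeddings, prior holds prior-ideation embeddings, and SciPy's single-linkage clustering stands in for the dendrogram traversal — a sketch of the selection in Sections 3.3.1 and 3.3.2 might look like:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform

def select_diverse(vecs, n, prior=None, delta=0.29):
    if prior is not None:  # step (f): repel from prior ideations
        d_prior = np.arccos(np.clip(vecs @ prior.T, -1.0, 1.0))
        vecs = vecs[d_prior.min(axis=1) >= delta]
    # pairwise angular distance matrix
    D = np.arccos(np.clip(1.0 - squareform(pdist(vecs, "cosine")), -1.0, 1.0))
    # steps (b)-(c): single-linkage dendrogram cut into n clusters
    labels = fcluster(linkage(squareform(D), method="single"),
                      n, criterion="maxclust")
    chosen = []
    for c in np.unique(labels):
        inside, outside = labels == c, labels != c
        # phrase in cluster c with largest minimum distance to outside points
        sub = D[np.ix_(inside, outside)]
        chosen.append(np.where(inside)[0][np.argmax(sub.min(axis=1))])
    return chosen  # indices of selected phrases (step d)
```

Single linkage is used here because its merge distances coincide with MST edge weights, matching the MST construction described above.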
3.3.3 Directing with Prompts of Grouped Phrases

Instead of prompting with only one phrase, prompting with multiple related terms can help ideators better understand the concept being prompted and generate higher quality ideas [17,67,83]. We extend the phrase selection method to group multiple phrases in a single prompt using the following greedy algorithm. After step (a), we i) sort phrases in descending order of each phrase's minimum pairwise distance to produce a list of seed candidates; ii) for each seed phrase, perform a nearest-neighbors search to retrieve a specified prompt size (number of phrases $g$ in a prompt) and remove the selected neighbors from the seed list; and iii) repeat the seed-neighbor selection until $n$ seed phrases have been processed. We group the phrases into a prompt and calculate its embedding point $\boldsymbol{x}_i^{Pr}$ as the angular average of all phrases $\boldsymbol{x}_k^P$ in the prompt, i.e., $\boldsymbol{x}_i^{Pr} = \sum_{k=1}^{g} \boldsymbol{x}_k^P / Z$, where $Z = \left\| \sum_{k=1}^{g} \boldsymbol{x}_k^P \right\|_2$ is the magnitude of the vector sum, so that $\boldsymbol{x}_i^{Pr}$ is also a unit vector. We then perform steps (b) to (d) with the prompts $\boldsymbol{x}_i^{Pr}$ instead of individual phrases. Note that the corpus of prompts will be smaller than the corpus of phrases. This approach yields disjoint prompts that do not share phrases, but there can be alternative approaches to group phrases5.

5 An alternative approach is, after step (c), to simply group nearest neighbors. However, this will cause the prompt embeddings to be shifted after the diversity is maximized, so it may reduce the diversity of the selected prompts.
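A sketch of the grouping step, reusing vecs and the distance matrix D from the previous sketch; whether the seed itself counts toward the g phrases of its prompt is our assumption.

```python
import numpy as np

def prompt_embedding(member_vecs):
    s = member_vecs.sum(axis=0)
    return s / np.linalg.norm(s)  # angular average, again unit length

def group_phrases(vecs, D, n, g):
    D = D.copy()
    np.fill_diagonal(D, np.inf)
    seeds = list(np.argsort(-D.min(axis=1)))  # most isolated phrases first
    available, prompts = set(range(len(vecs))), []
    for s in seeds:
        if len(prompts) == n:
            break
        if s not in available:
            continue
        # seed plus its nearest available neighbors form one disjoint prompt
        members = [s] + [j for j in np.argsort(D[s])
                         if j in available and j != s][: g - 1]
        available -= set(members)
        prompts.append(prompt_embedding(vecs[members]))
    return np.array(prompts)
```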
4 Diversity Prompting Evaluation Framework

To evaluate the effectiveness of the Directed Diversity prompt selection technique in improving the collective creativity of generated ideas, we define an ideation chain as a four-step process (Figure 3, top): 1) the choice of prompt selection technique will influence 2) the creativity of selected prompts (prompt creativity), 3) the ideation process of the ideators (prompt-ideation mediation), and 4) the creativity of their ideation (ideation creativity). We propose a Diversity Prompting Evaluation Framework, shown in Figure 3, to measure and track how creative and diverse information propagates along this ideation chain, and to evaluate how and whether a creativity prompting technique improves various measures of creativity and diversity in outcome ideas. Note that our proposed framework is descriptive, to curate many useful metrics, rather than prescriptive, to recommend the best metrics.

4.1 Research Questions and Experiments

Prompt stimuli act along the ideation chain to increase ideation diversity, but it is unclear how well they work and at which point along the chain they may fail. We raise three research questions between the steps of the ideation chain, which we answer in four experiments (Section 5) with various measures and factors.

RQ1. How do the prompt techniques influence the perceived diversity of prompts? (RQ1.1) How do they affect diversity in prompts? (RQ1.2) How well can users perceive differences in creativity and diversity in these prompts? These questions relate to the prompt selection technique's effectiveness and serve as a manipulation check. We answer them in a Characterization Simulation Study (Section 5.1) with objective diversity measures, and in an Ideation User Study (Section 5.2) with subjective measures of perceived prompt diversity.

RQ2. How does diversity in prompts affect the ideation process for ideators? (RQ2.1) Do differences in diversity affect ideation effort? (RQ2.2) How well do ideators adopt and apply the content of the prompts? (RQ2.3) How does prompt creativity affect diversity in ideations? We answer these questions as a mediation analysis in the Ideation User Study (Section 5.2) with objective measures of task time and similarity between ideations and stimulus prompts, thematically coded creativity metrics, and perceived ease of ideation.

RQ3. How do prompt selection techniques affect diversity in ideations? Having validated the manipulation checks, we evaluate the effectiveness of prompt selection techniques in the Ideation User Study (Section 5.2) with subjective measures of self-assessed creativity and thematically coded creativity metrics, and in two Validation User Studies (Section 5.3) with subjective measures of perceived creativity.

Figure 3: Diversity prompting evaluation framework to evaluate prompting to support diverse ideation along the ideation chain. We pose research questions (RQ1-3) between each step to validate the ideation diversification process. For each step, we manipulate or measure various experiment constructs to track how well ideators are prompted to generate creative ideas. Except for prompt selection, each construct refers to a statistical factor determined from factor analyses of multiple dependent variables. Constructs are grouped in colored blocks indicating different data collection methods (computed embedding-based metrics, ratings from ideators, ratings from validators, thematic coding of ideations).

4.1.1 Independent Variables of Prompt Specifications

We manipulated prompt selection technique, prompt count, and prompt size as independent variables; these are detailed in Appendix Table 6. We chose Random prompt selection as a key baseline where selection is non-trivial and data-driven based on our corpus, but not intelligently selected for diversity.

4.2 Diversity and Creativity Measures of Prompting and Ideation

We measured diversity and creativity for selected prompts and generated ideas with embedding-based and human-rated metrics. We color code variable names based on data collection method as in Figure 3.

4.2.1 Embedding-based Diversity Metrics for Prompts and Ideations

Although crowd creativity research has focused on the mean pairwise distance as a metric for idea diversity, our literature review has revealed many definitions and metrics. Here, we describe computational metrics calculated from the embedding-based distances. Inspired by Stirling's general diversity framework [87], we collect definitions from crowd ideation [15,27,40,79,80], ecology [24,73,94], recommender systems [29,44,60,93], and theoretical computer science [20,37]. These cover many aspects of diversity, characterizing the mean and minimum (Chamfer) distance between points, MST-based dispersion, sparseness of points around the medoid, span from the centroid, and entropy to indicate the evenness of points in the embedding vector space. Table 2 and Table 3 describe distance metrics for individual and collective text items, respectively. These metrics describe nuances of diversity, which we illustrate with example distributions in Table 4. Other measures of diversity and divergence [20] can be included in the framework, which we defer to future work. Next, we describe human-subjects ratings to validate these embedding-based metrics with measures that do not depend on the embeddings, to avoid circular dependency.

Table 2: Metrics of distances between two points in a multi-dimensional vector space. Each metric can be calculated for an individual text item. These metrics can apply to the embedding of phrases or ideations.

Mean Pairwise Distance. Definition: $\frac{1}{N-1} \sum_{j \neq i} d(\boldsymbol{x}_i, \boldsymbol{x}_j)$. Interpretation: average distance of all other points to the current point.

Minimum Pairwise Distance. Definition: $\min_{j \neq i} d(\boldsymbol{x}_i, \boldsymbol{x}_j)$. Interpretation: distance of the closest neighbor to the current point. This focuses on redundancy and ignores points that are very far from the current point.
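For instance, given a pairwise angular distance matrix D (zeros on the diagonal), the two individual metrics in Table 2 reduce to a few lines:

```python
import numpy as np

def mean_pairwise(D):
    return D.sum(axis=1) / (D.shape[0] - 1)  # average distance to all others

def min_pairwise(D):
    E = D + np.diag(np.full(D.shape[0], np.inf))  # mask self-distances
    return E.min(axis=1)  # distance to each point's nearest neighbor
```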
Table 3: Metrics of diversity of phrases or ideation embeddings in a vector space. These capture more characteristics of diversity than the average distances in Table 2. Each metric can only be calculated collectively for multiple items.

Remote-Clique. Definition: $\frac{1}{N^2} \sum_{i,j} d(\boldsymbol{x}_i, \boldsymbol{x}_j)$. Interpretation: average of mean pairwise distances. While commonly used in crowd ideation studies [27,44,80], it is insensitive to highly clustered points.

Chamfer Distance. Definition: $\frac{1}{N} \sum_{i=1}^{N} \min_{j \neq i} d(\boldsymbol{x}_i, \boldsymbol{x}_j)$. Interpretation: average of minimum pairwise distances. The Chamfer distance [43] (or Remote-pseudoforest [20]) measures the distance to the nearest neighbor. However, it is biased when points are clustered.

MST Dispersion. Definition: mean of MST edge distances, $\frac{1}{|E_{MST}|} \sum_{(\boldsymbol{x}_i, \boldsymbol{x}_j) \in E_{MST}} d(\boldsymbol{x}_i, \boldsymbol{x}_j)$. Interpretation: popular in ecology research as functional diversity [72], and called Remote-tree or Remote-MST [20,37]; this learns a minimum spanning tree (MST) of the points and aggregates its edge weights.

Span. Definition: $\mathrm{percentile}_{P\%}\, d(\boldsymbol{x}_i, \bar{\boldsymbol{x}}^M)$. Interpretation: $P$th percentile distance to the centroid ($\bar{\boldsymbol{x}}^M = \sum_i \boldsymbol{x}_i^M / N$), i.e., the "radius" of the distribution [12,65]. We calculate the 90th percentile and use the centroid (vs. the medoid) to be robust against outliers and skewed distributions, respectively.

Sparseness. Definition: mean distance to the medoid, $\frac{1}{N} \sum_{i=1}^{N} d(\boldsymbol{x}_i^M, \tilde{\boldsymbol{x}}^M)$. Interpretation: sparsity of points positioned around the medoid ($\tilde{\boldsymbol{x}}^M = \operatorname{argmin}_{\boldsymbol{x}_i} \{ \sum_{j=1}^{N} d(\boldsymbol{x}_i, \boldsymbol{x}_j) \}$) [51,52,77]. If points cluster around the medoid, then this metric will be small (i.e., not sparse).

Entropy. Definition: Shannon-Wiener index for points in a grid partition, $-\sum_b f_b \log(f_b)$. Interpretation: this index [75,86] indicates how evenly points are distributed; more even is more diverse. We calculated entropy for a 2D projection of the USE feature space to avoid high time complexity6, divided the space into a 5×5 grid7, and counted the frequency $f_b$ of points in each bin $b$.

6 Since calculating entropy in high dimensions is computationally expensive, we reduce the 512-dimension USE feature space to a 2-dimension UMAP projection [59]. This is a dimensionality reduction technique that is more robust than t-SNE. We iterated hyperparameter settings and chose the projection with the highest correlation between the entropy results and mean pairwise distances.
7 Entropy calculations will differ for different grid sizes, but the general trends with respect to point distributions should be similar.
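A sketch of the collective metrics in Table 3, given the distance matrix D, the embedding array X, and a 2D projection P2d; for brevity, span here uses Euclidean distance to the centroid rather than the paper's angular distance.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def remote_clique(D):
    return D.sum() / D.shape[0] ** 2  # average of mean pairwise distances

def chamfer(D):
    E = D + np.diag(np.full(D.shape[0], np.inf))
    return E.min(axis=1).mean()  # average nearest-neighbor distance

def mst_dispersion(D):
    return minimum_spanning_tree(D).data.mean()  # mean MST edge weight

def span(X, p=90):
    return np.percentile(np.linalg.norm(X - X.mean(axis=0), axis=1), p)

def sparseness(D):
    medoid = D.sum(axis=1).argmin()  # point with smallest total distance
    return D[medoid].mean()

def entropy_2d(P2d, bins=5):
    H, _, _ = np.histogram2d(P2d[:, 0], P2d[:, 1], bins=bins)
    f = H.ravel() / H.sum()
    f = f[f > 0]
    return -(f * np.log(f)).sum()  # Shannon-Wiener index over grid bins
```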
Note that Directed Diversity was not designed to improve quality, since these metrics were not explicitly modeled. Flexibility [85] measures how many unique ideas were generated, and originality [100] measures how infrequently each idea occurs. These require expert annotation to identify distinct categories. We conducted a thematic analysis on the messages using open coding [34] to derive categories and affinity diagramming [8] to consolidate categories to themes (see details in Appendix Table 19). We calculate the flexibility and originality measures based on the coded categories (fine-grained) and themes (coarser) described in Appendix Table 7. 4.2.3 Creativity Measures for Prompts As a manipulation check, it is important to verify that prompts that are computed as more diverse, are perceived by ideators as more creative. Since perceived creativity encompasses more qualitative effects, computed diversity may not be correlated with creativity. Thus, we measure the creativity and usefulness of prompts by asking about prompt understandability, relevance to domain topic (physical activity), relevance to task9 (motivation), helpfulness to inspire ideation, and unexpectedness [66] along 7-point Likert scales. 4.2.4 Mediating Variables for Prompt-Ideation Process Even if more diverse prompts can facilitate more creative ideation, it is important to understand whether this requires more effort and time, how the consistency of phrases within prompts affect ideation, and how well ideators adopt words and concepts from the phrases into their ideations. We measure effort as ease of ideation with a 7-point Likert scale survey question. For individual creativity, fluency [30] is defined as the number of ideas an individual writes within a fixed time. Chan et al. had also measured fluency for an 8-minute crowd ideation task [18]. In contrast, we asked ideators to only write one idea per prompt without time constraint, so we measure the inverse relation of ideation task time to generate one ideation [5]. Specifically, since task time is skewed, we use – 𝐿𝑜𝑔(𝑖𝑑𝑒𝑎𝑡𝑖𝑜𝑛 𝑡𝑖𝑚𝑒) to represent fluency. For prompts with more than one phrase, the similarity between phrases can affect their perceived consistency. Therefore, we measure the intra-prompt mean phrase and prompt average phrase Chamfer distances (Appendix Table 8) to indicate the similarity between intra-prompt phrases. We measure the adoption of the prompt ideas by calculating the proportion of words from phrases in the ideations as prompt recall and prompt precision, and computing the prompt-ideation distance between the embeddings of the prompt and ideation (Appendix Table 9). 8 Note that a message could be helpful but written with negative impressions and thus not motivating. 9 Note that a prompt could be relevant to the domain, but not motivating. 9 Remote-Clique0.2580.4480.4110.3890.561Chamfer Distance0.8001.6001.6801.9312.263MST Dispersion0.1260.2320.2440.2470.333Span0.2180.3700.3270.2830.447Sparseness0.2210.3690.4090.3030.561Entropy0.6931.3861.7331.3862.079 4.3 Factor analyses to draw constructs from experiment variables With the numerous variables from our experiments, we observed some may be correlated since they measure similar notions or participants may confound questions to have similar meanings. We employed an iterative design- analytical process to organize and consolidate variables into factors with the following steps. 
• Identify metrics of creativity and diversity from a literature review from various research domains, such as ecology, creativity, crowdsourcing, theoretical computer science, recommender systems (Section 4.2.1). Ideate additional measures and questions to capture user behavior and opinions when generating and validating ideas. We refine and reduce measures based on survey pilots and usability testing. • Collect measurements of each metric with different methods: a) Compute embedding-based metrics from prompts shown and messages written. This was computed individually for each text item (e.g., mean pairwise distance) and collectively for all text items in each prompt technique (e.g., Remote-MST diversity). b) Measure perception ratings and behavioral measures regarding reading prompts and ideating messages and rating messages. We asked text rationale to help with interpretations. c) Measure subjective thematic measures to qualitatively assess the collective creativity with thematic analysis and idea counting. • Perform factor analysis on quantitative data to organize correlated variables into fewer factors. Variables are first grouped by data collection method10 and analyzed together. To determine the number of factors, we examined scree plots and verified grouped variables as consistent with constructs from literature. The final number of factors are statistically significant by the Bartlett Test of Sphericity (all p<.0001). See Appendix Tables 10-17 for the results of the factor analysis, including factor loadings and statistical significance. Table 5 summarizes the learned factors from 42 variables that we developed. • Perform statistical hypothesis testing using these learned factors to answer our research questions. Table 5: Constructs from factor analyses of variables along ideation chain. Factor loadings in Appendix Tables 10-17. Chain Factor Construct Prompt Distance y t i v i t a e r C t p m o r P Prompt Consistency Prompt Dispersion Prompt Evenness Prompt Unexpectedness Prompt Understandability Prompt Relevance Prompt Quality Ideation Fluency n o Ideation Ease i t a i Phrase Adoption d e M - t p m o r P n o i t a e d I Interpretation How distant and isolated the prompt is from other prompts. How similar (consistent) the phrases in a prompt are. How spread out the selected prompts are from one another. How evenly spaced the selected prompts are among themselves. Ideator rating of how unexpected a prompt was on a 5-pt Likert scale. Ideator rating of how understandable a prompt was on 5-pt Likert scale Ideator rating of prompt relevant to the domain (i.e. exercise) on 5-pt Likert scale Ideator rating of the overall quality of prompt on 5-pt Likert scale. Ideator speed to ideate (reverse of time taken). Ideator ease of ideating based on multiple 5-point Likert scale ratings. Measures the extent of phrase usage from the prompts in the ideation. Ideation Distance y t i v i t a e r C n o i t a e d I How distant and isolated the ideation is from other ideations. How spread out the ideations are from one another. How evenly spaced the ideations are among themselves. Count of unique categories/themes across all ideations. How rare each category/theme is across all ideations. Ideator self-rating of the overall quality of the ideation on 5-pt Likert scale. Validator rating of overall quality of individual ideation on 7-pt Likert scale. Validator rating of informativeness and helpfulness of individual ideation on 7-pt Likert scale. 
Ideation Dispersion Ideation Evenness Ideation Flexibility Ideation Originality Ideation Self-Quality Ideation Quality Ideation Informative- Helpfulness Ideations Unrepetitive Ideations Informative Ideations Motivating Ideations Pairwise Difference Validator rating of difference between a pair of ideations in collection. Validator cumulative rating of non-redundancy in collection of ideations. Validator cumulative rating of informativeness in collection of ideations. Validator cumulative rating of overall quality of collection of ideations. 10 E.g., individual text item metrics, collective text items metrics, ratings of text item from ideators, ratings of text item from validators, ratings of collection of text items from validators. 10 5 Evaluation: applying framework to study Directed Diversity We have described a general descriptive framework for evaluating diversity prompting. We applied it to evaluate our proposed Directed Diversity prompt selection technique against baseline approaches (no prompting, random prompt selection) in a series of experiments (characterization, ideation, individual validation, collective validation), for the use case of crowd ideating motivational messages for physical activity. Here, we describe the procedures for each experiment and their results. 5.1 Characterization Simulation Study The first study uses computational methods to rapidly and scalably evaluate prompt selection techniques. This helps us to fine tune prompt parameters to maximize their potential impact in later human experiments. 5.1.1 Experiment Treatments and Method We varied three independent variables (prompt selection, prompt count, prompt size) to measure the impact on 7 dependent variables of distance and diversity metrics. We varied Prompt Selection technique (None, Random, or Directed) to investigate how much Directed Diversity improves prompt diversity with respect to baseline techniques. For None prompt selection, we simulated ideation with 500 ideas collected from a pilot study where crowd ideators wrote messages without prompts. We simulated Random selection by randomly selecting phrases from the phrase corpus (Section 3.1) and Directed selection with our technical approach (Sections 3.1 to 3.3). If we assume that prompt embeddings are an unbiased estimator for ideation embeddings, then this gives an approximation of ideation diversity due to prompting. We conducted experiments for directing towards diverse prompts and for directing away from the 500 pilot prior ideations. We varied the number of prompts (Prompt Count, 𝑛 = 50,150, … ,950) to simulate how diversity increases with the number of ideation tasks performed. This investigates how diversity increases as the budget for crowd tasks increases. To investigate how well Directed selection avoids prior ideations, we varied the number of repeller prior ideations (Repeller Prior Ideations Count, 𝑛𝑅 = 50,100,150,200). We varied the number of phrases in prompts (Prompt Size, 𝑔 = 1 to 5) to simulate ideating on one or more phrases in each prompt. We computed the prompt embedding as the average of all phrases in the prompt. For Random selection, we randomly chose phrases to group together for each prompt. This random neighbor selection will lead to variation in prompt consistency, but does not bias the prompt embedding on average. For Directed selection, phrases in each prompt were chosen as described in Section 3.3.3. 
5.1.2 Results on Manipulation Efficacy Analysis (RQ1.1) We visualized (Figure 4) the phrase embeddings to help to interpret how the selected prompts are distributed, whether they are well spread out, clustered, etc. We used Uniform Manifold Approximation and Projection (UMAP) [59] to reduce the 512 dimensions of USE to a 2D projection. Hyperparameters were selected such that the 2D points in UMAP had pairwise distances correlated with that of the 512-dimension USE embeddings. Figure 4: 2D UMAP projection showing how diversely selected prompts and resulting ideation messages are distributed based on Directed or Random prompt selection technique and prompt size (number in brackets). Each point represents the embedding of a text item. Light grey points represent all phrases in the extracted corpus, dark grey points represent selected phrases from the simulation study (Section 5.1) and blue dots represent the ideated messages written by crowdworkers in the ideator user study (Section 5.2). Gradient lines connect ideation messages to their stimulus prompts. 11 Corpus Phrase Prompt with Phrases Ideation MessageNone(0)None(0)Random(1)Random(3)Directed(1)Directed(3) We can see that Directed prompt selection led to prompts that were more spread out, and less redundant from prior ideation. This is more pronounced for higher prompt size (𝑔 = 3). Random(3) had lower diversity than None with tighter clustering of prompts (grey points in middle-bottom graph) than of messages (blue points in left graph). This was because Random(3) prompts averaged their embeddings from multiple phrases, such that this variance of means of points is smaller than the variance of points11. We further conducted a characterization study with 50 simulations for each prompt configuration to confirm that Directed Diversity improves diversity and reduces redundancy from prior ideations for various embedding-based metrics (see Appendix E and Figure 12). 5.2 Ideation User Study The Ideation User Study serves as a manipulation check that higher prompt diversity can be perceived by ideators, and as an initial evaluation of ideation diversity based on computed and thematically coded metrics. 5.2.1 Experiment Treatment and Procedure We conducted a between-subjects experiment with two independent variables prompt selection technique (None, Random, Directed) and prompt size (𝑔 = 1 and 3), and kept constant prompt count 𝑛 = 250. The None condition (no prompt) allows us to measure if the quality of ideations become worse due to the undue influence of phrases in prompts. The Random condition provides a strong baseline since it also leverages the extracted phrases in the first step of Directed Diversity. A prompt size of 𝑔 > 1 can provide more contexts to help ideators understand the ideas in the phrases, but may also lead to more confusion if the phrases are not consistent (too dissimilar). Figure 5 shows example prompts that ideator participants see in different conditions. The experiment apparatus and survey questions were implemented in Qualtrics (see Appendix Figures 13-19 for instructions and question interface). Figure 5: Example prompts shown to participants in different conditions: None (left, 𝒈 = 𝟎), Directed(1) (center, 𝒈 = 𝟏), and Directed(3) (right, 𝒈 = 𝟑). Phrase texts would be different for Random(1) and Random(3) selection techniques. 
5.2 Ideation User Study
The Ideation User Study serves as a manipulation check that higher prompt diversity can be perceived by ideators, and as an initial evaluation of ideation diversity based on computed and thematically coded metrics.

5.2.1 Experiment Treatment and Procedure
We conducted a between-subjects experiment with two independent variables, prompt selection technique (None, Random, Directed) and prompt size (g = 1 and 3), and kept the prompt count constant at n = 250. The None condition (no prompt) allows us to measure whether the quality of ideations becomes worse due to the undue influence of phrases in prompts. The Random condition provides a strong baseline since it also leverages the extracted phrases in the first step of Directed Diversity. A prompt size of g > 1 can provide more context to help ideators understand the ideas in the phrases, but may also lead to more confusion if the phrases are not consistent (too dissimilar). Figure 5 shows example prompts that ideator participants see in different conditions. The experiment apparatus and survey questions were implemented in Qualtrics (see Appendix Figures 13-19 for instructions and question interface).

Figure 5: Example prompts shown to participants in different conditions: None (left, g = 0), Directed(1) (center, g = 1), and Directed(3) (right, g = 3). Phrase texts would be different for the Random(1) and Random(3) selection techniques.

5.2.2 Experiment Task and Procedure
Participants were tasked to write motivational messages and answer questions with the following procedure:
1. Read the introduction describing the experiment objective and consent to the study.
2. Complete a 4-item word associativity test [19] to screen for English language skills.
3. Write 5 messages motivating physical activity for a fitness mobile app. For each message, one at a time,
a) On the first page, depending on condition, see no prompt or a prompt with one or three phrases selected randomly or by Directed Diversity (see Figure 5), then write a motivational message in one to three sentences. This page is timed to measure ideation task time.
b) Rate on a 5-point Likert scale the experience of ideating the current message: ease of ideation (described in Section 4.2.4), self-assessed success in writing motivationally, and success in writing creatively (Section 4.2.2); and the perception of the prompt on: understandability, relevance to domain topic (physical activity), relevance to task (motivation), helpfulness for inspiration, and unexpectedness (Section 4.2.3).
c) Reflect and describe in free text their rationale, thought process, phrase word usage, and ideation effort. We analyze these quotes to verify our understanding of the collected quantitative data.
4. Answer demographics questions, and end the survey by receiving a completion code.

5.2.3 Experiment Data Collection and Statistical Analyses
We recruited participants from Amazon Mechanical Turk with high qualification (≥5,000 completed HITs with >97% approval rate). Of 282 workers who attempted the survey, 250 passed the test to complete the survey (88.7% pass rate). They were 45.2% female, between 21 and 70 years old (M=38.6); 76.4% of participants have used fitness apps. Participants were compensated after screening and were randomly assigned to one prompt selection technique. Participants in the None condition were compensated with US$1.80, while others received US$2.50 due to the additional time needed to answer the survey questions about prompts. Participants completed the survey in a median time of 15.4 minutes and were compensated >US$8/hour. We collected 5 messages per participant, 50 participants per condition, 250 ideations per condition, and 1,250 total ideations. For all response variables, we fit linear mixed effects models described in Appendix Tables 20-23. To allow a 2-factor analysis, we divided responses in the None(0) condition (no prompt, 0 phrases) randomly and evenly into None(1) and None(3). Results are shown in Figure 6. We performed post-hoc contrast tests for specific differences identified. Due to the large number of comparisons in our analysis, we consider differences with p<.001 as significant and p<.005 as marginally significant. Most significant results reported are p<.0001. This is stricter than a Bonferroni correction for 50 comparisons (significance level = .05/50). We next describe the statistically significant results for the prompt manipulation check (RQ1.2), mediation analysis (RQ2.1, 2.2), and ideation evaluation (RQ3.1, 3.2). We include participant quotes from their rationale text responses where available and relevant.
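The exact model specifications are given in Appendix Tables 20-23; as one hedged illustration only, a linear mixed effects model with a per-participant random intercept could be fit in Python with statsmodels as below. The column and file names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per ideation. Hypothetical columns: rating (response variable),
# selection (None/Random/Directed), size (1 or 3), participant (ideator id).
df = pd.read_csv("ideation_responses.csv")     # hypothetical file name

# Fixed effects for prompt selection and size (and their interaction),
# with participant as the grouping variable for the random intercept.
model = smf.mixedlm("rating ~ C(selection) * C(size)", data=df,
                    groups="participant")
result = model.fit()
print(result.summary())
```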
5.2.4 Results of Manipulation Check on Creativity and Mediation on Ideation Effort (RQ1.2, 2.1)
We discuss findings on how ideators perceived creativity factors in prompts and how prompt configurations affected their ideation effort.

Figure 6 (Top) shows that, compared to Random, Directed Diversity selected prompts that were more unexpected (good for diversity), but were slightly more difficult to understand (by half a unit on the 5-point Likert scale), very slightly less relevant (1/4 unit), and of slightly lower quality (1/2 unit). However, the relevance of the selected diverse prompts was not explicitly controlled. P173 in Directed(1) felt that the phrase "first set of challenges is" was "straightforward and gave me the idea of what to write. It was very easy"; whereas P157 felt that the phrase "review findings should be" "didn't really have anything I could think to tie towards a motivational message. I tried to think of it as looking back to see progress in terms of reviewing your journey." Random prompts with more phrases were harder to understand, perhaps because they were randomly grouped and are less semantically similar. P128 in Random(3) found that "these [phrases] were hard to combine since they deal with different aspects of exercise. Also the weight lifting seems to be not the best thing for addressing obesity, so that was hard to work in."

Figure 6: Results of ideators' perceived prompt creativity (Top) and ideation effort (Bottom) for different Prompt Selection techniques and Prompt Sizes. All factor values are on a 5-point Likert scale (–2="Strongly Disagree" to 2="Strongly Agree"). Dotted lines indicate extremely significant p<.0001 comparisons, otherwise very significant with the p-value stated (e.g., p=.0002); solid lines indicate no significance at p>.01. Error bars indicate 90% confidence intervals.

We found that ideation effort was mediated by prompt factors. Figure 6 (Bottom) shows that Directed prompts were least easy to use for ideation, and less adopted than Random selected prompts. This is consistent with Directed prompts being less understandable than Random. Ideating with 1-phrase prompts increased ideation time from 44.1s by 21.6s (48.9%) compared to None, and viewing 3 phrases increased time further by 11.9s. In summary, Directed Diversity may improve diversity by selecting unexpected prompts, but at some cost of ideator effort and confusion. This cost compromises prompt adoption and suggests that directing diversity may not work. Yet, as we will show later, Directed Diversity does improve ideation creativity. We analyzed the confound of understandability further in Appendix Section K. Next, we investigate whether prompt characteristics mediate more ideation creativity.

5.2.5 Results of Mediation Analysis of Diversity Propagation from Prompt to Ideation (RQ2.2)
We found that prompt configuration and perceived prompt creativity mediated the individual diversity of ideated messages (RQ2.2). Appendix Table 21a shows that Ideation Mean (or Min) Pairwise Distance increased with Prompt Mean (or Min) Pairwise Distance by +0.176 (or +0.146), and marginally with Intra-Prompt Phrase Mean Distance by +0.021 (or +0.020). This means that farther prompts stimulated farther ideations, and a higher variety of phrases within each prompt drove slightly farther ideations too. Hence, prompt diversity (mean pairwise distance) influenced ideation diversity, and prompt redundancy (minimum pairwise distance) influenced ideation redundancy. Appendix Table 21b shows that as Prompt Relevance decreased by one Likert unit (on the 5-point scale), ideation mean pairwise distance decreased by 0.0034 (7.9% of the ideation pairwise distance SD of 0.043) and ideation minimum pairwise distance decreased by 0.0056 (13% of SD).
This suggests that prompting with irrelevant phrases slightly reduced diversity, since users had to conceive their own inspiration; e.g., P165 in Directed(1) "couldn't make sense of the given messages, so I tried my best to make something somewhat motivational and correct from them." Prompt understandability and quality did not influence ideation individual diversity (p=n.s.). In summary, selecting and presenting computationally diverse and less redundant prompts increased the likelihood of crowdworkers ideating messages that are more computationally diverse and less redundant.

5.2.6 Results on Evaluating Individual, Collective Objective, Thematic Ideation Diversity (RQ3)
Having shown the mediating effects of diverse prompts, we now evaluate how prompt selection techniques affect self-assessed creativity ratings, objective diversity metrics of ideations, and thematically coded diversity metrics of ideations. To carefully distinguish the commonly used mean pairwise distance from the less used minimum pairwise distance, we performed our analyses on them separately. We calculated one measurement of each collective diversity metric in Table 3 for all messages in each prompt selection condition, and computed uncertainty estimations from 50 synthesized bootstrap samples¹² to generate 50 readings of each diversity metric. We performed factor analyses on the metrics as described in Section 4.3, and performed statistical analyses on these factors as described in Appendix Table 22.
¹² For each dataset, randomly sample with replacement from the original dataset until the same dataset size is reached.
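A minimal sketch of the bootstrap procedure in footnote 12, shown for one collective metric (mean pairwise distance); the placeholder embeddings and function names are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist

def mean_pairwise_distance(emb):
    """One collective diversity reading: mean of all pairwise cosine distances."""
    return pdist(emb, metric="cosine").mean()

def bootstrap_metric(emb, metric, n_boot=50, seed=0):
    """Resample messages with replacement (to the original set size) to get
    n_boot readings of a collective metric, per footnote 12."""
    rng = np.random.default_rng(seed)
    n = len(emb)
    return np.array([metric(emb[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])

emb = np.random.default_rng(1).normal(size=(250, 512))   # placeholder embeddings
readings = bootstrap_metric(emb, mean_pairwise_distance)  # 50 metric readings
```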
Analyses on both individual diversity and collective diversity measures had congruent results (Figure 7), though results for collective diversity had more significant differences (p<.001). For collective diversity, our factor analysis found that Ideation Dispersion was most correlated with mean pairwise distance, and Ideation Evenness with entropy and the mean of Chamfer distance. Directed(3) improved Ideation Dispersion over None, while Random reduced Dispersion (even more for 3 vs. 1 phrases). Directed prompts improved Ideation Evenness more than Random with respect to None. There was no significant difference for self-assessed Ideation Quality (p=n.s., Table 23a in Appendix).

Figure 7: Results of computed individual and collective diversity from ideations for different prompt configurations. See the Figure 6 caption for how to interpret the charts.

The previous ideation diversity metrics were all computational. We next assess diversity with human judgement based on thematic analysis. To conserve the manpower needed to evaluate ideations, we limited thematic coding and crowdworker validation to ideations from the three conditions of prompts with 1 phrase, i.e., None, Random(1), and Directed(1). From the results of the computational metrics, we expect bigger differences between Directed(3) and Random(3) for this analysis too. From our thematic analysis, we coded 239 categories¹³, which we consolidated into 53 themes (see Table 19 in Appendix). Figure 8 shows results from our statistical analysis. We found that ideations generated with Directed prompts had higher Flexibility and Originality in categories and themes than with Random or None. Ideations from Random prompts mostly had higher Flexibility and Originality compared to None, but their theme Originality was significantly lower. This could be because Random prompts primed ideators to fixate on fewer broad ideas (themes), instead of a higher number of fine-grained idea categories.
¹³ Example categories (in themes): Pull-ups (Exercise Suggestion), Strong immune system (Health Benefits), Set daily exercise goal (Goals). See Appendix I (Table 19) for the full list of categories and themes.

Figure 8: Results of diversity in categories and themes derived from thematic analysis of ideations.

In summary, despite lower ideation ease and understandability with Directed prompts (Section 5.2.4), we found objective and thematic evidence that Directed Diversity improved ideation diversity compared to Random and None. Next, we describe how crowdworkers would rate these ideations.

5.3 Validation User Studies
The third and fourth studies employed third-party crowdworkers to assess the creativity of the ideated messages from the Ideation User Study, to answer RQ3: How do prompt selection techniques affect diversity in ideations? This provides a less biased validation than asking ideators to self-assess. We conducted three experiments with different questioning formats to strengthen the experiment design. Appendix Figures 20-24 detail the questionnaires.

5.3.1 Individual Validation: Experiment Treatment and Procedure
For the individual validation study, we conducted a within-subjects experiment with prompt selection technique (None, Random, Directed) as the independent variable, and controlled prompt size (g = 1). Each participant assessed 25 ideation messages chosen randomly from the three conditions. Participants went through the same procedure as in the Ideation user study, but with a different task in step 3:
3. Assess 25 messages regarding how well they motivate physical activity. For each message,
a) Read a randomly chosen message.
b) Rate on a 7-point Likert scale whether the message is motivating (effective), informative, and helpful (as described in Section 4.2.4).
c) Reflect and write the rationale in free text on why they rated the message as effective or ineffective. This was only asked for a random two out of the 25 messages, to avoid fatigue.
As we discuss later, we found that participants confounded the three rating questions and answered them very similarly (responses were highly correlated); thus, we designed the collective validation user studies to pose different questions and distinguish between the measures.

5.3.2 Collective Ranking Validation: Experiment Treatment and Procedure
The collective validation study had the same experiment design as before, but a different procedure at step 3:
3. Complete 5 trials to rate collections of ideation messages, where for each trial,
a) Study three groups of 5 messages each (3×5 messages), then
b) Rank the message groups as most, middle, or least motivating, informative, and unrepetitive (Section 4.2.4).
Instead of rating messages individually, participants viewed grouped messages from each condition side-by-side and answered ranking questions. Messages in each group were selected from those ideated with the same prompt selection technique. By asking participants to assess collections rather than individual messages, we explicitly measured perceived diversity, since the user perceived the differences between all ideations in the collection; this is more direct than asking them about the "informativeness" of an ideation, since this could be confounded with "helpfulness", "teaching something new", "telling something different from other messages", etc. This approach differs from the triplet similarity comparison [55,91] employed by Siangliulue et al. [79], and benefits from requiring fewer assessments.
We asked participants to rank groups rather than rate them, to obtain a forced choice [25]. Another method to assess diversity involves longitudinal exposure (e.g., [50]), but this is expensive and difficult to scale.

5.3.3 Collective Pairwise Rating Validation: Experiment Treatment and Procedure
The collective pairwise rating validation study further validates our results with an existing, commonly used measure that rates the difference between pairs of messages, both from the same prompt selection technique [27,79]. We randomly selected 200 message-pairs each from None, Random(1), and Directed(1), yielding a pool of 600 message-pairs. All steps in the procedure are identical to before except for step 3:
3. Rate 30 message-pairs randomly selected from the message-pair pool, where for each message-pair,
a) Read the two messages.
b) Rate their difference on a 7-point Likert scale: 1 "Not at all different (identical)" to 7 "Very different".
This complements the previous study by having participants focus on two messages to compare, which is more manageable than assessing 5 messages, but is limited to a less holistic impression of multiple messages.

5.3.4 Experiments Data Collection and Statistical Analysis
For all validation studies, we recruited participants from Amazon Mechanical Turk with the same high qualification as the ideation study. Of 348 workers who attempted the surveys, 290 passed the screening tests to complete the surveys (83.3% pass rate). They were 50.2% female, between 22 and 71 years old (M=38.1); 67.5% of participants had used fitness apps. For the individual validation study, participants completed the survey in a median time of 14.7 minutes and were compensated US$1.50; for the collective ranking validation study, participants completed the survey in a median time of 12.7 minutes and were compensated US$1.80; for the collective pairwise rating validation study, participants completed the survey in a median time of 8.4 minutes and were compensated US$1.00. In total, 740 messages were individually rated 3,375 times (M=4.56× per message), 450 message groups were ranked 1,350 times (M=3.00× per message group), and 600 message pairs were rated 2,430 times (M=4.05× per message pair). To assess inter-rater agreement, we calculated the average aggregate-judge correlations [18] as r=.59, .62, and .63 for motivation, informativeness, and helpfulness in the individual validation ratings, respectively; these were comparable to Chan et al.'s r=.64 for idea novelty [18]. We performed the same statistical analyses as in the Ideation User Study (see Section 5.2.3), report the linear mixed effects models in Appendix Table 23, and include participant quotes from their rationale text responses where relevant. For the collective ranking validation study, we counted how often each Prompt Selection technique was ranked first or last across the 5 trials, performed factor analyses on the counts of best and worst ranks for the three metrics (motivating, informative, unrepetitive) to derive three orthogonal factors (Ideations Unrepetitive, Ideations Informative, Ideations Motivating), and performed the statistical analysis on the factors (see Table 23b in Appendix).
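One common leave-one-out construction of the aggregate-judge correlation is sketched below; the exact aggregation in [18] may differ, and the column names here are hypothetical.

```python
import numpy as np
import pandas as pd

def aggregate_judge_correlation(long_df):
    """Correlate each rater's scores with the mean score of the *other*
    raters on the same messages, then average over raters.
    long_df columns assumed: message, rater, score."""
    wide = long_df.pivot_table(index="message", columns="rater", values="score")
    rs = []
    for rater in wide.columns:
        own = wide[rater]
        others = wide.drop(columns=rater).mean(axis=1)  # aggregate of other judges
        mask = own.notna() & others.notna()
        if mask.sum() > 2:                              # need enough shared items
            rs.append(np.corrcoef(own[mask], others[mask])[0, 1])
    return float(np.mean(rs))
```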
5.3.5 Results on Evaluating Individual and Collective Ideation Creativity (RQ3)
We investigated whether Directed prompts stimulate the highest ideation diversity and whether third-party validations agree with our computed and thematic results. For illustration, Appendix Table 25 shows examples of message-groups with high and low factor values.

Figure 9: Results of perceived individual and collective creativity from the three validation user studies. Panel annotations mark comparisons for the Individual Ideation Validation (p=.0303, p=.0014) and the Collective Ideations Validation (p=.0004, p=.0310, p=.0298, p=.0292, p=.0185).

Figure 9 shows the results of our statistical analysis. We found that ideations from Directed prompts were most different and least repetitive, while ideations from Random prompts were no more different and as repetitive as those from None. Ideations generated with prompts were more informative and helpful than those without prompts, but there was no difference whether the prompts were Directed or Random. For example, P4 reviewed the message "Exercise and live longer, and prosper more!" ideated with None, and felt that "it's basically telling you what you already know. It's a rather generic message."; P63 reviewed the message "Waking up early and working out will help you get into shape, and is a great way to have more energy and better sleep." from the Directed(1) prompt "into a habit of sleep" and felt "it's effective because it gives me a goal and tells me why this is a good goal". There were no significant differences in ideation quality or motivation, though there was a marginal effect that Random prompts could hurt quality compared to None. Therefore, Directed Diversity helped to reduce ideation redundancy compared to randomly selected prompts, improved informativeness, and did not compromise quality.

5.4 Summary of Answers to Research Questions
We summarize our findings to answer our research questions with results from multiple experiments.
RQ1. How did prompt selection techniques affect diversity in prompts? Compared to Random, Directed Diversity: a) selected more diverse prompts, b) with less redundancy with prior ideation, c) that ideators perceived as more unexpected, but d) of poorer quality and understandability.
RQ2. How did diversity in prompts affect the ideation process for ideators? Compared to Random, prompts selected with Directed Diversity were: a) harder to ideate with, b) adopted less during ideation, c) but their higher prompt diversity somewhat drove higher ideation diversity.
RQ3. How did prompt selection techniques affect diversity in ideations? Compared to None and Random, Directed Diversity: a) improved ideation diversity and reduced redundancy, b) increased the flexibility and originality of ideated categories, c) without compromising ideation quality.

6 Discussion
We discuss the generalization of our technical approach, evaluation framework, and experiment findings.

6.1 Need for Sensitive and Mechanistic Measures of Creativity
We have developed an extensive evaluation framework for two key reasons: 1) to precisely detect effects on diversity, and 2) to track the mechanism of diversity prompting. We have sought to be very diverse in our evaluation of prompt techniques to carefully identify any benefits or issues. We have found that some popular metrics (e.g., mean pairwise distance) were less sensitive than others (e.g., MST Dispersion / Remote-tree). Therefore, a null result in one metric (e.g., [79]) may not mean that diversity was not changed (if measured by another metric).
Instead of only depending on "black box" experimentation of the prompt treatment on ideation (e.g., [18,40,79,80]), investigating along the ideation chain is interpretable and helps us identify potential issues or breakdowns in the diversity prompting mechanism. Had our evaluation results on ideation diversity been non-significant, this approach would have helped us debug the lack of effectiveness. Conversely, we may find that an ideation diversity effect is due to contradictory or confounding effects. Indeed, we found that Directed Diversity improved diversity despite poorer prompt understandability and adoption. Ideators could not directly use the selected prompts, but still managed to conceive ideas that were more diverse than when not seeing prompts or seeing random ones. This suggests that they generated ideas sufficiently near the prompts. The findings also suggested that the increased effort helped to improve diverse ideation [5,6,96], but the ideator user experience should be improved. Future work is needed to improve Directed Diversity to reduce ideator effort and improve the relevance of selected prompts, such as by limiting the distance of new prompts from prior ideations, or by using idea-based embeddings [79,80] instead of language models, as discussed next.

6.2 Generalization of Directed Diversity to other Domains
The full process of Directed Diversity (Figure 1) allows us to generalize its usage to other domains, such as text creativity tasks beyond motivational messages (e.g., birthday greetings [79]), by changing the document sources in the phrase extraction step. In the phrase embedding step, we used the Universal Sentence Encoder [14], but other text embedding models (e.g., word2vec [63], GloVe [71], ELMo [74], BERT [23]) could be used that model language slightly differently (a minimal embedding sketch follows at the end of this subsection). In the third step, we selected phrases based on the Remote-tree diversity formulation using an efficient greedy algorithm that approximates the diversity maximization. Other diversity criteria and maximization algorithms could be used (see review [20]). Note that since USE and similar language models are domain-independent and do not model the semantics or quality standards of specific domains, Directed Diversity cannot guarantee improved quality. A domain-specific model trained with human-annotated labels of quality could be used to improve both diversity and quality. Furthermore, instead of representing text with language models, the idea space could be explicitly modelled to obtain embeddings from annotated semantic similarity [55,79]. Finally, since Directed Diversity operates on a vector representation of prompts and ideations, it can also be used for ideation tasks beyond text, as long as the items can be represented as feature vectors by feature engineering or with deep learning approaches, such as furniture [58], mood boards [49], and emojis [101].
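For concreteness, the phrase embedding step with USE [14] could look like the sketch below (TensorFlow Hub module version 4 assumed; the phrases are examples from our study). Swapping the encoder only changes these few lines, leaving the rest of the pipeline unchanged.

```python
import tensorflow_hub as hub

# Load the Universal Sentence Encoder [14]. Any encoder mapping text to a
# fixed-length vector (e.g., a BERT-based sentence encoder) could be
# substituted here.
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
phrases = ["daily club swim workout", "snooze button repeatedly isn't exercise"]
embeddings = use(phrases).numpy()   # shape (2, 512): one 512-D vector per phrase
```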
6.3 Generalization of Evaluation Framework
Our Evaluation Framework is a first step towards the goal of standardizing the evaluation of crowd ideation. This requires further validation and demonstration on existing methods of supporting crowd ideation. Due to the costs of engineering effort, set-up preparation, and recruitment, we defer this to future work. Just as the Directed Diversity pipeline is generalizable, so is the Diversity Prompting Evaluation Framework. We had identified many diversity metrics, but only measured some of them; see [20] for a review of other mathematical metrics. If applying the framework to non-text domains, the vector-based distance metrics should still be usable if the concepts can be embedded with a domain model. While we analyzed diversity in terms of mathematical metrics [20] and several measures of creativity [92], other criteria may be important to optimize, such as serendipity for recommender systems to avoid boredom [44]. To measure creativity, just as in prior research [50], we had used several Likert scale ratings (e.g., helpfulness and informativeness) and found evidence that participants confound them. Furthermore, it may be excessive to apply all our measures, so the researcher is advised to use them judiciously. For example, we found that individually rating ideations tends to lead to poor statistical significance, so this data collection method should be avoided. The thematic analysis coding is also very labor intensive for the research team, but provides rich insights into the ideas generated. We had proposed using ranking and pairwise rating validations of collections of ideations as a scalable way to measure collective diversity. While our evaluations based on generating motivational messages for physical activity helped to provide a realistic context, they were limited to measuring the preliminary impressions of validators. The social desirability effect may have limited how accurately participants rated the effectiveness of the messages. While our focus was on evaluating diversity, future work that also seeks to improve and evaluate motivation towards behavior change should conduct longitudinal trials with stronger ecological validity [50].

7 Conclusion
In this paper, we presented Directed Diversity to direct ideators to generate more collectively creative ideas. This is a generalizable pipeline to extract prompts, embed prompts using a language model, and select maximally diverse prompts. We further proposed a generalizable Diversity Prompting Evaluation Framework to sensitively evaluate how Directed Diversity improves ideation diversity along the ideation chain — prompt selection, prompt creativity, prompt-ideation mediation, and ideation creativity. We found that Directed Diversity improved collective ideation diversity and reduced redundancy. With the generalizable prompt selection mechanism and evaluation framework, our work provides a basis for further development and evaluation of prompt diversity mechanisms.

8 Acknowledgements
This work was carried out in part at the NUS Institute for Health Innovation and Technology (iHealthtech) and with funding support from the NUS ODPRT and the Ministry of Education, Singapore.

REFERENCES
1. Leonard Adelman, James Gualtieri, and Suzanne Stanford. 1995. Examining the effect of causal focus on the option generation process: An experiment using protocol analysis. Organizational Behavior and Human Decision Processes. https://doi.org/10.1006/obhd.1995.1005
2. Elena Agapie, Bonnie Chinh, Laura R Pina, Diana Oviedo, Molly C Welsh, Gary Hsieh, and Sean Munson. 2018. Crowdsourcing Exercise Plans Aligned with Expert Guidelines and Everyday Constraints. In CHI 2018, 324. https://doi.org/10.1145/3173574.3173898
3. Elena Agapie, Lucas Colusso, Sean A Munson, and Gary Hsieh. 2016. PlanSourcing: Generating Behavior Change Plans with Friends and Crowds. In CSCW 2016, 119–133. https://doi.org/10.1145/2818048.2819943
4. Faez Ahmed, Sharath Kumar Ramachandran, Mark Fuge, Samuel Hunter, and Scarlett Miller. 2019. Interpreting Idea Maps: Pairwise comparisons reveal what makes ideas novel. Journal of Mechanical Design 141, 2. https://doi.org/10.1115/1.4041856
5. Baptiste Barbot. 2018. The dynamics of creative ideation: Introducing a new assessment paradigm. Frontiers in Psychology 9, DEC: 1–8. https://doi.org/10.3389/fpsyg.2018.02529
6. Roger E. Beaty and Paul J. Silvia. 2012. Why do ideas get more creative across time? An executive interpretation of the serial order effect in divergent thinking tasks. Psychology of Aesthetics, Creativity, and the Arts 6, 4: 309–319. https://doi.org/10.1037/a0029171
7. Michael S Bernstein, Greg Little, Robert C Miller, Björn Hartmann, Mark S Ackerman, David R Karger, David Crowell, and Katrina Panovich. 2010. Soylent: A Word Processor with a Crowd Inside. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, 313–322. https://doi.org/10.1145/1866029.1866078
8. H. Beyer and K. Holtzblatt. 1998. Contextual design: defining customer-centered systems. Morgan Kaufmann.
9. Jeffrey P. Bigham, Michael S. Bernstein, and Eytan Adar. 2015. Human-Computer Interaction and Collective Intelligence. In Handbook of Collective Intelligence.
10. Osvald M Bjelland and Robert Chapman Wood. 2008. An Inside View of IBM's "Innovation Jam." MIT Sloan Management Review 50, 1: 32.
11. Jonathan Bragg, Mausam, and Daniel S. Weld. 2016. Optimal testing for crowd workers. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS).
12. Nathan Brown, Stavros Tseranidis, and Caitlin Mueller. 2015. Multi-objective optimization for diversity and performance in conceptual structural design. In Proceedings of IASS Annual Symposia, IASS 2015 Amsterdam Symposium: Future Visions – Computational Design, 1–12.
13. Andrew Carnie. 2010. Constituent structure. Oxford University Press.
14. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Céspedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (System Demonstrations), 169–174. https://doi.org/10.18653/v1/d18-2029
15. Joel Chan, Steven Dang, and Steven P Dow. 2016. Comparing Different Sensemaking Approaches for Large-Scale Ideation. In CHI 2016. https://doi.org/10.1145/2858036.2858178
16. Joel Chan, Steven P. Dow, and Christian D. Schunn. 2018. Do the Best Design Ideas (Really) Come from Conceptually Distant Sources of Inspiration? In Engineering a Better Future, 111–139.
17. Joel Chan and Christian Schunn. 2015. The impact of analogies on creative concept generation: Lessons from an in vivo study in engineering design. Cognitive Science 39, 1: 126–155. https://doi.org/10.1111/cogs.12127
18. Joel Chan, Pao Siangliulue, Denisa Qori McDonald, Ruixue Liu, Reza Moradinezhad, Safa Aman, Erin T Solovey, Krzysztof Z Gajos, and Steven P Dow. 2017. Semantically far inspirations considered harmful? Accounting for cognitive states in collaborative ideation. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition, 93–105. https://doi.org/10.1145/3059454.3059455
19. Jesse Chandler, Cheskie Rosenzweig, Aaron J. Moss, Jonathan Robinson, and Leib Litman. 2019. Online panels in social science research: Expanding sampling methods beyond Mechanical Turk. Behavior Research Methods 51, 5: 2022–2038. https://doi.org/10.3758/s13428-019-01273-7
20. Barun Chandra and Magnús M. Halldórsson. 2001. Approximation Algorithms for Dispersion Problems. Journal of Algorithms 38, 2: 438–465. https://doi.org/10.1006/jagm.2000.1145
21. Elizabeth Clark, Anne Spencer Ross, Chenhao Tan, Yangfeng Ji, and Noah A Smith. 2018. Creative writing with a machine in the loop: Case studies on slogans and stories. In 23rd International Conference on Intelligent User Interfaces, 329–340. https://doi.org/10.1145/3172944.3172983
22. Peng Dai, Christopher H. Lin, Mausam, and Daniel S. Weld. 2013. POMDP-based control of workflows for crowdsourcing. Artificial Intelligence 202: 52–85. https://doi.org/10.1016/j.artint.2013.06.002
23. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
24. Sandra Diaz and Marcelo Cabido. 2001. Vive la différence: plant functional diversity matters to ecosystem processes. Trends in Ecology and Evolution 16, 11: 646–655. https://doi.org/10.1016/S0169-5347(01)02283-2
25. John R. Douceur. 2009. Paper rating vs. paper ranking. Operating Systems Review (ACM) 43, 2: 117–121. https://doi.org/10.1145/1531793.1531816
26. Michael R.P. Dougherty, Charles F. Gettys, and Eve E. Ogden. 1999. MINERVA-DM: A memory processes model for judgments of likelihood. Psychological Review. https://doi.org/10.1037/0033-295X.106.1.180
27. Steven P. Dow, Alana Glassco, Jonathan Kass, Melissa Schwarz, Daniel L. Schwartz, and Scott R. Klemmer. 2010. Parallel prototyping leads to better design results, more divergence, and increased self-efficacy. ACM Transactions on Computer-Human Interaction 17, 4. https://doi.org/10.1145/1879831.1879836
28. Zipei Fan, Xuan Song, and Ryosuke Shibasaki. 2014. CitySpectrum: A non-negative tensor factorization approach. In UbiComp 2014 - Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 213–233. https://doi.org/10.1145/2632048.2636073
29. Daniel M. Fleder and Kartik Hosanagar. 2007. Recommender systems and their impact on sales diversity. In EC'07 - Proceedings of the Eighth Annual Conference on Electronic Commerce, 192–199. https://doi.org/10.1145/1250910.1250939
30. Nancy A Fontenot. 1993. Effects of training in creativity and creative problem finding upon business people. The Journal of Social Psychology 133, 1: 11–22.
31. Jonas Frich, Lindsay MacDonald Vermeulen, Christian Remy, Michael Mose Biskjaer, and Peter Dalsgaard. 2019. Mapping the Landscape of Creativity Support Tools in HCI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), 1–18. https://doi.org/10.1145/3290605.3300619
32. Jonas Frich, Michael Mose Biskjaer, and Peter Dalsgaard. 2018. Twenty Years of Creativity Research in Human-Computer Interaction: Current State and Future Directions. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS '18), 1235–1257. https://doi.org/10.1145/3196709.3196732
33. Victor Girotto, Erin Walker, and Winslow Burleson. 2017. The effect of peripheral micro-tasks on crowd ideation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 1843–1854. https://doi.org/10.1145/3025453.3025464
34. Barney G. Glaser and Anselm L. Strauss. 2017. Discovery of grounded theory: strategies for qualitative research. Routledge.
35. Vinod Goel. 2010. Neural basis of thinking: Laboratory problems versus real-world problems. Wiley Interdisciplinary Reviews: Cognitive Science. https://doi.org/10.1002/wcs.71
36. Adam E. Green, David J.M. Kraemer, Jonathan A. Fugelsang, Jeremy R. Gray, and Kevin N. Dunbar. 2012. Neural correlates of creativity in analogical reasoning. Journal of Experimental Psychology: Learning Memory and Cognition. https://doi.org/10.1037/a0025764
37. Magnús M. Halldórsson, Kazuo Iwano, Naoki Katoh, and Takeshi Tokuyama. 1999. Finding subsets maximizing minimum structures. SIAM Journal on Discrete Mathematics. https://doi.org/10.1137/S0895480196309791
38. F He, Y Pan, Q Lin, X Miao, and Z Chen. 2019. Collective Intelligence: A Taxonomy and Survey. IEEE Access 7: 170213–170225. https://doi.org/10.1109/ACCESS.2019.2955677
39. Seyedmohammadhossein Hosseinian, Dalila B M M Fontes, Sergiy Butenko, Marco Buongiorno Nardelli, Marco Fornari, and Stefano Curtarolo. 2017. The Maximum Edge Weight Clique Problem: Formulations and Solution Approaches. In Optimization Methods and Applications, 217–237.
40. Gaoping Huang and Alexander J Quinn. 2017. BlueSky: Crowd-Powered Uniform Sampling of Idea Spaces. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition (C&C '17), 119–130. https://doi.org/10.1145/3059454.3059481
41. Piotr Indyk, Sepideh Mahabadi, Mohammad Mahdian, and Vahab S. Mirrokni. 2014. Composable core-sets for diversity and coverage maximization. In Proceedings of the ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, 100–108. https://doi.org/10.1145/2594538.2594560
42. David G. Jansson and Steven M. Smith. 1991. Design fixation. Design Studies. https://doi.org/10.1016/0142-694X(91)90003-F
43. Mark W. Jones, J. Andreas Bærentzen, and Milos Sramek. 2006. 3D distance fields: A survey of techniques and applications. IEEE Transactions on Visualization and Computer Graphics 12, 4: 581–599. https://doi.org/10.1109/TVCG.2006.56
44. Marius Kaminskas and Derek Bridge. 2016. Diversity, serendipity, novelty, and coverage: A survey and empirical analysis of beyond-accuracy objectives in recommender systems. ACM Transactions on Interactive Intelligent Systems 7, 1: 1–42. https://doi.org/10.1145/2926720
45. David R. Karger, Sewoong Oh, and Devavrat Shah. 2014. Budget-optimal task allocation for reliable crowdsourcing systems. Operations Research 62, 1: 1–24. https://doi.org/10.1287/opre.2013.1235
46. L. Robin Keller and Joanna L. Ho. 1988. Decision Problem Structuring: Generating Options. IEEE Transactions on Systems, Man and Cybernetics. https://doi.org/10.1109/21.21599
47. Aniket Kittur, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton. 2013. The future of crowd work. In Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW). https://doi.org/10.1145/2441776.2441923
48. Mark Klein and Ana Cristina Bicharra Garcia. 2015. High-speed idea filtering with the bag of lemons. Decision Support Systems 78: 39–50. https://doi.org/10.1016/j.dss.2015.06.005
49. Janin Koch, Nicolas Taffin, Michel Beaudouin-Lafon, Markku Laine, Andrés Lucero, and Wendy E. MacKay. 2020. ImageSense: An Intelligent Collaborative Ideation Tool to Support Diverse Human-Computer Partnerships. Proceedings of the ACM on Human-Computer Interaction 4, CSCW1: 1–27. https://doi.org/10.1145/3392850
50. Rafal Kocielnik and Gary Hsieh. 2017. Send Me a Different Message: Utilizing Cognitive Space to Create Engaging Message Triggers. In CSCW 2017, 2193–2207. https://doi.org/10.1145/2998181.2998324
51. Etienne Laliberté and Pierre Legendre. 2010. A distance-based framework for measuring functional diversity from multiple traits. Ecology 91, 1: 299–305.
52. Joel Lehman and Kenneth O. Stanley. 2011. Abandoning objectives: Evolution through the search for novelty alone. Evolutionary Computation 19, 2: 189–222. https://doi.org/10.1162/EVCO_a_00025
53. Todd I Lubart. 2001. Models of the creative process: Past, present and future. Creativity Research Journal 13, 3–4: 295–308.
54. Kurt Luther, Nathan Hahn, Steven P Dow, and Aniket Kittur. 2015. Crowdlines: Supporting synthesis of diverse information sources through crowdsourced outlines. In Third AAAI Conference on Human Computation and Crowdsourcing.
55. Laurens Van Der Maaten and Kilian Weinberger. 2012. Stochastic triplet embedding. In 2012 IEEE International Workshop on Machine Learning for Signal Processing, 1–6. https://doi.org/10.1109/MLSP.2012.6349720
56. Thomas W. Malone, Robert Laubacher, and Chrysanthos N. Dellarocas. 2009. Harnessing Crowds: Mapping the Genome of Collective Intelligence. https://doi.org/10.2139/ssrn.1381502
57. Manon Marinussen and Alwin de Rooij. 2019. Being Yourself to Be Creative: How Self-Similar Avatars Can Support the Generation of Original Ideas in Virtual Environments. In Proceedings of the 2019 Conference on Creativity and Cognition (C&C '19), 285–293. https://doi.org/10.1145/3325480.3325482
58. Justin Matejka, Michael Glueck, Erin Bradner, Ali Hashemi, Tovi Grossman, and George Fitzmaurice. 2018. Dream Lens: Exploration and visualization of large-scale generative design datasets. In Conference on Human Factors in Computing Systems - Proceedings 2018-April, 1–12. https://doi.org/10.1145/3173574.3173943
59. Leland McInnes, John Healy, and James Melville. 2018. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. Retrieved from http://arxiv.org/abs/1802.03426
60. Cai-Nicolas Ziegler, Sean M. McNee, Joseph A. Konstan, and Georg Lausen. 2005. Improving Recommendation Lists Through Topic Diversification. In Proceedings of the 14th International Conference on World Wide Web, 22–32.
61. Joke Meheus. 2000. Analogical Reasoning in Creative Problem Solving Processes: Logico-Philosophical Perspectives. In Metaphor and Analogy in the Sciences. https://doi.org/10.1007/978-94-015-9442-4_2
62. P. Michelucci and J. L. Dickinson. 2016. The power of crowds. Science 351, 6268: 32–33. https://doi.org/10.1126/science.aad6499
63. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In NIPS'13: Proceedings of the 26th International Conference on Neural Information Processing Systems, 3111–3119. https://doi.org/10.5555/2999792.2999959
64. Fabio Del Missier, Mimì Visentini, and Timo Mäntylä. 2015. Option generation in decision making: Ideation beyond memory retrieval. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2014.01584
65. Caitlin T. Mueller and John A. Ochsendorf. 2015. Combining structural performance and designer preferences in evolutionary design space exploration. Automation in Construction 52: 70–82. https://doi.org/10.1016/j.autcon.2015.02.011
66. Michael D Mumford, Wayne A Baughman, K Victoria Threlfall, Elizabeth P Supinski, and David P Costanza. 1996. Process-based measures of creative problem-solving skills: I. Problem construction. Creativity Research Journal 9, 1: 63–76.
67. Bernard A. Nijstad and Wolfgang Stroebe. 2006. How the Group Affects the Mind: A Cognitive Model of Idea Generation in Groups. Personality and Social Psychology Review 10, 3: 186–213. https://doi.org/10.1207/s15327957pspr1003_1
68. Bernard A Nijstad, Wolfgang Stroebe, and Hein F M Lodewijkx. 2002. Cognitive stimulation and interference in groups: Exposure effects in an idea generation task. Journal of Experimental Social Psychology 38, 6: 535–544. https://doi.org/10.1016/S0022-1031(02)00500-0
69. Jonas Oppenlaender and Simo Hosio. 2019. Design Recommendations for Augmenting Creative Tasks with Computational Priming. In Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia (MUM '19). https://doi.org/10.1145/3365610.3365621
70. Rebecca Passonneau. 2006. Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), 831–836.
71. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1532–1543. https://doi.org/10.3115/v1/D14-1162
72. Owen L. Petchey and Kevin J. Gaston. 2002. Functional diversity (FD), species richness and community composition. Ecology Letters 5, 3: 402–411. https://doi.org/10.1046/j.1461-0248.2002.00339.x
73. Owen L. Petchey and Kevin J. Gaston. 2002. Extinction and the loss of functional diversity. Proceedings of the Royal Society B: Biological Sciences. https://doi.org/10.1098/rspb.2002.2073
74. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2227–2237. https://doi.org/10.18653/v1/n18-1202
75. Carlo Ricotta and Laszlo Szeidl. 2006. Towards a unifying approach to diversity measures: bridging the gap between the Shannon entropy and Rao's quadratic index. Theoretical Population Biology 70, 3: 237–243.
76. C. Riedl, I. Blohm, J. M. Leimeister, and H. Krcmar. 2010. Rating scales for collective intelligence in innovation communities: Why quick and easy decision making does not get it right. In Thirty First International Conference on Information Systems.
77. Sebastian Risi, Sandy D. Vanderbleek, Charles E. Hughes, and Kenneth O. Stanley. 2009. How Novelty Search Escapes the Deceptive Trap of Learning to Learn. In GECCO'09.
78. Pararth Shah, Dilek Hakkani-Tür, Gokhan Tür, Abhinav Rastogi, Ankur Bapna, Neha Nayak, and Larry Heck. 2018. Building a conversational agent overnight with dialogue self-play. arXiv preprint arXiv:1801.04871.
79. Pao Siangliulue, Kenneth C Arnold, Krzysztof Z Gajos, and Steven P Dow. 2015. Toward Collaborative Ideation at Scale: Leveraging Ideas from Others to Generate More Creative and Diverse Ideas. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW '15), 937–945. https://doi.org/10.1145/2675133.2675239
80. Pao Siangliulue, Joel Chan, Steven P Dow, and Krzysztof Z Gajos. 2016. IdeaHound: Improving Large-scale Collaborative Ideation with Crowd-Powered Real-time Semantic Modeling. In UIST 2016 - Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16), 609–624. https://doi.org/10.1145/2984511.2984578
81. Pao Siangliulue, Joel Chan, Krzysztof Z Gajos, and Steven P Dow. 2015. Providing timely examples improves the quantity and quality of generated ideas. In Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition, 83–92. https://doi.org/10.1145/2757226.2757230
82. R. Sibson. 1973. SLINK: an optimally efficient algorithm for the single-link cluster method. The Computer Journal 16, 1: 30–34.
83. Ut Na Sio, Kenneth Kotovsky, and Jonathan Cagan. 2015. Fixation or inspiration? A meta-analytic review of the role of design examples on design processes. Design Studies 39: 70–99. https://doi.org/10.1016/j.destud.2015.04.004
84. Steven M. Smith. 2010. The Constraining Effects of Initial Ideas. In Group Creativity: Innovation through Collaboration. https://doi.org/10.1093/acprof:oso/9780195147308.003.0002
85. Paul T Sowden, Lucie Clements, Chrishelle Redlich, and Carine Lewis. 2015. Improvisation facilitates divergent thinking and creativity: Realizing a benefit of primary school arts education. Psychology of Aesthetics, Creativity, and the Arts 9, 2: 128. https://doi.org/10.1037/aca0000018
86. Ian F. Spellerberg and Peter J. Fedor. 2003. A tribute to Claude Shannon (1916-2001) and a plea for more rigorous use of species richness, species diversity and the "Shannon-Wiener" Index. Global Ecology and Biogeography 12, 3: 177–179. https://doi.org/10.1046/j.1466-822X.2003.00015.x
87. Andy Stirling. 2007. A general framework for analysing diversity in science, technology and society. Journal of the Royal Society Interface 4: 707–719. https://doi.org/10.1098/rsif.2007.0213
88. Victor J. Strecher, Saul Shiffman, and Robert West. 2005. Randomized controlled trial of a web-based computer-tailored smoking cessation program as a supplement to nicotine patch therapy. Addiction 100, 5: 682–688. https://doi.org/10.1111/j.1360-0443.2005.01093.x
89. Shweta Suran, Vishwajeet Pattanaik, and Dirk Draheim. 2020. Frameworks for collective intelligence: A systematic literature review. ACM Computing Surveys 53, 1: 1–36. https://doi.org/10.1145/3368986
90. Simon Taggar. 2002. Individual creativity and group ability to utilize individual creative resources: A multilevel model. Academy of Management Journal 45, 2: 315–330.
91. Omer Tamuz, Ce Liu, Serge Belongie, Ohad Shamir, and Adam Tauman Kalai. 2011. Adaptively learning the crowd kernel. arXiv preprint arXiv:1105.1033.
92. E Paul Torrance. 2018. Guiding creative talent. Pickle Partners Publishing.
93. Saúl Vargas, Linas Baltrunas, Alexandros Karatzoglou, and Pablo Castells. 2014. Coverage, redundancy and size-awareness in genre diversity for recommender systems. In RecSys 2014 - Proceedings of the 8th ACM Conference on Recommender Systems, 209–216. https://doi.org/10.1145/2645710.2645743
94. Sebastien Villeger, Norman W. H. Mason, and David Mouillot. 2008. New Multidimensional Functional Diversity Indices for a Multifaceted Framework in Functional Ecology. Ecology 89, 8: 2290–2301. https://doi.org/10.1890/07-1206.1
95. Roelof A J de Vries, Khiet P Truong, Sigrid Kwint, Constance H C Drossaert, and Vanessa Evers. 2016. Crowd-Designed Motivation: Motivational Messages for Exercise Adherence Based on Behavior Change Theory. In CHI 2016, 297–308. https://doi.org/10.1145/2858036.2858229
96. Meijuan Wang, Ning Hao, Yixuan Ku, Roland H. Grabner, and Andreas Fink. 2017. Neural correlates of serial order effect in verbal divergent thinking. Neuropsychologia 99: 92–100. https://doi.org/10.1016/j.neuropsychologia.2017.03.001
97. Daniel S Weld, Christopher H Lin, and Jonathan Bragg. 2015. Artificial Intelligence and Collective Intelligence. In Handbook of Collective Intelligence.
98. Tom Young, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria. 2018. Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine 13, 3: 55–75. https://doi.org/10.1109/MCI.2018.2840738
99. Lixiu Yu and Jeffrey V. Nickerson. 2011. Cooks or cobblers? Crowd Creativity through Combination. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1393. https://doi.org/10.1145/1978942.1979147
100. F Zenasni and T I Lubart. 2009. Perception of emotion, alexithymia and creative potential. Personality and Individual Differences 46, 3: 353–358. https://doi.org/10.1016/j.paid.2008.10.030
101. Xianda Zhou and William Yang Wang. 2018. MojiTalk: Generating Emotional Responses at Scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 1128–1137. https://doi.org/10.18653/v1/P18-1104

A Definitions of Prompt Selection Variables
Table 6: Independent variables used in the simulation and user studies to manipulate how prompts are shown to ideators.
Variable | Definition | Interpretation
Prompt Selection | None: no prompt, other than task instructions. Random: randomly selected phrase(s) from corpus. Directed: prioritized phrase(s) from corpus. | Selection algorithm for selecting phrases to include in prompts.
Prompt Count | Number of prompts {50, 100, 150, 200, 250, …, n_prompts}. | Indicates how many prompts are shown to generate new messages. This was only tested in the simulation study.
Prompt Size | Number of phrases per prompt {1, 2, 3, 4, 5}. | A prompt may contain ≥1 phrases. The prompts selected depend on Prompt Selection.

B Additional Definitions of Diversity Metrics
B.1 Thematic Analysis Method for Flexibility and Originality Metrics
Flexibility [85] measures how many unique ideas (conceptual categories) were generated, and originality [100] measures how infrequently each conceptual category occurs. These require expert annotation to identify distinct categories. We conducted a thematic analysis of ideated messages using the open coding of grounded theory [34] to derive categories. These categories were added, reduced, merged, and refined by iteratively assessing the messages. We then consolidated the categories into themes using affinity diagramming [8]. This was done separately for the different prompt techniques. The thematic analysis was primarily performed by one co-author researcher in regular discussion with co-authors who are experienced HCI researchers with experience in Amazon Mechanical Turk experiments and research on health behavior change. To calculate inter-rater reliability, a random 10% subset of messages was coded independently by another co-author, giving a Krippendorff's alpha with MASI distance [70] of $\alpha = 0.82$, which indicated good agreement. Note that while thematic analyses and affinity diagramming are popular methods to interpret qualitative data, we use them here for data pre-processing. Finally, we calculate the flexibility and originality measures based on the coded categories (fine-grained) and themes (coarser) described in Table 7.

Table 7: Metrics of creativity of ideation based on categories and themes derived from a thematic analysis of generated ideas. Metrics are shown for categories, but are the same for themes.
Metric | Definition | Interpretation
Messages Flexibility | Number of categories coded: $\sum_c [f_c > 0]$ | This counts how many unique categories/themes were observed in messages for each Prompt Technique. A higher count indicates qualitatively more diversity.
Messages Originality | Category originality $o_c = 1 - f_c / N_p$ | How original each category is, where $f_c$ is the frequency of category $c$ and $N_p$ is the number of messages with Prompt Technique $p$.
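A minimal sketch of the Table 7 measures; it assumes one coded category per message for simplicity, whereas a coding scheme could assign several per message.

```python
from collections import Counter

def flexibility_and_originality(coded_categories):
    """coded_categories: one coded category per message for a prompt technique.
    Flexibility = number of distinct categories observed; originality of a
    category c is o_c = 1 - f_c / N_p, with f_c its frequency and N_p the
    number of messages for the prompt technique (Table 7)."""
    counts = Counter(coded_categories)
    n_p = len(coded_categories)
    flexibility = len(counts)
    originality = {c: 1 - f / n_p for c, f in counts.items()}
    return flexibility, originality

flex, orig = flexibility_and_originality(
    ["Pull-ups", "Walking", "Pull-ups", "Set daily exercise goal"])
# flex == 3; orig["Pull-ups"] == 0.5
```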
B.2 Intra-Prompt Diversity Metrics based on Embedding Distances
Table 8: Metrics of prompt diversity for all phrases in a single prompt.
Metric | Definition | Interpretation
Intra-Prompt Mean Phrase Distance | Intra-prompt mean phrase-phrase distance: $\frac{1}{g}\sum_{i,j \in \mathrm{Prompt}} d(\mathbf{x}_i^P, \mathbf{x}_j^P)$ | Indicates how similar (consistent) all phrases are to one another in the same prompt. Prompts with better consistency would be easier to understand and use.
Prompt Phrase Chamfer Distance | $\frac{1}{g}\sum_{i \in \mathrm{Prompt}} \min_{j \neq i} d(\mathbf{x}_i^P, \mathbf{x}_j^P)$ | Average distinctiveness of phrases in the prompt.

C Definitions of Prompt Adoption Metrics
Table 9: Metrics indicating how much of the prompt text and concepts are adopted into the ideations.
Metric | Definition | Interpretation
Prompt Recall | $\frac{1}{g}\sum_{\mathrm{Phrase} \in \mathrm{Prompt}} \frac{n_{\mathrm{word} \in \mathrm{Ideation} \,\wedge\, \mathrm{word} \in \mathrm{Phrase}}}{n_{\mathrm{word} \in \mathrm{Phrase}}}$ | The proportion of words from phrases that were used in the ideated message.
Prompt Precision | $\sum_{\mathrm{Phrase} \in \mathrm{Prompt}} \frac{n_{\mathrm{word} \in \mathrm{Ideation} \,\wedge\, \mathrm{word} \in \mathrm{Phrase}}}{n_{\mathrm{word} \in \mathrm{Ideation}}}$ | The proportion of ideated message words that were from phrases in the shown prompt.
Prompt-Ideation Distance | Prompt-Ideation distance $d(\mathbf{x}_i^{Pr}, \mathbf{x}_j^{I})$ | Indicates how similar the written ideation message is to the prompt, as a measure of how the phrase(s) ideas were adopted.
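A minimal sketch of the Table 9 word-overlap metrics; the lowercase whitespace tokenization is a simplifying assumption, as the paper does not specify its tokenization scheme.

```python
def prompt_adoption(phrases, ideation):
    """Compute Prompt Recall and Prompt Precision from Table 9 over a prompt
    (list of phrase strings) and an ideated message string."""
    idea_words = set(ideation.lower().split())
    phrase_sets = [set(p.lower().split()) for p in phrases]
    g = len(phrases)
    # Prompt Recall: mean over phrases of the fraction of phrase words reused.
    recall = sum(len(ws & idea_words) / len(ws) for ws in phrase_sets) / g
    # Prompt Precision: per-phrase overlaps relative to ideation length.
    precision = sum(len(ws & idea_words) for ws in phrase_sets) / len(idea_words)
    return recall, precision

r, p = prompt_adoption(["snooze button repeatedly isn't exercise"],
                       "Hitting the snooze button repeatedly is not exercise")
```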
D Pairwise Embedding Distances of Phrases and Messages
These figures show the distribution of pairwise distances based on the embeddings of phrases and messages.
Figure 10: Distribution of pairwise distances between the extracted phrases (N=3,666). The pairwise distances ranged from Min=0.057 to Max=0.586, Median=0.430, inter-quartile range 0.394 to 0.460, SD=0.047.
Figure 11: Distribution of pairwise distances between the messages (N=250) ideated in the pilot study with no prompting (None). The pairwise distances ranged from Min=0.169 to Max=0.549, Median=0.405, inter-quartile range 0.376 to 0.432, SD=0.043.

E Results of Characterization Simulation Study
We created 50 simulations for each prompt configuration to get a statistical estimate of the performance of each prompt selection technique. Figure 12 shows the results from the simulation study. Error bars are extremely small, and not shown for simplicity. Span and Sparseness results are not shown, but are similar to Mean Distance. Note that we computed the mean of MST edge distances instead of the sum, which is independent of the number of prompts. In general, Directed Diversity selects prompts to be more diverse for fewer prompts (smaller prompt count), but after a threshold, Random selection can provide better diversity. This demonstrates that directing is useful for small crowd budgets. Note that the actual threshold depends on the corpus and application domain. We found an interaction effect where single-phrase prompts benefit most from Directed Diversity: at low prompt counts, Directed(1) has the highest diversity, followed by Directed(3) and Random(3), with Random(1) lowest.
Figure 12: Influence of prompt selection technique, prompt size, and prompt count on various distance and diversity metrics. Higher values for all metrics indicate higher diversity. Span and Sparseness results are not shown, but are similar to Mean Distance. Note that we computed the mean of MST edge distances instead of the sum, which is independent of the number of prompts. Error bars are extremely small, and not shown for simplicity.

F Factor Loadings from Factor Analysis in User Studies
Table 10: The rotated factor loadings of the factor analysis on metrics of prompt distance and consistency. Factors explained 73.6% of the total variance. Bartlett's Test for Sphericity to indicate common factors was significant (χ²=5810, p<.0001).
Metric | Prompt Distance | Prompt Consistency
Phrase Minimum Pairwise Distance | 0.95 | 0.08
Prompt Minimum Pairwise Distance | 0.95 | 0.05
Intra-Prompt Mean Phrase Distance | -0.05 | -0.69

Table 11: The rotated factor loadings of the factor analysis on metrics of perceived helpfulness of prompts. Factors explained 68.9% of the total variance. Bartlett's Test for Sphericity to indicate common factors was significant (χ²=2575, p<.0001).
Metric | Prompt Quality | Prompt Unexpectedness | Prompt Relevance | Prompt Understandability
Phrase Helpfulness rating | 0.85 | -0.13 | 0.2 | 0.15
Phrase Relevance to Task (Motivation) rating | 0.89 | -0.2 | 0.23 | 0.14
Phrase Understanding rating | 0.62 | -0.21 | 0.27 | 0.47
Phrase Relevance to Domain (Exercise) rating | 0.59 | -0.14 | 0.54 | 0.18
Phrase Unexpectedness rating | -0.11 | 0.63 | -0.06 | -0.05

Table 12: The rotated factor loadings of the factor analysis on metrics of prompt adoption. Factors explained 65.1% of the total variance. Bartlett's Test for Sphericity to indicate common factors was significant (χ²=1315, p<.0001).
Metric | Phrase Adoption
Prompt Precision | 0.82
Prompt Recall | 0.67
Prompt-Ideation Distance | -0.92

Table 13: The rotated factor loadings of the factor analysis on diversity metrics of generated messages. Factors explained 75.2% of the total variance. Bartlett's Test for Sphericity to indicate common factors was significant (χ²=2676, p<.0001).
Metric | Ideation Dispersion | Ideation Evenness
Message Remote-clique | 0.99 | 0.16
Message Sparseness | 0.99 | 0.16
Message Span | 0.77 | -0.05
Message MST Dispersion | 0.29 | 0.96
Message Chamfer Distance | -0.02 | 0.91
Message Entropy | 0.01 | 0.3

Table 14: The rotated factor loadings of the factor analysis on metrics of perceived quality of the generated messages. Factors explained 80.9% of the total variance. Bartlett's Test for Sphericity to indicate common factors was significant (χ²=5810, p<.0001).
Metric | Ideation Informative-Helpfulness | Ideation Quality
Informativeness rating | 0.8 | 0.39
Helpfulness rating | 0.66 | 0.65
Motivation rating | 0.39 | 0.79

Table 15: The rotated factor loadings of the factor analysis on metrics of group ranking of the generated messages. Factors explained 93.9% of the total variance. Bartlett's Test for Sphericity to indicate common factors was significant (χ²=366, p<.0001). For usability, "unrepetitive" was measured with the word "repetitive" in the survey.
Metric | Ideations Unrepetitive | Ideations Informative | Ideations Motivating
Sum(Most Unrepetitive (Rank=1)) | 1.32 | 0.40 | 0.10
Sum(Most Informative (Rank=1)) | 0.48 | 0.72 | 0.05
Sum(Least Unrepetitive (Rank=3)) | -0.73 | -0.50 | 0.02
Sum(Least Informative (Rank=3)) | -0.26 | -0.89 | -0.03
Sum(Most Motivating (Rank=1)) | 0.17 | -0.04 | 1.00
Sum(Least Motivating (Rank=3)) | 0.05 | -0.06 | -0.54

Table 16: The rotated factor loadings of the factor analysis on metrics of message distinctness. Factors explained 74.8% of the total variance. Bartlett's Test for Sphericity to indicate common factors was significant (χ²=1022, p<.0001).
Metric | Ideation Distance
Ideation Min Pairwise Distance | 0.86
Ideation Mean Pairwise Distance | 0.86

Table 17: The rotated factor loadings of the factor analysis on metrics of ideation effort. Factors explained 59.0% of the total variance. Bartlett's Test for Sphericity to indicate common factors was significant (χ²=1008, p<.0001).
Metric | Ideation Self-Quality | Ideation Ease
Message Creativity Self-Rating | 0.76 | 0.19
Message Motivation Self-Rating | 0.63 | 0.55
Message Writing Ease | 0.76 | 0.19
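The rotated loadings above could be reproduced in spirit with a varimax-rotated factor analysis; the sketch below uses scikit-learn with random placeholder data, and our actual analysis software and settings may differ.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Rows = ideation messages, columns = metrics (cf. the six metrics of Table 13).
rng = np.random.default_rng(0)
X = rng.normal(size=(250, 6))                      # placeholder metric matrix

# Fit a 2-factor model with varimax rotation, as used for the rotated loadings.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
loadings = fa.components_.T                        # shape (n_metrics, n_factors)
```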
G Survey Screenshots in User Studies

G.1 Ideation User Study

Figure 13: The instructions in the Ideation User Study for the None condition.
Figure 14: For the None condition, users are asked to write a message that is at least one to three sentences long.
Figure 15: The instructions of the Ideation User Study for the Random(1) and Directed(1) conditions.
Figure 16: Random(1) and Directed(1) prompts consisted of one phrase per prompt. Note that the selected phrase for each trial will be different.
Figure 17: The instructions of the Ideation User Study for the Random(3) and Directed(3) conditions.
Figure 18: Random(3) and Directed(3) prompts consist of three phrases per prompt. Note that the selected phrases for each trial will be different.
Figure 19: Ideators are asked to evaluate the message they wrote by providing Likert scale ratings for many different factors along with a short reflection about the message writing process. The screenshot above shows the evaluation screen for Directed(3).

G.2 Validation User Studies

Figure 20: The instruction for individual message rating tasks.
Figure 21: Validators rated a randomly selected message on a Likert scale and gave a justification.
Figure 22: The instruction for group message ranking tasks.
Figure 23: Validators were asked to rank groups of messages for motivation, informativeness and repetitiveness. Note that while we used the word "repetitive" for usability in the survey, we analyzed this dependent variable as "unrepetitive" to be consistent with other diversity metrics.
Figure 24: Validators were asked to rate the difference of two messages in a message-pair.

H Examples of Prompts and Messages Written by Ideators

Table 18: Messages generated in our study and the phrase prompt(s) that were shown to ideators.

Random(1). Phrase shown: "daily club swim workout". Message written: "Do you want a way to train your whole body? Try a swim workout! You can even join a club to help challenge you to reach your goals!"
Random(1). Phrase shown: "like a barrier of insecurity". Message written: "Get out and try a new exercise today. Don't let not doing it be a barrier or insecurity. Even pro athletes have to try new exercises for the first time."
Directed(1). Phrase shown: "snooze button repeatedly isn't exercise". Message written: "Reminder that hitting the snooze button repeatedly is NOT considered an exercise! Make sure to wake up first thing, and get your legs moving!"
Directed(1). Phrase shown: "next set of stats". Message written: "Not happy with what you see on the scale or the number of calories you burned? Don't let one day's data ruin your mood. Give it time and you'll see better results if you keep at it!"
Random(3). Phrases shown: (1) "hard workout may feel", (2) "multiple exercise interventions in terms", (3) "exercise program for clients plagued". Message written: "Hard workouts may feel uncomfortable. However, those carry the most enjoyment and success for you!"
Random(3). Phrases shown: (1) "religious institution offers exercise classes", (2) "workout program because people", (3) "other forms of water aerobics". Message written: "Your religious institution offers exercise classes and your local pool offers water aerobics. Exercise with people for motivation!"
Directed(3). Phrases shown: (1) "in the risk of diabetes", (2) "for the development of diabetes", (3) "from complications of diabetes". Message written: "Exercising will help you stay in shape.
It will prevent health issues in the future and it can stop the risk of developing diabetes."
Directed(3). Phrases shown: (1) "book and workout videos", (2) "mechanics and workout plans", (3) "exercise tapes or videos". Message written: "Watching tapes and videos are good ways to try out new exercises. Follow along and impress your loved ones with your new moves!"

I Thematic Analysis of Messages

Table 19: Themes and categories identified with the qualitative coding of ideated messages (Theme: Categories).

Ambiguous Benefits: Ambiguous benefits
Anecdote: Anecdote
Appeal to "Obvious" Knowledge: Appeal to "obvious" knowledge
Appeal by Cohorts: Appeal to children | Appeal to older ages | Appeal to overweight
Appeal to Fear: Appeal to fear
Appeal to Guilt: Appeal to guilt
Appeal to Shame: Appeal to shame
Appeal to Social Approval: Appeal to social approval
Barrier to Ability: Barrier to ability
Barrier to Boredom: Encourage exercise variety | Prompt to try something new | Tips to make exercise less boring/more fun
Barrier to Comfort: Barrier to comfort
Barrier to Cost: Cheap exercises | Lower healthcare/insurance costs
Barrier to Effort: It will get easier | Recommending less effortful routines or exercises | Take a short break then carry on
Barrier to Energy: Barrier to energy
Barrier to Enjoyment: Prompt to research fun exercise | Recommending enjoyable activity
Barrier to Motivation: Barrier to motivation
Barrier to Resources: At home exercises | No equipment needed | No gym available
Barrier to Self-Efficacy: Don't feel bad if confused | Don't need certificate/qualifications | Improving self-confidence | Recommending exercises within ability
Barrier to Time: Barrier to time
Call to Action: Call to action
Call to Authority: Citing health experts | Unspecified authority
Collective Societal Benefits: Collective societal benefits
Equipment: Bench press | Exercise machine | Exercise machines (unspecified) | Rubber exercise tubing/band | Swimming gear | Treadmill | Vertical or horizontal press | Work - standing desk
Exercise Suggestion: Aerobic exercises | Aerobics | Anaerobic exercise | Biking | Body weight exercises | Cardiovascular exercise | Climbing | Competitive cycling | Dance | Diving | Double clean | Exercise through chores | Handstand | High intensity exercise | Hot yoga | Internal rotation workouts | Iron Yoga | Jump on bed | Jumping jacks | Lift weights | Lifting luggage | Meditation | Pull-ups | Pushing kids on swings | Push-ups | Resistance exercises | Ring Pull-ups | Running | Seated leg-raises | Sit-ups | Snatches | Sports | Squats | Strength training | Strenuous/moderate/vigorous exercise | Stretching | Swimming | Tennis | Using stairs | Vertical and horizontal presses | Volleyball | Walk your dog | Walking | Water exercises | Work/desk exercise | Yoga
Fear of Injury: Don't overexert yourself | Recommending exercises to avoid injury | Research good techniques to avoid injury | Take breaks | Tips to avoid injury for outdoor activities
Food and Drink: Avoid steroids/pills/drugs | Avoid unhealthy food | Exercise supplements | Exercise to avoid medication/drugs | Food recommendation | Staying hydrated | Stress eating advice
Future Life: Improve quality of life | Live longer
Goals: Journaling to track your goal | Set actionable goals | Set daily exercise goal | Set goals based on health recommendations | Set unspecified goal | Set weight goal | Tips to reach goals | Visualizing meeting goals
Health Advice: Advice for diabetics | See a doctor if you are worried
Health Benefits: Better mobility | Bone health | Cardio health | Fluid regulation | Help with foot problems | Lowers blood pressure | More stamina | Pain/strain relief | Slows aging process | Strong immune system | Unspecified health benefit
Health Risks: Arthritis | Breathing difficulty | Cancer | Depression | Diabetes | Heart disease | Hernias | Obesity | Unspecified health risk
Improving Appearance: "Look better" | Beach ready | Chest and back | Improved posture | Look more appealing to potential partners | Nice butt | Six-pack abs
Inspirational Phrase: Inspirational phrase
Lack of Knowledge: Research exercise routines | Research nutrition | Study exercise form
Lack of Social Support: Exercise with an expert | Exercise with friends | Family want you to be healthy | Find places to support you | Impress your doctor | Interacting with others | Join a health club | Join an exercise class | Meeting new people | Playing/exercising with children | Social competition
Mental Health Benefits: "Feeling" better | Happier | Improved sleep | Improves cognitive abilities | Lower depression | Lowers anxiety | Relaxing | Release endorphins | Self-esteem | Stress reduction | Unspecified mental health benefits
Muscle Building: Biceps | Core strength | Leg muscles | Physically stronger | Shoulder muscles | Triceps | Unspecified muscle building | Upper body strength
Overcoming Beliefs: Changing mindset
Overcoming Family Obligations: Overcoming family obligations
Overcoming Self-Consciousness: Overcoming self-consciousness
Push to Do More: Prompt to increase | Prompt to steadily increase
Push to Start: Prompt to start exercising | Prompt to stop sedentary lifestyle
Rewards: Prizes from exercise competitions | Reward with food | Reward with new clothes | Reward with new exercise equipment | Unspecified reward
Self-Empowerment: Self-empowerment
Self-Forgiving: Self-forgiving
Self-Reflection: Self-reflection
Social Comparison: Avoid social comparisons | Downwards social comparison | Upwards social comparison
Specific Locations: Around the neighbourhood | At desk exercises | At home | At school | Beach | Church/community centre | Front yard | Gym | Outdoors | Park | Travelling/airport | Walk to train station
Specificity: Appropriate exercises (e.g. "Try what's best for you") | Developing habits | Even small amounts of exercise | Exercise daily | Exercise regularly | Follow exercise plan/routine | Specific amount/distance to exercise | Specific days a week to exercise | Specific minutes to exercise
Time to See Results: Dedicate time | Fast results | Promise of results | Tips to progress faster
Time to Exercise: Anytime | End of the day | Morning exercise | Spring | Summer/hot weather
Use of Technology: Exergame | Experts review your exercises from an app | Follow videos | Listen to music/podcast | Reflect on progress | Use apps for exercise tips | Use apps for workout schedules | Use apps to track progress | Watch TV while exercising
Weight Loss: Aid digestion | Boost metabolism | Burning calories | Burning cellulite | Maintaining weight | Slimming down

J Linear Mixed Models and statistical analysis results of Prompt Creativity, Prompt-Ideation Mediation, and Ideation Diversity

Table 20: Statistical analysis of responses due to effects (one per row), as linear mixed effects models, all with Participant as random effect and Prompt Selection, Prompt Size, and their interaction as fixed effects. a) Model for the manipulation check analysis of how prompt configurations affect perceived prompt creativity (RQ1.2); b) model for the mediation analysis of how prompt configurations affect ideation effort (RQ2.2). n.s. means not significant at p>.01. p>F is the significance level of the fixed effect ANOVA.
R2 is the model's coefficient of determination, indicating goodness of fit.

a) Prompt Creativity Manipulation Check (RQ1.2). Model (Participant as random effect): Prompt Selection + Prompt Size + Selection × Size.
Response                   p>F (Selection; Size; Selection × Size)   R2
Prompt Unexpectedness      <.0001; n.s.; n.s.                        .523
Prompt Understandability   .0008; <.0001; .0316                      .500
Prompt Relevance           <.0001; <.0001; n.s.                      .450
Prompt Quality             <.0001; <.0001; n.s.                      .572

b) Prompt-Ideation Effort Mediation Analysis (RQ2.2). Model (Participant as random effect): Prompt Selection + Prompt Size + Selection × Size.
Response           p>F (Selection; Size; Selection × Size)   R2
Ideation Fluency   <.0001; <.0001; .0042                     .542
Ideation Ease      <.0001; n.s.; n.s.                        .546
Prompt Adoption    <.0001; n.s.; n.s.                        .575

Table 21: Statistical analysis and results of mediation effects (RQ2.3) of how prompt configurations (a) and perceived prompt creativity (b) affect ideation diversity. See the Table 20 caption to interpret the tables. Positive and negative numbers are estimated model coefficients indicating how much each fixed effect influences the response.

a) Prompt Distance to Ideation Mediation. Model (Participant as random effect): Prompt Mean Distance + Prompt Min Distance + Prompt Phrase Chamfer Distance + Intra-Prompt Phrase Mean Distance.
Ideation Mean Pairwise Distance: coefficients +0.18, +0.06, +0.01, +0.02; p>F <.0001, .0205, n.s., <.0001; R2 = .399
Ideation Minimum Pairwise Distance: coefficients +0.10, +0.15, +0.06, +0.02; p>F .0241, <.0001, .0115, .0041; R2 = .315

b) Prompt Creativity to Ideation Mediation. Model (Participant as random effect): Prompt Unexpectedness + Prompt Understandability + Prompt Relevance + Prompt Quality.
Ideation Mean Pairwise Distance: coefficients -0.0014, +0.0001, -0.0034, +0.0018; p>F .0315, n.s., .0020, n.s.; R2 = .367
Ideation Minimum Pairwise Distance: coefficients +0.0024, -0.0026, -0.0056, +0.0052; p>F .0087, n.s., .0003, .0431; R2 = .272
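The models in Tables 20-23 are ordinary linear mixed-effects fits and can be reproduced with off-the-shelf tools. A minimal Python sketch (our illustration only; the column names and synthetic data below are hypothetical, and the authors' analysis was not necessarily run this way):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Hypothetical long-format data: one row per ideated message.
df = pd.DataFrame({
    "participant": np.repeat(np.arange(20), 5),
    "selection": rng.choice(["Random", "Directed"], size=100),
    "size": rng.choice([1, 3], size=100),
    "distance": rng.normal(0.4, 0.05, size=100),
})
# Participant as random intercept; Prompt Selection, Prompt Size, and
# their interaction as fixed effects, mirroring the reported models.
model = smf.mixedlm("distance ~ selection * size", df,
                    groups=df["participant"])
print(model.fit().summary())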
Table 22: Statistical analysis of how prompt selection influences ideation diversity defined by different metrics (RQ3): a) individual diversity, b) collective diversity, and c) thematic diversity. See the Table 20 caption for how to interpret the tables.

a) Ideation Individual Diversity. Model (Participant as random effect): Prompt Selection + Prompt Size + Selection × Size.
Response                         p>F (Selection; Size; Selection × Size)   R2
Ideation Mean Pairwise Distance  <.0001; n.s.; .0005                       .361
Ideation Min Pairwise Distance   <.0001; n.s.; n.s.                        .296
Ideation Self-Quality            n.s.; .0292; .0152                        .570

b) Ideation Collective Diversity. Model (Sample as random effect): Prompt Selection + Prompt Size + Selection × Size.
Response             p>F (Selection; Size; Selection × Size)   R2
Ideation Dispersion  <.0001; n.s.; <.0001                      .873
Ideation Evenness    <.0001; .0030; .0061                      .984

c) Ideation Collective Diversity (Thematic Coding). Model (Sample as random effect): Prompt Selection.
Response              p>F      R2
Category Flexibility  <.0001   .979
Category Originality  <.0001   .933
Theme Flexibility     <.0001   .911
Theme Originality     <.0001   .396

Table 23: Statistical analysis of how prompt selection influences ideation creativity as validated by different methods (RQ3.1): a) individual rating, b) collective ranking, and c) collective pairwise rating. See the Table 20 caption for how to interpret the tables.

a) Individual Rating Validation. Model (Participant + Ideation as random effects): Prompt Selection.
Response                          p>F      R2
Ideation Informative-Helpfulness  <.0001   .559
Ideation Quality                  n.s.     .467

b) Collective Ranking Validation. Model: Prompt Selection.
Response                p>F      R2
Ideations Unrepetitive  <.0001   .284
Ideations Informative   <.0001   .340
Ideations Motivating    .0426    .028

c) Collective Pairwise Rating Validation. Model: Prompt Selection.
Response           p>F      R2
Difference Rating  <.0001   .279

K Investigating Confound of Prompt Understandability on Ideation Diversity

Having found that prompt understanding difficulty is correlated with ideation diversity, we investigated the alternative hypothesis that the difficulty of interpreting the prompts was a key reason for improved ideation, because of increased ideation determination, rather than the content diversity in phrases due to the prompt selection technique. We argue that the increase in ideation diversity due to Directed Diversity is evidenced by increased perceived diversity ratings from validators and the higher number of idea categories from the thematic analysis. This shows that Directed Diversity did stimulate more diverse ideas due to some knowledge transfer from prompt to ideations, albeit with difficulty. We identify three more sources of evidence next.
First, we qualitatively analyzed ideation rationales and found that while prompts could be rated hard to understand or irrelevant, participants still adopted some ideas. Ideators cherry-picked parts that were usable or conceived tangential ideas: e.g., P1 read "orthopaedic surgeons and exercise specialists" and decided to "cut out the bit about surgeons… I focused on the idea of specialists…"; P2 read "ballistic stretch uses vigorous momentum", commented that "this isn't a phrase that I'm familiar with", yet could write about stretching: "Stretch, breathe, and feel mindful."
Second, we quantitatively analyzed the Ideation Mean Pairwise Distance for prompts that participants understood (Phrase Understanding factor > 0). Table 24a describes the statistical analysis of the linear mixed effects model.
We found that although distance was slightly higher when ideators understood phrases less, regardless of understanding, ideations from Directed(1) prompts had higher distances than ideations from Random(1) prompts (Figure 25, left). The effect due to Prompt Type was larger than that due to Phrase Understanding. Furthermore, we analyzed whether difficulty in understanding might manifest as slower ideation (more thinking time) that leads to better diversity, but we did not find a correlation between phrase understanding and ideation speed (ρ = .046, p = n.s.), and we found the opposite effect: slower ideations led to lower distances (Table 24a and Figure 25, right). These results suggest that prompt selection is a primary factor.
Third, we investigated whether Directed Diversity helped to stimulate ideas closer to the prompts than would occur naturally without prompts (None) or accidentally with Random prompts. We analyzed this by calculating the prompt-ideation distance between Directed prompts and their corresponding ideated messages, and their closest None and Random messages. Table 24b describes the statistical analysis of the linear mixed effects model. Figure 26 shows that the directed ideations were closest to the prompts, indicating the efficacy of Directed Diversity in transferring knowledge for ideation diversity.

Table 24: Statistical analysis of a) how ideators' understanding of phrases influences ideation diversity and b) how similar Directed ideations are to their prompts compared to other None and Random messages.

a) Ideation Individual Diversity. Response: Ideation Mean Pairwise Distance. Model (Participant as random effect): Prompt Selection + Prompt Size + Selection × Size + (Phrase Understanding > 0) + Selection × (Understanding > 0) + (Log(Ideation Speed) > Median) + Selection × (Speed > Median). p>F: <.0001; n.s.; .0001; .0074; n.s.; .0043; .0213. R2 = .289.
b) Prompt-Ideation Closeness. Response: Prompt-Message Distance. Model (Participant as random effect): Message Type. p>F: <.0001. R2 = .047.

Figure 25: Results of computed individual diversity from ideations for different prompt configurations, for (left) prompts that users understood (>0) or did not, and (right) ideations that were fast or slow.
Figure 26: Results of prompt-message distance (how dissimilar a prompt is from a message), comparing different messages with respect to Directed(3) prompts.

L Examples of Message-Group Ranking

The factors of message-group ranking were derived from the sum of rankings (for each of the three conditions) per validator over their five ranking trials (see the factor loadings in Table 15). Therefore, these factors reflect how a validator tended to rank the message-groups of each condition. The following table shows examples of the factors and the corresponding message-group samples.

Table 25: Examples of the factors with low (< Median) and high (≥ Median) scores for "Ideations Unrepetitive" and "Ideations Informative".

Unrepetitive: High / Informative: High. Example message-group of 5 ideations:
• Exercise can help you have really good sleep.
• Why don't you try something new? Shake it up a little? Maybe lift a few small weights, or add in some squats - variety keeps things interesting.
• You have 24 hours in a day-- think about how much time you spend on social media or doing something that's not going to benefit you in the long run and use that time to workout by prioritizing your health!
• Go for the goal, do not stop, do not think you cannot do it. YOU CAN!
• Summer is coming up and you want to look good when you are outside. Exercising at a health club is a good way to meet other people. Have a friend to work out with you and have each other motivate each other.

Unrepetitive: High / Informative: Low. Example message-group of 5 ideations:
• Just get moving. It's that simple.
• Your dog is bored. Take him for a walk! It's good for both of you and he'll be thrilled!
• Work out more. You will feel and look better. You will get more toned.
• Exercising can improve your cardio health, thus helping you to live a more fulfilling life.
• Start exercising more! You'll improve your mood and boost your self confidence. You'll feel great!

Unrepetitive: Low / Informative: High. Example message-group of 5 ideations:
• Switch off an air conditioner while working out. Let the sweat out, and burn some calories.
• Not happy with what you see on the scale or the number of calories you burned? Don't let one day's data ruin your mood. Give it time and you'll see better results if you keep at it!
• Sleep is when the body recovers and is very important. Rest early and run tomorrow!
• Overcome your anger and your fear by going to the gym and working out!
• Always stretch so that you perform at your best. You can do it!

Unrepetitive: Low / Informative: Low. Example message-group of 5 ideations:
• Exercise helps build strong muscles as well as making your body more flexible. You will reduce your risk of disease and injury by keeping up with your program.
• The first page of every book is the hardest to grasp, the first drink tastes the most sour and the first minute of every exercise is the hardest. All things get easier as you press on.
• Walking to the train station is better as it gets you more active. Avoid lifts to the train station.
• Keep exercising to keep your mind off difficult personal issues, like college admissions.
• By using the proper squat position, you can train muscles that take pressure off of your knee and back to help with pain in both areas.
ai_researcher
5
DARWIN_Series_Domain_Specific_Large_Language_Models_for_Natural_Science.pdf
arXiv:physics/0701324v3 [physics.plasm-ph] 24 Oct 2007

The exact Darwin Lagrangian

Hanno Essén*
Department of Mechanics, KTH, SE-100 44 Stockholm, Sweden
(Dated: 2007 July 25, corrections August)

Darwin (1920) noted that when radiation can be neglected it should be possible to eliminate the radiation degrees-of-freedom from the action of classical electrodynamics and keep the discrete particle degrees-of-freedom only. Darwin derived his well known Lagrangian by series expansion in v/c keeping terms up to order (v/c)². Since radiation is due to acceleration the assumption of low speed should not be necessary. A Lagrangian is suggested that neglects radiation without assuming low speed. It cures deficiencies of the Darwin Lagrangian in the ultra-relativistic regime.

PACS numbers: 03.50.De, 11.10.Ef

When radiation can be neglected the Lagrangian of classical electrodynamics, putting β = v/c, can be written

L = Σ_a { −m_a c² √(1 − β_a²) + (e_a/2) [β_a · A(r_a) − φ(r_a)] }.  (1)

In 1920 Darwin [1] expanded the Liénard-Wiechert potentials to second order in β = v/c and thus found that

φ(r_a) = Σ_{b≠a} e_b / |r_a − r_b| = Σ_{b≠a} e_b / r_{ba},  (2)

and (hats are used for unit vectors)

A(r_a) = Σ_{b≠a} e_b [β_b + (β_b · r̂_{ba}) r̂_{ba}] / (2 r_{ba})  (3)

give the correct Lagrangian to this order. More recent derivations can be found in a few textbooks [2-4]. In particular Jackson [4] notes that using the Coulomb gauge (∇ · A = 0) makes the electrostatic Coulomb potential φ exact and moves all approximation to the vector potential A, which obeys the inhomogeneous wave equation with the transverse (divergence free) current as source. The Darwin Lagrangian results when the term c⁻² ∂²/∂t² in the d'Alembert operator is neglected, so that the equation becomes a Poisson equation.

The Darwin Lagrangian has been shown to be a unique approximately relativistic Lagrangian (Woodcock and Havas [5], Kennedy [6]). It can be derived from the Fokker-Wheeler-Feynman action-at-a-distance theory (Anderson and Schiminovich [7]), and it is useful in various fundamental studies of electrodynamics [8-11]. The magnetic interaction described by the Darwin Lagrangian is essential in relativistic many-electron calculations, as noted by Breit and others [12-15]. It has found applications in nuclear physics [16, 17], and especially in plasma physics: for numerical simulation [18-22], thermodynamics and kinetics [23-27], as well as fundamental theory [28-30]. Barcons and Lapiedra [31] noted that the Darwin approach is not valid for a relativistic plasma and therefore used a different approach to its statistical mechanics.

Corrections to the Darwin Lagrangian have been discussed. Since a system of particles with identical charge to mass ratio does not dipole radiate, a higher order expansion should be meaningful for such systems [32-34]. To that order, however, acceleration inevitably enters and must be handled in some way. Others have argued that since radiation is due to acceleration, v/c expansion is irrelevant, and further that radiation can be negligible even if the particle speeds are considerable (Trubnikov and Kosachev [35], Frejlak [36]). We will pursue that lead here.

One frequently encounters the statement that the Darwin approach neglects retardation. This may be due to the fact that the nowadays best known, elegant derivation by Jackson [4] hides the complications due to retardation. Nevertheless it is wrong.
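For readers who want to experiment with the Darwin potentials (2) and (3), the following is a minimal numerical sketch (Gaussian units; our own illustration, with self-chosen array conventions and function names):

import numpy as np

def darwin_potentials(a, r, beta, e):
    """Coulomb potential phi (Eq. 2) and Darwin vector potential A (Eq. 3)
    at the position of particle a, summed over all other particles b."""
    phi, A = 0.0, np.zeros(3)
    for b in range(len(e)):
        if b == a:
            continue
        r_ba = r[a] - r[b]            # vector from b to a
        d = np.linalg.norm(r_ba)      # r_ba = |r_a - r_b|
        n = r_ba / d                  # unit vector along r_ba
        phi += e[b] / d
        A += e[b] * (beta[b] + np.dot(beta[b], n) * n) / (2.0 * d)
    return phi, A

# two opposite charges moving past each other
r = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
beta = np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])
e = np.array([1.0, -1.0])
print(darwin_potentials(0, r, beta, e))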
The derivations by Darwin [1] and by Landau and Lifshitz [2] show that the contribution of retardation to the Coulomb potential in the Lorenz gauge is quite large. The main acceleration dependent part, however, vanishes: in Darwin's derivation because it gives a total time derivative term in the Lagrangian, and in Landau and Lifshitz because of a gauge transformation (to the Coulomb gauge). Both these derivations also show that the velocity dependent part of the retardation is handled exactly to order (v/c)².

A natural idea that does not work is to assume constant velocities and use the known exact Liénard-Wiechert potentials for that case in (1). Darwin's original derivation shows that this does not give the electric interaction to sufficient accuracy. It is important to note that gauge invariance (for a review, see Jackson and Okun [37]), which is valid for the exact theory, does not necessarily hold for approximations. We therefore impose the Coulomb gauge (for a recent discussion see Heras [38]) and then solve the inhomogeneous wave equation for A assuming constant velocities in the transverse current density. In this way one treats the electric interaction exactly and neglects acceleration in the solution for A, but does not assume low speeds.

The constant velocity exact Coulomb gauge vector potential does not seem to be well known. A special case was solved by Labarthe [39]. The explicit general solution has recently been published by Hnizdo [40], who used a gauge transformation function given by Jackson [41] to find it, starting from the corresponding Liénard-Wiechert potentials. Denote by R = r − r′(t) the vector from the source particle at r′(t), with charge e, to the field point r, so that β = ṙ′(t)/c. If we then put

η = R̂ × β,  (4)

Hnizdo's solution, which assumes the source particle to be at the origin at time t = 0 and to have constant velocity along the x-axis, i.e. r′(t) = cβt x̂, can be written

A_Cx = βφ_L − (φ_L − φ_C)/β,  (5)
A_Cy = [yx/(y² + z²)] (φ_L − φ_C)/β,  (6)
A_Cz = [zx/(y² + z²)] (φ_L − φ_C)/β,  (7)

at t = 0, so that x, y, z are the components of R. Here φ_C = e/R is the Coulomb potential, and φ_L = φ_C/√(1 − η²) its Lorenz gauge form. One notes the identity 1/(y² + z²) = (β/η)²/R². Using this, and that the only relevant vectors are R and β, one can, by expressing everything in terms of these, or scalar and vector products involving these, arrive at the coordinate independent form

A_C(r) = (e/R) [g(η²) β + h(η²) (β · R̂) R̂]  (8)

of Hnizdo's solution. Here we have introduced the notation (4), and the functions g and h are defined by

g(x) ≡ 1/(1 + √(1 − x)) ≈ 1/2 + x/8 + …,  (9)

and

h(x) ≡ g(x)/√(1 − x) ≈ 1/2 + 3x/8 + ….  (10)

Note that g(1) = 1 but that h diverges for x = 1. From these expansions it is clear that the leading terms give the vector potential (3) of the Darwin Lagrangian. The vector potential of the original Darwin Lagrangian is thus recovered from (8) when η = 0. One notes that in the derivation of (3) there was no need to assume that the velocity is constant, since the solution of a Poisson equation does not require retardation, while it is necessary for solving the wave equation; hence the assumption of constant velocity. One purpose of a Lagrangian is, after all, to find equations of motion that determine the accelerations. If it is necessary to know them beforehand, a Lagrangian approach is pointless.

It is remarkable that an equivalent vector potential has been found by Crater and Lusanna [9] in a canonical formalism. When the momenta (denoted κ) of Eq. (5.28) of [9] are replaced by mβ/√(1 − β²), the expression (8) is recovered. The authors of [9] use a relativistic phase space formalism and assume that charges are anticommuting Grassmann variables. In this way they treat the Pauli exclusion principle semiclassically. Their Hamiltonian formalism, which must entail the neglect of acceleration in an indirect way, is mainly intended for treatment of bound states. The Hamiltonian based on the ordinary Darwin Lagrangian (1) is discussed in [25].

The explicit expression for the interaction Lagrangian of two particles that results when (8) replaces (3) in (1) is

L₁₂ = (e₁e₂/r₂₁) { [(g(η₁²) + g(η₂²))/2] β₁ · β₂ + [(h(η₁²) + h(η₂²))/2] (β₁ · r̂₂₁)(β₂ · r̂₂₁) − 1 },  (11)

where η_a² = (r̂_ab × β_a)². We now consider two special cases.

If the velocity of a particle is parallel to the inter-particle vector to another particle, η = 0, so the Darwin interaction needs no correction in these cases. Assuming that two particles have equal velocities v₁ = v₂ = v parallel to r̂₂₁, we find that the interaction term in (1) gives

L₁₂ = (e₁e₂/r₂₁)(v²/c²) − e₁e₂/r₂₁,  (12)

and (11) gives the same result. One sees that this term, and the corresponding force, goes to zero in the ultra-relativistic limit v → c. Now consider instead the interaction of two particles that move with equal velocities, v₁ = v₂ = v, but side by side, so that r̂₂₁ ⊥ v. The interaction part of the Lagrangian (1) is then

L₁₂ = (1/2)(e₁e₂/r₂₁)(v²/c²) − e₁e₂/r₂₁.  (13)

One sees that even in the limit v → c the Coulomb interaction dominates; the magnetic interaction can only compensate for half of it. This is clearly wrong, however: it is well known that in an ultra-relativistic beam the transverse Lorentz force cancels the transverse Coulomb repulsion (see e.g. [42]).

Let us instead use the vector potential (8) and the corresponding interaction (11). We first note that in this side-by-side case η² = v²/c², and thus η² = 1 in the limit v → c. In this limit g(1) = 1 and h diverges. The two scalar products in the second term will, however, be zero, and a simple investigation shows that this compensates for the divergence of h, so that term does not contribute. Finally we get

L₁₂ = g(v²/c²)(e₁e₂/r₂₁)(v²/c²) − e₁e₂/r₂₁,  (14)

and in the limit v → c this term and the corresponding force are zero, as they should be, when (11) is used.

In conclusion, the Lagrangian obtained by using the exact constant velocity Coulomb gauge vector potential (8), instead of the A used in (3), has been derived without assuming that v/c is small, only that accelerations are not needed in estimating the Coulomb gauge vector potential. In this way all velocity dependent retardation, and, as discussed above, also the main part of the acceleration dependent retardation, is accounted for. We have also shown that using this Lagrangian we account correctly for the pinching of an ultra-relativistic beam, something the original Darwin Lagrangian does not do.
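Since g and h control all of the corrected interactions, the limiting behaviour in (12)-(14) is easy to check numerically. The sketch below is our own illustration (Gaussian units, unit charges and separation), not code from the paper:

import numpy as np

def g(x):
    # Eq. (9): g(x) = 1 / (1 + sqrt(1 - x))
    return 1.0 / (1.0 + np.sqrt(1.0 - x))

def h(x):
    # Eq. (10): h(x) = g(x) / sqrt(1 - x); diverges as x -> 1
    return g(x) / np.sqrt(1.0 - x)

def L12(e1, e2, r21_hat, r21, beta1, beta2):
    # Eq. (11): corrected two-particle interaction Lagrangian
    eta1sq = np.sum(np.cross(r21_hat, beta1) ** 2)
    eta2sq = np.sum(np.cross(r21_hat, beta2) ** 2)
    gbar = 0.5 * (g(eta1sq) + g(eta2sq))
    hbar = 0.5 * (h(eta1sq) + h(eta2sq))
    return (e1 * e2 / r21) * (
        gbar * np.dot(beta1, beta2)
        + hbar * np.dot(beta1, r21_hat) * np.dot(beta2, r21_hat)
        - 1.0
    )

v = np.array([0.999, 0.0, 0.0])  # close to the ultra-relativistic limit
# parallel case, Eq. (12): r21_hat along the velocity, so eta = 0
print(L12(1.0, 1.0, np.array([1.0, 0.0, 0.0]), 1.0, v, v))
# side-by-side case, Eq. (14): r21_hat perpendicular to the velocity
print(L12(1.0, 1.0, np.array([0.0, 1.0, 0.0]), 1.0, v, v))
# both interactions tend to zero as beta -> 1, as claimed in the text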
* Electronic address: [email protected]; URL: http://www.mech.kth.se/~hanno/

[1] C. G. Darwin, Phil. Mag. ser. 6, 39, 537 (1920).
[2] L. D. Landau and E. M. Lifshitz, The Classical Theory of Fields (Pergamon, Oxford, 1975), 4th ed.
[3] J. L. Anderson, Principles of Relativity Physics (Academic Press, New York, 1967).
[4] J. D. Jackson, Classical Electrodynamics (John Wiley & Sons, New York, 1999), 3rd ed.
[5] H. W. Woodcock and P. Havas, Phys. Rev. D 12, 3422 (1972).
[6] F. J. Kennedy, Am. J. Phys. 40, 63 (1972).
[7] J. L. Anderson and S. Schiminovich, J. Math. Phys. 8, 255 (1967).
[8] S. Coleman and J. H. Van Vleck, Phys. Rev. 171, 1370 (1968).
[9] H. W. Crater and L. Lusanna, Ann. Phys. (N.Y.) 289, 87 (2001).
[10] H. Essén, Eur. J. Phys. 26, 279 (2005).
[11] T. H. Boyer, J. Phys. A: Math. Gen. 39, 3455 (2006).
[12] G. Breit, Phys. Rev. 34, 553 (1929).
[13] J. Sucher, Advances in Quantum Chemistry 30, 433 (1998).
[14] J. De Luca, Phys. Rev. Lett. 80, 680 (1998).
[15] K.-H. Yang and J. Hirschfelder, J. Chem. Phys. (USA) 72, 5863 (1980).
[16] H. Primakoff and T. Holstein, Phys. Rev. 55, 1218 (1939).
[17] A. B. Balantekin, C. A. Bertulani, and M. S. Hussein, Nucl. Phys. A (Netherlands) 627, 324 (1997).
[18] A. N. Kaufman and P. S. Rostler, Phys. Fluids 14, 446 (1971).
[19] C. W. Nielson and H. R. Lewis, in Methods in Computational Physics, edited by J. Killeen (Academic Press, New York, 1976), vol. 16, pp. 367-388, series ed. B. Alder, S. Fernbach, and M. Rotenberg.
[20] D. Q. Ding, L. C. Lee, and D. W. Swift, J. Geophys. Res. 97, 8453 (1992).
[21] M. R. Gibbons and D. W. Hewett, J. Comput. Phys. 130, 54 (1997).
[22] W. W. Lee, R. C. Davidson, E. A. Startsev, and H. Qin, Nucl. Instr. and Meth. A 544, 353 (2005).
[23] J. E. Krizan and P. Havas, Phys. Rev. 128, 2916 (1962).
[24] H. Essén, Phys. Rev. E 53, 5228 (1996).
[25] H. Essén and A. B. Nordmark, Phys. Rev. E 69, 036404 (2004).
[26] A. Alastuey and W. Appel, Physica A 238, 369 (1997).
[27] S. El Boustani, P. R. Buenzli, and P. A. Martin, Phys. Rev. E 73, 036113 (2006).
[28] V. Mehra and J. De Luca, Phys. Rev. E 61, 1199 (2000).
[29] H. Essén, J. Phys. A: Math. Gen. 32, 2297 (1999).
[30] H. Essén, Phys. of Plasmas 12, 122101 (2005).
[31] X. Barcons and R. Lapiedra, J. Phys. A: Math. Gen. 18, 271 (1985).
[32] V. N. Golubenkov and Y. A. Smorodinskii, Zh. Eksp. Teor. Fiz. (USSR) 31, 330 (1956); English translation: Sov. Phys. JETP 4, 55 (1957).
[33] D. D. Dionysiou and D. A. Vaiopoulos, Lett. Nuovo Cimento 26, 5 (1979).
[34] B. M. Barker and R. F. O'Connell, Ann. Phys. (N.Y.) 129, 358 (1980).
[35] B. A. Trubnikov and V. V. Kosachev, Zh. Eksp. Teor. Fiz. 66, 1311 (1974); English translation: Sov. Phys. JETP 39, 641 (1974).
[36] W. Frejlak, Int. J. of Theor. Phys. 27, 711 (1988).
[37] J. D. Jackson and L. B. Okun, Rev. Mod. Phys. 73, 663 (2001).
[38] J. A. Heras, Europhys. Lett. (France) 69, 1 (2005).
[39] J.-J. Labarthe, Eur. J. Phys. 20, L31 (1999).
[40] V. Hnizdo, Eur. J. Phys. 25, 351 (2004).
[41] J. D. Jackson, Am. J. Phys. 70, 917 (2002).
[42] H. Wiedemann, Particle Accelerator Physics: Basic Principles and Linear Beam Dynamics (Springer-Verlag, Berlin Heidelberg, 1993).
ai_researcher
6
LLM_and_Simulation_as_Bilevel_Optimizers_A_New_Paradigm_to_Advance_Physical_Scientific_Discovery.pdf
LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery

Pingchuan Ma 1 Tsun-Hsuan Wang 1 Minghao Guo 1 Zhiqing Sun 2 Joshua B. Tenenbaum 1 3 4 Daniela Rus 1 Chuang Gan 5 6 Wojciech Matusik 1

arXiv:2405.09783v1 [cs.LG] 16 May 2024

Abstract

Large Language Models have recently gained significant attention in scientific discovery for their extensive knowledge and advanced reasoning capabilities. However, they encounter challenges in effectively simulating observational feedback and grounding it with language to propel advancements in physical scientific discovery. Conversely, human scientists undertake scientific discovery by formulating hypotheses, conducting experiments, and revising theories through observational analysis. Inspired by this, we propose to enhance the knowledge-driven, abstract reasoning abilities of LLMs with the computational strength of simulations. We introduce Scientific Generative Agent (SGA), a bilevel optimization framework: LLMs act as knowledgeable and versatile thinkers, proposing scientific hypotheses and reasoning about discrete components, such as physics equations or molecule structures; meanwhile, simulations function as experimental platforms, providing observational feedback and optimizing via differentiability for continuous parts, such as physical parameters. We conduct extensive experiments to demonstrate our framework's efficacy in constitutive law discovery and molecular design, unveiling novel solutions that differ from conventional human expectations yet remain coherent upon analysis.

1. Introduction

In physical science, spanning physics, chemistry, pharmacology, etc., various research streams aim to automate and speed up scientific discovery (Wang et al., 2023). Each stream innovates within its field, creating methods tailored to its specific challenges and nuances. However, this approach often misses a universally applicable philosophy (Popper, 2005; Fortunato et al., 2018), which can be pivotal to democratizing access to advanced research tools, standardizing scientific practices, and enhancing efficiency across disciplines. Our goal is to transcend specific domains, offering a unified approach to physical science.

As an inspiration, we observe how human scientists conduct scientific discovery and distill a few key practices: (i) iteratively propose a hypothesis and make observations from experimentation to correct theories (Popper, 2005); (ii) divide solutions into discrete components, such as physics equations or molecule structures, and continuous components, such as parameters for physics and molecule properties (Wang et al., 2023); (iii) exploit existing knowledge while occasionally exploring novel ideas aggressively in pursuit of breakthroughs (Wuestman et al., 2020); (iv) follow a generic, universal principle for all types of physical scientific discovery, yet with the specific nuance of each discipline (Rosenberg & McIntyre, 2019).

1MIT CSAIL 2CMU LTI 3MIT BCS 4Center for Brains, Minds and Machines 5UMass Amherst 6MIT-IBM Watson AI Lab. Correspondence to: Pingchuan Ma <[email protected]>. Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).
Standing out as generalist tools with an extensive repository of knowledge (AI4Science & Quantum, 2023), large language models (LLMs) have recently risen to prominence in scientific discovery for their expansive knowledge bases, advanced reasoning capabilities, and human-friendly natural language interface. One line of research focuses on fine-tuning LLMs with domain-specific data to align natural language with scientific information, such as chemical (Chithrananda et al., 2020) or drug (Liu et al., 2021) structures; however, these methods are domain-bound and demand extensive data for broader application. Another research direction seeks to leverage the innate capabilities of pre-trained LLMs, augmented by external resources like the internet, programming, or documentation. LLMs serve as optimizers or agents (Huang et al., 2023) for mathematical problem-solving (Romera-Paredes et al., 2023), conducting chemical experiments (Boiko et al., 2023), and advancing molecular (Li et al., 2023) and drug discovery (Sharma & Thakur, 2023). Nevertheless, these approaches are confined to the computational capability of LLMs, a crucial factor in physical science for tasks like calculating numerical results based on physics-law hypotheses to predict natural phenomena. To address this limitation, we propose to augment LLMs with physical simulation, thereby merging the knowledge-driven, abstract reasoning abilities of LLMs with the computational structure and accuracy of simulations.

Figure 1. The overall pipeline of Scientific Generative Agent (SGA). Taking the constitutive law search problem as an example, the input is an initial guess (a purely elastic material), and the output is another constitutive law optimized towards the ground truth (a weakly compressible fluid). The initial guess first initializes a top-K heap for storing the solutions. In the outer-level optimization, an LLM takes in the top-K previously proposed solutions and generates a better one upon them, with modified continuous parameterization Θ and discrete expression E. In the inner-level optimization, gradient-based optimization solves for the optimal Θ via simulation and appends the optimized solutions to the heap. After a few iterations of bilevel optimization, the heap returns the top-1 solution as the final solution.

To this end, inspired by the overarching philosophy of human scientists, we introduce Scientific Generative Agent (SGA), a bilevel optimization approach wherein the outer level engages LLMs as knowledgeable and versatile thinkers for generating and revising scientific hypotheses, while the inner level involves simulations as experimental platforms for providing observational feedback. First, we employ LLMs to generate hypotheses, which then guide the execution of simulations. These simulations, in turn, yield observational feedback that helps refine and improve the proposed hypotheses. Secondly, we introduce a bilevel optimization framework: one level performs search-based optimization on discrete, symbolic variables like physics laws or molecule structures via LLMs; the other level performs gradient-based optimization via differentiable simulation for continuous parameters like material stiffness or molecule coordinates. Thirdly, we devise an exploit-and-explore strategy for hypothesis proposal by adjusting the LLM's generation temperature. Lastly, we demonstrate that our pipeline is generally applicable across scientific disciplines, with only minimal modification such as altering the prompts.

For the empirical study, we focus on (i) molecular design, which aims to discover a molecular structure and atom coordinates based on its conformation and quantum mechanical properties, and (ii) constitutive law discovery, which aims to discover material constitutive equations and the corresponding mechanical properties directly from a recorded motion trajectory. To provide a concrete example, let's assume that we initially have simply the code for a purely linear material. We then task our model to uncover a more complex representation by optimizing its code to fit a highly nonlinear trajectory. In this task, our method capitalizes on the
Lastly, we demonstrate our pipeline is generally applicable across scientific disciplines, with only minimal modification such as altering the prompts. For the empirical study, we focus on (i) molecular design that aims to discover molecular structure and atoms’ coordi- nates based on its conformation and quantum mechanical properties and (ii) constitutive law discovery that aims to discover material constitutive equations and its correspond- ing mechanical properties directly from a recorded motion trajectory. To provide a concrete example, let’s assume that we initially have simply the code for a purely linear mate- rial. We then task our model to uncover a more complex representation by optimizing its code to fit a highly non- linear trajectory. In this task, our method capitalizes on the 2 Top-K Heap Continuous ParameterizationDiscrete ExpressionClass Header1 class Physics(nn.Module):2 def __init__(self): 3 super().__init__() 4 self.a = ... 5 def forward(self, F): 6 F_new = self.a * F 7 return F_new LLMNext?Exploit!Explore!Python CodeContinuous ParameterizationDiscrete Expression4 - self.a = ... 4 + self.b = ... 6 - F_new = self.a * F 6 + F_new = F / self.b SimulationIterationLossFeedbacktopk()Outer-Level OptimizationCode EvaluationInner-Level Optimziationappend()LLM-Driven Outer-Level OptimizationSim-Driven Inner-Level Optimizationt=0t=1t=2t=3t=0t=1t=2t=3Purely Elastic MaterialWeakly Compressible FluidLoss = 10.0Loss = 0.1init()top1() Scientific Generative Agent strengths of bilevel optimization: the outer-level utilizes LLMs to identify the correct symbolic material constitutive equations and formulates a proposition for potentially bene- ficial continuous parameterization (e.g., Young’s modulus and Poisson’s ratio); and the inner-level refines the proposed material parameters and provides informative feedback us- ing differentiable simulation. Generally, our method can discover the desired molecules and constitutive laws, out- performing other LLM-based baselines; more interestingly, it can propose well-performing solutions that are beyond human expectation yet sensible under analysis by domain experts. Overall, our contributions are concluded as: • We present a generic framework for physical scientific discovery that combines LLMs with physical simulations. • We propose a bilevel optimization with LLMs for discrete- space search-based optimization and differentiable simula- tions for continuous-space gradient-based optimization. • We conduct extensive experiments to demonstrate the ef- fectiveness and generality of the proposed framework in physics law discovery and molecular design; moreover, we showcase novel molecules or constitutive laws, while unexpected from a conventional perspective, are deemed reasonable upon examination by domain experts. 2. Scientific Generative Agent SGA is a bilevel optimization framework where the upper level features LLMs as proposers of scientific solutions, and the lower level utilizes simulations as experimental platforms for validation. In Sec. 2.1, we describe a formal definition of the bilevel optimization, followed by Sec. 2.2 for outer optimization and Sec. 2.3 for inner optimization. 2.1. Bilevel Optimization Pipeline We formally describe the pipeline of our method, including the input/output of the system and the underlying submod- ules, and the overall optimization formulation. 
Suppose we are given a metric to evaluate a physical phenomenon y (e.g., a configuration of deformation) for a scientific problem, L(y) (e.g., reconstruction of a mechanistic behavior). First, we describe the simulation (as an experimental platform) as

y, z = Φ(θ; E),  (1)

where Φ is a simulator that takes in a scientific expression E (e.g., constitutive equations) and continuous components θ (e.g., material parameters) as inputs and gives the simulated physical phenomenon y and additional observational feedback z (e.g., particles' trajectories) as outputs. Next, the LLM is prompted to act as a thinker that proposes expressions E based on past experimental results from simulation,

E, Θ = LLM({L(y_k), z_k, o_k, E_k, Θ_k}_{k ∈ [K]}; P),  (2)

Algorithm 1 Scientific Generative Agent
Input: discrete expression and continuous parameters (E, θ ∈ Θ); number of exploiting offspring M_l; number of exploring offspring M_h; exploiting temperature T_l; exploring temperature T_h
1: # Store ranked (solution, param) pairs in a heap
2: H ← heap()
3: # Continuous optimization
4: θ̂ ← optim(E, θ; Φ)
5: H.append((E, θ̂))
6: for i = 1, ..., N do
7:   # Generate M_l solutions from the LLM
8:   (E, Θ)[:M_l] ← LLM(H.topk(K), T_l)
9:   # Generate M_h solutions from the LLM
10:  (E, Θ)[M_l:M_l+M_h] ← LLM(H.topk(K), T_h)
11:  for m = 1, ..., M_l + M_h do
12:    # Continuous optimization
13:    θ̂ ← optim(E, θ ∈ Θ; Φ)
14:    H.append((E, θ̂))
15:  end for
16: end for
Output: H.topk(1)  # Return the best

where the set [K] summarizes the pointers to the past simulation results, containing an evaluation of the scientific problem L(y_k), other physical feedback (z_k, o_k), and past proposals (E_k, Θ_k); o_k summarizes the intermediate results of the inner optimization (detailed later in Sec. 2.3); Θ determines the continuous parameterization for the decision variables of the inner optimization (e.g., which variables are to be optimized within a proposed equation); and P is the prompt. With these, we define the bilevel optimization problem as

min_{E,Θ}  L(y(E, Θ, θ̂; Φ))  (3a)
s.t.  G(E, Θ; Φ) ≤ 0  (3b)
θ̂ ∈ argmin_{θ ∈ Θ} L(y(θ; Φ, E)),  (3c)

where G(·) ≤ 0 refers to the validity of the simulation (i.e., whether an expression E is simulatable). The outer optimization searches for (i) an expression E that defines what experiments are to be conducted, Φ(·; E), and (ii) a continuous parametrization Θ that defines the search space of the inner continuous optimization min_{θ ∈ Θ}. With the dependencies on the outer-level variables (E, Θ), the inner optimization searches for the optimal continuous parameters θ̂ given the proposed expression via differentiable simulation.

2.2. LLM-Driven Outer-Level Search

We dive deeper into how we use LLMs (Eq. 2) and their interaction with the simulation for the outer-level search (Eq. 3).

LLM-driven Optimization. LLMs have been shown to be effective sequential decision makers for generic optimization, given proper guidance via prompting and sufficiently informative contexts (Yang et al., 2024; Romera-Paredes
We craft prompts to direct LLMs in a structured manner, enabling them to (i) perform analysis on past exper- imental results from the simulation, e.g., the deviatoric parts of stress tensor are likely correct based on the loss curve; (ii) devise a high-level plan on how to formulate a hypothesis or improve upon previous experiments, e.g., ensure numerical stability with the usage of the determinant of deformation gradient; (iii) suggest a solution that can be executed as ex- periments via simulation for hypothesis testing; e.g., a code snippet describing a constitutive equation. For Eq. 3a, in- spired by (Ma et al., 2024), we adopt an evolutionary search that generates multiple offspring {Em, Θm}m∈[M ] (M is offspring size) in each iteration and retain the best selection. Distinctively, our approach (Alg. 1) involves selecting sev- eral high-performing candidates rather than the best only, which (i) enhances the feasibility of hypotheses in simu- lation (Eq. 3b) and (ii) facilitates evolutionary crossover, with LLMs generating new hypotheses from various past experiments (“breeds”) for better exploration, akin to the findings in (Romera-Paredes et al., 2023). Interfacing with Simulation The primary challenge in integrating LLMs with simulation lies in devising a protocol that enables efficient, structured, yet adaptable communi- cation between the two modules. We observe that physical scientific solutions are often represented as mathematical ex- pressions or structured entities. Hereby, from LLMs to sim- ulation, we consider two settings: equation searching and entity searching, both unified as the abstraction (E, Θ) in Eq. 2. In equation searching, LLMs are allowed to propose equations E along with the search space of the inner-level continuous optimization Θ; for practitioners, an example using PyTorch can be Θ as init that defines continu- ous parameters via nn.Parameter and E as forward that defines computation of equations (see Fig. 1). In entity searching, LLMs propose descriptions of structures E (e.g., how atoms are connected to form a molecule) with Θ simply reduced to constant (e.g., every atom has its 3D coordinates to be optimized) and omitted from the optimization Eq. 3a as decision variables. On the other hand, from simulation to LLMs, we leverage domain experts’ knowledge to craft functions for extracting compact, relevant information z as observational feedback; this process is akin to an experi- enced scientist offering guidance to a junior colleague on how to document experimental findings effectively. For instance, human experts often monitor the movements of specific body regions to derive constitutive laws. Therefore, to aid in this process, we include a function in the simulation that records the particle trajectories. Lastly, the subsequent section Sec. 2.3 will provide an in-depth explanation of the inner optimization results denoted as o. These results serve as feedback from the simulation to the LLMs. 4 Exploitation and Exploration Inspired by human scien- tists achieving breakthroughs by skillfully balancing careful progression with bold exploration, we devise an exploit- and-explore strategy by tuning the LLMs’ decoding tem- perature (Yang et al., 2024). When generating offspring {Em, Θm}m∈[M ] in Eq. 3a, we divide them into two groups: one (m ∈ Mexploit) consists of cautious followers that keep the “gradient” and conservatively trails previous solutions, while the other (m ∈ Mexplore) comprises daring adventur- ers that take risks and suggest unique solutions. 
Empirically, we observed that (i) Mexploit often contains repetitive so- lutions from previous iterations, and (ii) Mexplore tends to yield solutions too random to be informative for guiding op- timization, or invalid (i.e., violating Eq. 3b), thus providing little feedback signal. As a rule of thumb, we have found that a 1:3 ratio between Mexploit and Mexplore is effective. 2.3. Differentiable Inner-Level Optimization Under the search space Θ and expression for simulation E from the outer level, inner optimization (Eq. 3c) involves a gradient-based optimization that solves for optimal continu- ous parameters ˆθ ∈ Θ via differentiable simulation (Eq. 1). Essentially, the domain-specific knowledge is distilled via gradients ∇θΦ(θ; E) from the simulation to the intermedi- ate optimization results o (like loss curve). The (ˆy, o) are then fed back to LLMs for revising solutions. Note that o may involve the loss curve toward the target metric L and other auxiliary recordings throughout optimization, carrying information of how to improve solutions in various aspects; for example, with L as displacement of position, o may include velocities across the inner optimization iterations. 3. Experiments 3.1. Problem Definitions Constitutive Law Discovery Identifying the constitutive law from motion observations stands as one of the most dif- ficult challenges in fields such as physics, material science, and mechanical engineering. Here we follow the recent ad- vances in physical simulation and formulate the constitutive law discovery task as an optimization problem (Ma et al., 2023) using differentiable Material Point Method (MPM) simulators (Sulsky et al., 1995; Jiang et al., 2016). Note that our method is not specifically tailored to MPM simula- tors and applies to any physical simulation. The objective of this task is to identify both the discrete expression and continuous parameters in a constitutive law, specifically the symbolic material models φ (·) and their corresponding ma- terial parameters θ, from a ground-truth trajectory of particle positions ˆXt∈[1,...,T ] where T denotes the number of steps. In this problem, we consider two types of constitutive laws, φE (·; θE) and φP (·; θP ), for modeling elastic and plastic Scientific Generative Agent Table 1. Benchmark. We compare our method against 4 baselines and 2 variations of our method, while also noting the difference in architecture or hyper-parameters. We use column #Iter. as the number of iterations, #Hist. as the K value for the top-k retrieval in the historical optimization steps, #Exploit #Explore as the number of offspring for exploitation versus exploration, Bilevel as if bilevel optimization is enabled. Our experiments encompass 8 different tasks, which are divided into constitutive law search (a-d) and molecule design (e-h). A lower loss value is preferable across all tasks. The best method with the lowest loss is highlighted in bold text. Method #Iter. #Hist. 
#Exploit #Explore Bilevel CoT FunSearch Eureka OPRO Ours (no bilevel) Ours (no exploit) Ours 1 20 5 5 5 5 5 5 2 1 5 5 5 5 N/A 0 / 4 0 / 16 0 / 16 4 / 12 0 / 16 4 / 12 ✗ ✗ ✗ ✗ ✗ ✓ ✓ Constitutive Law Search Molecule Design (a) ↓ (b) ↓ (c) ↓ (g) ↓ (h) ↓ 298.5 210.3 128.0 136.2 90.2 3.0e-3 1462.3 872.2 531.0 508.3 517.0 3.9e-1 (d) ↓ 384.1 139.5 150.1 128.8 (e) ↓ 3.0 1.1 4.3 2.4 (f) ↓ 32.1 7.1 9.8 9.4 150.0 82.8 101.7 99.2 18.6 8.3 3.3 3.1 83.6 6.6e-2 68.4 1.4e-12 8.6e-1 4.0e-4 9.1 1.5e-1 1.8 6.1e-1 6.0 1.1 9.7e-1 1.3 1.4 2.8e-5 5.2e-5 2.1e-1 6.0e-2 1.4e-12 1.3e-4 1.1e-1 5.4e-1 3.6e-5 materials respectively, and they are formally defined as: φE (F; θE) (cid:55)→ τ φP (F; θP ) (cid:55)→ Fcorrected, (4a) (4b) where F ∈ R3×3 is the deformation gradient, τ ∈ R3×3 is the Kirchhoff stress tensor, Fcorrected ∈ R3×3 is the deforma- tion gradient after plastic return-mapping correction, and θE and θP are the continuous material parameters for elastic and plastic constitutive laws respectively. Given a specific constitutive law, we input it to the differentiable simulation and yields a particle position trajectory: Xt∈[1,...,T ] = sim (φ (·; θ)) , (5) and we optimize the constitutive law by fitting the output trajectory to the ground truth ˆXt∈[1,...,T ]. Molecule Design In this study, we focus on a prevalent task in molecule design: discovering molecules with spe- cific quantum mechanical properties. Our objective is to determine the optimal molecular structure and its 3D con- formation to match a predefined target quantum mechanical property. The design process involves both the discrete ex- pression – the molecular structure represented by SMILES strings (Weininger, 1988), and the continuous parameters – the 3D coordinates of each atom in the molecule. The methodology comprises two loops: In the outer loop, the LLM generates the initial molecular structure as a SMILES string, along with a preliminary guess for the 3D atom coor- dinates. The inner loop involves simultaneous optimization of both the molecule’s 3D conformation and quantum me- chanical properties, both determined by 3D atom positions. For the generation of 3D conformations, we utilize the ETKGD algorithm (Riniker & Landrum, 2015) followed by optimization using the Merck Molecular Force Field (MMFF) (Halgren, 1996), both implemented within the RD- Kit (Landrum et al., 2013). To get the quantum mechanical Figure 2. Loss trends comparison. Loss of the best solution aver- aged across seeds at different iterations of LLM-driven optimiza- tion, where the shading shows the min/max value. property values, we employ UniMol (Zhou et al., 2023), a pre-trained transformer-based large model, which has been fine-tuned on the QM9 dataset (Ramakrishnan et al., 2014). 3.2. Experiment Setup Task Design We design a diverse set of challenging tasks for evaluation. For constitutive law discovery, we propose 4 tasks including: (a) fitting the non-linear elastic material starting from a linear elastic material, (b) fitting the von Mises plastic material starting from a purely elastic mate- rial, (c) fitting the granular material starting from a purely elastic material, and (d) fitting the weakly compressible fluid starting from a purely elastic material. 
For the molecular design task, we consider 4 popular tasks, centering on 3 commonly evaluated quantum mechanical properties (Fang et al., 2022; Zhou et al., 2023), each set to different target values: (e) HOMO (Highest Occupied Molecular Orbital) set to 0, (f) LUMO (Lowest Unoccupied Molecular Orbital) set to 0, (g) the HOMO-LUMO energy gap set to 0, and (h) the HOMO-LUMO energy gap set to -2. All these values are normalized over all data in the QM9 dataset.

Table 2. Comparison with symbolic regression. We compare our method against the 5 most performant methods in SRBench (La Cava et al., 2021) and 3 pre-trained symbolic regression methods. Sym. denotes whether the result is symbolic or not.

Method | R2 ↑ | MSE ↓ | MAE ↓ | Sym.
FFX | 0.9824 | 4.5e+5 | 3.7e+2 | ✓
MLP | 0.9876 | 3.2e+5 | 3.4e+2 | ✗
FEAT | 0.9964 | 9.2e+4 | 1.7e+2 | ✓
DSO | 0.9968 | 8.2e+4 | 9.2e+1 | ✓
Operon | 0.9988 | 2.8e+4 | 9.8e+1 | ✓
SymbolicGPT | 0.5233 | 6.9e+6 | 1.7e+3 | ✓
NeSymReS | N/A to >3 variables | ✓
T-JSL | N/A to >2 variables | ✓
Ours | 0.9990 | 1.7e+4 | 8.6e+1 | ✓

Table 3. Comparison with population-based molecule design. We compare our method against a traditional population-based molecule design method, GhemGE (Yoshikawa et al., 2018), and report the results of molecule design tasks (e-h).

Method | (e) ↓ | (f) ↓ | (g) ↓ | (h) ↓
GhemGE | 4.8e-3 | 1.8 | 1.5 | 9.8e-5
Ours | 1.3e-4 | 1.1e-1 | 5.4e-1 | 3.6e-5

Table 4. Experiment on an imaginary constitutive law. We construct an imaginary constitutive law to keep the LLM from cheating by memorization and report the results of our method and the baselines.

Method | FunSearch | Eureka | OPRO | Ours
Loss | 105.0 | 89.1 | 98.0 | 1.3e-3

Implementation Details We run all our experiments 5 times with different random seeds, following previous practices (Ma et al., 2024). Due to the complexity of the task, we provide a simple bootstrapping example of a valid design to ensure the success rate. We use Warp (Macklin, 2022) for the differentiable MPM simulation, and we develop our inner-level optimization upon PyTorch (Paszke et al., 2019). In all our experiments, we use mean square error as the criterion and the Adam optimizer (Kingma & Ba, 2015). We choose gpt-4-turbo-preview as the backbone LLM and tentatively set the exploiting temperature T_l = 0.5 and the exploring temperature T_h = 1.0.

3.3. Physical Scientific Discovery

We consider 6 strong baselines for evaluation: (i) Chain-of-Thought (CoT) prompting (Wei et al., 2022) solves the problem by looking at step-by-step solutions from examples; we provide 5 examples with explanations to CoT as the initial solution. (ii) FunSearch (Romera-Paredes et al., 2023) utilizes an evolutionary strategy to avoid local optima; we adopt the hyperparameters from the original implementation, with 2 optimization histories and 4 explorers, and set the number of iterations to 20, yielding the same number of evaluated solutions, for a fair comparison to other methods. (iii) Eureka (Ma et al., 2024) generates multiple solutions in each iteration to improve the success rate of the generated code; we keep the hyperparameters from the original implementation. (iv) Optimization by PROmpting (OPRO) (Yang et al., 2024) highlights the advantages of involving a sorted optimization trajectory; we set the hyperparameters equal to Eureka's except for the number of historical optimization steps. In all these works (i-iv), the temperatures for LLM inference are all 1.0, which is equal to the exploring temperature in our method, so we denote them with 0 exploiters (a minimal sketch of this two-temperature sampling is given below).
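For illustration, a hedged sketch of the exploit/explore offspring generation implied by this setup; `propose` is a hypothetical wrapper around a chat-completion call, and the constants mirror the 4/12 split and temperatures reported in Table 1 and Sec. 3.2. This is not code from the paper.

# Hedged sketch of the exploit/explore split; `propose(prompt, temperature)`
# is a hypothetical LLM call returning one candidate solution.
T_EXPLOIT, T_EXPLORE = 0.5, 1.0   # T_l and T_h from Sec. 3.2
N_EXPLOIT, N_EXPLORE = 4, 12      # the 4 / 12 offspring split from Table 1

def generate_offspring(propose, prompt):
    exploit = [propose(prompt, temperature=T_EXPLOIT) for _ in range(N_EXPLOIT)]
    explore = [propose(prompt, temperature=T_EXPLORE) for _ in range(N_EXPLORE)]
    return exploit + explore      # 16 candidates; baselines (i-iv) use 0 exploiters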
We also consider 2 variants of our method: (v) Ours (no bilevel) removes the bilevel optimization by only searching with the LLM. (vi) Ours (no exploit) removes the exploitation by setting the temperature to 1.0 all the time.

We present our experiments on the 8 designed tasks and show the results in Table 1. Compared to baselines (i-iv), our method is better by several orders of magnitude. When the bilevel optimization is removed from our method, the performance drops dramatically, but remains statistically better than baselines (i-iv), indicating that the choice of hyperparameters and the integration of exploitation are helpful for the task. When we remove the exploitation but restore the bilevel optimization, the performance recovers: it is comparable to our full method in (d) and even better in (h). However, on some tasks, especially hard ones (e.g., (b) and (f)) that matter more in practice, the performance gap is over 50%, indicating the effectiveness of our exploit-and-explore strategy. We also present the loss trend on task (a) in Figure 2; our method stands out with a much lower loss and a converging trend.

We also compare our method with traditional methods in each specific area to demonstrate its generalizability. First, we reformulate our constitutive law search task (a) into a symbolic regression task by (i) capturing the ground-truth output (the stress tensors) as the supervision, and (ii) separating the 9 output dimensions into 9 independent problems and ensembling them for evaluation. Note that these modifications dramatically simplify the original task: we remove back-propagation through time (BPTT) and directly discover the constitutive law without a surrogate loss. We evaluate 14 traditional baselines from SRBench (La Cava et al., 2021) and 3 data-driven pre-trained baselines. We select the top few baselines in Table 2 and show the rest in Appendix C.1. As shown in the table, our method tops this task even under a much more challenging setting. Also, since our method depends on the in-context learning ability of LLMs, it has fewer constraints on the number of variables than the data-driven pre-trained baselines.

Figure 3. Ablation on bilevel optimization. We denote the optimization trajectory with and without bilevel optimization with a red dot and an orange triangle, respectively. We visualize the intermediate step of our method before the inner-level optimization using an orange cross. We also highlight the outer LLM optimization and inner simulation optimization using orange and red arrows.

For the molecule design tasks, we also compare our method with GhemGE (Yoshikawa et al., 2018), which employs a population-based molecule design algorithm. As shown in Table 3, our method achieves a much lower loss, demonstrating the general effectiveness of our method.

3.4. Ablation Study

Generalization or Memorization In order to figure out whether the improvement introduced by our method merely reflects solutions the LLM saw during its training phase, we design an experiment that ablates this possibility by making it invent an imaginary constitutive law that does not exist on Earth.
We mix the constitutive laws of von Mises plasticity, granular material, and weakly compressible fluid by 50%, 30%, and 20%, so that the new constitutive law represents an imaginary material whose behavior is extremely complex. We repeat our experiment setup as in Figure 1. We compare our method against the baselines and report the performance in Table 4. As shown in the table, our method can still discover the constitutive law with a low quantitative loss. From our observation, there is very little visual difference between the ground-truth material and the optimized constitutive law. We show the discovered constitutive law in Appendix D.9.

Bilevel Optimization is the Key Here we evaluate the importance of bilevel optimization in Figure 3 using task (h). Comparing the orange triangle curve (LLM search without bilevel optimization) and the red dot curve (our bilevel optimization), it is easy to conclude that the loss with bilevel optimization is better. Nevertheless, we are also interested in how bilevel optimization works inside each optimization step and how much the LLM and the simulation help, respectively. As shown by the zigzag curve, we found that LLMs and simulations help each other across all optimization steps: the next proposal from the LLM improves with simulation-optimized results, and vice versa. We argue that LLMs and simulations have different expertise: LLMs are generalist scientists with cross-discipline knowledge, while simulations are domain experts with specialized knowledge.

Figure 4. Ablation on the backbone LLM. We compare the performances of 4 selected backbone LLMs and report their ranks. An outer curve indicates better performance.

LLM Backbone In addition to GPT-4 (OpenAI, 2023), we repeat the experiments in Table 1 using 3 additional LLM backbones: (i) GPT-3.5 (Ouyang et al., 2022), (ii) Claude-3-Sonnet (Anthropic, 2024), and (iii) Mixtral-8x7B (Jiang et al., 2024), and report their ranks in Figure 4. As indicated by the largest enclosed area, GPT-4, our choice, statistically outperforms the other backbones. Interestingly, we found Claude-3-Sonnet to be the second-best method on most of the constitutive law search tasks, while Mixtral-8x7B even tops 2 molecule design tasks. As a result, our workflow also works with other LLMs; our suggestion for practitioners is to try GPT-4 as the first choice but also consider open-source models (e.g., Mixtral-8x7B) for budget or customizability.

Exploitation vs. Exploration We visualize the statistics of the simulation execution status in Figure 5 (a) using task (b), which is one of the most challenging tasks in our experiments. When the exploitation is removed, the error rate increases dramatically, as shown by the decrease in green bars, and performance degrades relative to the methods with exploitation, as shown in Figure 5 (b). However, when exploration is removed, the optimization result is still worse than keeping both, even though the success rate remains high. We argue that exploration is significant when the optimization problem is challenging, especially in our case, where the search space is highly non-linear and unstructured, resulting in numerous local optima.

Figure 5. Ablation on exploration-exploitation.
(a) Histogram of solutions that are valid for simulation (Eq. 3b) across iterations. (b) Loss (L in Sec. 2.1) of the best solution averaged across seeds at different iterations, where the shading indicates the min/max values.

3.5. Case Study

Constitutive Law Search We provide a trimmed snippet of our searched constitutive law in Figure 6 (a) for task (a), where a highly non-linear material is provided as the trajectory to fit. We reformat the code slightly to fit into the text; the complete example can be found in the Appendix. Starting from a linear material, our method is able to automatically generate the constitutive law with a quadratic deviatoric term. Note that our method also provides a concrete implementation of the init function that defines the continuous parameters in the computational graph for later inner-level optimization.

Molecule Design When comparing the two molecules with respect to their HOMO-LUMO energy gap based on optimized results from the LLM, as shown in Figure 6 (b), we observe distinct characteristics in each: (i) Molecule A (gap target 0) includes sulfur and chlorine atoms attached to a ring, coupled with a trifluoromethyl group, introducing electron-withdrawing effects, and (ii) Molecule B (gap target -2) includes oxygen (notably in ethers) and sulfur within the ring structures, introducing localized non-bonding electron pairs. Furthermore, the overall structure of Molecule B is more complex than that of Molecule A, containing multiple rings. An intriguing aspect of Molecule B, which might initially defy expectations, is the presence of a single fluorine atom. The high electronegativity of fluorine typically leads to electron density withdrawal, influencing the gap value. However, due to the complexity of Molecule B's structure, the impact of the fluorine atom is somewhat localized, thereby not significantly altering the gap value.

Figure 6. Case Study. (a) We give a concrete example of the searched constitutive law. (b) We provide 2 novel molecules optimized for different objectives with their SMILES strings.

4. Related Work

4.1. Automated Scientific Discovery

Automated scientific discovery, enhanced by machine learning methods, serves as a powerful accelerator for research, enabling scientists to generate hypotheses, design experiments, interpret vast datasets, and unearth insights that may elude traditional scientific methodologies (AI4Science & Quantum, 2023; Kramer et al., 2023; Wang et al., 2023). This multifaceted process unfolds through two synergistically linked stages: hypothesis formation and the collection and analysis of experimental data. The integration of automated systems not only augments the scientific inquiry process but also streamlines the discovery pipeline, from conceptualization to empirical validation. This paper places a particular emphasis on, but is not limited to, constitutive law discovery and molecular design. These areas exemplify the profound impact of automation in unraveling complex material behaviors and in the innovative design of molecules with tailored properties. Automatic identification of constitutive material models has been a long-standing problem, and recent works utilize differentiable simulation (Du et al., 2021; Ma et al., 2023; 2021) to address it as a system identification problem.
Leveraging machine learning and artificial intelligence, researchers are able to predict molecular behavior, optimize chemical structures for specific functions, and thus rapidly accelerate the development of new drugs, materials, and chemicals (Jin et al., 2018; Zhou et al., 2019; Schneider, 2018).

[Figure 5 plots the per-iteration counts of successful, training-error, and syntax-error solutions together with the corresponding loss curves. Figure 6 (a) shows the searched constitutive law, reproduced in full in Appendix D.1; Figure 6 (b) shows Molecule A (SMILES: C1CC(SC1Cl)C(C(F)(F)F)N) and Molecule B (SMILES: C1OC2SC3C4OC(F)S4C13C2).]

4.2. Large Language Models and Agents

The advancement of Large Language Models (LLMs) such as ChatGPT and GPT-4 has sparked considerable interest in their potential as autonomous agents (Brown et al., 2020; OpenAI, 2022; 2023). Recent developments have shown that LLMs can be enhanced to solve complex problems by creating and utilizing their own tools, as demonstrated in the LATM framework (Cai et al., 2024), and by acting as optimizers in the absence of gradients, as seen in the OPRO methodology (Yang et al., 2024). These approaches signify a shift towards more independent and versatile LLM-based agents capable of generating solutions through self-crafted tools and optimization techniques (Sumers et al., 2024; Yao et al., 2023b;a), showcasing their evolving problem-solving capabilities. In the realm of scientific discovery, LLMs have begun to make significant contributions, particularly in mathematics and computational problems. The FunSearch method (Romera-Paredes et al., 2023) pairs LLMs with evaluators to exceed known results in extremal combinatorics and online bin packing, illustrating LLMs' ability to discover new solutions to established problems. Similarly, AlphaGeometry's success (Trinh et al., 2024) in solving olympiad-level geometry problems without human demonstrations highlights the potential of LLMs in automating complex reasoning tasks. These examples underline the transformative impact of LLMs in pushing the boundaries of scientific inquiry and automated reasoning.

4.3. Bilevel Optimization

Bilevel optimization involves a hierarchical structure with two levels of optimization problems, where the solution to the upper-level problem is contingent upon the outcome of the lower-level problem (Colson et al., 2007).
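For reference, the generic bilevel program can be sketched as follows; the notation here is ours (upper-level objective F, lower-level objective f), following the survey cited above rather than the paper's Eq. 3, which instantiates the same pattern with discrete expressions and continuous simulation parameters:

\min_{x \in X} F\bigl(x, y^{*}(x)\bigr) \quad \text{s.t.} \quad y^{*}(x) \in \operatorname*{arg\,min}_{y \in Y} f(x, y),

where the upper level chooses x (in our setting, the discrete expression proposed by the LLM) and the lower level returns the best response y*(x) (the continuous parameters fitted by differentiable simulation).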
Bilevel optimization problems are inherently more complex than their single-level counterparts due to the nested nature of the optimization tasks and the intricate interdependencies between them. Recent advancements have focused on developing efficient algorithms, including evolutionary algorithms (Sinha et al., 2017b), gradient-based approaches (Liu et al., 2022), and approximation techniques (Sinha et al., 2017a), to tackle the computational challenges presented by the non-convex and non-differentiable characteristics of many bilevel problems. Among the wide span of application domains of bilevel optimization, neural architecture search (NAS) (Liu et al., 2019; Bender et al., 2018; Cai et al., 2019; Xue et al., 2021) is prominent and close to the problem setting in this paper: the upper level optimizes the discrete neural network architecture while the lower level optimizes the continuous weights of the neural network. However, typical NAS methods require a predefined search space, constraining the exploration of discrete network architectures to manually specified boundaries. Our framework distinguishes itself by employing LLMs encoded with general knowledge, freeing it from the limitations imposed by manual design constraints.

5. Conclusion

We consider a few limitations and future directions. (i) Although we prompt the LLM to generate pseudo-code plans and comments, it is generally hard to ensure the interpretability of LLM-generated solutions. (ii) Since the LLM-generated code is executed directly without any filtering in our application, there exists a potential AI safety risk that could harm the operating system. (iii) Our method only utilizes the internal knowledge of LLMs as the prior, whereas in reality people design manual constraints and rules to regularize and improve the optimization (Udrescu et al., 2020). We leave these domain-specific applications and human-feedback-based regularization methods as future work. (iv) The performance of our method highly depends on the differentiability of the generated code. Nevertheless, zero-order optimizers (Hansen, 2006) could also work well here, since the number of continuous parameters is relatively limited. (v) LLM inference requires large computational resources and thus increases expense. For example, our method spends around $10 to complete one task using GPT-4, which becomes increasingly unacceptable as the number of iterations grows. (vi) Due to the reuse of previously generated solutions in our proposed top-k heap, the KV cache in the LLM will be highly similar between neighboring iterations. This opens the door for recent KV cache optimization methods (Zheng et al., 2023) to speed up our method through KV cache reuse.

In conclusion, we present Scientific Generative Agent, a bilevel optimization framework: LLMs serve as knowledgeable and adaptable thinkers, formulating scientific solutions like physics equations or molecule structures; concurrently, simulations operate as platforms for experimentation, offering observational feedback and optimizing continuous components like physical parameters. We focused on two scientific problems: constitutive law search and molecular design. Our approach outperforms other LLM-based benchmark methods, delivering consistent, robust, and nearly monotonic improvement. Furthermore, it shows exceptional ability in identifying unknown, true constitutive laws and molecular structures.
Remarkably, our system generates innovative solutions that, despite being unconventional, are deemed reasonable after being thoroughly analyzed by experts in their respective domains. We view our process as a trailblazer, establishing a new paradigm for utilizing LLMs and simulations in bilevel optimization to drive further advancements in physical scientific discovery.

Acknowledgements

We would like to thank Bohan Wang, Ziming Liu, Zhuoran Yang, Liane Makatura, Megan Tjandrasuwita, and Michael Sun for the valuable discussion. The mesh "Stanford Bunny" in Figure 1 is from The Stanford 3D Scanning Repository. This work is supported by MIT-IBM Watson AI Lab.

Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

References

AI4Science, M. R. and Quantum, M. A. The impact of large language models on scientific discovery: a preliminary study using gpt-4. arXiv preprint arXiv:2311.07361, 2023.

Anthropic. Introducing the next generation of Claude, 2024. URL https://www.anthropic.com/news/claude-3-family.

Arnaldo, I., Krawiec, K., and O'Reilly, U.-M. Multiple regression genetic programming. In Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 879–886, 2014.

Bender, G., Kindermans, P.-J., Zoph, B., Vasudevan, V., and Le, Q. Understanding and simplifying one-shot architecture search. In International conference on machine learning, pp. 550–559. PMLR, 2018.

Biggio, L., Bendinelli, T., Neitz, A., Lucchi, A., and Parascandolo, G. Neural symbolic regression that scales. In International Conference on Machine Learning, pp. 936–945. PMLR, 2021.

Boiko, D. A., MacKnight, R., Kline, B., and Gomes, G. Autonomous chemical research with large language models. Nature, 624(7992):570–578, 2023.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.

Cai, H., Zhu, L., and Han, S. ProxylessNAS: Direct neural architecture search on target task and hardware. In International Conference on Learning Representations, 2019.

Cai, T., Wang, X., Ma, T., Chen, X., and Zhou, D. Large language models as tool makers. In International Conference on Learning Representations, 2024.

Cava, W. L., Singh, T. R., Taggart, J., Suri, S., and Moore, J. Learning concise representations for regression by evolving networks of trees. In International Conference on Learning Representations, 2019.

Chen, T. and Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pp. 785–794, 2016.

Chithrananda, S., Grand, G., and Ramsundar, B. Chemberta: large-scale self-supervised pretraining for molecular property prediction. arXiv preprint arXiv:2010.09885, 2020.

Colson, B., Marcotte, P., and Savard, G. An overview of bilevel optimization. Annals of operations research, 153:235–256, 2007.

Du, T., Wu, K., Ma, P., Wah, S., Spielberg, A., Rus, D., and Matusik, W. Diffpd: Differentiable projective dynamics. ACM Transactions on Graphics (TOG), 41(2):1–21, 2021.

Fang, X., Liu, L., Lei, J., He, D., Zhang, S., Zhou, J., Wang, F., Wu, H., and Wang, H. Geometry-enhanced molecular representation learning for property prediction.
Nature Machine Intelligence, 4(2):127–134, 2022. Fortunato, S., Bergstrom, C. T., B¨orner, K., Evans, J. A., Helbing, D., Milojevi´c, S., Petersen, A. M., Radicchi, F., Sinatra, R., Uzzi, B., et al. Science of science. Science, 359(6379):eaao0185, 2018. Halgren, T. A. Merck molecular force field. i. basis, form, scope, parameterization, and performance of mmff94. Journal of computational chemistry, 17(5-6):490–519, 1996. Hansen, N. The cma evolution strategy: a comparing review. Towards a new evolutionary computation: Advances in the estimation of distribution algorithms, pp. 75–102, 2006. Huang, Q., Vora, J., Liang, P., and Leskovec, J. Benchmark- ing large language models as ai research agents. arXiv preprint arXiv:2310.03302, 2023. Jiang, A. Q., Sablayrolles, A., Roux, A., Mensch, A., Savary, B., Bamford, C., Chaplot, D. S., Casas, D. d. l., Hanna, E. B., Bressand, F., et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024. Jiang, C., Schroeder, C., Teran, J., Stomakhin, A., and Selle, A. The material point method for simulating continuum materials. In Acm siggraph 2016 courses, pp. 1–52. 2016. 10 Scientific Generative Agent Jin, W., Barzilay, R., and Jaakkola, T. Junction tree vari- ational autoencoder for molecular graph generation. In International conference on machine learning, pp. 2323– 2332. PMLR, 2018. Liu, R., Mu, P., Yuan, X., Zeng, S., and Zhang, J. A gen- eral descent aggregation framework for gradient-based bi-level optimization. IEEE Transactions on Pattern Anal- ysis and Machine Intelligence, 45(1):38–57, 2022. Jin, Y., Fu, W., Kang, J., Guo, J., and Guo, J. Bayesian symbolic regression. arXiv preprint arXiv:1910.08892, 2019. Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. Lightgbm: A highly efficient gradient boosting decision tree. Advances in neural infor- mation processing systems, 30, 2017. Kingma, D. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations, San Diega, CA, USA, 2015. Kommenda, M., Burlacu, B., Kronberger, G., and Affen- zeller, M. Parameter identification for symbolic regres- sion using nonlinear least squares. Genetic Programming and Evolvable Machines, 21(3):471–501, 2020. Kramer, S., Cerrato, M., Dˇzeroski, S., and King, R. Au- tomated scientific discovery: From equation discov- ery to autonomous discovery systems. arXiv preprint arXiv:2305.02251, 2023. La Cava, W., Helmuth, T., Spector, L., and Moore, J. H. A probabilistic and multi-objective analysis of lexicase selection and ε-lexicase selection. Evolutionary Compu- tation, 27(3):377–402, 2019. La Cava, W., Orzechowski, P., Burlacu, B., de Franca, F., Virgolin, M., Jin, Y., Kommenda, M., and Moore, J. Con- temporary symbolic regression methods and their relative performance. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, 2021. Landrum, G. et al. Rdkit: A software suite for cheminformat- ics, computational chemistry, and predictive modeling. Greg Landrum, 8:31, 2013. Li, J., Liu, Y., Fan, W., Wei, X.-Y., Liu, H., Tang, J., and Li, Q. Empowering molecule discovery for molecule- caption translation with large language models: A chatgpt perspective. arXiv preprint arXiv:2306.06615, 2023. Li, W., Li, W., Sun, L., Wu, M., Yu, L., Liu, J., Li, Y., and Tian, S. Transformer-based model for symbolic re- gression via joint supervised learning. In The Eleventh International Conference on Learning Representations, 2022. Liu, H., Simonyan, K., and Yang, Y. 
DARTS: Differen- tiable architecture search. In International Conference on Learning Representations, 2019. Liu, Z., Roberts, R. A., Lal-Nag, M., Chen, X., Huang, R., and Tong, W. Ai-based language models powering drug discovery and development. Drug Discovery Today, 26 (11):2593–2607, 2021. Ma, P., Du, T., Tenenbaum, J. B., Matusik, W., and Gan, C. Risp: Rendering-invariant state predictor with dif- ferentiable simulation and rendering for cross-domain parameter estimation. In International Conference on Learning Representations, 2021. Ma, P., Chen, P. Y., Deng, B., Tenenbaum, J. B., Du, T., Gan, C., and Matusik, W. Learning neural constitutive laws from motion observations for generalizable pde dynam- ics. In International Conference on Machine Learning. PMLR, 2023. Ma, Y. J., Liang, W., Wang, G., Huang, D.-A., Bastani, O., Jayaraman, D., Zhu, Y., Fan, L., and Anandkumar, A. Eureka: Human-level reward design via coding large lan- guage models. In International Conference on Learning Representations, 2024. Macklin, M. Warp: A high-performance python framework for gpu simulation and graphics, March 2022. NVIDIA GPU Technology Conference. McConaghy, T. Ffx: Fast, scalable, deterministic symbolic regression technology. Genetic Programming Theory and Practice IX, pp. 235–260, 2011. Mundhenk, T. N., Landajuela, M., Glatt, R., Santiago, C. P., Faissol, D. M., and Petersen, B. K. Symbolic regres- sion via neural-guided genetic programming population seeding. In Advances in Neural Information Processing Systems, 2021. OpenAI. OpenAI: Introducing ChatGPT, 2022. URL https://openai.com/blog/chatgpt. OpenAI. OpenAI: GPT-4, 2023. URL https://openai. com/research/gpt-4. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730–27744, 2022. Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019. 11 Scientific Generative Agent Petersen, B. K., Larma, M. L., Mundhenk, T. N., Santi- ago, C. P., Kim, S. K., and Kim, J. T. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. In International Conference on Learning Representations, 2020. Popper, K. The logic of scientific discovery. Routledge, 2005. Ramakrishnan, R., Dral, P. O., Rupp, M., and Von Lilienfeld, O. A. Quantum chemistry structures and properties of 134 kilo molecules. Scientific data, 1(1):1–7, 2014. Riniker, S. and Landrum, G. A. Better informed distance geometry: using what we know to improve conforma- tion generation. Journal of chemical information and modeling, 55(12):2562–2574, 2015. Romera-Paredes, B., Barekatain, M., Novikov, A., Balog, M., Kumar, M. P., Dupont, E., Ruiz, F. J., Ellenberg, J. S., Wang, P., Fawzi, O., et al. Mathematical discoveries from program search with large language models. Nature, pp. 1–3, 2023. Rosenberg, A. and McIntyre, L. Philosophy of science: A contemporary introduction. Routledge, 2019. Schapire, R. E. The boosting approach to machine learning: An overview. Nonlinear estimation and classification, pp. 149–171, 2003. Schneider, G. Automating drug discovery. Nature reviews drug discovery, 17(2):97–113, 2018. Sharma, G. and Thakur, A. Chatgpt in drug discovery. 2023. 
Sinha, A., Malo, P., and Deb, K. Evolutionary algorithm for bilevel optimization using approximations of the lower level optimal solution mapping. European Journal of Operational Research, 257(2):395–411, 2017a. Sinha, A., Malo, P., and Deb, K. A review on bilevel op- timization: From classical to evolutionary approaches and applications. IEEE Transactions on Evolutionary Computation, 22(2):276–295, 2017b. Sulsky, D., Zhou, S.-J., and Schreyer, H. L. Application of a particle-in-cell method to solid mechanics. Computer physics communications, 87(1-2):236–252, 1995. Sumers, T., Yao, S., Narasimhan, K., and Griffiths, T. Cog- nitive architectures for language agents. Transactions on Machine Learning Research, 2024. ISSN 2835-8856. Survey Certification. Trinh, T. H., Wu, Y., Le, Q. V., He, H., and Luong, T. Solv- ing olympiad geometry without human demonstrations. Nature, 625(7995):476–482, 2024. Udrescu, S.-M., Tan, A., Feng, J., Neto, O., Wu, T., and Tegmark, M. Ai feynman 2.0: Pareto-optimal symbolic re- gression exploiting graph modularity. Advances in Neural Information Processing Systems, 33:4860–4871, 2020. Valipour, M., You, B., Panju, M., and Ghodsi, A. Sym- bolicgpt: A generative transformer model for symbolic regression. arXiv preprint arXiv:2106.14131, 2021. Virgolin, M., Alderliesten, T., and Bosman, P. A. Linear scaling with and within semantic backpropagation-based genetic programming for symbolic regression. In Pro- ceedings of the genetic and evolutionary computation conference, pp. 1084–1092, 2019. Virgolin, M., Alderliesten, T., Witteveen, C., and Bosman, P. A. Improving model-based genetic programming for symbolic regression of small expressions. Evolutionary computation, 29(2):211–237, 2021. Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z., Chandak, P., Liu, S., Van Katwyk, P., Deac, A., et al. Scientific discovery in the age of artificial intelligence. Nature, 620(7972):47–60, 2023. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35: 24824–24837, 2022. Weininger, D. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. Journal of chemical information and computer sciences, 28(1):31–36, 1988. Wuestman, M., Hoekman, J., and Frenken, K. A typology of scientific breakthroughs. Quantitative Science Studies, 1(3):1203–1222, 2020. Xue, C., Wang, X., Yan, J., Hu, Y., Yang, X., and Sun, K. Rethinking bi-level optimization in neural architecture search: A gibbs sampling perspective. In AAAI Confer- ence on Artificial Intelligence, volume 35, pp. 10551– 10559, 2021. Yang, C., Wang, X., Lu, Y., Liu, H., Le, Q. V., Zhou, D., and Chen, X. Large language models as optimizers. In International Conference on Learning Representations, 2024. Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., and Narasimhan, K. R. Tree of thoughts: Deliberate prob- lem solving with large language models. In Conference on Neural Information Processing Systems, 2023a. 12 Scientific Generative Agent Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., and Cao, Y. ReAct: Synergizing reasoning and act- ing in language models. In International Conference on Learning Representations, 2023b. Yoshikawa, N., Terayama, K., Sumita, M., Homma, T., Oono, K., and Tsuda, K. Population-based de novo molecule generation, using grammatical evolution. 
Chem- istry Letters, 47(11):1431–1434, 2018. Zheng, L., Yin, L., Xie, Z., Huang, J., Sun, C., Yu, C. H., Cao, S., Kozyrakis, C., Stoica, I., Gonzalez, J. E., et al. Efficiently programming large language models using sglang. arXiv preprint arXiv:2312.07104, 2023. Zhou, G., Gao, Z., Ding, Q., Zheng, H., Xu, H., Wei, Z., Zhang, L., and Ke, G. Uni-mol: A universal 3d molecu- lar representation learning framework. In International Conference on Learning Representations, 2023. Zhou, Z., Kearnes, S., Li, L., Zare, R. N., and Riley, P. Op- timization of molecules via deep reinforcement learning. Scientific reports, 9(1):10752, 2019. 13 A. Full Prompts System prompt for constitutive law discovery: Scientific Generative Agent You are an intelligent AI assistant for coding, physical simulation, and scientific discovery. Follow the user’s requirements carefully and make sure you understand them. Your expertise is strictly limited to physical simulation, material science, mathematics, and coding. Keep your answers short and to the point. Do not provide any information that is not requested. Always document your code as comments to explain the reason behind them. Use Markdown to format your solution. You are very familiar with Python and PyTorch. Do not use any external libraries other than the libraries used in the examples. System prompt for molecule design: You are an intelligent AI assistant for coding, molecule design, and scientific discovery. Follow the user’s requirements carefully and make sure you understand them. Your expertise is strictly limited to physical simulation, material science, chemistry, molecule design, mathematics, and coding. Keep your answers short and to the point. Do not provide any information that is not requested. Always document your code as comments to explain the reason behind them. Use Markdown to format your solution. You are very familiar with PyTorch. Your are very familiar with the SMILES notation (Simplified Molecular-Input Line-Entry System). Do not use any external libraries other than the libraries used in the examples. Coding format prompt for elastic constitutive law discovery: ## Format Requirements ### PyTorch Tips 1. When element-wise multiplying two matrix, make sure their number of dimensions match before the operation. For example, when multiplying ‘J‘ (B,) and ‘I‘ (B, 3, 3), you should do ‘J.view(-1, 1, 1)‘ before the operation. Similarly, ‘(J - 1)‘ should also be reshaped to ‘(J - 1).view(-1, 1, 1)‘. If you are not sure, write down every component in the expression one by one and annotate its dimension in the comment for verification. 2. When computing the trace of a tensor A (B, 3, 3), use ‘A.diagonal(dim1=1, dim2=2).sum(dim=1).view(-1, 1, 1)‘. Avoid using ‘torch.trace‘ or ‘Tensor.trace‘ since they only support 2D matrix. ### Code Requirements 1. The programming language is always python. 2. Annotate the size of the tensor as comment after each tensor operation. For example, ‘# (B, 3, 3)‘. 3. The only library allowed is PyTorch. Follow the examples provided by the user and check the PyTorch documentation to learn how to use PyTorch. 4. Separate the code into continuous physical parameters that can be tuned with differentiable optimization and the symbolic constitutive law represented by PyTorch code. Define them respectively in the ‘__init__‘ function and the ‘forward‘ function. 5. The first output of the ‘forward‘ function is the updated deformation gradient. 
Always remember the second output of the ‘forward‘ function is Kirchhoff stress tensor, which is defined by the matrix multiplication between the first Piola-Kirchhoff stress tensor and the transpose of the deformation gradient tensor. Formally, ‘tau = P @ FˆT‘, where tau is the Kirchhoff stress tensor, P is the first Piola-Kirchhoff stress tensor, and F is the deformation gradient tensor. Do not directly return any other type of stress tensor other than Kirchhoff stress tensor. Compute Kirchhoff stress tensor using the equation: ‘tau = P @ FˆT‘. 6. The proposed code should strictly follow the structure and function signatures below: ‘‘‘python import torch import torch.nn as nn class Physics(nn.Module): def __init__(self, param: float = DEFAULT_VALUE): """ Define trainable continuous physical parameters for differentiable optimization. Tentatively initialize the parameters with the default values in args. Args: param (float): the physical meaning of the parameter. """ super().__init__() self.param = nn.Parameter(torch.tensor(param)) def forward(self, F: torch.Tensor) -> torch.Tensor: """ Compute Kirchhoff stress tensor from deformation gradient tensor. Args: F (torch.Tensor): deformation gradient tensor (B, 3, 3). Returns: kirchhoff_stress (torch.Tensor): Kirchhoff stress tensor (B, 3, 3). """ return kirchhoff_stress ‘‘‘ ### Solution Requirements 14 Scientific Generative Agent 1. Analyze step-by-step what the potential problem is in the previous iterations based on the feedback. Think about why the results from previous constitutive laws mismatched with the ground truth. Do not give advice about how to optimize. Focus on the formulation of the constitutive law. Start this section with "### Analysis". Analyze all iterations individually, and start the subsection for each iteration with "#### Iteration N", where N stands for the index. Remember to analyze every iteration in the history. 2. Think step-by-step what you need to do in this iteration. Think about how to separate your algorithm into a continuous physical parameter part and a symbolic constitutive law part. Describe your plan in pseudo-code, written out in great detail. Remember to update the default values of the trainable physical parameters based on previous optimizations. Start this section with "### Step-by-Step Plan". 3. Output the code in a single code block "‘‘‘python ... ‘‘‘" with detailed comments in the code block. Do not add any trailing comments before or after the code block. Start this section with "### Code". Coding format prompt for plastic constitutive law discovery: ## Format Requirements ### PyTorch Tips 1. When element-wise multiplying two matrix, make sure their number of dimensions match before the operation. For example, when multiplying ‘J‘ (B,) and ‘I‘ (B, 3, 3), you should do ‘J.view(-1, 1, 1)‘ before the operation. Similarly, ‘(J - 1)‘ should also be reshaped to ‘(J - 1).view(-1, 1, 1)‘. If you are not sure, write down every component in the expression one by one and annotate its dimension in the comment for verification. 2. When computing the trace of a tensor A (B, 3, 3), use ‘A.diagonal(dim1=1, dim2=2).sum(dim=1).view(-1, 1, 1)‘. Avoid using ‘torch.trace‘ or ‘Tensor.trace‘ since they only support 2D matrix. ### Code Requirements 1. The programming language is always python. 2. Annotate the size of the tensor as comment after each tensor operation. For example, ‘# (B, 3, 3)‘. 3. The only library allowed is PyTorch. 
Follow the examples provided by the user and check the PyTorch documentation to learn how to use PyTorch. 4. Separate the code into continuous physical parameters that can be tuned with differentiable optimization and the symbolic deformation gradient correction model represented by PyTorch code. Define them respectively in the ‘__init__‘ function and the ‘forward‘ function. 5. The proposed code should strictly follow the structure and function signatures below: ‘‘‘python import torch import torch.nn as nn class Physics(nn.Module): def __init__(self, param: float = DEFAULT_VALUE): """ Define trainable continuous physical parameters for differentiable optimization. Tentatively initialize the parameters with the default values in args. Args: param (float): the physical meaning of the parameter. """ super().__init__() self.param = nn.Parameter(torch.tensor(param)) def forward(self, F: torch.Tensor) -> torch.Tensor: """ Compute corrected deformation gradient from deformation gradient tensor. Args: F (torch.Tensor): deformation gradient tensor (B, 3, 3). Returns: F_corrected (torch.Tensor): corrected deformation gradient tensor (B, 3, 3). """ return F_corrected ‘‘‘ ### Solution Requirements 1. Analyze step-by-step what the potential problem is in the previous iterations based on the feedback. Think about why the results from previous constitutive laws mismatched with the ground truth. Do not give advice about how to optimize. Focus on the formulation of the constitutive law. Start this section with "### Analysis". Analyze all iterations individually, and start the subsection for each iteration with "#### Iteration N", where N stands for the index. Remember to analyze every iteration in the history. 2. Think step-by-step what you need to do in this iteration. Think about if the plasticity is needed to improve performance. Remember that plasticity is not necessary. If your analysis supports plasticity, think about how to update deformation gradient using plasticity. Think about how to separate your algorithm into a continuous physical parameter part and a symbolic deformation gradient correction model part. Describe your plan in pseudo-code, written out in great detail. Remember to update the default values of the trainable physical parameters based on previous optimizations. Start this section with "### Step-by-Step Plan". 3. Output the code in a single code block "‘‘‘python ... ‘‘‘" with detailed comments in the code block. Do not add any trailing comments before or after the code block. Start this section with "### Code". Coding format prompt for molecule design: ## Format Requirements ### Code Requirements 1. The programming language is always python. 15 Scientific Generative Agent 2. Annotate the size of the tensor as comment after each tensor operation. For example, ‘# (B, 3, 3)‘. 3. Separate the code into: (1) python string ‘SMILES‘: the SMILES string describing the molecular topology structure and atomic types, and (2) matrix ‘coordinates‘ the 3D coordinates of all atoms. These representations should not include hydrogens. 4. The SMILES string should be valid. Use your knowledge about Simplified Molecular-Input Line-Entry System to help you design a valid one. 5. The number of atoms in the SMILES string should be no less than 8, which means the number of atoms should be >= 8. Try to generate molecule with diverse atoms. 6. The 3D coordinates of the atoms should not be overlapping with each other. In another word, every row in the matrix ‘coordinates‘ should be distinct from each other. 7. 
The ‘coordinates‘ matrix is of shape ‘(N, 3)‘ where ‘N‘ stands for the number of atoms in the molecule. It should be identical to the number of atoms that the proposed SMILES string represents. State out the shape of any matrix defined in the comment as shown in the following example. State out the number of atoms that the SMILES string represents in the comment as shown in the following example.
8. The discrete SMILES string is critical in this problem since it defines the structure and cannot be tuned using differentiable optimization. Please propose different SMILES string from all examples or iterations above to discover and evaluate more structure. This is very important.
9. The proposed code should strictly follow the structure and function signatures below:
‘‘‘python
SMILES: str # N atoms
coordinates: list[list[float]] # (N, 3)
‘‘‘

### Solution Requirements

1. Analyze step-by-step what the potential problem is in the previous iterations based on the feedback. Think about why the results from previous molecule structure mismatched with the ground truth. Do not give advice about how to optimize. Focus on the formulation of the SMILES string. Start this section with "### Analysis". Analyze all iterations individually, and start the subsection for each iteration with "#### Iteration N", where N stands for the index. Remember to analyze every iteration in the history.
2. Think step-by-step what you need to do in this iteration. Think about how to separate your algorithm into a continuous 3D coordinate system part and a discrete SMILES string part. Remember the SMILES string proposed should always be different from previous iterations. After propose the new SMILES string, compute and count step-by-step how many atoms it contains. The continuous parameter should follow the number of atoms in the SMILES string. Describe your plan in pseudo-code, written out in great detail. Start this section with "### Step-by-Step Plan".
3. Output the code in a single code block "‘‘‘python ... ‘‘‘" with detailed comments in the code block. After the SMILES string, compute the number of atoms in it by counting. Remember that the number of atoms in the SMILES string should be no less than 8, which means the number of atoms should be >= 8. Try to generate molecule with diverse atoms. Do not add any trailing comments before or after the code block. Start this section with "### Code".

B. More Explanations

B.1. Data Workflow

The full input to the LLM has 3 main parts: (i) the system prompt, (ii) the iteration information, and (iii) the format prompt. For the system prompt, we insert it into the LLM at the beginning or input it as a special instruction, depending on the type of LLM. For the iteration information, we first concatenate the code and its feedback and then simply stack the top K solutions. Finally, we append the format prompt at the end to regularize the expected output. From our experiments, it is important to keep this order of prompts to ensure performance and successful parsing. More precisely, we show this process in the following Python-like code:

prompts = []
prompts.append(system_prompt)
for solution in reversed(solutions.topk()):
    iteration_prompt = solution.code + '\n' + solution.feedback
    prompts.append(iteration_prompt)
prompts.append(format_prompt)
full_prompt = '\n'.join(prompts)

B.2. Differences to Symbolic Regression Task

• Our problem focuses on loss-guided general scientific discovery, which is a super-set of regular regression problems.
In the constitutive law search tasks, we do not directly feed the input/output pairs to our method. Instead, we consider a much more challenging task: apply the generated constitutive law recursively and use the overall loss as the performance metric. Concretely, a classic SR method solves arg min_f ∥f(X) − y∥ given ⟨X, y⟩ pairs, whereas our method solves arg min_f ∥g(f(X))∥ given ⟨X, g(f(X))⟩ pairs, where g is a complex function like a physical simulation. It is easy to construct g to cover the former case using the latter formulation, proving the generality of our problem setup. We formulate our problem as such to reflect a more realistic scenario in scientific discovery, where direct supervision is extremely sparse.
• Our method supports an arbitrary number of input variables and output features, whereas most SR methods (Valipour et al., 2021) have limitations on the number of inputs and outputs. The input limitation strongly caps the complexity of tasks they can solve, and the output limitation forces them to ignore the structural correlation between output dimensions. In comparison, our method supports arbitrary problem settings thanks to the code-based representation, which enables multi-dimensional arrays and tensor operations.
• Our model adapts to multi-disciplinary applications easily, while traditional SR methods typically incorporate domain experts' priors via hard-coded constraints and heuristics (Udrescu et al., 2020), which are limited, domain-specific, and difficult to customize. Our method is built upon LLMs pre-trained on internet-level data that contains multi-disciplinary natural language, mathematical expressions, and code. As a result, it is easy for users to customize it and adapt it to their own diverse applications via natural language guidance.

C. More Experiments

C.1. Symbolic Regression

We present the full results of the comparison to symbolic regression methods in Table 5.

Table 5. Symbolic Regression.

Method | R2 ↑ | MSE ↓ | MAE ↓ | Symbolic
AIFeynman (Udrescu et al., 2020) | 0.05105 | 22814675.8 | 2520.0 | ✓
DSR (Petersen et al., 2020) | 0.57527 | 10966411.0 | 2045.0 | ✓
BSR (Jin et al., 2019) | 0.66526 | 8642965.0 | 1938.6 | ✓
AdaBoost (Schapire, 2003) | 0.75058 | 6439962.9 | 1777.7 | ✗
GP-GOMEA (Virgolin et al., 2021) | 0.77734 | 5749076.4 | 1580.1 | ✓
SBP-GP (Virgolin et al., 2019) | 0.81773 | 4706077.0 | 1367.5 | ✓
LightGBM (Ke et al., 2017) | 0.83368 | 4294433.7 | 1129.9 | ✗
XGBoost (Chen & Guestrin, 2016) | 0.87775 | 3156500.5 | 1109.2 | ✗
MRGP (Arnaldo et al., 2014) | 0.91074 | 2304682.5 | 950.5 | ✓
EPLEX (La Cava et al., 2019) | 0.91851 | 2104070.1 | 122.2 | ✓
FFX (McConaghy, 2011) | 0.93124 | 1775263.7 | 801.7 | ✓
MLP | 0.98240 | 454461.5 | 366.3 | ✗
FEAT (Cava et al., 2019) | 0.98761 | 319800.6 | 336.1 | ✓
DSO (Mundhenk et al., 2021) | 0.99642 | 92374.9 | 168.6 | ✓
Operon (Kommenda et al., 2020) | 0.99684 | 81577.9 | 92.4 | ✓
SymbolicGPT (Valipour et al., 2021) | 0.52333 | 6862154.7 | 1680.7 | ✓
NeSymReS (Biggio et al., 2021) | N/A to >3 variables | ✓
T-JSL (Li et al., 2022) | N/A to >2 variables | ✓
Ours | 0.99901 | 17424.6 | 86.4 | ✓

C.2. Longer Iteration

In order to further investigate the potential of our method and to ablate the hyper-parameters for practitioners, we add a new study on the number of iterations (question-answering cycles). We repeat our experiment in Table 1 with the number of iterations prolonged to 20 and report the performance in Table 6.
Table 6. Longer Iteration.

#Iterations | (a) ↓ | (b) ↓ | (c) ↓ | (d) ↓ | (e) ↓ | (f) ↓ | (g) ↓ | (h) ↓
5 | 5.2e-5 | 2.1e-1 | 6.0e-2 | 1.4e-12 | 1.3e-4 | 1.1e-1 | 5.4e-1 | 3.6e-5
20 | 4.2e-6 | 4.0e-4 | 2.5e-3 | 1.4e-12 | 1.3e-4 | 6.5e-2 | 1.2e-1 | 5.6e-6
Improvement | +1138.1% | +52400.0% | +2300.0% | 0.0% | 0.0% | +69.2% | +350.0% | +542.9%

As shown in the table, the number of iterations turns out to be a determining hyper-parameter with a significant impact on performance. While it has little effect on the relatively easier tasks, it dramatically improves performance on the most challenging tasks, including (b) and (c). For practitioners, the number of iterations should be the first hyper-parameter considered when adapting our method to their own tasks.

D. More Results

D.1. Constitutive Law Discovery (a)

The best solution on task (a) optimized by our method:

import torch
import torch.nn as nn

class Physics(nn.Module):
    # Best values from the training curves
    DEFAULT_YOUNGS_MODULUS_LOG = 13.03
    DEFAULT_POISSONS_RATIO_SIGMOID = -1.99

    def __init__(self, youngs_modulus_log: float = DEFAULT_YOUNGS_MODULUS_LOG,
                 poissons_ratio_sigmoid: float = DEFAULT_POISSONS_RATIO_SIGMOID):
        """
        Define trainable continuous physical parameters for differentiable optimization.
        Initialize the parameters with the best values from previous feedback.
        """
        super().__init__()
        # Initialize the parameters as trainable parameters
        self.youngs_modulus_log = nn.Parameter(torch.tensor(youngs_modulus_log))  # Log of Young's modulus
        self.poissons_ratio_sigmoid = nn.Parameter(torch.tensor(poissons_ratio_sigmoid))  # Sigmoid of Poisson's ratio

    def forward(self, F: torch.Tensor) -> torch.Tensor:
        """
        Compute Kirchhoff stress tensor from deformation gradient tensor.

        Args:
            F (torch.Tensor): Deformation gradient tensor (B, 3, 3).

        Returns:
            kirchhoff_stress (torch.Tensor): Kirchhoff stress tensor (B, 3, 3).
        """
        # Convert the parameters to their actual values
        youngs_modulus = self.youngs_modulus_log.exp()  # (1,)
        poissons_ratio = torch.sigmoid(self.poissons_ratio_sigmoid) * 0.49  # (1,)

        # Lame parameters
        mu = youngs_modulus / (2 * (1 + poissons_ratio))  # Shear modulus (1,)
        lam = youngs_modulus * poissons_ratio / ((1 + poissons_ratio) * (1 - 2 * poissons_ratio))  # First Lame parameter (1,)

        # Deformation gradient determinant J and its reshape for operations (B,)
        J = F.det().view(-1, 1, 1)
        # Inverse transpose of F for stress computation (B, 3, 3)
        F_invT = F.inverse().transpose(1, 2)

        # Compute first Piola-Kirchhoff stress tensor P (B, 3, 3)
        # Volumetric part
        P_vol = lam * (J - 1) * F_invT
        # Deviatoric part combining neo-Hookean behavior
        # This accounts for the near incompressible nature of the material
        P_dev = mu * (F - (1 / J) * F_invT)

        # Compute Kirchhoff stress tensor tau by multiplying the first Piola-Kirchhoff with the transpose of F (B, 3, 3)
        kirchhoff_stress = P_vol + P_dev @ F.transpose(1, 2)
        return kirchhoff_stress
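As a quick plausibility check (our addition, not part of the search output): an undeformed configuration should produce zero Kirchhoff stress, since F = I gives J = 1 and F^{-T} = I, so both the volumetric and deviatoric parts above vanish.

import torch

# Sanity check (ours): identity deformation gradients give zero stress.
model = Physics()                    # optimized default parameters from above
F = torch.eye(3).repeat(8, 1, 1)     # (B, 3, 3) batch of undeformed states
tau = model(F)                       # (B, 3, 3) Kirchhoff stress
print(tau.abs().max())               # expect (numerically) zero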
""" super().__init__() self.gamma = nn.Parameter(torch.tensor(gamma)) # Initialize gamma, (1,) 18 Scientific Generative Agent 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 def forward(self, F: torch.Tensor) -> torch.Tensor: """ Compute corrected deformation gradient tensor F, by applying a soft correction proportional to the deviation of its determinant from 1, effectively guiding the gradient towards physically realistic states. Args: F (torch.Tensor): deformation gradient tensor (B, 3, 3). Returns: F_corrected (torch.Tensor): corrected deformation gradient tensor (B, 3, 3). """ # Compute determinant of F and create a condition based on its value, (B,) J = torch.det(F) # (B,) # Apply a smooth step function as a deviation condition, (B,) J_deviation_condition = torch.tanh(J - 1) # (B,) # Prepare for correction, taking into account the batch dimension (B,) gamma_correction = self.gamma * J_deviation_condition.view(-1, 1, 1) # (B, 1, 1) # Identity matrix, expanded for batch size (B, 3, 3) I = torch.eye(3, device=F.device).repeat(F.size(0), 1, 1) # (B, 3, 3) # Correct F by pulling towards identity matrix when determinant deviates from 1, (B, 3, 3) F_corrected = F - gamma_correction * (F - I) # (B, 3, 3) return F_corrected D.3. Constitutive Law Discovery (c) The best solution on task (c) optimized by our method: 1 import torch 2 import torch.nn as nn 3 4 # The default value for elastic_limit is set to the best from the last iteration, and 5 # we initialize a new parameter for capturing the hardening effect 6 DEFAULT_ELASTIC_LIMIT = 0.92 7 DEFAULT_HARDENING_FACTOR = 0.1 8 9 class Physics(nn.Module): 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 def __init__(self, elastic_limit: float = DEFAULT_ELASTIC_LIMIT, hardening_factor: float = DEFAULT_HARDENING_FACTOR): """ Define trainable continuous physical parameters for differentiable optimization. Args: elastic_limit (float): the parameter determining the initial yield strength. hardening_factor (float): the parameter controlling the rate of hardening. """ super().__init__() self.elastic_limit = nn.Parameter(torch.tensor(elastic_limit)) # () self.hardening_factor = nn.Parameter(torch.tensor(hardening_factor)) # () def forward(self, F: torch.Tensor) -> torch.Tensor: """ Compute corrected deformation gradient from deformation gradient tensor. Args: F (torch.Tensor): deformation gradient tensor (B, 3, 3). Returns: F_corrected (torch.Tensor): corrected deformation gradient tensor (B, 3, 3). 
""" # Obtain the polar decomposed rotation (R) and stretch (S) U, S, V = torch.svd(F) R = U @ V.transpose(-2, -1) # U: (B, 3, 3), S: (B, 3), V: (B, 3, 3) # R: (B, 3, 3) # Correct the S tensor with hardening # Assuming hardening affects the elastic limit linearly with accumulated plastic strain plastic_strain = torch.relu(S - self.elastic_limit) # Presumed plastic strain hardening_adjustment = 1.0 + (self.hardening_factor * plastic_strain) S_clamped = torch.min(S, self.elastic_limit * hardening_adjustment) # Clamp S with hardening S_corrected = torch.diag_embed(S_clamped) # S_corrected: (B, 3, 3) F_corrected = R @ S_corrected # (B, 3, 3) Corrected deformation gradient tensor # Ensure volume preservation J = torch.det(F).view(-1, 1, 1) J_corrected = torch.det(F_corrected).view(-1, 1, 1) # (B, 1, 1) Determinant of the corrected F volume_ratio = (J / J_corrected) ** (1/3) F_corrected = F_corrected * volume_ratio # (B, 3, 3) Volume-preserved F_corrected # (B, 1, 1) Determinant of the input F for volume return F_corrected 19 Scientific Generative Agent D.4. Constitutive Law Discovery (d) The best solution on task (d) optimized by our method: DEFAULT_VALUE = 0.0 1 import torch 2 import torch.nn as nn 3 4 class Physics(nn.Module): 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 # Best guess based on previous behavior def __init__(self, param: float = DEFAULT_VALUE): """ Define trainable continuous physical parameters for differentiable optimization. The parameter modulates corrections towards nearly isochoric behavior. """ super().__init__() self.param = nn.Parameter(torch.tensor([param])) # Scalar modulation parameter def forward(self, F: torch.Tensor) -> torch.Tensor: """ Symbolic deformation gradient correction model. """ # Compute the determinant of the deformation gradient (volumetric change) J = torch.det(F) # (B,) # Compute the volumetric part of the deformation gradient: Jˆ(1/3)*I I = torch.eye(3).to(F.device) # Expand the identity matrix to the entire batch I = I.view(1, 3, 3).expand(F.size(0), -1, -1) # (B, 3, 3) # Calculate volumetric part vol_deform = torch.pow(J, 1.0 / 3.0).view(-1, 1, 1) * I # (B, 3, 3) # (3, 3) # Calculate the deviatoric part of F: divide F by Jˆ(1/3) dev_deform = F / torch.pow(J, 1.0 / 3.0).view(-1, 1, 1) # (B, 3, 3) # Modulate correction by self.param and construct the correction term correction = self.param * (I - dev_deform) # (B, 3, 3) # Combine the volumetric part with the deviatoric correction F_corrected = vol_deform + correction # (B, 3, 3) return F_corrected # (B, 3, 3) D.5. Molecule Design (e) The Top-20 solution on task (e) optimized by our method: 1. C1=CC=C(Br)C=C1C2=CN=CC=C2 2. C1=CC=C(I)C=C1C2=CC=NC=C2 3. C1=CN(C=C1)C2=CC(=CC(=C2)F)O 4. C1C2CC3CC(C1)C(C2)(C3)N 5. C1CC2OC1COC2=O 6. C1=CC=C(C=C1)C2=NC(Cl)=NC=C2 7. C1=CC=C(I)C=C1C2=NC(C(F)(F)F)=CN=C2 8. C1N2C3C4OC(C5)C13C24C5 9. C1=CC=C2N=CC=C(Br)C2=C1 10. C=CC1C(=O)NC(=S)N1C 11. C1=CC=CS1C2=NC=CC(Cl)=C2 12. C1OC2C(O)C3C(N)C1C23 13. O=C(NC1=CC=CC=C1)C2CCOCC2Cl 14. C1=CC=C2C(=C1)N=CN2C3=CC=CC=C3Br 20 Scientific Generative Agent 15. C1OC2C3N4C5C6C7C8C1C2C3C4C5C6C7C8 16. C1=CC=C(C=C1)C2=CN=CC=C2C=CCl 17. C1=CC=C(S1)C2=CC=C(O2)C(F)F 18. C1=CC=C2C(=C1)C(=NN2)C3=CC=CC=C3Br 19. C1=CC=C2N=C(C(=O)NC2=C1)C3=CC=CS3 20. C1=CC=C(C=C1)C2=NN=C(S2)Cl D.6. Molecule Design (f) The Top-20 solution on task (f) optimized by our method: 1. SC(F)(F)CC(Cl)(Cl)N 2. C1=CSC(=C1)NNC(=O)CF 3. C1C2C3CSC1N2OC3F 4. C1CSC(Cl)C(N)C1Cl 5. CC(C(=O)O)NC(F)(F)F 6. 
O=C(O)C(F)=C(Cl)C(Br)C=O
7. C1CSC(C(=O)O)N=C1F
8. C1=CC(=CS1)C2=CC=C(Br)C(F)=C2
9. O=S(=O)(N)C1=CC=C(Br)C=C1
10. CC(C(=O)O)C(Cl)C(Br)C(N)C(I)
11. C1SC(Cl)C(C1)C(=O)O
12. FC1=CC=C(C=C1)CS
13. C1CSC2C3CC(Cl)C1C23
14. C1=CC(=O)N(C2=CS1)C2=O
15. C1=CC=C(N)C(Cl)=C1Cl
16. C1SCC2C1C1=C(C=O)C=CC1C2Cl
17. C1=CC2=C(N=C(I)C=C2)C=C1
18. NC(CS)C(C(=O)O)Cl
19. C1=CC=NC2=C1C(=O)SC2=CCBr
20. C1=CC=C2C(=C1)C(=CS2)Cl

D.7. Molecule Design (g)

The Top-20 solution on task (g) optimized by our method:

1. c1cc(sc1Cl)C(C(F)(F)F)N
2. C(C(Cl)Cl)C1=CSC(N)=N1
3. C1C2CC3NC1C3C2O
4. C1OC2C3NOC1C3C2
5. C1C(Cl)C2CNC1C2O
6. C1(=CC(=C(N1Cl)O)C#N)S
7. C1COC2C3CC4(NC2C34)O1
8. c1cc(c(c(c1)Cl)O)C(F)(F)F
9. O1CCN2CC(F)C2C1
10. N1CC2OC(F)C2C1
11. C1(O)C2C(NC2C1C)C
12. C1=CC2=C(S1)C(=CC(=C2)P)I
13. C1CC2NCC(C1)C2O
14. C1CC2SCC(C1)N2
15. C1=NC2=CS(=O)(=O)N=C12
16. C1OC2C3CC(S)C1C23
17. C1C2C(NC=O)C(Cl)C1OC2
18. O=S(=O)(c1ccccc1)N
19. C1C=CC(O1)(F)N2C=CC(Br)=C2Cl
20. O=N(=O)C=C(C)C=C(C)N=O

D.8. Molecule Design (h)

The Top-20 solution on task (h) optimized by our method:

1. CC(NC(=O)C(Cl)C(=O)O)CSC
2. C1=CC(=CC=C1)C2=NSN=C2
3. FC(F)Oc1ccccc1N
4. O=C1NC(=O)SC2=C1C=CC=C2
5. C1NOC2C1SC1C2N1Cl
6. C1CSCC(N)C1N=C(O)C2=CC=CS2
7. C1C2CC(NC1=O)C(Cl)C2I
8. C1OC2SC3C4OC(F)S4C13C2
9. SC(Cl)(Cl)C1=CC=CC=C1O
10. C1=NOC(=C1)C2C(C(=O)NC2=O)Cl
11. C1OC2C3NCC4S1C23C4
12. C1C2C(N(C1Cl)C2=O)F
13. OC1C2C(O1)N=CS(=O)C2
14. C1OC2C3N4C1SC23N4
15. C1CSC(C2=NC=CS2)N1
16. C1CSCC(N1)C2=CC=CS2
17. O=P1(OP(=O)(OC1=O)C2=CN=CC=C2)I
18. C1CC2(CNC2)C(O1)C3=CN=CN3
19. O=C1SCCN(C1)C2=CN=CO2
20. C1=CC=NC2=C1C(=O)N=C(S2)Cl

D.9. Imaginary Constitutive Law

The best solution on the imaginary constitutive law invention task optimized by our method:

import torch
import torch.nn as nn

# Default values for the physical parameters based on previous iterations
DEFAULT_KAPPA = 0.08
DEFAULT_MU = 0.28

class Physics(nn.Module):
    def __init__(self, kappa: float = DEFAULT_KAPPA, mu: float = DEFAULT_MU):
        """
        Initialize the continuous physical parameters kappa and mu for differentiable optimization.
        """
        super().__init__()
        self.kappa = nn.Parameter(torch.tensor(kappa))  # Bulk modulus correction factor (scalar)
        self.mu = nn.Parameter(torch.tensor(mu))  # Shear modulus correction factor (scalar)

    def forward(self, F: torch.Tensor) -> torch.Tensor:
        """
        Compute the corrected deformation gradient from the deformation gradient tensor.
        """
        B = F.size(0)  # Batch size (scalar)
        I = torch.eye(3, device=F.device).unsqueeze(0).expand(B, -1, -1)  # Identity matrix (B, 3, 3)
        J = torch.det(F).view(-1, 1, 1)  # Jacobian determinant (B, 1, 1)

        # Volume correction factor (B, 1, 1)
        vol_correction_factor = torch.clamp(self.kappa * (J - 1), min=0.0, max=1.0)
        vol_correction = vol_correction_factor * I  # Volume correction term (B, 3, 3)

        # Compute trace of F for shape correction (B, 1, 1)
        trace_F = F.diagonal(dim1=1, dim2=2).sum(dim=1).view(-1, 1, 1)
        dev_F = F - (trace_F / 3) * I  # Deviatoric part of F (B, 3, 3)

        # Shape correction factor (scalar)
        shape_correction_factor = torch.clamp(self.mu, min=0.0, max=1.0)
        shape_correction = shape_correction_factor * dev_F  # Shape correction term (B, 3, 3)

        F_corrected = F - vol_correction - shape_correction  # Corrected deformation gradient (B, 3, 3)
        return F_corrected
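All of the Physics modules above expose their discovered constants as nn.Parameter objects, so each candidate law can be refined by gradient descent inside a differentiable simulation loop. The following is a minimal, self-contained sketch of that inner optimization, not the released implementation: simulation_loss is a hypothetical stand-in for the differentiable simulator's data-fitting objective, and the batch of deformation gradients is synthetic.

import torch

def simulation_loss(physics: torch.nn.Module) -> torch.Tensor:
    # Hypothetical objective: in the actual system this would be the discrepancy
    # between a differentiable simulation rollout and observed data.
    F = torch.eye(3) + 0.05 * torch.randn(8, 3, 3)  # stand-in deformation gradients (B, 3, 3)
    target = torch.eye(3).expand(8, 3, 3)           # stand-in ground-truth signal
    return ((physics(F) - target) ** 2).mean()

physics = Physics()  # e.g., the D.9 module above; any variant with nn.Parameters works
optimizer = torch.optim.Adam(physics.parameters(), lr=1e-2)

for step in range(100):
    optimizer.zero_grad()
    loss = simulation_loss(physics)
    loss.backward()   # gradients flow into kappa / mu (or the other trainable constants)
    optimizer.step()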
ai_researcher
1
LLM_Based_Biological_Named_Entity_Recognition_from_Scientific_Literature.pdf
Astro-NER — Astronomy Named Entity Recognition: Is GPT a Good Domain Expert Annotator?

Julia Evans and Sameer Sadruddin and Jennifer D'Souza
TIB - Leibniz Information Centre for Science and Technology
Welfengarten 1B, 30167 Hanover, Germany

Abstract

In this study, we address one of the challenges of developing NER models for scholarly domains, namely the scarcity of suitable labeled data. We experiment with an approach using predictions from a fine-tuned LLM model to aid non-domain experts in annotating scientific entities within astronomy literature, with the goal of uncovering whether such a collaborative process can approximate domain expertise. Our results reveal moderate agreement between a domain expert and the LLM-assisted non-experts, as well as fair agreement between the domain expert and the LLM model's predictions. In an additional experiment, we compare the performance of finetuned and default LLMs on this task. We have also introduced a specialized scientific entity annotation scheme for astronomy, validated by a domain expert. Our approach adopts a scholarly research contribution-centric perspective, focusing exclusively on scientific entities relevant to the research theme. The resultant dataset, containing 5,000 annotated astronomy article titles, is made publicly available.

1 Introduction

Named Entity Recognition (NER) is an essential tool in modern NLP pipelines, facilitating many downstream tasks. One such application is extracting information for populating knowledge graphs (KG) and other digital information structures (D'Souza and Auer, 2022). However, there is a persistent bottleneck limiting the development of KGs for scientific disciplines: scholarly-focused NER poses unique challenges not addressed by generic NER solutions (D'Souza and Auer, 2022; Enkhsaikhan et al., 2021).

For general purpose NER, there is an abundance of labeled English text data, along with readily accessible NER tools. However, in the context of highly specialized scholarly domains, even English may present a low-resource data scenario when appropriately labeled data is rare. Technical jargon and the particular stylistics of academic writing, as well as unique entities-of-interest beyond those common in NER, render existing NER datasets and tools difficult to apply (Enkhsaikhan et al., 2021). At the same time, generating high-quality labeled data in scholarly domains is especially challenging due to a limited pool of qualified annotators and their potential reticence to participate in the annotation process.

The surge in the development of Large Language Models (LLMs) in recent years has led to research investigating whether LLMs can support, or even replace, annotators, thus easing the burden of generating labeled text data (Wang et al., 2021; Ding et al., 2023). Such experiments are particularly relevant for annotation tasks requiring uncommon or highly-specialized expert knowledge.

In this work, we present an approach to address the bottleneck issue of limited expert annotator availability. We use predictions from a finetuned GPT-3.5 model (Brown et al., 2020) to support non-domain expert annotators in the task of annotating scientific entities in astronomy literature. This is a highly complex task for non-domain experts.
Scholarly papers in astronomy contain a large set of scientific terminology for celestial ob- jects, astronomical phenomena, and astrophysical concepts, which must be understood with preci- sion and nuance. Moreover, these concepts must be interpreted within the broader context of astron- omy research, which requires knowledge of the objectives of a research work, the significance of its findings, and its connection with previous works and established theories. Given these complexities, on the one hand, domain expert annotators are pre- ferred. On the other hand, however, an approach which allowed non-domain expert annotators to perform at a similar level of competency would greatly benefit the speed at which such specialized datasets could be produced. The contributions of this paper are as follows. 1) The release of a corpus of titles from astronomy literature annotated with the scientific entities that reflect the contribution of the work. 2) A quantita- tive empirical assessment of three LLM variants in both default and finetuned states, for our defined astronomy scientific named entity recognition task. Additionally, state-of-the-art reported models for a different NER task for astronomy were also fine- tuned to our task dataset and released as baseline results. 3) An evaluation of the feasibility of using LLMs to assist in annotation tasks where special- ized knowledge is required but no domain expert is available. 2 Related Work There are multiple data labeling and structuring approaches that can be followed when creating scholarly knowledge representations. Grezes et al. (2022) organized the DEAL shared task focused on astronomy NER in which they propose a set of thirty-three entities and enlist a domain expert to annotate text fragments from scholarly articles in astronomy. They divide their labels into five categories: 1) generic NER entities (Person,1 Orga- nization, Location); 2) entities related to facilities for studying astrophysics (Observatory, Telescope); 3) entities related to funding (Grant, Proposal); 4) astronomical object entities (CelestialObject); and 5) entities found in academic literature (Citation, URL). We take a somewhat different perspective and follow previous work in contribution-centric NER (D’Souza and Auer, 2022; D’Souza, 2023) by fo- cusing exclusively on the entities pertinent to de- scribing the research contribution of a paper. In this approach, “only those entities that are either the outcome of a particular research endeavor or used to support the outcome of that work are candi- date extraction targets” (D’Souza and Auer, 2022). To that end, an appropriate annotation scheme en- compasses both domain-specific entities such as AstrObject and PhysicalQuantity, as well as more domain-agnostic research-focused entities like Re- searchProblem and Method. However, annotation of this nature is often an expensive and time-consuming process, made all the more difficult by the challenge of finding, re- cruiting, and financing annotators who possess the required specialized knowledge. Wang et al. (2021) 1Throughout this paper, we use UpperCamelCase to indi- cate entity labels. have proposed an approach that potentially offers a solution to this predicament, demonstrated in their experiments involving GPT-3 generated labels for various NLP datasets. 
Their findings indicate that optimal performance is attained through a combi- nation of GPT-3 and human annotators, specifically by establishing a minimum confidence threshold for GPT-3 predictions and subsequently undertak- ing manual relabeling for instances falling below that threshold. In another study, Ding et al. (2023) explored three approaches utilizing GPT-3 for annotating or generating training data for NER and other NLP tasks. Their research demonstrates that an NER model trained on a moderate volume of GPT-3 generated data (at least 1500 samples) outperforms a model trained on a comparatively smaller dataset of human-annotated data (100 samples), which is what might be obtainable for a similar cost and time expenditure. The best performance was achieved by leveraging Wikidata to extract example entities and prompting GPT-3 to generate sentences using said entities; the worst performance was using GPT- 3 purely for annotating existing unlabeled data. Finally, Hedderich et al. (2021) note that domain- adaptation by finetuning a general-domain model is a common solution that improves performance on tasks within the target domain. Given that schol- arly domains are highly specialized, such domain- adaptation may be relevant to the task of LLM annotation as well. 3 Our Corpus 3.1 Task Definition In this paper, we present our Astro-NER corpus: a collection of 5000 astronomy article titles anno- tated with contribution-centric scientific entities. This corpus was constructed, in part, by finetuning a GPT-3.5 model (see Section 3.3) for the task of astronomy literature annotation, and then making the predictions available to non-domain expert an- notators. Rather than using a confidence threshold for determining when to accept the GPT labels (as in Wang et al. (2021)), annotators considered every label and used their own judgement. The labels and their definitions can be found in Table 1 (see Sec- tion 3.2 for more information on how these labels were selected). Our data source consists of the titles from around 15,000 astronomy articles with the CC-BY redis- tributable license, downloaded from Elsevier. From this, approximately 5000 titles were randomly se- lected for annotation by two graduate students. An- notator 1 is a PhD student in Computational Lin- guistics and Annotator 2 is a master’s student in Computer Science. Both annotators possess ad- vanced proficiency in scientific English. 3.2 Annotation Process Both the scientific entity labels and the annota- tion guidelines were refined through an iterative process. An initial list of candidate labels was drawn from the definitions in previous works on astronomy NER (Becker et al., 2005; Murphy et al., 2006; Grezes et al., 2022), contribution-centric NER (D’Souza and Auer, 2022; D’Souza, 2023), and the top-level concepts in an astronomy-specific ontology (Derriere et al., 2010). This yielded 36 potential labels, of which not all were suited to our task. Some labels from previous contribution- centric NER do not apply to the domain of astron- omy, such as Language and Dataset. Additionally, several labels could be subsumed under broader categories, such as Star, Planet, and Nebula under the label of AstrObject. After removing such labels which were too fine-grained for our purposes or unrelated to our task, a set of 21 labels remained. A small pilot annotation was performed on a ran- dom sample of 50 titles to evaluate the coverage and applicability of the labels. 
After examining the results, it was decided to tentatively remove 12 unused labels, and add an additional 3 labels for relevant entities which were not covered un- der the existing definitions, resulting in 12 labels. The unused labels were primarily concerned with specific measurements or data properties such as Duration, Luminosity, and Spectral Feature. Subse- quently, a second pilot annotation was performed on a random sample of 100 titles with the new set of labels. For the final assessment, the astronomy subject librarian at the German National Library of Science and Technology was consulted on the conceptual relevance and technical validity of the selected la- bels for the field of astronomy. On his advice, 2 labels were renamed, 2 were removed, and 2 new labels were added. Additionally, feedback on the 100 annotated titles provided explicit training in accurately selecting entities. See Table 2 for an overview of the selection process. Annotation guidelines were also developed in tandem with the labels. As a foundation, we used the annotation guidelines from previous work in contribution-centric NER (D’Souza and Auer, 2022), which defined the linguistic and contex- tual considerations for identifying entity types and spans. After each round of pilot annotations, the guidelines were reviewed and some small adapta- tions made. One particularity of our annotation scheme is that some entities may correspond to both Research- Problem or Method and another label. For instance, “jet quenching” is a phenomenon in which some types of particles produced in the early stages of a collision lose energy as they traverse the collision- created medium. This is a Process according to our label definitions. In the case that it is the subject of the investigation, it would also be a Research- Problem. As only one label per entity is allowed, we decided on a precedence hierarchy in which Re- searchProblem is selected above any other labels which may apply, and Method is selected above any other labels except ResearchProblem. These two entity types are prioritized given their centrality to contribution-centric knowledge representations. The most significant principles guiding our an- notation process can be summarized as follows. 1. There are no restrictions on the morphosyntac- tic form of entities, but noun phrases without articles are preferred wherever possible. 2. Include prepositions only if they are indeed part of the term itself or modify the entity in an essential way. 3. Select the most precise text reference possible, including all necessary modifiers, as a single unit. Consider the intended meaning in the given context to determine whether a modi- fier is necessary – anything that changes the meaning of a term ought to be included. 4. Given an expression in which several concepts or terms are nested or containing conjunctions with ellipsis of a shared noun phrase, annotate the entire sequence as one entity. 5. Follow the precedence hierarchy of Research- Problem > Method > all other entity types. After finalizing the labels and guidelines, the annotation task was conducted in two phases. Phase I. Annotators determined entities by read- ing the paper’s abstract, looking up definitions of terms, and/or consulting ChatGPT. Label AstrObject AstroPortion ChemicalSpecies Instrument Measurement Method Morphology PhysicalQuantity Process Project ResearchProblem SpectralRegime Definition All concepts representing astronomical objects, e.g. black holes. 
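The precedence rule in principle 5 lends itself to a simple deterministic implementation. The following is a minimal sketch of our own, purely for illustration (not tooling released with the paper), of how multiple candidate labels for a single entity span could be resolved under the ResearchProblem > Method > all-other-types hierarchy:

# Labels in descending order of precedence; all remaining entity types rank equally after these.
PRECEDENCE = ["ResearchProblem", "Method"]

def resolve_label(candidate_labels):
    """Return the single label kept for an entity span under the precedence hierarchy."""
    for label in PRECEDENCE:
        if label in candidate_labels:
            return label
    # Any other entity type: keep the first remaining candidate.
    return candidate_labels[0]

assert resolve_label(["Process", "ResearchProblem"]) == "ResearchProblem"
assert resolve_label(["Process", "Method"]) == "Method"
assert resolve_label(["Process"]) == "Process"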
All concepts representing portions of astronomical objects which are not astro- nomical objects themselves, e.g. sunspots. Atomic elements such as element names from the periodic table, atoms, nuclei, dark matter, e.g. Fe. Names of measurement instruments, including telescopes, e.g. Large Hadron Collider. Measured observational parameters or properties (both property and value), e.g. frequency. Abstractions which are commonly used to support the solution of the investiga- tion, e.g. minimal supersymmetrical model. Geometry or morphology of astronomical objects or physical phenomena, e.g. asymmetrical. Properties of physical phenomena interacting, e.g. gravity. Phenomenon or associated process, e.g. Higgs boson decay. Survey or research mission, e.g. the dark energy survey. The theme of the investigation, e.g. final state hadronic interactions. Observed or analyzed electromagnetic spectrum, e.g. mega electron volt. Table 1: The final scientific entity labels applicable on titles of scholarly articles in the astronomy domain and their definitions. In case of overlap, ResearchProblem is selected over all other entity types, and Method is selected over all other entity types except ResearchProblem. Phase II. Annotators were provided with pre- dicted labels from a finetuned GPT-3.5 model for each title. After checking the predictions, annota- tors could use any of the strategies from Phase I as well, and were free to accept or disregard any of the predictions. The final corpus contains 5000 annotated texts. Table 3 summarizes the distribution across annota- tors and annotation settings, and Table 4 shows the frequency of entity types. 3.3 Finetuned GPT-3.5 as an Annotation Assistant second round of finetuning the previous model on 1901 texts. The resulting finetuned GPT-3.5 model was used in Phase II of the annotation process to predict labels for an additional 2577 texts. See Table 5 for a skeleton outline of the prompts. 3.4 Qualitative Observations Task Difficulty. Several features of astronomy literature make the annotation task particularly dif- ficult for non-experts. Below is a summary of some of the challenges. In Phase I, an initial 2001 texts were annotated by a single annotator and used to finetune the GPT- 3.5 model davinci-0022 to predict our astronomy labels. A two-stage finetuning process was used: • Lists of concepts or phenomena without any explicit relationship between them, e.g. “Generalized Poincaré algebras and Love- lock–Cartan gravity theory”. 1. A prompt containing an explanation of the task, all entity types and their definitions, and a few rules for annotation such as no overlap- ping spans was used to finetune the model on 100 texts. 2. A much shorter prompt containing a single sentence of task instruction and the list of en- tity types without definitions was used for a 2https://platform.openai.com/docs/models • The form “⟨method/process⟩ ⟨connector⟩ ⟨method/process⟩” where it is unclear whether a method is being applied in a certain context to understand or develop the method itself or whether it is being used to learn more about the process, e.g. “One-loop QCD corrections to the e + e− → W + W − bb− process”. • The research problem is implied but not ex- plicitly stated, e.g. 
“Quasi-normal modes of Round I Round II Round III AstrObject, AstroPortion, Atomic Element, Date, Dura- tion, EMS Spectrum Range, Frequency, Instrument Name, Location, Luminosity, Measure- ment, Method, Morphology, Position, Process, Research Problem, Source Name, Source Type, Spectral Feature, Survey, Telescope +Force, +Matter, +Model AstrObject, AstroPortion, Atomic Element → Chemical Species, Force → Physical Quantity, Instrument, Matter, Measurement, Method, Model, Morphology, Process, Research Problem +Project, +Spectral Regime AstrObject, AstroPortion, Atomic Element, Classification Category, Dataset, Date, Du- ration, EMS Spectrum Range, Frequency, Galaxy, Instrument Name, Ion, Language, Loca- tion, Luminosity, Measurement, Method, Morphology, Nebula, Planet, Position, Process-1, Process-2, Research Problem, Resource, Solution, Source Name, Source Type, Spectral Feature, Star, Star Cluster, Su- pernova, Survey, Technology, Telescope, Tool Table 2: The evolution of the label set after each round of discussions. Round I shows the entire list of candidate labels, with those deemed too fine-grained or irrelevant for our task crossed out. Round II shows the label set after discussing the first pilot annotation, in which labels were removed if unused and additional labels added. Round III shows the final label set after discussing the second pilot annotation with a domain expert. Phase I Phase II Total Annotator 1 Annotator 2 Total 2325 98 2423 1583 994 2577 3908 1092 5000 Table 3: The size of our corpus. holographic system with Weyl correction and momentum dissipation” (Quasi-normal modes are a concept for studying black holes and strongly coupled systems). • Metonymy in which the actual term and the intended referent correspond to differ- ent labels, e.g. “Complementarity between Hyperkamiokande and DUNE in determin- ing neutrino oscillation parameters” (Hyper- kamiokande and DUNE are both instruments, but the implied meaning is “measurements from Hyperkamiokande/DUNE”). • The linguistic structure obscures the roles, e.g. “Transverse anomalies and Dyson–Schwinger equation in QED3 and QED2 theories” (the Dyson–Schwinger equation is used to study transverse anomalies in the framework of QED3 and QED2 theories). For domain experts, the research applications of different methods and the relationships between them are likely clear, regardless of how they are formulated in the text. But for non-experts, a con- siderable amount of deciphering may be required. Finetuned GPT-3.5 Performance. Table 6 shows examples of some of the most common types of errors made by the finetuned GPT-3.5 model. Occasional errors also include reordering words and creating new labels (a proposed theory called “Gravity’s Rainbow” was labeled as Book). The predictions are generally highly plausible, even in cases where they are not totally correct. The two annotators had different perceptions of the utility of the finetuned GPT-3.5 predictions. An- notator 1 found them helpful for narrowing down the potential entities and labels, while then using her own judgement to refine the final annotations. Meanwhile, Annotator 2 agreed the predictions were a good starting reference, but found some of the errors to be distracting and the overall predic- tions not trustworthy. 3.5 Inter-Annotator Agreement Inter-annotator agreement was computed on two sets of 100 texts using Cohen’s κ, with all tokens included. The first set of texts were annotated dur- ing Phase I, while the second set were annotated during Phase II. 
The domain expert was only avail- Astro Astro Chem. Inst- Meas- Morph- Phys. Research Spect. Object Portion Species rument urment Method ology Quant. Process Project Problem Regime 143 97 851 615 320 3169 385 547 1273 123 3801 141 Table 4: The frequency of occurrences of scientific entity types in our corpus. Stage I Stage II Intro Please fulfill the following NER task by an- notating the given scholarly paper title in the domain of astronomy. ... Rules Entities Entity types to consider: 1. AstrObject: sub- sumes all the concepts representing astro- nomical objects. ... Annotation rules: - Each word can be in- cluded in at most one annotation. ... Please provide the annotations in JSON for- mat with the entity labels as keys. Annotate the following title: “title” Output Please fulfill the following NER task by an- notating the given scholarly paper title in the domain of astronomy. Consider only the following 12 entity types and rely on your knowledge for their defini- tions: 1. AstrObject, 2. AstroPortion, ... Rely on your knowledge of the annotation rules please provide the annotations in JSON for- mat with the entity labels as keys. Annotate the following title: “title” Table 5: Skeleton of the prompts used for finetuning GPT-3.5. Predictions Annotations Incorrect Under labeled Effective theory of dark matter decay into monochromatic photons and its implica- tions: Constraints from associated cosmic- ray emission. ResearchProblem, Method Energy conditions in F ( T , Θ ) gravity and compatibility with a stable de Sitter solution. PhysicalQuantity Under specified The origin of large-p T π θ suppression at Over specified Missing coordinated expressions RHIC. ResearchProblem, Instrument Lepton flavor violation in the triplet Higgs model. ResearchProblem Baryon number and lepton universality vi- olation in leptoquark and diquark models. ResearchProblem, Method Effective theory of dark matter decay into monochromatic photons and its implica- tions: Constraints from associated cosmic- ray emission. Method, ResearchProblem Energy conditions in F ( T , Θ ) gravity and compatibility with a stable de Sitter solution. ResearchProblem, Method The origin of large-p T π θ suppression at RHIC. ResearchProblem, Instrument Lepton flavor violation in the triplet Higgs model. ResearchProblem, Method Baryon number and lepton universality vi- olation in leptoquark and diquark models. ResearchProblem, Method Table 6: Common prediction error types made by finetuned GPT-3.5. able to annotate one set of texts, for which the set from Phase II was chosen. However, he did not have access to the finetuned GPT-3.5 predictions and rather followed the annotation procedure from Phase I. The results are shown in Table 7. The scores between the two annotators indicate moderate agreement, reflecting the difficulty and complexity of this task. Of note is the finding that agreement decreased between Phase I and Phase II, indicating that the finetuned GPT-3.5 predictions bi- ased the annotators towards different conclusions. Meanwhile, the scores between the domain ex- pert and Annotator 1 have low moderate agreement, whereas the domain expert and Annotator 2 have A1-A2 A1-DE A2-DE P R F1 Phase I Phase II 0.62 0.53 - 0.42 - 0.35 Table 7: Cohen’s κ for both annotators (A1 and A2) and the domain expert (DE), computed on 100 texts. The domain expert did not have access to the finetuned GPT-3.5 predictions during the annotation process. 
fair agreement. These results indicate that even with support from the finetuned GPT-3.5 model, non-domain experts can still only weakly approximate the performance of a domain expert.

       GPT-3.5 OOTB   GPT-3.5 FT
A1     0.12           0.70
A2     0.10           0.48
DE     0.14           0.35

Table 8: Cohen's κ computed between annotators or the domain expert and GPT-3.5 out-of-the-box or GPT-3.5 finetuned.

Table 8 shows the agreement between the human annotators and the GPT-3.5 models.3 We observe very low agreement between all annotators and GPT-3.5 out-of-the-box, whereas agreement with the finetuned model varies significantly. Annotator 1 showed substantial agreement and Annotator 2 showed moderate agreement with the finetuned model, which aligns with their reported experiences of working with the predictions. Meanwhile, the domain expert and finetuned model have an agreement of 0.35, indicating only fair agreement. This provides additional evidence that even with finetuning, GPT-3.5 still lacks the domain knowledge and sophistication to perform annotation at a level comparable to a domain expert.

4 Experiments and Results

Dataset. An experimental dataset of 1500 texts was used to compare the performance of out-of-the-box GPT-3.5 (GPT-3.5 OOTB), out-of-the-box GPT-4 (GPT-4 OOTB), and finetuned GPT-3.5 (GPT-3.5 FT). All texts came from Phase II and were divided evenly between the annotators. Predictions from each model were obtained using the same prompt as in stage 2 of the finetuning process (see Section 3.3).

3Due to limitations in the GPT data (see Section 4), 57 texts were included in the calculations for GPT-3.5 OOTB and 98 for GPT-3.5 FT.

                P      R      F1
GPT-3.5 OOTB    0.04   0.04   0.04
GPT-3.5 FT      0.55   0.48   0.51
GPT-4 OOTB      0.23   0.24   0.23

Table 9: Micro precision (P), recall (R), and F1-score (F1) evaluating GPT predictions against human annotations. Annotators saw the GPT-3.5 FT predictions during the annotation process.

For some texts, GPT failed to find any entities, and these texts are therefore excluded. In other cases, entities with overlapping spans were returned. Here we used a precedence hierarchy similar to that of our human annotators to manually resolve the labels: 1) any spans overlapping with ResearchProblem are discarded; 2) any spans except ResearchProblem overlapping with Method are discarded; 3) for all other overlapping spans, the first predicted label is taken and the rest discarded. Finally, some texts required additional processing to be made usable, and are also excluded.

There was a stark difference in quality between GPT-3.5 OOTB and the other two models. The response object very often contained malformed JSON, frequently so mangled it was impossible to process. Additionally, it had a tendency to return all labels in the same order they were passed in the prompt, with an annotation for each one. This made our precedence hierarchy impractical, since the output order inherently privileged certain labels over others. As a result of these constraints, there are significantly fewer usable texts from the GPT-3.5 OOTB model.

GPT Models. The usable texts from each model were aligned with the corresponding human-annotated texts so that the predictions could be compared against our corpus. This resulted in 793 usable texts for GPT-3.5 OOTB, 1497 for GPT-3.5 FT, and 1465 for GPT-4 OOTB. Micro averages for precision, recall, and f-score are reported in Table 9. The results indicate extremely weak performance by GPT-3.5 OOTB.
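For reference, the micro-averaged metrics reported in Table 9 can be computed as in the minimal sketch below. The exact-match criterion (span plus label must agree) is our own assumption for illustration, not a detail stated in the paper:

def micro_prf(gold, pred):
    """Micro-averaged P/R/F1 over entity sets.

    gold[i] and pred[i] hold (start, end, label) triples for text i;
    an entity counts as correct only on an exact span-and-label match.
    """
    tp = sum(len(g & p) for g, p in zip(gold, pred))
    n_pred = sum(len(p) for p in pred)
    n_gold = sum(len(g) for g in gold)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1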
GPT-4 OOTB shows an impressive 19-point improvement, while the fine- tuned model performs by far the best. Nonetheless, for an NER-adjacent task, an f-score of 0.51 may be considered low. NER Models. As the ultimate goal of this an- notation project is to provide training data for an mT5-Small1 FLAN-T5-Small1 mT5-Small2 FLAN-T5-Small2 mT5-Small3 FLAN-T5-Small3 P R F1 0.38 0.36 0.41 0.37 0.45 0.40 0.32 0.33 0.43 0.39 0.42 0.41 0.35 0.34 0.42 0.38 0.43 0.40 Table 10: Micro precision (P), recall (R), and F1-score (F1) for each of our NER models. The subscript num- bers indicate the dataset split: 1 trained on the same 2000 texts as the GPT-3.5 FT model and tested on the same 1500 experimental texts; 2 trained on all texts ex- cept the 1500 experimental texts and tested on those; 3 trained and tested on a random 90/10 split of the com- plete datatset. Astro-NER service, the following NER models were also trained and evaluated. The FLAN-T5 model (Chung et al., 2022) in the Small (77M) size was selected due to its efficiency at learning new tasks (Longpre et al., 2023). Additionally, the mT5 model (Xue et al., 2021) in the Small (300M) size was also included because the best performing sys- tem (Ghosh et al., 2022) from the DEAL astronomy NER shared task (Grezes et al., 2022) (described in Section 2) utilized this model. The same hyper- parameters were used across models: 100 epochs, learning rate of 3e-4, and a batch size of 16. The micro precision, recall, and f-score metrics for each of our NER models along different dataset splits are presented in Table 10. Overall, the best results are obtained with mT5 and a random 90/10 split of the complete dataset for training and test- ing, with an f-score of 0.43. For reference, the top performing system on the DEAL task reported an f-score of 0.81, although it must be noted that this task used a different dataset with a different annotation scheme, so the results are not directly comparable. Nonetheless, we conclude that our results are not competitive in the context of current astronomy NER systems. We hypothesize that hav- ing two annotators lacking expertise in the domain may have introduced some inconsistencies into the dataset which were reflected in the training results of the model. 5 Discussion and Limitations Based on the precision, recall, and f-score metrics, we conclude the following. GPT-3.5 OOTB is not a good domain expert annotator, which aligns with our intuition that it excels at handling common sense tasks but not tasks requiring domain exper- tise. GPT-4 OOTB shows more promise, but is still insufficiently informed in highly-specialized scientific fields. In order to use GPT as an annota- tion assistant, finetuning is necessary. We find an enormous 47 point improvement in f-score before and after finetuning. We also find that the fine- tuned GPT-3.5 outperforms our best NER model. Nonetheless, the results overall are weak, and our best NER model underperforms compared to previ- ous work in astronomy NER (Grezes et al., 2022; Ghosh et al., 2022). Moreover, these f-scores are computed against the annotations of non-domain experts, whose annotations are themselves subject to validation. Considering the inter-annotator agreement, we conclude that specialized scientific domains remain an area in which domain expert annotators are still necessary. Annotator 1, whose annotations were slightly more aligned with the domain expert, ben- efited the most from the GPT assistance. 
On the other hand, Annotator 2 seemed to maintain some independence from the GPT predictions and had slightly lower agreement with the domain expert as well. But compared to the significant differ- ence in agreement between the annotators and GPT (Cohen’s κ 0.70 vs 0.48), the difference between their agreement with the domain expert is relatively minor (Cohen’s κ 0.42 vs 0.35)–it seems that adher- ence to the GPT predictions had minimal impact on the accuracy of annotations for non-domain ex- perts. Overall, the agreement between this domain ex- pert and the annotators may be considered low, despite the complexity of this task. However, we also note that scientific entity annotation is an in- herently subjective task. For domains entailing high-expertise, allowance must be made for subjec- tivity in the annotation decisions, and we recognize that results with a different domain expert might differ. We do observe one benefit to using GPT-3.5 as an annotation assistant: it dramatically quickened the pace of annotation. Phase II of the annotation process was completed in just six weeks, whereas Phase I took approximately 4 months, despite a similar weekly time investment. In this way, GPT can be thought of as a sounding board for annota- tors, giving them a starting point for consideration in astronomy literature. On a small sample of the data, we find that the agreement between the do- main expert and GPT-assisted non-experts is fair to moderate, while the agreement between the domain expert and the finetuned predictions is also fair. As part of this endeavour, we have developed a scientific entity annotation scheme for astron- omy and validated it with a domain expert. Un- like previous works in astronomy NER, we take a contribution-centric perspective to scientific entity identification: we select only those entities which are pertinent to the theme of the investigation. The dataset resulting from this annotation scheme, con- sisting of 5000 annotated titles from astronomy articles, is also published to support the continued development of scholarly contribution-focused as- tronomy NLP tools. Ethics Statement In this work we have presented our Astro-NER cor- pus. During its creation, we used a finetuned LLM. In this context, we declare the instructions selected for finetuning in this study were intended to align the behavior of the language models towards pro- ducing responses that are both helpful (fulfilling our objective) and harmless (not causing any physi- cal, psychological, or social harm to individuals or the environment). There were no living subjects analyzed in this study. Overall, this study complies with the ACL Ethics Policy. Data and Code Availability To facilitate further research, our Astro-NER dataset is publicly released at the following repos- itory, along with our experimental datasets. Fur- thermore, the prompts used to finetune GPT-3.5 are accessible here and here. The code used to finetune the mT5 and Flan-T5 models can be downloaded here. The annotation guidelines can be viewed here. Acknowledgements This work was supported by the German BMBF project SCINEXT (ID 01lS22070). rather than a blank slate. Nevertheless, this ap- proach is only advantageous insofar as high-quality annotations can be obtained. Our methodology was limited by the scant avail- ability of the domain expert, which we note as a realistic setting for such projects. As a result, our model was finetuned on non-domain expert anno- tations. 
Expert-labeled training data might have resulted in a different outcome, but is not feasible in all annotation projects. Some additional limitations concerning the orig- inal content of the dataset warrant discussion. The domain expert noted that the titles were overwhelm- ingly from the astronomy subfield of astrophysics, with a particular emphasis on astroparticle physics. There was discussion as to whether describing this as an astronomy dataset was inappropriately gen- eral, but given that the source of the titles was Else- vier publications labeled as astronomy, we chose to maintain this nomenclature. The distribution of entity types is extremely un- balanced in our corpus. Given our precedence hier- archy, as well as the conventions of academic title writing, ResearchProblem and Method appearing 3801 and 3169 times respectively is not unexpected. However, only one other label appears more than 1000 times: Process, with 1273 instances. The remaining entity types are mostly supported by sev- eral hundred samples. We note that this significant disparity is not ideal. Finally, the costs of the various models must be considered. Getting predictions for the 1500 texts in the experimental dataset cost $8.35 for GPT-3.5 OOTB and $10.98 for GPT-4 OOTB. The finetuned model was considerably more expensive, costing $49.80 to finetune and $33.63 to get predictions on the experimental texts ($57.68 for all texts in Phase II), for a total of $83.43 (or $107.48 when including all texts). 6 Conclusions In this work, we address one of the challenges associated with acquiring NER models for schol- arly domains, namely the scarcity of appropriate labeled data. While the involvement of domain ex- perts in annotation projects is often indispensable due to the requisite subject knowledge, the reality is that access to such experts may be limited. We present a novel approach to overcoming this hurdle by enlisting a finetuned GPT-3.5 model to assist non-domain experts in annotating scientific entities Bibliographical References Markus Becker, Benjamin Hachey, Beatrice Alex, and Claire Grover. 2005. Optimising Selec- tive Sampling for Bootstrapping Named En- tity Recognition. In Proceedings of the ICML- 2005 Workshop on Learning with Multiple Views, Bonn, Germany. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sas- try, Amanda Askell, et al. 2020. Language mod- els are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Hyung Won Chung, Le Hou, Shayne Longpre, Bar- ret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling Instruction- Finetuned Language Models. Sébastien Derriere, Andrea Preite-Martinez, Alexandre Richard, Laurent Cambrésy, and Paolo Padovani. 2010. Ontology of Astronom- ical Object Types Version 1.3. International Virtual Observatory Alliance. Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Boyang Li, Shafiq Joty, and Li- dong Bing. 2023. Is GPT-3 a Good Data Annota- tor? 
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11173–11195, Toronto, Canada. Association for Computational Linguistics. Jennifer D’Souza and Sören Auer. 2022. Computer Science Named Entity Recognition in the Open In From Born- Research Knowledge Graph. Physical to Born-Virtual: Augmenting Intelli- gence in Digital Libraries, pages 35–45, Hanoi, Vietnam. Springer International Publishing. Jennifer D’Souza. 2023. Agriculture Named Entity Recognition - Towards FAIR, Reusable Schol- arly Contributions in Agriculture. Preprint. Majigsuren Enkhsaikhan, Wei Liu, Eun-Jung Holden, and Paul Duuring. 2021. Auto- Labelling Entities in Low-Resource Text: A Geo- logical Case Study. Knowledge and Information Systems, 63(3):695–715. Madhusudan Ghosh, Payel Santra, Sk Asif Iqbal, and Partha Basuchowdhuri. 2022. Astro-mT5: Entity Extraction from Astrophysics Literature using mT5 Language Model. In Proceedings of the first Workshop on Information Extraction from Scientific Publications, pages 100–104, On- line. Association for Computational Linguistics. Felix Grezes, Sergi Blanco-Cuaresma, Thomas Allen, and Tirthankar Ghosal. 2022. Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature (DEAL). In Proceedings of the first Workshop on Information Extraction from Scientific Publications, pages 1–7, Online. Association for Computational Linguistics. Michael A. Hedderich, Lukas Lange, Heike Adel, Jannik Strötgen, and Dietrich Klakow. 2021. A survey on recent approaches for natural language processing in low-resource scenarios. In Pro- ceedings of the 2021 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, pages 2545–2568, Online. Association for Computational Linguistics. Shayne Longpre, Le Hou, Tu Vu, Albert Web- son, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, and Adam Roberts. 2023. The Flan Collection: Designing Data and Methods for Effective Instruction Tun- ing. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 22631–22648. PMLR. Tara Murphy, Tara McIntosh, and James R. Curran. 2006. Named entity recognition for astronomy literature. In Proceedings of the Australasian Language Technology Workshop 2006, pages 59– 66, Sydney, Australia. Shuohang Wang, Yang Liu, Yichong Xu, Chen- guang Zhu, and Michael Zeng. 2021. Want In to reduce labeling cost? GPT-3 can help. Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4195–4205, Punta Cana, Dominican Republic. Association for Computational Linguistics. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer. In Proceedings of the 2021 Confer- ence of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics.
ai_researcher
3
Monte_Carlo_Tree_Search_Boosts_Reasoning_via_Iterative_Preference_Learning.pdf
Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning

Yuxi Xie1∗ Anirudh Goyal Wenyue Zheng1 Min-Yen Kan1
Timothy Lillicrap2 Kenji Kawaguchi1 Michael Shieh1
1 National University of Singapore 2 Google DeepMind

Abstract

We introduce an approach aimed at enhancing the reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process inspired by the successful strategy employed by AlphaZero. Our work leverages Monte Carlo Tree Search (MCTS) to iteratively collect preference data, utilizing its look-ahead ability to break down instance-level rewards into more granular step-level signals. To enhance consistency in intermediate steps, we combine outcome validation and stepwise self-evaluation, continually updating the quality assessment of newly generated data. The proposed algorithm employs Direct Preference Optimization (DPO) to update the LLM policy using this newly generated step-level preference data. Theoretical analysis reveals the importance of using on-policy sampled data for successful self-improving. Extensive evaluations on various arithmetic and commonsense reasoning tasks demonstrate remarkable performance improvements over existing models. For instance, our approach outperforms the Mistral-7B Supervised Fine-Tuning (SFT) baseline on GSM8K, MATH, and ARC-C, with substantial increases in accuracy to 81.8% (+5.9%), 34.7% (+5.8%), and 76.4% (+15.8%), respectively. Additionally, our research delves into the training and inference compute tradeoff, providing insights into how our method effectively maximizes performance gains. Our code is publicly available at https://github.com/YuxiXie/MCTS-DPO.

1 Introduction

Development of Large Language Models (LLMs) has seen a pivotal shift towards aligning these models more closely with human values and preferences (Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a). A critical aspect of this process involves the utilization of preference data. There are two prevailing methodologies for incorporating this data: the first entails the construction of a reward model based on preferences, which is then integrated into a Reinforcement Learning (RL) framework to update the policy (Christiano et al., 2017; Bai et al., 2022b); the second, more stable and scalable method, directly applies preferences to update the model's policy (Rafailov et al., 2023).

In this context, the concept of "iterative" development is key, especially when contrasted with the conventional Reinforcement Learning from Human Feedback (RLHF) paradigm (Christiano et al., 2017; Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a), where the reward model is often trained offline and remains static. An iterative approach proposes a dynamic and continuous refinement process (Zelikman et al., 2022; Gülçehre et al., 2023; Huang et al., 2023; Yuan et al., 2024). It involves a cycle that begins with the current policy, progresses through the collection and analysis of data to generate new preference data, and uses this data to update the policy.

∗Correspondence to: Yuxi Xie ([email protected]) and Anirudh Goyal ([email protected]).

Preprint. Under review.
Specifically, we use action values Q estimated by MCTS to assign the preferences, where steps of higher and lower Q values will be labeled as positive and negative data, respectively. The scale of Q is visualized in the colormap. We show the advantage of the online manner in our iterative learning framework using the validation accuracy curves as training progresses on the right. The performance of ARC-C validation illustrates the effectiveness and efficiency of our proposed method compared to its offline variant. approach underlines the importance of ongoing adaptation in LLMs, highlighting the potential for these models to become more attuned to the complexities of human decision-making and reasoning. A compelling illustration of the success of such an iterative approach can be seen in the case of Alp- haZero (Silver et al., 2017) for its superhuman performance across various domains, which combines the strengths of neural networks, RL techniques, and Monte Carlo Tree Search (MCTS) (Coulom, 2006; Kocsis and Szepesvári, 2006). The integration of MCTS as a policy improvement operator that transforms the current policy into an improved policy (Grill et al., 2020). The effectiveness of AlphaZero underscores the potential of combining these advanced techniques in LLMs. By inte- grating MCTS into the iterative process of policy development, it is plausible to achieve significant strides in LLMs, particularly in the realm of reasoning and decision-making aligned with human-like preferences (Zhu et al., 2023; Hao et al., 2023). The integration of MCTS in collecting preference data to improve the current policy iteratively is nuanced and demands careful consideration. One primary challenge lies in determining the appropriate granularity for applying MCTS. Conventionally, preference data is collected at the instance level. The instance-level approach employs sparse supervision, which can lose important information and may not optimally leverage the potential of MCTS in improving the LLMs (Wu et al., 2023). Another challenge is the reliance of MCTS on a critic or a learned reward function. This function is crucial for providing meaningful feedback on different rollouts generated by MCTS, thus guiding the policy improvement process (Liu et al., 2023a). Addressing this granularity issue, evidence from LLM research indicates the superiority of process- level or stepwise evaluations over instance-level ones (Lightman et al., 2023; Li et al., 2023; Xie et al., 2023; Yao et al., 2023). Our approach utilizes MCTS rollouts for step-level guidance, aligning with a more granular application of MCTS. Moreover, we employ self-evaluation, where the model assesses its outputs, fostering a more efficient policy improvement pipeline by acting as both policy and critic (Kadavath et al., 2022; Xie et al., 2023). This method streamlines the process and ensures more cohesive policy updates, aligning with the iterative nature of policy enhancement and potentially leading to more robust and aligned LLMs. To summarize, we propose an algorithm based on Monte Carlo Tree Search (MCTS) that breaks down the instance-level preference signals into step-level. MCTS allows us to use the current LLM policy to generate preference data instead of a predetermined set of human preference data, enabling the LLM to receive real-time training signals. During training, we generate sequences of text on the fly and label the preference via MCTS based on feedback from self-evaluation (Figure 1). 
To update the LLM policy using the preference data, we use Direct Preference Optimization (DPO) (Rafailov et al., 2023). We extensively evaluate the proposed approach on various arithmetic and commonsense reasoning tasks and observe significant performance improvements. For instance, the proposed approach outperforms the Mistral-7B SFT baseline, reaching 81.8% (+5.9%), 34.7% (+5.8%), and 76.4% (+15.8%) on GSM8K, MATH, and ARC-C, respectively. Further analysis of the training and test compute tradeoff shows that our method can effectively push the performance gains in a more efficient way compared to sampling-only approaches.
Our work leverages MCTS to iteratively collect preference data, utilizing its look-ahead ability to break down instance-level rewards into more granular step-level signals. To enhance consistency in intermediate steps, we incorporate stepwise self-evaluation, continually updating the quality assessment of newly generated data. This process, as depicted in Figure 1, enables MCTS to balance quality exploitation and diversity exploration during preference data sampling at each iteration. As detailed in Section 2.1, our approach utilizes MCTS for step-level preference data collection. Once this data is collected, the policy is updated using DPO, as outlined in Section 2.2. Our method can be viewed as an online version of DPO, where the updated policy is iteratively employed to collect preferences via MCTS. Our methodology thus not only addresses the challenges in preference data collection and policy updating but also introduces a dynamic, iterative framework that significantly enhances LLM reasoning.

2.1 MCTS for Step-Level Preference

To transform instance-level rewards into granular, step-level signals, we dissect the reasoning process into discrete steps, each represented by a token sequence. We define the state at step t, st, as the prefix of the reasoning chain; the addition of a new reasoning step a transitions the state to st+1, where st+1 is the concatenation of st and a. Utilizing the model's current policy πθ, we sample candidate steps from its probability distribution πθ(a | x, st)², with x being the task's input prompt. MCTS serves as an approximate policy improvement operator by leveraging its look-ahead capability to predict the expected future reward. This prediction is refined through stepwise self-evaluation (Kadavath et al., 2022; Xie et al., 2023), enhancing process consistency and decision accuracy. The tree-structured search supports a balance between exploring diverse possibilities and exploiting promising paths, which is essential for navigating the vast search space in LLM reasoning. The MCTS process begins from a root node s0, the sentence start or an incomplete response, and unfolds in three iterative stages: selection, expansion, and backup, which we detail below.

Select. The objective of this phase is to identify nodes that balance search quality and computational efficiency. The selection is guided by two key variables: Q(st, a), the value of taking action a in state st, and N(st), the visitation frequency of state st. These variables are crucial for updating the search strategy, as explained in the backup stage. To navigate the trade-off between exploring new nodes and exploiting visited ones, we employ Predictor + Upper Confidence bounds applied to Trees (PUCT) (Rosin, 2011). At node st, the choice of the subsequent node follows the formula:

    s*_{t+1} = arg max_{s_{t+1}} [ Q(s_t, a) + c_puct · p(a | s_t) · √N(s_t) / (1 + N(s_{t+1})) ]    (1)

where p(a | st) = πθ(a | x, st) / |a|^λ denotes the policy πθ's probability of generating step a, adjusted by a λ-weighted length penalty to prevent overly long reasoning chains.
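As a concrete reading of Eq. (1), here is a small Python sketch of the PUCT selection rule; the data layout (dicts holding Q, step log-probability, step length, and visit count) is our own illustration, not the paper's implementation.

```python
import math

def puct_select(children, parent_visits, c_puct=1.25, lam=0.5):
    """Pick the child node maximizing the PUCT score of Eq. (1).

    Each child is a dict with:
      'Q'      - current value estimate of taking that step,
      'logp'   - policy log-probability of the step's tokens,
      'length' - number of tokens in the step, |a| (for the length penalty),
      'N'      - visit count of the child state s_{t+1}.
    """
    def puct_score(child):
        # Length-penalized prior: p(a | s_t) = pi_theta(a | x, s_t) / |a|^lam
        prior = math.exp(child["logp"]) / (child["length"] ** lam)
        exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child["N"])
        return child["Q"] + exploration

    return max(children, key=puct_score)
```

Note that unvisited children (N = 0) receive the full exploration bonus, so the search naturally widens before it deepens.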
Expand. Expansion occurs at a leaf node during the selection process to integrate new nodes and assess rewards. The reward r(st, a) for executing step a in state st is quantified by the reward difference between states R(st) and R(st+1), highlighting the advantage of taking action a at st. As defined in Eq. (2), reward computation merges outcome correctness O with self-evaluation C. We assign the outcome correctness to be 1, −1, and 0 for correct terminal, incorrect terminal, and unfinished intermediate states, respectively. Following Xie et al. (2023), we define self-evaluation as in Eq. (3), where A denotes the option token indicating correctness, whose token-level probability serves as the confidence score³. Future rewards are anticipated by simulating upcoming scenarios through roll-outs, following the selection and expansion process until reaching a terminal state⁴.

    R(s_t) = O(s_t) + C(s_t)    (2)
    C(s_t) = πθ(A | prompt_eval, x, s_t)    (3)

Backup. Once a terminal state is reached, we carry out a bottom-up update from the terminal node back to the root. We update the visit count N, the state value V, and the transition value Q:

    Q(s_t, a) ← r(s_t, a) + γ V(s_{t+1})    (4)
    V(s_t) ← Σ_a N(s_{t+1}) Q(s_t, a) / Σ_a N(s_{t+1})    (5)
    N(s_t) ← N(s_t) + 1    (6)

where γ is the discount for future state values. For each step in the response generation, we conduct K iterations of MCTS to construct the search tree while updating Q values and visit counts N. To balance the diversity, quality, and efficiency of tree construction, we initialize the search breadth as b1 and anneal it to a smaller b2 < b1 for the subsequent steps. We use the resulting Q value of each candidate step to label its preference, where higher Q values indicate preferred next steps. For a resulting search tree of depth T, we obtain T pairs of step-level preference data. Specifically, we select the candidate steps with the highest and lowest Q values as positive and negative samples at each tree depth, respectively. The parent node selected at each tree depth is the one with the highest value of its visit count multiplied by the range of its children's visit counts, indicating both the quality and diversity of the generations.

² For tasks (e.g., MATH) where the initial policy performs poorly, we also include the ground-truth reasoning steps for training. We detail the step definition for different tasks with examples in Appendices C and D.
³ We show an example of the evaluation prompt in Table 6.
⁴ A terminal state is reached when the whole response is complete or exceeds the maximum length.

2.2 Iterative Preference Learning

Given the step-level preferences collected via MCTS, we tune the policy via DPO (Rafailov et al., 2023). Considering the noise in the preference labels determined by Q values, we employ the conservative version of DPO (Mitchell, 2023) and use the visit counts simulated in MCTS to apply adaptive label smoothing on each preference pair. Using the shorthand

    h_πθ^(yw,yl) = log [πθ(yw | x) / πref(yw | x)] − log [πθ(yl | x) / πref(yl | x)],

at the i-th iteration, given a batch of preference data Di sampled with the latest policy πθ(i−1), we denote the policy objective ℓi(θ) as follows:

    ℓi(θ) = − E_(x,yw,yl)∼Di [ (1 − α_(x,yw,yl)) log σ(β h_πθ^(yw,yl)) + α_(x,yw,yl) log σ(−β h_πθ^(yw,yl)) ]    (7)

where yw and yl represent the step-level preferred and dispreferred responses, respectively, and the hyperparameter β scales the KL constraint. Here, α_(x,yw,yl) is a label smoothing variable calculated using the visit counts at the corresponding states of the preference data yw, yl in the search tree:

    α_(x,yw,yl) = 1 / ( N(x, yw) / N(x, yl) + 1 )    (8)

where N(x, yw) and N(x, yl) denote the visit counts of the states reached by generating yw and yl, respectively, from their shared previous state given input x.
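For readers who prefer code, a minimal PyTorch sketch of the conservative DPO objective in Eqs. (7)–(8) follows; the tensor names and batching convention are our own assumptions rather than the paper's code.

```python
import torch
import torch.nn.functional as F

def conservative_dpo_loss(policy_logp_w, policy_logp_l,
                          ref_logp_w, ref_logp_l,
                          visits_w, visits_l, beta=0.1):
    """Conservative DPO (Eq. 7) with adaptive label smoothing (Eq. 8).

    Inputs are 1-D float tensors over a batch of step-level preference
    pairs: summed token log-probs of the preferred (w) / dispreferred (l)
    steps under the policy and the frozen reference model, plus the MCTS
    visit counts N(x, y_w) and N(x, y_l) of the corresponding states.
    """
    # h^{y_w,y_l}_{pi_theta}: difference of log-ratios against the reference.
    h = (policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l)
    # Eq. (8): alpha = 1 / (N_w/N_l + 1) = N_l / (N_w + N_l); pairs whose
    # dispreferred branch was visited often are treated as noisier labels.
    alpha = 1.0 / (visits_w / visits_l + 1.0)
    loss = -((1.0 - alpha) * F.logsigmoid(beta * h)
             + alpha * F.logsigmoid(-beta * h))
    return loss.mean()
```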
After optimization, we obtain the updated policy πθ(i) and repeat the data collection process of Section 2.1 to iteratively update the LLM policy. We outline the full algorithm of our MCTS-enhanced Iterative Preference Learning in Algorithm 1.

3 Theoretical Analysis

Our approach can be viewed as an online version of DPO, where we iteratively use the updated policy to sample preferences via MCTS. In this section, we provide a theoretical analysis to interpret the advantages of our online learning framework compared to conventional alignment techniques that critically depend on offline preference data. We review the typical RLHF and DPO paradigms in Appendix B.

To obtain clean theoretical insights into our online setting of preference learning, we consider the following abstract formulation. Given a prompt x, there exist n possible suboptimal responses {ȳ1, . . . , ȳn} = Y and an optimal outcome y∗. As specified in Equation 7, at the i-th iteration, a pair of responses (y, y′) is sampled from some sampling policy π(i) without replacement, so that y ≠ y′, as y ∼ π(i)(· | x) and y′ ∼ π(i)(· | x, y). These are then labeled as yw and yl according to the preference. Define Θ to be the set of all global optimizers of the preference loss for all M iterations, i.e., for any θ ∈ Θ, ℓi(θ) = 0 for all i ∈ {1, 2, · · · , M}. Similarly, let θ(i) be a parameter vector such that ℓj(θ(i)) = 0 for all j ∈ {1, 2, · · · , i − 1} for i ≥ 1, whereas θ(0) is the initial parameter vector.

This abstract formulation covers both the offline and online settings. The offline setting in previous works is obtained by setting π(i) = π for some fixed distribution π. The online setting is obtained by setting π(i) = πθ(i−1), where πθ(i−1) is the latest policy at the beginning of the i-th iteration. The following theorem shows that the offline setting can fail with high probability if the sampling policy π(i) differs too much from the current policy πθ(i−1):

Theorem 3.1 (Offline setting can fail with high probability). Let π be any distribution for which there exists ȳ ∈ Y such that π(ȳ | x), π(ȳ | x, y) ≤ ϵ for all y ∈ (Y \ ȳ) ∪ {y∗}, and πθ(i−1)(ȳ | x) ≥ c for some i ∈ {1, 2, · · · , M}. Set π(i) = π for all i ∈ {1, 2, · · · , M}. Then, there exists θ ∈ Θ such that, with probability at least 1 − 2ϵM (over the samples of π(i) = π), the following holds: πθ(y∗ | x) ≤ 1 − c.

If the current policy and the sampling policy differ too much, it is possible that ϵ = 0 and c ≈ 1.0, for which Theorem 3.1 concludes πθ(y∗ | x) ≈ 0 with probability 1 for any number of iterations M. When ϵ ≠ 0, the lower bound on the failure probability decreases towards zero as we increase M. Thus, it is important to ensure that ϵ ≠ 0 and that ϵ is not too low. This is achieved by using the online setting, i.e., π(i) = πθ(i−1). Therefore, Theorem 3.1 motivates us to use the online setting, and Theorem 3.2 confirms that we can indeed avoid this failure case in the online setting.

Theorem 3.2 (Online setting can avoid offline failure case). Let π(i) = πθ(i−1). Then, for any θ ∈ Θ, it holds that πθ(y∗ | x) = 1 if M ≥ n + 1.

See Appendix B for the proofs of Theorems 3.1 and 3.2. As suggested by the theorems, an even better sampling policy would use both the latest policy and the optimal policy for preference sampling. However, since we cannot access the optimal policy π∗ in practice, we adopt online DPO via sampling from the latest policy πθ(i−1).
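The contrast between the two theorems can be seen in a toy tabular caricature (our own construction, with made-up numbers and deliberately simplified dynamics: "minimizing the loss" is modeled as zeroing out the sampled loser's probability, and pair sampling is uniform over the sampler's support):

```python
import random

def simulate(online, n=4, M=30, c=0.6, seed=0):
    """Caricature of Theorems 3.1/3.2 on a finite output set."""
    rng = random.Random(seed)
    ys = ["y*"] + [f"ybar{j}" for j in range(n)]
    rank = {y: i for i, y in enumerate(ys)}          # lower rank = preferred
    probs = {y: (1.0 - c) / n for y in ys}           # current policy pi_theta
    probs["ybar0"] = c                               # high mass on one suboptimal output
    # Offline sampler: a fixed distribution that never draws ybar0 (epsilon = 0).
    offline = {y: (0.0 if y == "ybar0" else 1.0 / n) for y in ys}
    for _ in range(M):
        sampler = probs if online else offline
        support = [y for y in ys if sampler[y] > 0]
        if len(support) < 2:
            break                                    # nothing left to compare
        y1, y2 = rng.sample(support, 2)
        loser = y1 if rank[y1] > rank[y2] else y2
        probs[loser] = 0.0                           # loss minimized: pi(y_l) -> 0
        z = sum(probs.values())
        probs = {y: p / z for y, p in probs.items()} # keep it a distribution
    return probs["y*"]

print(simulate(online=False))  # stays well below 1: ybar0 is never eliminated
print(simulate(online=True))   # reaches 1.0 once every ybar_j has been sampled
```

The offline run gets stuck because the fixed sampler never surfaces the high-mass suboptimal output, exactly the failure mode of Theorem 3.1, while online sampling from the current policy eliminates every suboptimal output within n + 1 iterations, as in Theorem 3.2.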
The key insight of our iterative preference learning approach is that online DPO provably enables us to converge to an optimal policy even if we cannot sample outputs from the optimal policy. We provide further discussion and additional insights in Appendix B.

4 Experiments

We evaluate the effectiveness of MCTS-enhanced iterative preference learning on arithmetic and commonsense reasoning tasks. We employ Mistral-7B (Jiang et al., 2023) as the base pre-trained model. We conduct supervised training using Arithmo⁵, which comprises approximately 540K mathematical and coding problems. Detailed information regarding the task formats, specific implementation procedures, and parameter settings of our experiments can be found in Appendix C.

Datasets. We aim to demonstrate the effectiveness and versatility of our approach by focusing on two types of reasoning: arithmetic and commonsense reasoning. For arithmetic reasoning, we utilize two datasets: GSM8K (Cobbe et al., 2021), which consists of grade school math word problems, and MATH (Hendrycks et al., 2021), featuring challenging competition math problems. Specifically, on the GSM8K dataset, we assess both chain-of-thought (CoT) and program-of-thought (PoT) reasoning abilities. We integrate the training data from GSM8K and MATH to construct the prompt data for our preference learning framework, aligning with a subset of the Arithmo data used for supervised fine-tuning (SFT). This setup allows us to evaluate whether our method enhances reasoning abilities on specific arithmetic tasks. For commonsense reasoning, we use four multiple-choice datasets: ARC (easy and challenge splits) (Clark et al., 2018), focusing on science exams; AI2Science (elementary and middle splits) (Clark et al., 2018), containing science questions from student assessments; OpenBookQA (OBQA) (Mihaylov et al., 2018), which involves open book exams requiring broad common knowledge; and CommonSenseQA (CSQA) (Talmor et al., 2019), featuring commonsense questions necessitating prior world knowledge. The diversity of these datasets, with different splits representing various grade levels, enables a comprehensive assessment of our method's generalizability in learning various reasoning tasks through self-distillation. Performance evaluation is conducted on the corresponding validation sets of each dataset. Furthermore, we employ an unseen evaluation using the validation set of an additional dataset, SciQ (Welbl et al., 2017), following the approach of Liu et al. (2023b), to test our model's ability to generalize to novel reasoning contexts.

Baselines. Our study involves a comparative evaluation of our method against several prominent approaches, as well as fair comparisons against variants, including instance-level iterative preference learning and offline MCTS-enhanced learning. We use instance-level sampling as a counterpart of step-level preference collection via MCTS. For a fair comparison, we also apply self-evaluation and correctness assessment in instance-level sampling, and we control the number of samples under a compute budget comparable to MCTS. The offline version uses the initial policy for sampling rather than the updated one at each iteration. We contrast our approach with the Self-Taught Reasoner (STaR) (Zelikman et al., 2022), an iterated learning model based on instance-level rationale generation, and Crystal (Liu et al., 2023b), an RL-tuned model with a focus on knowledge introspection in commonsense reasoning.
Considering the variation in base models used by these methods, we include comparisons with Direct Tuning, which entails fine-tuning base models directly, bypassing chain-of-thought reasoning. In the context of arithmetic reasoning tasks, our analysis includes Language Model Self-Improvement (LMSI) (Huang et al., 2023), a self-training method using self-consistency to gather positive data, and Math-Shepherd (Wang et al., 2023a), which integrates process supervision within Proximal Policy Optimization (PPO). To account for differences in base models and experimental setups across these methods, we also report the performance of the SFT models as baselines for each respective approach.

⁵ https://huggingface.co/datasets/akjindal53244/Arithmo-Data

Table 1: Result comparison (accuracy %) on arithmetic tasks. We fine-tune the base model Mistral-7B on Arithmo data for SFT, while Math-Shepherd (Wang et al., 2023a) uses MetaMath (Yu et al., 2023b). We highlight the advantages of our approach via a conceptual comparison with other methods, where NR, OG, OF, and NS represent "w/o Reward Model", "On-policy Generation", "Online Feedback", and "w/ Negative Samples".

Approach                     Base Model    NR   OG   OF   NS   GSM8K   MATH
LMSI                         PaLM-540B     ✓    ✓    ✗    ✗    73.5    −
SFT (MetaMath)               Mistral-7B    −    −    −    −    77.7    28.2
Math-Shepherd                Mistral-7B    ✗    ✓    ✗    ✓    84.1    33.0
SFT (Arithmo)                Mistral-7B    −    −    −    −    75.9    28.9
MCTS Offline-DPO             Mistral-7B    ✓    ✗    ✗    ✓    79.9    31.9
Instance-level Online-DPO    Mistral-7B    ✓    ✓    ✓    ✓    79.7    32.9
Ours                         Mistral-7B    ✓    ✓    ✓    ✓    80.7    32.2
Ours (w/ G.T.)               Mistral-7B    ✓    ✓    ✓    ✓    81.8    34.7

Figure 2: Performance on the validation set of ARC-C when training with different settings.

Table 2: Result comparison (accuracy %) on commonsense reasoning tasks. The results based on GPT-3-curie (Brown et al., 2020) and T5 (Raffel et al., 2020) are reported from Liu et al. (2023b). For CSQA, we also include the GPT-J (Wang and Komatsuzaki, 2021) results reported by Zelikman et al. (2022). We follow Liu et al. (2023b) and combine the training data of ARC, AI2Sci, OBQA, and CSQA for training, while STaR (Zelikman et al., 2022) only uses CSQA for training.

Approach                     Base Model           NR   OG   OF   NS   ARC-c   AI2Sci-m   CSQA   SciQ   Train Data Used (%)
CoT Tuning                   GPT-3-curie (6.7B)   ✓    ✗    ✗    ✗    −       −          56.8   −      100
Direct Tuning                GPT-3-curie (6.7B)   ✓    ✗    ✗    ✗    −       −          60.0   −      100
STaR                         GPT-J (6B)           ✓    ✓    ✓    ✗    −       −          72.5   −      86.7
Direct Tuning                T5-11B               ✓    ✗    ✗    ✗    72.9    84.0       82.0   83.2   100
Crystal                      T5-11B               ✗    ✓    ✓    ✓    73.2    84.8       82.3   85.3   100
SFT Base (Arithmo)           Mistral-7B           −    −    −    −    60.6    70.9       54.1   80.8   −
Direct Tuning                Mistral-7B           ✓    ✗    ✗    ✗    73.9    85.2       79.3   86.4   100
MCTS Offline-DPO             Mistral-7B           ✓    ✗    ✗    ✓    70.8    82.6       68.5   87.4   19.2
Instance-level Online-DPO    Mistral-7B           ✓    ✓    ✓    ✓    75.3    87.3       63.1   87.6   45.6
Ours                         Mistral-7B           ✓    ✓    ✓    ✓    76.4    88.2       74.8   88.5   47.8

4.1 Main Results

Arithmetic Reasoning. In Table 1, we present a comparative analysis of performance gains on arithmetic reasoning tasks. Our method demonstrates substantial improvements, notably on GSM8K, increasing from 75.9% to 81.8%, and on MATH, improving from 28.9% to 34.7%. Compared to Math-Shepherd, which also utilizes process supervision in preference learning, our approach achieves similar performance enhancements without the necessity of training separate reward or value networks. This suggests the potential of integrating trained reward model signals into our MCTS stage to further augment performance. Furthermore, we observe a significant performance gain on MATH when incorporating the ground-truth solutions into the MCTS process for preference data collection, illustrating an effective way to refine preference data quality with G.T. guidance.
Commonsense Reasoning. In Table 2, we report the performance on commonsense reasoning tasks, where our method shows consistent improvements. Notably, we achieve absolute accuracy increases of 2.5%, 3.0%, and 2.1% on ARC-Challenge (ARC-C), AI2Sci-Middle (AI2Sci-M), and SciQ, respectively, surpassing the results of direct tuning. However, on tasks like OBQA and CSQA, our method, which focuses on intermediate reasoning refinement, is less effective than direct tuning. Despite significant improvements over the supervised fine-tuning (SFT) baseline (for instance, from 59.8% to 79.2% on OBQA, and from 54.1% to 74.8% on CSQA), the gains are modest relative to direct tuning. This discrepancy could be attributed to the base model's lack of specific knowledge: eliciting intermediate reasoning chains may introduce increased uncertainty into model generations, leading to incorrect predictions. We delve deeper into this issue of hallucination and its implications in our qualitative analysis in Section 4.2.

4.2 Further Analysis

Training- vs. Test-Time Compute Scaling. Our method integrates MCTS with preference learning, aiming to enhance both preference quality and policy reasoning via step-level alignment. We analyze the impact of training-time compute scaling versus increased inference-time sampling.

Figure 3: Training- vs. Test-Time Compute Scaling on the ARC-C, SciQ, and MATH evaluation sets. The cumulative pass rate of our iterative learning method can be seen as the pass rate of an ensemble of different model checkpoints. We use greedy decoding to obtain the inference-time performance of our iterative learning method.

Table 3: Ablation of "EXAMPLE ANSWER" in self-evaluation on GSM8K, MATH, and ARC-C. We report AUC and accuracy (%) to compare the discriminative abilities of self-evaluation scores.

Approach              GSM8K             MATH              ARC-C
                      AUC    Accuracy   AUC    Accuracy   AUC    Accuracy
w/ example answer     74.7   72.5       76.6   48.8       65.2   57.5
w/o example answer    62.0   69.5       48.1   42.3       55.8   48.4

We measure success by the pass rate, indicating the percentage of correctly elicited answers. Figure 3 displays the cumulative pass rate at each checkpoint, aggregating the pass rates up to that point. For test-time scaling, we increase the number of sampled reasoning chains. Additionally, we compare the inference performance of our checkpoints with a sampling-only method, self-consistency, to assess their potential performance ceilings. The pass rate curves on the ARC-C, SciQ, and MATH datasets reveal that our MCTS-enhanced approach yields a higher training compute scaling exponent. This effect is particularly pronounced on the unseen SciQ dataset, highlighting our method's efficiency and effectiveness in enhancing specific reasoning abilities with broad applicability. Inference-time performance analysis shows higher performance upper bounds of our method on ARC-C and SciQ. For instance, while self-consistency on SciQ plateaus at around 84%, our framework pushes performance to 88.6%. However, on MATH, the sampling-only approach outperforms training compute scaling: more sampling consistently enhances performance beyond 35%, whereas post-training performance hovers around 32.2%. This observation suggests that in-domain SFT already aligns the model well with task-specific requirements.

Functions of the Self-Evaluation Mechanism.
As illustrated in Section 2.1, the self-evaluation score inherently revises the Q value estimation for subsequent preference data collection. In practice, we find that the ground-truth information, i.e., the "EXAMPLE ANSWER" in Table 6, is crucial to ensure the reliability of self-evaluation. We compare the score distributions and discriminative abilities when including vs. excluding this ground-truth information in Table 3. With this information, the accuracy of self-evaluation significantly improves across the GSM8K, MATH, and ARC-C datasets.

Ablation Study. We ablate the impact of step-level supervision signals and the online learning aspect of our MCTS-based approach. Tables 1 and 2 show performance comparisons across commonsense and arithmetic reasoning tasks under different settings. Our method, focusing on step-level online preference learning, consistently outperforms both instance-level and offline approaches on commonsense reasoning. For example, we achieve 76.4% on ARC-C and 88.5% on SciQ, surpassing the 70.8% and 87.4% of the offline variant, and the 75.3% and 87.6% of the instance-level approach. In arithmetic reasoning, performance differences among settings are less pronounced for challenging tasks such as MATH without the incorporation of ground-truth solutions (e.g., 32.2% for our method
Therefore, oxygen is the gas that will increase in the atmosphere as the rain forest is deforested. The answer is: A. Question: Which pair of instruments can be used to measure wind chill? Answer Choices: (A) barometer and wind vane (B) wind vane and anemometer (C) thermometer and barometer (D) anemometer and thermometer SFT Baseline: Wind chill is the effect of wind on the apparent temperature, it is the temperature that the human body feels due to the movement of air. It is an index of the discomfort felt by the human body due to low temperature and high wind speed. It is measured by an instrument called wind chill meter. The answer is B Ours: To measure wind chill, you would need an anemometer and a thermometer. The anemometer measures the wind speed, and the thermometer measures the temperature. Together, these two instruments can be used to calculate the wind chill factor. The answer is: D Question: Let f (x) = x7 + ax3 + cx If f (5) = 3, then find f (−5). SFT Baseline: We know that f (5) = 57 + 5a + 5c = 3. We want to find f (−5), which is (−5)7 + (−5a) +(−5c). Since (−5)7 = −57 = −3, we have f (−5) = −3 − 5a − 5c. We can rewrite this as f (−5) = −3 − 5(−3) − 5c = −3 + 15 − 5c = 12 − 5c . Ours: We know that f (5) = 57 + 5a + 5c = 3. Since f (x) is a polynomial, we can use the fact that f (−5) = (−5)7 − 5a − 5c = −3 . Question: If f (c) = 3 where k, l, m, and n are integers. SFT Baseline: We start by finding f −1(c). To do this, we set f (c) = x and solve for c: lm when f −1(c) × c × f (c) equals the simplified fraction kc+l mc+n , 2c−3 , find kn2 3 2c−3 = x, 3 x +3 2 3 c +3 9+9c 2c − 3 = 3 .So f −1(c) = 2c × 3 4c(c−1) = kc+l 2c−3 Simplifying, we get: 3+3c 2c . Next, we substitute f −1(c) into the expression mc+n , so we set them equal to each other and solve for k, l, m, mc+n Cross-multiplying, we get: 9 + 9c = 4c(c − 1)(kc + l) Expanding and simplifying, x , 2c = 3 x + 3, c = 3 +3 f −1(c) × c × f (c): 2c × c × 3 c We want this expression to be equal to kc+l and n: we get: 9 + 9c = 4c2k − 4ck − 4cl + l Rearranging terms, we get: 4c2k − 4 Ours: We know that f (f −1(c)) = c for all c in the domain of f −1. So, f (f −1(c)) = f (c) for all c in the domain of f −1. Therefore, f (f −1(c)) = f (c) implies f (c) = c for all c in the domain of f −1. Since f (c) = 3 domain of f −1. Simplifying, we get c = 3 for all c in the domain of f −1. Therefore, f −1(c) = 3 for all c in the domain of f −1. Now, we can substitute f −1(c) = 3 into f −1(c) × c × f (c) to get 3 × 3 × Simplifying, we get 9 × 3 2c−3 , we have f (c) = c for all c in the domain of f −1. This means that 2c − 3 = c for all c in the 9+9c 4c2−6c 9+9c 4c(c−1) 3 2(3)−3 . 2c−3 3 = 9. Therefore, kn2 lm = 9 (C) (D) -3 9 (w/o G.T.) vs. 31.9% and 32.9% for offline and instance-level on MATH). The comparable perfor- mance of offline learning aligns with our theoretical analysis that offline approaches can be effective when the initial policy is already well-tuned with high-quality, in-domain data. We further interpret how G.T. guidance integration to enhance the effectiveness of our framework in Figure 4. With G.T. supervision, the accuracy of training signals improve significantly from 81.2% to 97.9%, leading to substantial performance gain on model performance. This also explains the similar performance (w/o G.T.) between corresponding using step- and instance-level supervision, where our step-level approach shows effectiveness in narrowing the gap between accuracies of corresponding supervisions. 
Training Dynamics in Iterative Learning. As shown in Figure 2, online learning exhibits cyclic performance fluctuations, with validation performance peaking before dipping. We conduct a theoretical analysis of this in Appendix B and show that continuous policy updates with the latest models can lead to periodic knowledge loss due to insufficient optimization in iterative updates. We further probe these phenomena qualitatively next.

Qualitative Analysis. Our qualitative analysis in Table 4 examines the impact of step-level supervision on intermediate reasoning correctness across different tasks. On OBQA, the implementation of MCTS, as discussed in Section 4.1, often leads to longer reasoning chains. This can introduce errors in commonsense reasoning tasks, as seen in our OBQA example, where an extended chain results in an incorrect final prediction. Conversely, on the MATH dataset, our approach successfully guides the model to rectify mistakes and formulate accurate, extended reasoning chains, demonstrating its effectiveness on complex math word problems. This analysis underscores the need to balance reasoning chain length and logical coherence, particularly in tasks with higher uncertainty, such as commonsense reasoning.

5 Related Work

Various studies focus on self-improvement to exploit the model's own capability. One line of work focuses on collecting high-quality positive data from model generations guided by a static reward heuristic (Zelikman et al., 2022; Gülçehre et al., 2023; Polu et al., 2023). Recently, Yuan et al. (2024) utilized continuously updated LLM self-rewarding to collect both positive and negative data for preference learning. Fu et al. (2023) adopt an exploration strategy via rejection sampling for online data collection in iterative preference learning. Different from prior work on instance-level alignment, we leverage MCTS as a policy improvement operator to iteratively facilitate step-level preference learning. We discuss additional related work in Appendix A.

6 Conclusion

In this paper, we propose MCTS-enhanced iterative preference learning, utilizing MCTS as a policy improvement operator to enhance LLM alignment via step-level preference learning. MCTS balances quality exploitation and diversity exploration to produce high-quality training data, efficiently pushing the ceiling of LLM performance on various reasoning tasks. Theoretical analysis shows that online sampling in our iterative learning framework is key to improving the LLM policy toward optimal alignment. We hope our proposed approach can inspire future research on LLM alignment from both data-centric and algorithmic aspects: exploring search strategies and the utilization of historical data and policies to augment and diversify training examples, and strategically employing a tradeoff between offline and online learning to address the cyclic performance changes of the online learning framework discussed in our theoretical analysis.

Acknowledgments and Disclosure of Funding

The computational work for this article was partially performed on resources of the National Supercomputing Centre (NSCC), Singapore⁶.

⁶ https://www.nscc.sg/

References

Thomas Anthony, Zheng Tian, and David Barber. 2017. Thinking fast and slow with deep learning and tree search.
In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5360–5370.

Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal Valko, and Rémi Munos. 2023. A general theoretical paradigm to understand learning from human preferences. CoRR, abs/2310.12036.

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, Benjamin Mann, and Jared Kaplan. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. CoRR, abs/2204.05862.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073.

Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324–345.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.

Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4299–4307.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the AI2 reasoning challenge. CoRR, abs/1803.05457.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168.

Rémi Coulom. 2006. Efficient selectivity and backup operators in monte-carlo tree search. In International conference on computers and games, pages 72–83. Springer.

Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, and Rongrong Ji. 2023. MME: A comprehensive evaluation benchmark for multimodal large language models. CoRR, abs/2306.13394.

Jean-Bastien Grill, Florent Altché, Yunhao Tang, Thomas Hubert, Michal Valko, Ioannis Antonoglou, and Rémi Munos. 2020. Monte-carlo tree search as regularized policy optimization. In International Conference on Machine Learning, pages 3769–3778. PMLR.

Çaglar Gülçehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud Doucet, Orhan Firat, and Nando de Freitas. 2023. Reinforced self-training (rest) for language modeling. CoRR, abs/2308.08998.
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. 2023. Reasoning with language model is planning with world model. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 8154–8173. Association for Computational Linguistics.

Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2020. Revisiting self-training for neural sequence generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.

Jiaxin Huang, Shixiang Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2023. Large language models can self-improve. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 1051–1068. Association for Computational Linguistics.

H. J. Scudder III. 1965. Probability of error of some adaptive pattern-recognition machines. IEEE Trans. Inf. Theory, 11(3):363–371.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. CoRR, abs/2310.06825.

Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. 2022. Language models (mostly) know what they know. CoRR, abs/2207.05221.

Levente Kocsis and Csaba Szepesvári. 2006. Bandit based monte-carlo planning. In European conference on machine learning, pages 282–293. Springer.

Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. 2023. RLAIF: scaling reinforcement learning from human feedback with AI feedback. CoRR, abs/2309.00267.

Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2023. Making language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5315–5333.

Hunter Lightman, Vineet Kosaraju, Yura Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. CoRR, abs/2305.20050.

Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, and Asli Celikyilmaz. 2023a. Making PPO even better: Value-guided monte-carlo tree search decoding. CoRR, abs/2309.15028.

Jiacheng Liu, Ramakanth Pasunuru, Hannaneh Hajishirzi, Yejin Choi, and Asli Celikyilmaz. 2023b.
Crystal: Introspective reasoners reinforced with self-feedback. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11557–11572, Singapore. Association for Computational Linguistics.

Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, and Jialu Liu. 2023c. Statistical rejection sampling improves preference optimization. CoRR, abs/2309.06657.

Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? A new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2381–2391. Association for Computational Linguistics.

Eric Mitchell. 2023. A note on dpo with noisy preferences & relationship to ipo.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In NeurIPS.

Daniel S Park, Yu Zhang, Ye Jia, Wei Han, Chung-Cheng Chiu, Bo Li, Yonghui Wu, and Quoc V Le. 2020. Improved noisy student training for automatic speech recognition. arXiv preprint arXiv:2005.09629.

Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya Sutskever. 2023. Formal mathematics statement curriculum learning. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. CoRR, abs/2305.18290.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551.

Jie Ren, Yao Zhao, Tu Vu, Peter J Liu, and Balaji Lakshminarayanan. 2023. Self-evaluation improves selective generation in large language models. arXiv preprint arXiv:2312.09300.

Christopher D Rosin. 2011. Multi-armed bandits with episode context. Annals of Mathematics and Artificial Intelligence, 61(3):203–230.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. CoRR, abs/1707.06347.

David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy P. Lillicrap, Karen Simonyan, and Demis Hassabis. 2017. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. CoRR, abs/1712.01815.

Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. 2020. Learning to summarize from human feedback. CoRR, abs/2009.01325.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288.

Jonathan Uesato, Nate Kushman, Ramana Kumar, H. Francis Song, Noah Y. Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. 2022. Solving math word problems with process- and outcome-based feedback. CoRR, abs/2211.14275.

Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax.

Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Y Wu, and Zhifang Sui. 2023a. Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning. arXiv preprint arXiv:2312.08935.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023b. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.

Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017. Crowdsourcing multiple choice science questions. In Proceedings of the 3rd Workshop on Noisy User-generated Text, NUT@EMNLP 2017, Copenhagen, Denmark, September 7, 2017, pages 94–106. Association for Computational Linguistics.

Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, and Hannaneh Hajishirzi. 2023. Fine-grained human feedback gives better rewards for language model training. CoRR, abs/2306.01693.

Qizhe Xie, Minh-Thang Luong, Eduard H. Hovy, and Quoc V. Le. 2020. Self-training with noisy student improves imagenet classification. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 10684–10695. Computer Vision Foundation / IEEE.
Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, Min-Yen Kan, Junxian He, and Qizhe Xie. 2023. Decomposition enhances reasoning via self-evaluation guided decoding. CoRR, abs/2305.00633.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. CoRR, abs/2305.10601.

David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd Annual Meeting of the Association for Computational Linguistics, 26-30 June 1995, MIT, Cambridge, Massachusetts, USA, Proceedings, pages 189–196. Morgan Kaufmann Publishers / ACL.

Fei Yu, Anningzhe Gao, and Benyou Wang. 2023a. Outcome-supervised verifiers for planning in mathematical reasoning. CoRR, abs/2311.09724.

Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023b. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284.

Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, and Jason Weston. 2024. Self-rewarding language models. arXiv preprint arXiv:2401.10020.

Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. 2022. Star: Bootstrapping reasoning with reasoning. In NeurIPS.

Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, and Joseph E. Gonzalez. 2023. The wisdom of hindsight makes language models better instruction followers. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 41414–41428. PMLR.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. CoRR, abs/2306.05685.

Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and Yujiu Yang. 2023. Solving math word problems via cooperative reasoning induced language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 4471–4485. Association for Computational Linguistics.

Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin Dogus Cubuk, and Quoc Le. 2020. Rethinking pre-training and self-training. Advances in neural information processing systems, 33:3833–3845.

A Related Work

Iterated Learning. Typical iterated learning operates in a multi-agent scenario, consisting of a loop where an apprentice self-plays, learns from expert feedback, and replaces the current expert for the new iteration (Anthony et al., 2017). Polu et al. (2023) apply expert iteration to formal mathematical reasoning to conduct proof search interleaved with learning. Zelikman et al. (2022) avoid the need for training a separate value function by directly assessing the final outcomes of reasoning to filter generated examples for iterated learning. Recently, Yuan et al. (2024) leveraged the technique of LLM-as-a-Judge (Zheng et al., 2023) and introduced self-rewarding language models to improve LLM alignment with self-feedback. In contrast, we combine the feedback of outcome assessment and LLM self-evaluation and further decompose them into fine-grained signals via MCTS for step-level iterative preference learning.

Self-Training.
Self-training uses unlabeled data to improve model training by assigning pseudo labels from a learned labeler (Scudder, 1965; Yarowsky, 1995; Xie et al., 2020; He et al., 2020; Park et al., 2020; Zoph et al., 2020). Recent research has explored several alternatives for labeling the examples. Zelikman et al. (2022) and Gülçehre et al. (2023) use a static reward heuristic to curate high-quality examples, while Huang et al. (2023) collect high-confidence outputs as training data via chain-of-thought prompting (Wei et al., 2022) and self-consistency (Wang et al., 2023b). Lee et al. (2023) and Yuan et al. (2024) utilize the off-the-shelf LLM to reward its own generations for preference learning. To mitigate the noise from sparse instance-level signals, we further refine the preference labels via stepwise tree search and LLM self-evaluation.

Preference Learning. The empirical achievements of LLMs have identified the benefits of RL techniques for better alignment with human preferences (Touvron et al., 2023; Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a). The standard preference learning process learns a reward model to provide feedback in online RL (Schulman et al., 2017). Recently, a variety of studies have avoided training separate reward or value networks via hindsight instruction relabeling (Zhang et al., 2023), direct preference optimization (Rafailov et al., 2023), and LLM self-evaluation (Ren et al., 2023). We further explore automatic supervision with MCTS to collect step-level preferences by breaking down outcome correctness integrated with self-evaluation. Our approach enables the continual collection of better-quality data via iterative learning, mitigating the limits on preference data imposed by a frozen reward model or offline learning algorithms.

Guided Search for Reasoning. Recent works improve LLM reasoning ability by eliciting the intermediate reasoning chain (Wei et al., 2022) and breaking it down into multiple steps via searching (Yao et al., 2023; Hao et al., 2023; Yu et al., 2023a). The decomposition of the reasoning process has also been shown to be effective in reinforcement learning. Lightman et al. (2023) and Li et al. (2023) apply process supervision to train reward models that are more reliable than outcome-supervised ones in mathematical reasoning (Uesato et al., 2022). Wang et al. (2023a) reinforce LLMs step-by-step with process supervision data automatically collected via model sampling and annotation. We leverage the look-ahead ability of MCTS and integrate it with step-by-step self-evaluation to provide refined process supervision for reasoning. This improves the generalization ability of our framework, which iteratively updates the policy via preferences collected in real time.

B Theoretical Analysis of Online DPO

Preliminaries. A typical alignment technique begins with a policy πsft(y | x) supervisedly fine-tuned on high-quality data from the target domain, where x and y are the prompt and the response, respectively. The SFT policy is used to sample pairs of responses (y1, y2) ∼ πsft(y | x) for prompts x, which are further labeled as pairwise preference data yw ≻ yl | x, where yw and yl represent the preferred and dispreferred responses, respectively. The standard RLHF paradigm trains a reward model (Ouyang et al., 2022) on the preference data and employs PPO (Schulman et al., 2017) to optimize the policy πθ with the feedback provided by the reward model, where πθ is also initialized to πsft in practice. DPO avoids fitting a reward model by optimizing the policy πθ using preferences directly. Given a reward function r(x, y) and prompt distribution P, RLHF and DPO optimize the KL-constrained reward maximization objective:

    max_π E_(x∼P, y∼π)[ r(x, y) ] − β D_KL[ π(y | x) ∥ πsft(y | x) ]    (9)

where β scales the strength of the KL constraint.
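The maximizer of Eq. (9) over a finite output set has the well-known closed form π*(y | x) ∝ πsft(y | x) exp(r(x, y)/β) (Rafailov et al., 2023), which the following toy Python snippet makes concrete; the numbers are made up for illustration.

```python
import math

def kl_constrained_optimum(pi_sft, rewards, beta=0.1):
    """Closed-form maximizer of Eq. (9) on a finite output set:
    pi*(y | x) proportional to pi_sft(y | x) * exp(r(x, y) / beta)."""
    unnorm = {y: pi_sft[y] * math.exp(rewards[y] / beta) for y in pi_sft}
    z = sum(unnorm.values())
    return {y: p / z for y, p in unnorm.items()}

pi_sft = {"y1": 0.5, "y2": 0.3, "y3": 0.2}      # hypothetical SFT policy
rewards = {"y1": 0.0, "y2": 0.2, "y3": 0.1}     # hypothetical rewards
print(kl_constrained_optimum(pi_sft, rewards))  # mass shifts toward y2
# A larger beta keeps pi* close to pi_sft; a smaller beta concentrates
# probability mass on the highest-reward output.
```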
DPO avoids fitting a reward model by optimizing the policy πθ using preferences directly. Given a reward function r(x, y) and prompt distribution P, RLHF and DPO optimize the KL-constrained reward maximization objective as follows: max π Ex∼P,y∼π[r(x, y)] − βDKL[π(y | x) ∥ πsft(y | x)] (9) where β scales the strength of the KL constraint. Let the ground-truth reward function be r∗, then Rafailov et al. (2023) estimate the optimal policy π∗ by fitting the Bradley-Terry model (Bradley and Terry, 1952) on 15 preference data: = p∗(y1 ≻ y1 | x) = σ(r∗(x, y1) − r∗(x, y2)) 1 (cid:16) β log π∗(y2|x) πsft(y2|x) − β log π∗(y1|x) 1 + exp πsft(y1|x) (cid:17) (10) As the maximum likelihood estimator (MLE) of the optimal policy requires preferences sampled from the target policy (Liu et al., 2023c), DPO uses a fixed, potentially optimal but unknown policy to collect preference data of good quality. This discrepancy can be a problem when the sampling policy differs dramatically from the current policy. Moreover, the absence of a reward model in DPO presents challenges in learning from additional policy-generated data that lacks explicit preference indicators. We further discuss the offline and online settings of DPO in Section 3. Additional details on labeling outcomes. After a pair of outcomes (y(i), y′(i)) are sampled from some sampling policy π(i), these are labeled to be y(i) w and y(i) according to some preference density p. That is, l ) = (y′(i), y(i))] = 1 − p(y(i) ≻ y′(i) | Pr[(y(i) x). For simplicity, a preference density is set to be p(y∗ ≻ ¯y | x) = 1 for every optima-suboptimal pairs (y∗, ¯y) for all ¯y ∈ Y . We do not specify the preference density for other pairs, i.e., p(¯y ≻ ¯y′ | x) is arbitrary for (¯y, ¯y′) ∈ Y × Y . l ) = (y(i), y′(i))] = p(y(i) ≻ y′(i) | x) and Pr[(y(i) w , y(i) w , y(i) l Abstract formulation for both offline and online settings. Our abstract formulation covers both the offline and online settings. The offline setting in previous papers is obtained by setting π(i) to be a single distribution fixed over i ∈ {1, 2, · · · , M }, e.g., an initial policy, an optimal policy, or an empirical data distribution of a given preference data. In the case of the empirical data distribution, the preference density p is set to the function outputting only 0 or 1 to recover the given preference data. The online setting is obtained by setting π(i) = πθ(i−1) where πθ(i−1) is the latest policy at the beginning of the i-th iteration, i.e., for i ≥ 1, θ(i) satisfies ℓj(θ(i)) = 0 for j ∈ {1, 2, · · · , i − 1} and θ(0) is the initialization. Thus, we can analyze both offline and online settings with this abstract framework. Proof of Theorem 3.1. Proof. The intuition behind the proof of Theorem 3.1 is that the current policy πθ(i) may not be corrected if a fixed sampling policy π never samples a suboptimal output ¯y ∈ Y whose probability is high for the current policy πθ(i) . Let ¯y be the suboptimal output such that π(¯y | x) ≤ ϵ and πθ(i) (¯y | x) ≥ c for some i ∈ {1, 2, · · · , M }. Denote preferences sampled by policy π(i) as (y(i) l ). 
From the definition of the logistic function, we can rewrite w , y(i) (cid:32) ℓi(θ) = − log σ β log πθ(y(i) πref (y(i) w | x) w | x) 1 − β log (cid:33) πθ(y(i) l πref (y(i) l | x) | x) = − log = − log = − log = − log (i) 1 + exp(β log πθ (y l πref (y |x) (i) l |x) − β log πθ (y πref (y (i) w |x) (i) w |x) ) exp(β log πθ (y πref (y ) (i) w |x) (i) w |x) (i) ) + exp(β log πθ (y l (i) πref (y l |x) ) exp(β log πθ (y πref (y (i) w |x) (i) w |x) πθ (y πref (y (i) w |x)β (i) w |x)β (i) w |x)β (i) w |x)β (i) + πθ (y l (i) πref (y l πθ(y(i) w | x)β + πθ(y(i) l |x)β |x)β w | x)β πθ (y πref (y πθ(y(i) (i) w |x) | x)β( πref (y (i) |x) πref (y l )β |x) . From this equation, we observe that ℓi(θ) can be minimized to be zero by minimizing πθ(y(i) w | x). That is, for any β > 0, if πθ(y(i) without maximizing πθ(y(i) | x) = 0, l l | x) to be zero w | x)β w | x)β + 0 Thus, even if we sample y∗ with the optimal policy, ℓi(θ) can be minimized without maximizing πθ(y∗ | x) and minimizing πθ(¯y|x) for ¯y ̸= y(i) for all i ∈ {1, 2, · · · , M }, there exists θ such that . Thus, if ¯y ̸= y(i) = − log 1 = 0. ℓi(θ) = − log πθ(y(i) πθ(y(i) l l 16 ℓi(θ) ≤ 0 for all i = 1, . . . , M , and πθ(¯y | x) ≥ c, because of the condition that πθ(¯y | x) ≥ c for some i ∈ {1, 2, · · · , M }: i.e., πθ(¯y | x) is never minimized from the i-th iteration while minimizing ℓi(θ) arbitrarily well, if ¯y is never sampled. Therefore, if ¯y is never sampled over m iterations, since the probabilities sums up to one, we have πθ(y∗ | x) ≤ 1 − πθ(¯y|x) ≤ 1 − c. Moreover, Pr[ ¯y being never sampled over m iterations ] ≥ (1 − 2ϵ)m ≥ 1 − 2ϵm, where the last line follows Bernoulli’s inequality. By combining the above two equations, it holds that Pr[πθ(y∗|x) ≤ 1 − c] ≥ 1 − 2ϵM. Proof of Theorem 3.2. Proof. From the proof of Theorem 3.1, we have ℓi(θ) = − log πθ(y(i) w | x)β + πθ(y(i) l w | x)β πθ(y(i) (i) w |x) | x)β( πref (y (i) πref (y |x) l )β . For α ≥ 0 and β > 0, the condition ℓi(θ) ≤ α implies that − log ⇐⇒ πθ(y(i) w | x)β + πθ(y(i) l w | x)β πθ(y(i) (i) w |x) | x)β( πref (y (i) |x) πref (y l )β πθ(y(i) w | x)β + πθ(y(i) l w | x)β πθ(y(i) (i) w |x) | x)β( πref (y (i) |x) πref (y l )β ≤ α ≥ exp(−α) ⇐⇒ πθ(y(i) w | x)β ≥ exp(−α)πθ(y(i) w | x)β + exp(−α)πθ(y(i) l (cid:32) | x)β ⇐⇒ πθ(y(i) w | x)β[1 − exp(−α)] ≥ πθ(y(i) l ⇐⇒ πθ(y(i) w | x)β[1 − exp(−α)] ≥ πθ(y(i) l (cid:32) (cid:32) | x)β exp(−α) | x)β exp(−α) πref (y(i) πref (y(i) w | x) | x) l (cid:33)β πref (y(i) πref (y(i) w | x) | x) l (cid:33)β w | x) |x) πref (y(i) πref (y(i) (cid:33)β l ⇐⇒ πθ(y(i) w | x)(exp(α) − 1)1/β (cid:32) Since πθ(y(i) w | x) ≤ 1, this implies that | x) πref (y(i) πref (y(i) l w | x) (cid:33) ≥ πθ(y(i) l | x). πθ(y(i) l | x) ≤ πθ(y(i) w | x)(exp(α) − 1)1/β πref (y(i) πref (y(i) l w | x) | x) ≤ (exp(α) − 1)1/β πref (y(i) πref (y(i) l w | x) | x) . Thus, while we can prove a similar statement for α > 0 with this equation, we set α = 0 for this theorem for a cleaner insight, yielding the following: the condition ℓi(θ) ≤ 0 implies that πθ(y(i) l | x) = 0. Since y(i) and y′(i) are sampled from πθ(i) without replacement, this means that we have πθ(i+k) (y(i) | x) = 0 for all k ≥ 1 from the definition of πθ(i) : i.e., πθ(i) is the policy such that ℓj(θ(i)) = 0 for all j = 1, . . . , i − 1. Since πθ(i+k) is then used to sample y(i) and y′(i) in the followings iterations for k ≥ 1, we will never sample this y(i) again. 
Thus, at each iteration, we always sample pairs of y and y′ that do not include an output judged to be not preferred in a previous iteration. This implies that at each iteration, we increase the number of suboptimal samples ȳ ∈ Y such that π_θ(i)(ȳ | x) = 0. In other words, we have

    |{ ȳ ∈ Y | π_θ(i)(ȳ | x) = 0 }| ≥ i − 1.

Thus,

    π_θ(i)(y* | x) = 1 − Σ_{j=1}^{n} π_θ(i)(ȳ_j | x) = 1 − Σ_{j∈S} π_θ(i)(ȳ_j | x),

where |S| ≤ n + 1 − i. Therefore, π_θ(i)(y* | x) = 1 when i ≥ n + 1.

Additional discussion. We list additional insights gained from the theoretical analysis.

• The proofs of Theorems 3.1–3.2 suggest that a better sampling policy is to use both the current policy and the optimal policy at the same time in the preference learning loss, i.e., sample y ∼ π* and y′ ∼ π_θ(i−1). This avoids the failure case of Theorem 3.1 and improves the convergence speed in Theorem 3.2. However, since we cannot access the optimal policy π* in practice, Theorems 3.1–3.2 motivate online DPO. Online DPO is proven to converge to an optimal policy even if we cannot sample outputs from the optimal policy.

• The proofs of Theorems 3.1–3.2 suggest that if we can sample from the optimal policy, then we can also use the samples of the optimal policy with the negative log-likelihood loss − log π_θ(y* | x) instead of the DPO loss to avoid the failure case.

• The proofs of Theorems 3.1–3.2 suggest that in the online setting, we should minimize the DPO loss sufficiently at each iteration, i.e., we should take several rounds of minimization of the DPO loss per online iteration, instead of only a single round. This is because the proofs show that we can get into a cyclic situation in the online setting if the DPO loss is not minimized sufficiently per iteration. For example, we can sample ȳ1 and ȳ2 in one iteration and ȳ2 and ȳ3 in another iteration, where ȳ1 ≻ ȳ2 ≻ ȳ3. If the probability of sampling ȳ2 is not minimized sufficiently in the first iteration, it can be sampled again in the second iteration, where its probability can be increased since ȳ2 ≻ ȳ3. This can then repeat indefinitely. Thus, it is important to minimize the DPO loss with several optimizer steps per online iteration. A toy simulation of these dynamics is sketched below.
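The following is a self-contained toy simulation, entirely our own construction rather than the paper's setup: a categorical policy over n outputs where a lower index is always preferred, with pairs sampled from the current policy and several inner optimizer steps per online iteration.

```python
import torch

torch.manual_seed(0)
n, beta = 8, 0.1
logits = torch.zeros(n, requires_grad=True)            # parameters of pi_theta
ref_logp = torch.log_softmax(torch.zeros(n), dim=0)    # uniform frozen reference
opt = torch.optim.Adam([logits], lr=0.5)

for _ in range(30):                                    # online iterations
    with torch.no_grad():
        pair = torch.multinomial(torch.softmax(logits, 0), 2, replacement=False)
    y_w, y_l = pair.min(), pair.max()                  # lower index = preferred
    for _ in range(10):                                # inner optimizer steps
        logp = torch.log_softmax(logits, dim=0)
        margin = beta * ((logp[y_w] - ref_logp[y_w]) - (logp[y_l] - ref_logp[y_l]))
        loss = -torch.nn.functional.logsigmoid(margin)
        opt.zero_grad(); loss.backward(); opt.step()

# Probability mass should drift toward the preferred (low-index) outputs.
print(torch.softmax(logits, dim=0))
```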
C Implementation Details

We use Mistral-7B as our base pre-trained model. The supervised fine-tuning and preference learning experiments are conducted with a maximum of 4 × 40GB GPUs (NVIDIA A100). We choose learning rates of 5e-6 and 1e-6 for SFT and DPO training, respectively, with a cosine learning rate scheduler. The maximum sequence length of the models is 512. We train the model with a batch size of 128 for SFT and 32 for DPO. For DPO, following the DPO paper, we set the KL constraint parameter β to 0.1. Each sample in DPO is a set of step-level preference data decomposed by MCTS. We set the maximum length for each step to 64. The number of MCTS iterations is set to K = 5 for all tasks. For arithmetic reasoning, we combine the problems in the GSM8K and MATH training sets as the prompt data, containing a total of 24K samples for preference learning. For each sample, we conduct MCTS with an initial breadth of b1 = 5 and decrease it to b2 = 3 for the subsequent steps, with a maximum search depth d = 4. It takes about 2 minutes per sample to collect the step-level preferences via MCTS. This requires about 30 A100 days of compute to train one whole epoch. In practice, we can adopt an early stop when the performance saturates, which usually only needs 30% of the training data.

For commonsense reasoning, we combine the training data of ARC, AI2Science, OBQA, and CSQA, which produces a total of 12K samples. As the model generations are more diversified on these tasks, we set the initial breadth to b1 = 4 and decrease it to b2 = 2 for subsequent steps. As the intermediate reasoning chains are relatively shorter than those in arithmetic reasoning, we set the maximum search depth to d = 3. Likewise, we adopt an early stop at around 50% of the training progress, where the performance saturates.

Hyperparameter Tuning of MCTS. We compare the performance on commonsense reasoning when employing different search breadths in MCTS. Table 5 shows how different search heuristics impact learning performance. O2 produces better performance, highlighting the importance of increasing the search space at the starting point of MCTS. One can efficiently reduce compute while maintaining good performance by using a small search space for the subsequent steps. For future work, we will explore the hyperparameter settings in MCTS, including the search breadth, depth, number of steps, and iteration time, to probe the cost-performance tradeoff of our MCTS-enhanced iterative learning framework. A sketch of this decaying-breadth heuristic follows Table 5.

Approach             ARC-e  ARC-c  AI2Sci-e  AI2Sci-m  OBQA  CSQA  SciQ
SFT Baseline          69.2   60.6    74.9      70.9    59.8  54.1  80.8
O1 (b1 = 3, b2 = 3)   88.4   74.7    92.1      88.5    77.8  73.2  88.3
O2 (b1 = 4, b2 = 2)   88.5   76.4    91.7      88.2    79.2  74.8  88.5

Table 5: Result comparison of using different search breadths in MCTS. For O2, we have a broader spectrum for the initial step and narrow the search space for the subsequent steps of each path.
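The breadth schedule ablated in Table 5 reduces to a simple depth-dependent rule. The sketch below is our own illustration of the heuristic, with defaults matching the arithmetic-reasoning setting (b1 = 5, b2 = 3, d = 4):

```python
# A minimal sketch (ours, not the released code) of the decaying-breadth
# heuristic for MCTS expansion: b1 candidate steps at the root, b2 < b1
# for all subsequent depths, up to a maximum depth d.

def breadth_schedule(depth: int, b1: int = 5, b2: int = 3) -> int:
    """Number of child actions to expand at a given tree depth."""
    return b1 if depth == 0 else b2

def expand(node_depth: int, d: int = 4) -> int:
    """Return how many children to sample, or 0 past the depth limit."""
    if node_depth >= d:
        return 0
    return breadth_schedule(node_depth)

# E.g., arithmetic reasoning: depths 0..3 expand 5, 3, 3, 3 candidates.
print([expand(t) for t in range(5)])  # [5, 3, 3, 3, 0]
```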
Prompt Example. See an example of the evaluation prompt we use for self-evaluation in Table 6. For more details, please refer to our implementation code.

QUESTION: Which of the following is an example of the formation of a mixture? Answer Choices: (A) rust forming on an iron nail (B) sugar crystals dissolving in water (C) sodium and chlorine forming table salt (D) hydrogen and oxygen reacting to produce water
EXAMPLE ANSWER: The answer is (B) sugar crystals dissolving in water
PROPOSED SOLUTION: The formation of a mixture occurs when two or more substances are combined together without changing their individual properties. In the given options, rust forming on an iron nail is an example of the formation of a mixture. The iron nail and the oxygen in the air combine to form iron oxide, which is a mixture. The answer is A.
QUESTION: Evaluate if the proposed solution is logically heading in the correct direction. Provide an answer of (A) correct or (B) incorrect.
ANSWER: The answer is

Table 6: Evaluation Prompt Template. The text underlined will be replaced with content from different examples.

D Further Analysis

Reward Criteria in MCTS. We probe the effect of different reward guidance in MCTS in terms of both searching and training. Table 7 shows how different reward signals impact the pass rate of searching. The guidance of outcome correctness is substantially dominant in eliciting correct outcomes. We see that MCTS can produce significant improvement across various tasks with the reward signals integrating outcome correctness and self-evaluation, increasing the baseline performance from 60.6% to 83.0% on ARC-C, 70.9% to 90.5% on AI2Sci-M, and 75.9% to 85.8% on GSM8K. We observe a significant performance gain from learning when using greedy decoding on commonsense reasoning. For example, learning increases the accuracy to 76.4% (+16.4%) on ARC-C, compared to the increase of 9.1% in MCTS performance. This suggests a substantial improvement in the model's policy when applying our MCTS-enhanced iterative learning to tasks that the initial policy is not good at. Furthermore, the ablation study on the reward components shows consistent improvement brought by self-evaluation to the MCTS performance in both the before- and after-learning cases, suggesting the effectiveness of integrating self-evaluation into our approach.

Decoding Strategy   After Learning   ARC-C          AI2Sci-M       GSM8K
Greedy Decoding     ✗                60.6           70.9           75.9
                    ✓                76.4 (↑16.4)   88.2 (↑17.3)   80.7 (↑5.2)
MCTS w/o SE         ✗                82.5           87.3           84.4
                    ✓                91.0 (↑8.5)    96.1 (↑9.8)    89.0 (↑5.6)
MCTS                ✗                83.0           90.5           85.8
                    ✓                92.1 (↑9.1)    97.3 (↑6.8)    90.2 (↑4.4)

Table 7: Pass rates when ablating MCTS settings. SE represents the guidance from self-evaluation.

Qualitative Analysis on Collected Preferences. We show examples of the result search trees elicited via MCTS on different tasks in Figures 5–9. Figures 5 and 6 show the result search trees for answering the same science question using MCTS with different search breadths. We see that MCTS not only figures out the correct answer (i.e., option "D") via broad searching but also serves as a policy improvement optimizer, collecting the steps along this path as positive samples for preference learning. For example, the Q values of the preference pair at the last step (at the bottom right of Figure 5) are 0.70838 and −0.45433, compared to the original probabilities in the policy generation of 0.37989 and 0.38789. Compared to searching with breadth b1 = 4, b2 = 2 in Figure 5, Figure 6 shows that a higher breadth for the subsequent steps can produce an even larger search tree. However, as we only collect preference pairs alongside the paths leading to a correct prediction, these two search heuristics can result in preference data of similar size. Figure 7 shows the search tree using the trained policy on commonsense reasoning. Compared to the one generated by the initial policy in Figure 5, the trained policy has a higher chance of eliciting correct reasoning chains, as we see more successful predictions of the ground-truth option "D". We also observe that the policy tends to generate longer reasoning chains after being motivated to conduct chain-of-thought reasoning with fine-grained process supervision. On arithmetic reasoning, we also probe the impact of diversity in model generations using policies trained for different numbers of epochs in SFT. Figures 8 and 9 show the elicited search trees with data sampled by policies corresponding to different levels of diversity, where the policy used in Figure 8 has generations with higher diversity. With higher diversity, MCTS can explore more alternatives of the correct solutions, as there are more paths of correct predictions in Figure 8 than in Figure 9. Furthermore, higher diversity with reasonable quality also provides more fine-grained supervision signals, as there are more branches alongside the reasoning path of correct predictions.

E Extended Experiments

Loss Function. DPO is one of the reward-model-free loss functions we can use for preference learning. We now illustrate the generalizability of our approach using another loss function, Identity Preference Optimization (IPO) (Azar et al., 2023), which addresses the overfitting problem of DPO. Table 8 shows that IPO achieves similar performance as DPO.
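For reference, a minimal sketch of the IPO loss alongside DPO's, following the formulation of Azar et al. (2023) as it is commonly implemented. This is our rendering, not the authors' code, and treating β here as playing the role of their regularization parameter is our assumption:

```python
def ipo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Identity Preference Optimization loss (Azar et al., 2023).

    Instead of DPO's -log(sigmoid(margin)), IPO regresses the preference
    log-ratio margin toward the constant 1/(2*beta), which bounds the
    implicit reward and mitigates overfitting to the preference labels.
    """
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return (margin - 1.0 / (2.0 * beta)) ** 2
```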
In practice, we find that IPO boosts the reasoning on validation tasks while maintaining more stable performance on the held-out dataset, as indicated by the higher accuracy of 89.8% obtained on SciQ.

Approach       ARC-e  ARC-c  AI2Sci-e  AI2Sci-m  OBQA  CSQA  SciQ
SFT Baseline    69.2   60.6    74.9      70.9    59.8  54.1  80.8
O1 (IPO)        88.1   75.1    92.1      89.6    76.8  74.3  89.8
O2 (DPO)        88.5   76.4    91.7      88.2    79.2  74.8  88.5

Table 8: Result comparison of employing our approach with different loss functions.

Base Model. We extensively validate the generalizability of our approach with Llama2-13B (Touvron et al., 2023) on arithmetic reasoning. We employ the same process of SFT on Arithmo and preference learning with DPO on GSM8K and MATH. This experiment is done on a maximum of 2 × 80GB GPUs (NVIDIA A100).

Approach        Base Model   GSM8K-CoT     GSM8K-PoT     MATH-CoT
SFT (Arithmo)   Llama2-13B   74.5          62.3          23.8
Ours            Llama2-13B   78.9 (↑4.4)   67.0 (↑4.7)   26.1 (↑2.3)

Table 9: Result comparison (accuracy %) for Llama2-13B on arithmetic tasks.

Figure 5: Example of the result search tree of a science question "An unbalanced equation for the reaction of methane gas (CH4) with oxygen is shown below. CH4 + □O2 → 2CO2 + 4H2O How many molecules of oxygen gas (O2) are needed to properly balance this equation? Answer Choices: (A) 1 (B) 2 (C) 3 (D) 4". The ground-truth answer is "(D) 4". Here, we set the search breadth as b1 = 4, b2 = 2. The numbers at the beginning of each sequence indicate the visit count N of the corresponding node, while the Q and P values at the end of the sequence represent the Q value and the sentence probability, respectively.

[Search tree visualization omitted; only the node annotation scheme described in the caption is recoverable from the extracted text.]
Figure 6: Example of the result search tree of the same science question as in Figure 5. Here, we set the search breadth as b1 = 3, b2 = 3.

[Search tree visualization omitted.]

Figure 7: Example of the result search tree of the same science question as in Figure 5. Here, we use the policy after preference learning and set the search breadth as b1 = 4, b2 = 2.

[Search tree visualization omitted.]

Figure 8: Example of the result search tree of a GSM8K question "The bakers at the Beverly Hills Bakery baked 200 loaves of bread on Monday morning. They sold 93 loaves in the morning and 39 loaves in the afternoon. A grocery store returned 6 unsold loaves. How many loaves of bread did they have left?". The example solution is "The Bakery sold 93 + 39 = 132 loaves. The Bakery made 200 loaves and sold 132, leaving 200 - 132 = 68 loaves remaining. The grocery store returned 6 loaves, so there were 6 + 68 = 74 loaves left.". The policy we use here is the one tuned for only 1 epoch on the SFT training data. We conduct MCTS with breadth b1 = 5, b2 = 3. Duplicate generations are merged into one node.

[Search tree visualization omitted.]

Figure 9: Example of the result search tree of the same GSM8K question as in Figure 8 with the same search breadth. We use the policy tuned after 3 epochs to sample the generations.

[Search tree visualization omitted.]
ai_researcher
1
Using_Semantic_Technology_To_Model_Persona_For_Adaptable_Agents.pdf
Using Natural Language Inference to Improve Persona Extraction from Dialogue in a New Domain

Alexandra DeLucia1,2*, Mengjie Zhao1, Yoshinori Maeda1, Makoto Yoda1, Keiichi Yamada1, Hiromi Wakaki1
1Sony Group Corporation, Tokyo, Japan
2Center for Language and Speech Processing, Johns Hopkins University
[email protected]
{Mengjie.Zhao, Yoshinori.B.Maeda, Makoto.Yoda, Keiichi.K.Yamada, Hiromi.Wakaki}@sony.com

arXiv:2401.06742v1 [cs.CL] 12 Jan 2024

Abstract

While valuable datasets such as PersonaChat provide a foundation for training persona-grounded dialogue agents (Zhang et al., 2018), they lack diversity in conversational and narrative settings, primarily existing in the "real" world. To develop dialogue agents with unique personas, models are trained to converse given a specific persona, but hand-crafting these personas can be time-consuming; methods therefore exist to automatically extract persona information from existing character-specific dialogue (Wang et al., 2022). However, these persona-extraction models are also trained on datasets derived from PersonaChat and struggle to provide high-quality persona information from conversational settings that do not take place in the real world, such as the fantasy-focused dataset LIGHT (Urbanek et al., 2019b). Creating new data to train models on a specific setting is human-intensive, and thus prohibitively expensive. To address both these issues, we introduce a natural language inference method for post-hoc adapting a trained persona extraction model to a new setting. We draw inspiration from the literature on dialogue natural language inference (NLI; Welleck et al., 2019) and devise NLI-reranking methods to extract structured persona information from dialogue. Compared to existing persona extraction models, our method returns higher-quality extracted personas and requires less human annotation.

1 Introduction

Dialogue agents are assigned a persona, or a description of a character identity, to impersonate while responding to a user. These hand-crafted persona descriptions are supplied to the model's context along with the conversation history (Urbanek et al., 2019a; Zhang et al., 2018) or via character-specific embeddings added to the encoder (Li et al., 2016b) during response generation.

*Work performed while interning at Sony.

Figure 1: PeaCoK-style persona graph for a few characters from the LIGHT fantasy role-playing dataset. The graph was built from character utterances in conversations.

Constructing personas for these agents can be time-consuming, but persona extraction alleviates this by automatically extracting information about a character given past dialogue utterances (Wang et al., 2022; Lu et al., 2022; Zhou et al., 2023; Zhu et al., 2023b). The task is a precursor to developing personalized response generation models consistent with an assigned persona. Manually crafting personas can be arduous, and what a character says can be rich with information. However, current models and techniques for persona extraction are all trained on the same domain, i.e., casual chit-chat, since the standard dataset for training is based on PersonaChat (Zhang et al., 2018). PersonaChat is considered to exist in the "real world", and thus persona extraction models trained on it have difficulty extracting personas in new and distant narrative domains such as the fantasy world in LIGHT (Urbanek et al., 2019a).
Since data annotation for fine-tuning is costly, we explore post-hoc methods for mitigating domain adaptation issues with an already-trained model. We cast this domain adaptation problem as a natural language inference task, since we want to ensure that extracted persona information is reasonable given the original utterance. We start with a new model trained on PersonaExt (Zhu et al., 2023b), a semi-automatically labeled dataset for persona extraction from PersonaChat utterances. PersonaExt contains fine-grained relations that do not apply to all narrative settings ("domains"), so we categorize its 105 persona relation types into 4 that are general enough to work with various characters across domains: experience, goal or plan, routine or habit, and characteristic. For example, both a pirate in a fantasy world and a real-world accountant have persona knowledge about routines and goals, e.g., (I, goal or plan, want to pillage) and (I, goal or plan, want a raise), respectively (Figure 1). These relation types are from the only knowledge graph (KG) specifically designed for personas, PeaCoK (Gao et al., 2023).

For our methods, we cast persona extraction as a sequence-to-sequence (seq2seq) problem, mapping an utterance to a persona "triplet" that can be parsed and added to a KG (Ni et al., 2022; Wang et al., 2022; Zhu et al., 2023b). An important problem when generating from out-of-domain utterances is model hallucination, or the generation of low-quality, generic, or incorrect persona information. Since KGs should be precise, we introduce a persona pruning step based on a natural language inference (NLI) model trained specifically to determine whether an extracted triplet can be inferred from the utterance. We explore three approaches: (1) guided decoding to force the model's output into the correct format, (2) generate many and re-rank, and (3) generate many and classify. We show that this pruning step reduces false positives (extracting persona information when there is none) and improves the quality of extracted character personas compared to the current state-of-the-art model, PAED (Zhu et al., 2023b). The converse relationship between utterances and personas has been investigated before, and even used to build persona extraction datasets (Wang et al., 2022), but the direction we explore allows us to determine whether the extracted persona is reasonable given the utterance.

In this work, we contribute the following:1

• PersonaExt-PeaCoK: a semi-automatically adapted version of PersonaExt (Zhu et al., 2023b) for training PeaCoK-compatible persona extraction models from dialogue utterances

• A trained persona extraction model for extracting PeaCoK relations (experience, goal or plan, routine or habit, characteristic) from dialogue history, for the purpose of expanding and adapting PeaCoK to specific narratives

• Persona-NLI: an NLI model for evaluating the entailment relationship between a dialogue utterance and an extracted persona

• Qualitative analysis of model performance on the fantasy dataset LIGHT (Urbanek et al., 2019a)

1All code and models will be released upon publication.

2 Related Work

Our work relates to the focus areas of persona-grounded dialogue, persona extraction, and dialogue-specific natural language inference (Dialogue-NLI).
Persona-grounded Dialogue. The primary issues with dialogue agents are their boring and generic responses (Li et al., 2016a; Khayrallah and Sedoc, 2021) and their inability to maintain a consistent persona, often contradicting themselves in conversations (Shuster et al., 2022). An early, widely-used work to address this issue is PersonaChat (Zhang et al., 2018), a manually created dataset of crowdworker conversations paired with 3-4 sentence persona descriptions. Similarly, Learning in Interactive Games with Humans and Text (LIGHT; Urbanek et al. (2019a)) is a dataset created the same way and modeled after PersonaChat, but set in a fantasy world with fairies, pirates, and kings. While other fantasy dialogue datasets exist, they either contain prose instead of dialogue (Zhu et al., 2023a) and/or do not have structured persona information paired with each character (Callison-Burch et al., 2022; Weir et al., 2023; van Stegeren and Theune, 2020) (see Appendix Table 9 for details).

A missing component of persona-grounded datasets is commonsense grounding. Gao et al. (2023) address this issue with the PErsonA-grounded COmmonsense Knowledge graph (PeaCoK), which formalizes relations within and between personas. Personas are represented as graph nodes consisting of head (subject) and tail (verb phrase) entities connected by a set of labeled edges (i.e., relations), e.g., (I am a famous pianist, characteristic, skilled in playing the piano; see Figure 1).

Persona Extraction. While the task of persona extraction is new, methods for this task draw from the established field of relation extraction. In this work, we extract pre-defined relations from a character's utterances to build or expand a persona knowledge graph.

Wang et al. (2022) separate persona extraction into extraction and inference subtasks, i.e., whether or not the relation can be found verbatim in the utterance. They trained GPT-2 Small on Dialog-NLI (Welleck et al., 2019) and, similar to our method, implemented constrained decoding to ensure proper relation format. While they also trained a separate model for re-ranking generated output, it was not fine-tuned for the NLI task like ours.

Ni et al. (2022) and Zhu et al. (2023b) also cast persona extraction as a seq2seq mapping from a dialogue utterance to a persona triplet, e.g., (subject/"head", relation type, object/"tail"). Zhu et al. (2023b) used a variational auto-encoder (VAE) to create new synthetic examples to train the model to distinguish between related relations ("like" and "dislike"). They introduced the BART-based zero-shot model PAED, and also released a modified persona extraction dataset (PersonaExt) built from the persona extraction dataset of Wang et al. (2022). Similar to Wang et al. (2022) and us, they use constrained decoding to ensure compliance with the expected output format.

Related to persona extraction are persona expansion, where commonsense KGs are leveraged to add more information to a presented persona (typically from PersonaChat) (Kim et al., 2023; Liu et al., 2022), and character identification, where a speaker is identified given their persona information and/or past dialogue (Sang et al., 2022).

In summary, while we also use seq2seq modeling for persona extraction, we introduce NLI-based reranking and filtering for quality control in a new narrative setting (from "real world" to fantasy), and use PeaCoK (Gao et al., 2023) persona relations.
Persona NLI. Most NLI datasets are not in the narrative or dialogue domain, but Dialog-NLI (Welleck et al., 2019) addresses this issue. This dataset consists of (persona sentence, character utterance) pairs from the aforementioned PersonaChat, manually annotated for their entailment relationship.

We incorporate a newly trained NLI model fine-tuned for persona extraction from dialogue (Persona-NLI). Since Dialog-NLI is built from PersonaChat, which is also what PersonaExt was built from, we do not include it when training the Persona-NLI model, due to test leakage concerns. Instead, we turn to a related dataset built for pairing sentences and facts. The Commonsense Fact linking dataset (ComFact) (Gao et al., 2022) is a semi-automatically created dataset for linking statements and commonsense facts from ATOMIC2020 (Hwang et al., 2021). More details on adapting ComFact to the NLI task are in Section 4.

Ammanabrolu et al. (2021) introduced a commonsense knowledge graph for LIGHT (ATOMIC-LIGHT), but we do not include this data since it does not contain dialogue and our experimental setting assumes no narrative-specific information.

3 Persona Extraction from Dialogue

Our goal is to automatically adapt a persona KG to a new narrative, starting from character dialogue. PeaCoK (Gao et al., 2023) is the first and only KG for personas, and its relation types (i.e., edges) are general enough to fit any narrative setting (see Figure 1). However, PeaCoK was not built from dialogue utterances, so we use PersonaExt (Zhu et al., 2023b), a dataset of (utterance, persona relation) pairs, to train a PeaCoK-persona extraction model.2 We discuss the modification of PersonaExt to fit the PeaCoK persona format in Section 3.1.

For the persona extraction task, we fine-tune existing models with a variety of prompts and test different decoding methods. We compare our models to the persona attribute extraction in dialogues (PAED) model (Zhu et al., 2023b).

3.1 Dataset: PersonaExt-PeaCoK

PeaCoK is a KG consisting of head and tail entities with persona-commonsense relation types, e.g., ("I am a famous pianist", "characteristic", "skilled in playing the piano"), and does not contain utterances. While the heads and tails are phrases, combining them into full sentences would lead to repetitive and unnatural utterances. Instead, we start from an annotated dialogue dataset, PersonaExt, and semi-automatically convert it into PeaCoK format by re-annotating the dialogue utterances at the relation level with PeaCoK relations.

2We use PersonaExt instead of the precursor dataset introduced by Wang et al. (2022) because Zhu et al. (2023b) improved upon the annotations and labels.
Unlike PersonaExt-PeaCoK, there are no ground-truth dialogue-level persona annotations for LIGHT, so we provide a qualitative analysis alongside other intrinsic metrics instead of report- ing accuracy metrics. We download the LIGHT dataset from ParlAI (Miller et al., 2017).3 We found 10,268 dialogues (230 more than Urbanek et al. (2019a)) and 1,382 unique characters (e.g., “a baby dragon”). “Unique” characters were determined by counting the unique (character, description) pairs. We ignore the “ob- jects”, “actions”, “emotes”, and “actions” portions of LIGHT since our persona extraction model is based only on dialogue. We compare the persona extracted from the character description (i.e., the persona profile of a character)4 to persona extracted from their dialogue utterances. There are roughly 3-4 persona description sentences per character. Al- though a single dialogue utterance, or conversation turn, can consist of more than one sentence, we do not separate into individual sentences. 3https://github.com/facebookresearch/ ParlAI/tree/main/projects/light 4Persona descriptions are a paragraph where each sentence contains a character trait, we parse it into individual sentences. 3.3 Methods For the persona extraction model, We fine-tuned the HuggingFace implementation of BART-Large (Lewis et al., 2020; Wolf et al., 2020) on PersonaExt-PeaCoK; Appendix C reports training details. We created a structured input and output tem- plate based on the one used by PAED (Zhu et al., 2023b), shown in Table 2. Note that the template tokens are newly added to the model vocabulary with trainable parameters in the embedding lookup layer. After experimenting with other templates, we found that using special tokens for both the entity markers and relation types led to a better- performing model. Also, unlike the PAED template where the relation type is generated last, we found better performance when the template is ordered: relation, head entity, and then tail entity. Other work has used standard relation triplet ordering of head, relation, and tail (Wang et al., 2022). The benefit of the relation token in the beginning of the sequence is that the model can then be used to easily generate different relations from the same utterance by changing the relation type. We initial- ized embeddings of the added tokens to the average embedding of a short text description (Table 1). Since the output needs to be in the correct format for parsing into a triplet, we impose constraints on the generation to ensure template adherence. These constraints are flexible and can be combined with any decoding method. We compare generations from greedy search, beam search, and diverse beam search (Vijayaku- mar et al., 2018). For beam search, we set the number of beams to 5 and return all 5 sequences, with only the most likely sequence presented as the final output. For diverse beam search, we set the number of beam groups to 5 to thoroughly explore Token Description [CONTEXT] [RELATION] [HEAD] [TAIL] [characteristic] [no_relation] [routine_habit] [goal_plan] [experience] context relation head entity tail entity character trait no relation regularly or consistently do will do or achieve in the future did in the past Table 1: Tokens added to BART vocabulary for training the persona extraction model. The descriptions for the relation types are from PeaCoK (Gao et al., 2023). Ta- ble 2 shows an example model input and output. the search space and diversity strength λ “ 0.4. 
We trained the model on four 48GB NVIDIA RTX A6000 GPUs for 50 epochs, 128 batch size, AdamW optimizer, and 5e ´ 5 learning rate. Train- ing time was 1.5 hours. 3.4 Comparative Models We compare our persona extraction model to PAED and ablated versions of our model. PAED (zero-shot) PAED (see Section 2) is an- other fine-tuned BART model for persona extrac- tion in zero-shot settings, and we test it with Pea- CoK relations. We trained the PAED model from the author-provided codebase in the 10 unseen label setting. PAED (fine-tuned) While PAED was designed as a zero-shot model to adapt to new relation types, we also include a fully fine-tuned version that is trained on the same train split of PersonaExt- PeaCoK as our model. We trained the PAED model from scratch on PersonaExt-PeaCoK with the same settings as Zhu et al. (2023b) (see Appendix C.2). 3.5 Evaluation We evaluate persona extraction performance through reference and reference-free (intrinsic) metrics. PersonaExt-PeaCoK has ground-truth la- bels to measure accuracy. In the reference met- rics, the head, tail, and relation are parsed from the structured output and compared against the re- spective gold-standard entity. However, the LIGHT dataset (in the new domain of fantasy) does not have ground-truth labels, and we rely on intrinsic and manual evaluations. Reference Accuracy For evaluating the extrac- tion we employ the same accuracy metric as in Zhu et al. (2023b). While Zhu et al. (2023b) only awards credit to the model if the entire (head, relation, tail) triplet is predicted correctly (shown in our results as “Triplet”), we additionally relax this metric and evaluate the model perfor- mance separately on identifying the head, relation, and tail entities from the triplet, similar to Wang et al. (2022). Since there are no ground-truth persona triplets for LIGHT, we developed intrinsic (i.e., non- reference-based) metrics alongside qualitative anal- ysis. Intrinsic Metrics The non-accuracy-based met- rics focus on the number of returned persona and unique persona. The metrics are measured for each character, overall, and on the dialogue and descrip- tion datasets. We define utterance coverage as the ratio of ex- tracted persona to the number of utterances. If the extracted persona has the “not a relation” relation type, it is not considered. We are curious about how many extracted re- lations are about the speaking character or other characters. This first person metric measures the ratio of first-person triplets, where the head is “I”, “me”, or “my”, as compared to all extracted triplets. Regarding diversity and uniqueness, we want to ensure that the model is not assigning the same character traits to everyone. We look at the rate of unique head and tail entities, referred to as unique head and unique tail. Persona “Recall” While there are no ground- truth labels for LIGHT on the utterance-level, we can consider the persona descriptions as “silver” labels. We define recall as, using persona triplets extracted from the provided persona descriptions, the ratio of persona relations successfully recovered from the utterances. For a consistent reference, we use the persona triplets extracted by the model with greedy decoding and no NLI re-ranking or filtering. Human Evaluation For the qualitative analysis, two authors compared the extracted persona infor- mation from the dialogue to that extracted from the character description, and to the character de- scription itself. The annotation descriptions are in Section 3.5. 
Annotation guidelines were created by one au- thor after qualitatively evaluating 5 random charac- Utterance i am a tour guide at a museum . what do you do for a living ? Input [CONTEXT] i am a tour guide at a museum . what do you do for a living ? [RELATION] <MASK> [HEAD] <MASK> [TAIL] <MASK> Output [RELATION] [routine_habit] [HEAD] i [TAIL] has profession tour guide Table 2: The template for model input and output for BART-Large fine-tuned for persona extraction on PeaCoK- PersonaExt. <MASK> represents BART’s mask token, while the tokens in square brackets are special tokens added to the vocabulary. The relation types (e.g., routine_habit) are also in the model’s vocabulary. Accepted Sub-category Guideline Yes No Directly Reasonable Relates to the persona description Character trait given the persona description Contradictory Candidate persona goes against the persona description. E.g., persona is a dragon but the relation is (I, routine_habit, am a high priestess) Unreasonable Non-specific relation E.g., (we, characteristic, like activity it) Malformed Parsed triplet is missing an entity, e.g. the tail Unreasonable trait given character description Table 3: Guidelines for the human evaluation of extracted persona information. Evaluators were required to select a sub-category within “Yes” and “No” providing a reason for the selection. ters (sampled with random seed 13). Each triplet was annotated against the provided character name and 3-4 sentence description (i.e., persona profile in Section 3.2). Triplets were annotated individu- ally and not as a whole, e.g., two triplets that are contradictory could still be labeled as "reasonable". For example, if the description does not contain in- formation about the character’s family, "has sibling sister" and "has no siblings" are both reasonable. Also, triplets were annotated based on the present. If the description specifies "I am a pirate" and the triplet says "I want to be a pirate" it’s labeled as a contradiction since the character already is a pirate. Further, some triplets were near-duplicates of each other and were consolidated by merging triplets with first-person heads into one subject "self". Also, triplets were lowercased before merging. Note this consolidation was only performed for annotations and thus the number of triplets differs from those present in Table 7. 4 NLI for Narrative Adaptation When applying the trained persona extraction model to LIGHT, we noticed that a persona re- lation is always extracted, regardless of the input statement/utterance. This is expected because per- sona extraction training data does not contain nega- tive examples (i.e., statement/utterance without a persona). As a result, it is important to have an au- tomatic method for filtering “hallucinated” persona output from the persona extraction model. We leverage natural language inference (NLI) – whether a statement follows or can be inferred from another statement (MacCartney and Manning, 2008) – for filtering out the hallucinated persona. The relationship between dialogue and a corre- sponding persona statement differs from the typical statement pairs used in conventional NLI training, which often come from formally written text (Bow- man et al., 2015; Williams et al., 2018). To al- leviate potential issues caused by a domain shift, we fine-tune an existing NLI model on (dialogue) utterance-persona statement pairs. 
As discussed in Section 2, we use the ComFact dataset instead of Dialog-NLI due to train-test leakage with our train- ing dataset, PersonaExt. We refer to this fine-tuned model as “NLI-Persona” and the non-fine-tuned version as “NLI-base”. 4.1 Persona-NLI Model 4.1.1 Persona-NLI Dataset the ComFact contains many annotations at conversation- or paragraph-level, including whether a fact is relevant at a specific timestep or I silently swoop down into the forest to carry off another. I do what I can to please the high priestess, who is my mother. I feast on the bones of the hunter. I like being scratched behind my ear and laying by the fire. Extracted Persona Triplet Annotation (self, characteristic, like food nutritious) (self, goal_plan, want do hurts) (self, characteristic, like food venison) (my mother, routine_habit, has profession high priestess) Yes – Directly (self, routine_habit, has profession priestess) (self, characteristic, like activity play) Yes – Reasonable Yes – Reasonable Yes – Reasonable No - Contradictory Yes - Reasonable Table 4: Example annotations for extracted persona triplets for the “a dragon” character in LIGHT. context window. To match our utterance-fact setup, we reduce ComFact to only the current statement and facts that were labeled as “relevant without context” (RPA) in Gao et al. (2022). Pairs without relation labels were kept as negative examples. We also filter non-persona related relation types and limit to HasProperty, CapableOf, Desires, xNeed, xAttr, xEffect, xReact, xWant, xIntent, which were identified as persona-related by Gao et al. (2023). We use the original train/dev/test splits in ComFact. Since PersonaExt is derived from PersonaChat, we removed any entries in ComFact Persona-Atomic subset that also occurred in our test split to make sure there is no training-testing overlap. We also include the training and dev splits of PeaCoK-PersonaExt and ablate the model training over each data subset (PeaCoK- PersonaExt, ComFact-Persona, and combined PeaCoK-PersonaExt + ComFact-Persona). 4.1.2 Training Starting from an existing high-performing NLI model, nli-deberta-v3-base (Reimers and Gurevych, 2019) 5 we fine-tune on our ComFact- derived Persona-NLI dataset for 5 epochs, batch size of 32, learning rate of 2e ´ 05, and AdamW optimizer on a single 48GB NVIDIA RTX A6000 GPU. nli-deberta-v3-base was trained on SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018). While MNLI does contain training data in the fiction domain («20%), it is prose from contemporary fiction and not dialogue utterances. We chose the final Persona-NLI model based on F1 score (binary). Since we only care about “entailment” versus “no entailment”, we do not pe- nalize the model for incorrectly predicting “neutral” 5https://www.sbert.net/docs/ pretrained_cross-encoders.html#nli versus “contradiction”, and merge those two labels into a binary task.6 Table 5 shows the results across training (rows) and testing (columns) on the different versions of Persona-NLI dataset. The model trained on the ComFact-Persona + PeaCoKPersonaExt dataset performed the best and we refer to it as Persona- NLI. An important note is while the PeaCoK- PersonaExt appears to have perfect scores, this dataset is very skewed towards the positive class and never predicts “no entailment”, thus we use the model trained on the more balanced combined dataset of ComFact-Persona + PeaCoKPersonaExt. 
The fine-tuning is needed, since the non-fine-tuned NLI model ("w/o fine-tuning") does not perform as well, which is expected due to domain adaptation issues.

4.2 NLI Re-ranking

As shown in Section 2, we can adjust the model's final generated output by first generating multiple candidates, scoring each candidate with the Persona-NLI model, and then selecting the one with the highest adjusted score. The NLI score for each candidate is based on the entailment score of the (utterance, candidate triplet) pair. Since Persona-NLI expects a sentence and not a triplet, we provide each candidate as a sentence by concatenating the head and tail, e.g., "i want magic" (Section 2). If the pair is determined to have an entailment relationship, we then adjust the final score of the candidate to be the language model score (average token log probability) plus the log probability of entailment. If there is no entailment relation, the score is kept the same, i.e., only the language model score. This new score is designed to promote candidates that have high entailment scores, but not to penalize the candidates determined to be highly likely by the language model. This method can only be used with a model that can output multiple candidates, such as BART with beam search or sampling-based decoding methods. We refer to this as NLI re-ranking.
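The scoring rule can be written in a few lines. In the sketch below, the candidate dictionary layout, helper names, and the entailment label id are illustrative assumptions rather than a released implementation; the Persona-NLI model is assumed to be the cross-encoder from the previous sketch.

```python
import math

def rerank(candidates, utterance, nli_model, entail_id=1):
    """Re-rank candidates per Section 4.2: boost the LM score (average token
    log-probability) by log P(entailment) when entailment is predicted."""
    rescored = []
    for cand in candidates:
        sentence = f"{cand['head']} {cand['tail']}"  # e.g., "i want magic"
        probs = nli_model.predict([(utterance, sentence)],
                                  apply_softmax=True)[0]
        score = cand["lm_score"]  # average token log-probability
        if probs.argmax() == entail_id:           # entailment predicted
            score += math.log(probs[entail_id])   # boost entailed candidates
        rescored.append((score, cand))
    return max(rescored, key=lambda pair: pair[0])[1]
```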
4.3 NLI Classification

A stricter use of the NLI model is to completely remove any candidates that cannot be entailed by the utterance, i.e., that have a neutral or contradictory relationship with it. Since there is no re-ranking, this approach can also be used with models that only generate one candidate. We refer to this as Neutral Removed.

5 Persona Extraction Results

We first evaluate the extraction model performance on the PeaCoK-PersonaExt dataset and then evaluate the best-performing models in the new narrative setting of LIGHT.

5.1 PersonaExt-PeaCoK ("In-domain")

The summarized results are shown in Table 6 (see Appendix D.1 for detailed results). We do not include the Persona-NLI re-ranking and removal, since those are for adapting the model to the new narrative setting. The takeaways are as follows:

Our model and PAED perform similarly. While PAED is the best-performing model, with an overall triplet accuracy of 0.61, our model is close behind with an accuracy of 0.60. PAED in its original zero-shot setting performs poorly. Upon further examination, this is due to the "not" label having the highest likelihood compared to the other labels. The original PersonaExt relation types were detailed (e.g., "school_status"), so perhaps the model would perform better with more explicitly named relations (e.g., "attribute_of" instead of "characteristic").

Relation-label accuracy is impacted by training prevalence. As seen in the per-label scores (Table 17), all models are best at identifying the "Characteristic" and "Routine or Habit" relations, which are the most prevalent in the training dataset. The models are the least accurate at identifying the "Experience", "Goal or Plan", and "Not a Relation" categories.

Predicting the head entity is a simple task. All models, except PAED (zero-shot), have 0.95+ accuracy on identifying the head entity in the triplet. This is because there are only a few options for head entities, and the vast majority are "i" or "my" in the training set. This could impact the generalizability of the model, which we analyze in Section 5.2.

Triplet accuracy is penalized by the tail entity. All models perform poorly on predicting the tail entity, with an accuracy of only 0.55-0.62. This low tail accuracy lowers the overall triplet accuracy, which is otherwise high for the head entities and relations, with accuracies above 0.97 and 0.81, respectively. The difficulty in tail prediction is most likely due to its length, which is often several tokens as compared to the 1-3 tokens in the head entity; moreover, not all the tail entity tokens are taken directly from the context. This is clear from the results shown in Appendix Table 20, where the original PAED model is trained and tested on a version of PersonaExt-PeaCoK with the original tail entities (i.e., no tail phrase). While the tail entity accuracy there is significantly better at 0.8, the overall accuracy remains very similar at 0.63.

Decoding method has no impact on results. Our model performs similarly with regard to triplet accuracy across the three evaluated decoding methods of greedy, beam, and diverse beam search. Our theory is that the model is well-trained for the PeaCoK-PersonaExt dataset and thus the most likely candidates are similar across decoding methods.

Model                      Label Prec.  Label Recall  Label F1  Label Acc.  Head Acc.  Tail Overlap  Tail Acc.  Triplet Acc.
Our Model
  Greedy Search            0.85         0.84          0.84      0.84        0.96       0.78          0.67       0.66
  Beam Search              0.85         0.83          0.84      0.83        0.96       0.78          0.67       0.66
  Diverse Beam Search      0.85         0.83          0.83      0.83        0.96       0.78          0.67       0.66
PAED
  (fine-tuned)             0.83         0.83          0.83      0.83        0.95       0.75          0.62       0.61
  (zero-shot)              0.52         0.06          0.03      0.06        0.08       0.07          0.07       0.06

Table 6: Results on the "in-domain" dataset, PersonaExt-PeaCoK. Evaluated on the test split. Accuracy is abbreviated to "Acc".

5.2 New Narrative Setting (LIGHT)

As discussed in Section 3.5, there are no ground-truth labels for the LIGHT dataset, so we turn to intrinsic and human evaluation instead. The results are shown in Tables 7 and 8, respectively. For human evaluation, we annotated extracted triplets from 10 randomly sampled characters across all the models (5304 generations with 1556 unique relations). The overall annotations (i.e., "Yes" or "No") had an inter-annotator agreement (IAA) of 0.90 and the detailed annotations an IAA of 0.85, as measured by Krippendorff's Alpha.7

7 https://github.com/LightTag/simpledorff

Setting          NLI model    Decoding             |-------------------- Per character --------------------|  |------------- Overall -------------|
                                                   Recall  Cov.  Cov.(uniq.)  1st pers.  Uniq. pers.  N. pers.  Cov.  Cov.(uniq.)  1st pers.  N. pers.
Neutral Removed  NLI          Beam Search          0.02    0.09  0.08         0.87       0.90         7.55      0.09  0.05         0.92       6053
Neutral Removed  NLI          Diverse Beam Search  0.02    0.08  0.07         0.84       0.87         6.46      0.08  0.04         0.91       5020
Neutral Removed  NLI          Greedy Search        0.02    0.06  0.05         0.83       0.82         4.71      0.06  0.03         0.94       3756
Neutral Removed  Persona-NLI  Beam Search          0.07    0.21  0.20         0.97       0.93         17.69     0.21  0.06         0.99       7806
Neutral Removed  Persona-NLI  Diverse Beam Search  0.07    0.15  0.14         0.95       0.92         12.38     0.15  0.04         0.98       5783
Neutral Removed  Persona-NLI  Greedy Search        0.06    0.09  0.08         0.94       0.90         7.54      0.09  0.03         0.99       4030
Re-ranking       NLI          Beam Search          0.30    0.85  0.78         0.89       0.91         69.65     0.85  0.31         0.89       40331
Re-ranking       NLI          Diverse Beam Search  0.31    0.89  0.81         0.89       0.91         72.77     0.89  0.32         0.88       42469
Re-ranking       Persona-NLI  Beam Search          0.29    0.87  0.79         0.90       0.92         70.93     0.87  0.31         0.89       40226
Re-ranking       Persona-NLI  Diverse Beam Search  0.31    0.89  0.81         0.89       0.92         72.96     0.89  0.32         0.89       41866
Base                          Beam Search          0.30    0.85  0.78         0.89       0.91         69.44     0.85  0.31         0.89       40100
Base                          Diverse Beam Search  0.31    0.89  0.81         0.89       0.91         72.57     0.89  0.32         0.89       42288
Base                          Greedy Search        0.31    0.87  0.79         0.89       0.91         70.64     0.87  0.32         0.88       41709
PAED (fine-tuned)                                  0.06    0.83  0.79         0.67       0.96         72.99     0.83  0.46         0.66       59787
PAED (zeroshot)                                    0.00    0.03  0.03         0.43       0.71         2.77      0.03  0.02         0.57       2417

Table 7: Quantitative results with intrinsic metrics across all models on the new narrative setting (fantasy, LIGHT). "Cov." is short for utterance coverage. The Overall metrics are across all extracted personas and not analyzed for each character; thus the persona recall metric is not applicable there.

NLI Model / Decoding                               No    Yes   Ratio accepted
Neutral Removed  NLI          Beam Search          30    38    0.56
Neutral Removed  NLI          Diverse Beam Search  21    32    0.60
Neutral Removed  NLI          Greedy Search        18    27    0.60
Neutral Removed  Persona-NLI  Beam Search          75    82    0.52
Neutral Removed  Persona-NLI  Diverse Beam Search  49    68    0.58
Neutral Removed  Persona-NLI  Greedy Search        23    49    0.68
Re-ranking       NLI          Beam Search          195   332   0.63
Re-ranking       NLI          Diverse Beam Search  202   334   0.62
Re-ranking       Persona-NLI  Beam Search          209   324   0.61
Re-ranking       Persona-NLI  Diverse Beam Search  206   333   0.62
Base                          Beam Search          193   331   0.63
Base                          Diverse Beam Search  202   335   0.62
Base                          Greedy Search        194   338   0.64

Table 8: Qualitative, human evaluations for our model evaluated on the new fantasy narrative setting (LIGHT). Inter-annotator agreement (IAA, Krippendorff's Alpha measured with two annotators) of 0.90. "Ratio Accepted" refers to extracted personas annotated with the "Yes" category (i.e., marked as appropriate by both annotators).
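The agreement figures above can be computed with the simpledorff package referenced in footnote 7. The following is a small sketch; the column names and the toy annotation frame are our own assumptions, not the actual annotation data.

```python
# Sketch of the IAA computation with simpledorff (Krippendorff's Alpha).
import pandas as pd
import simpledorff

annotations = pd.DataFrame({
    "triplet_id":   [1, 1, 2, 2, 3, 3],              # the unit being judged
    "annotator_id": ["a1", "a2", "a1", "a2", "a1", "a2"],
    "label":        ["Yes", "Yes", "No", "No", "Yes", "No"],
})
alpha = simpledorff.calculate_krippendorffs_alpha_for_df(
    annotations,
    experiment_col="triplet_id",
    annotator_col="annotator_id",
    class_col="label")
print(f"Krippendorff's alpha: {alpha:.2f}")
```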
The takeaways are as follows:

NLI Removal has the largest impact. Removing non-entailed triplets reduces the number of extracted triplets (i.e., those not labeled [no_relation]) by roughly 90%. This extreme reduction also impacts persona recall and utterance coverage. There is also a difference between Persona-NLI and the general NLI models, as the Persona-NLI setting keeps more candidates. This is most likely due to the Persona-NLI model having had more exposure to the format of (utterance, persona) pairs.

Intrinsic metrics show no difference between decoding methods. Similar to the results on PersonaExt-PeaCoK, there is little performance difference (±0.2) between decoding methods within a setting (e.g., Neutral Removed with the NLI model) according to the automated (intrinsic) metrics. The differences are more apparent in the human evaluation.

Most extracted personas are about the character. Across all models except PAED, there are high ratios of extracted personas that are first-person (0.9+, e.g., the head entity is "I"). The lowest rates of first-person relations are from the PAED models, with only 67% and 43% of relations being first-person from the retrained and zero-shot versions, respectively.

Greedy search with Persona-NLI removal of non-entailed triplets is the best model. The differences in extracted persona relations are more apparent with human evaluation than with automatic evaluation alone (Table 8). The number of extracted relations is different for each method, so we focus on the ratio of accepted (i.e., labeled "Yes") relations. With this metric, using Persona-NLI to remove non-entailed personas generated with greedy search is the best-performing setup, with 68% of extracted relations being accepted. Interestingly, removing the non-entailed relations hurts performance across all other models, as compared to the base setup (i.e., no re-ranking or removal).

6 Discussion

The use case of our persona extraction model is to adapt a character knowledge graph (PeaCoK) given past dialogue from a narrative setting that differs from the original training data. We showcase the ability of our best-performing model by building a graph with a few of the manually annotated LIGHT characters from Urbanek et al. (2019a), shown in Figure 1. Only three of the ten annotated characters are shown, with the extracted personas manually determined to be related (Section 5.2). This graph can then be used for persona-grounded dialogue (Gao et al., 2023).

From the dialogue, the persona extraction model was able to extract persona information beyond the given persona. For example, while the Pirate obviously (from the description) works as a pirate and is from a village, they also have a pet dog and own a sword.

Not surprisingly, the model with greedy search performed the best, with regard to manually accepted personas, over the diverse decoding methods of beam and diverse beam search. This nods to the "quality vs diversity" trade-off seen in other areas of text generation.
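The three decoding setups compared above map directly onto the standard HuggingFace generation API. This is a sketch under assumptions: `model` and `tokenizer` stand for the fine-tuned BART extractor and its tokenizer from the earlier sketches, and the beam counts and diversity penalty are placeholder values.

```python
# Sketch of the three evaluated decoding methods with BART's generate().
inputs = tokenizer(src, return_tensors="pt")

greedy = model.generate(**inputs, max_new_tokens=64)

beam = model.generate(**inputs, max_new_tokens=64,
                      num_beams=5, num_return_sequences=5)

diverse = model.generate(**inputs, max_new_tokens=64,
                         num_beams=6, num_beam_groups=3,
                         diversity_penalty=1.0,  # diverse beam search
                         num_return_sequences=6)

# Keep special tokens so the [RELATION]/[HEAD]/[TAIL] structure is parseable.
candidates = tokenizer.batch_decode(diverse, skip_special_tokens=False)
```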
Another trade-off was "quality over quantity", since the Neutral Removed models returned significantly fewer personas than the Base and Re-ranking models. This was a benefit, since there were fewer personas to annotate (e.g., an average of 4-20 extracted personas per character instead of roughly 70; Table 7). Since even the best model had an acceptance rate of only 68%, there is still a need for a human evaluation step. This is further reinforced by the lack of a clear relationship between the intrinsic, quantitative metrics and the annotations.
The silver persona "recall" metric proved uninformative and ended up correlating more with the number of returned personas than with their quality.

7 Conclusion

Our goal was to address the challenges in adapting a persona extraction model trained on one narrative setting (e.g., real-world "chit-chat") to another setting (fantasy). We modeled persona extraction as a seq2seq problem and fine-tuned BART on PersonaExt, a dataset of (utterance, persona) pairs built from DialogNLI and PersonaChat. In order to extract persona information applicable to any narrative setting, we converted the "chit-chat" PersonaExt relations to the general relations from PeaCoK (e.g., "characteristic"). With our trained BART-based persona extraction model, we evaluated two different post-hoc techniques to extract personas from dialogue in a different narrative setting, the fantasy world of LIGHT. We experimented with natural language inference (NLI)-based re-ranking and removal of persona candidates, and determined that leveraging inference information to remove persona candidates that cannot be inferred from the utterance worked the best according to human evaluation.

Limitations

The main limitation of this work is the assumption that existing dialogue from a character is available to extract persona information from. Further, our methods are only evaluated on dialogue utterances and not on other character-related text such as prose (see Appendix A).

Ethical Considerations

The ethical concerns of this work center on the possibility of automatically impersonating an existing person, rather than the intended use case of fictional characters. Further, our model is trained on the PersonaExt dataset (derived from the crowdsourced PersonaChat), so we cannot guarantee the absence of offensive language. Manual evaluation is always the best final step, and we encourage developers who use our method for persona extraction to add a toxicity (e.g., hate or offensive speech) evaluation step in addition to the quality evaluation.

References

Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and Mohit Iyyer. 2020. STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6470–6484, Online. Association for Computational Linguistics.

Prithviraj Ammanabrolu, Jack Urbanek, Margaret Li, Arthur Szlam, Tim Rocktäschel, and Jason Weston. 2021. How to motivate your dragon: Teaching goal-driven agents to speak and act in fantasy worlds. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 807–833, Online. Association for Computational Linguistics.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.

Faeze Brahman, Meng Huang, Oyvind Tafjord, Chao Zhao, Mrinmaya Sachan, and Snigdha Chaturvedi. 2021. "Let your characters tell their story": A dataset for character-centric narrative understanding. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1734–1752, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Chris Callison-Burch, Gaurav Singh Tomar, Lara Martin, Daphne Ippolito, Suma Bailis, and David Reitter. 2022. Dungeons and dragons as a dialog challenge for artificial intelligence. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9379–9393, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Silin Gao, Beatriz Borges, Soyoung Oh, Deniz Bayazit, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, and Antoine Bosselut. 2023. PeaCoK: Persona commonsense knowledge for consistent and engaging narratives. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6569–6591, Toronto, Canada. Association for Computational Linguistics.
Silin Gao, Jena D. Hwang, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, and Antoine Bosselut. 2022. ComFact: A benchmark for linking contextual commonsense knowledge. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 1656–1675, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2021. (Comet-) Atomic 2020: On Symbolic and Neural Commonsense Knowledge Graphs. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7):6384–6392.

Huda Khayrallah and João Sedoc. 2021. Measuring the 'I don't know' problem through the lens of Gricean quantity. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5659–5670, Online. Association for Computational Linguistics.

Donghyun Kim, Youbin Ahn, Chanhee Lee, Wongyu Kim, Kyong-Ho Lee, Donghoon Shin, and Yeonsoo Lee. 2023. Concept-based persona expansion for improving diversity of persona-grounded dialogue. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 3471–3481, Dubrovnik, Croatia. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics.

Aaron W. Li, Veronica Jiang, Steven Y. Feng, Julia Sprague, Wei Zhou, and Jesse Hoey. 2020. ALOHA: Artificial Learning of Human Attributes for Dialogue Agents. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8155–8163.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics.

Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 994–1003, Berlin, Germany. Association for Computational Linguistics.

Yifan Liu, Wei Wei, Jiayi Liu, Xianling Mao, Rui Fang, and Dangyang Chen. 2022. Improving Personality Consistency in Conversation by Persona Extending. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, CIKM '22, pages 1350–1359, New York, NY, USA. Association for Computing Machinery.

Hongyuan Lu, Wai Lam, Hong Cheng, and Helen Meng. 2022. Partner personas generation for dialogue response generation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5200–5212, Seattle, United States. Association for Computational Linguistics.
Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 521–528, Manchester, UK. Coling 2008 Organizing Committee.

A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. 2017. ParlAI: A dialog research software platform. arXiv preprint arXiv:1705.06476.

Jian Ni, Gaetano Rossiello, A. Gliozzo, and Radu Florian. 2022. A Generative Model for Relation Extraction and Classification. ArXiv.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.

Yisi Sang, Xiangyang Mou, Mo Yu, Shunyu Yao, Jing Li, and Jeffrey Stanton. 2022. TVShowGuess: Character comprehension in stories as speaker guessing. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4267–4287, Seattle, United States. Association for Computational Linguistics.

Kurt Shuster, Jack Urbanek, Arthur Szlam, and Jason Weston. 2022. Am I me or you? State-of-the-art dialogue models cannot maintain an identity. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2367–2387, Seattle, United States. Association for Computational Linguistics.

Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Jason Weston. 2019a. Learning to speak and act in a fantasy text adventure game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 673–683, Hong Kong, China. Association for Computational Linguistics.

Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Jason Weston. 2019b. Learning to Speak and Act in a Fantasy Text Adventure Game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 673–683, Hong Kong, China. Association for Computational Linguistics.

Judith van Stegeren and Mariet Theune. 2020. Fantastic Strings and Where to Find Them: The Quest for High-Quality Video Game Text Corpora. In Proceedings of the 12th Intelligent Narrative Technologies (INT) workshop, volume 2862, page 8. CEUR.

Ashwin K. Vijayakumar, Michael Cogswell, Ramprasath R. Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models. ArXiv:1610.02424 [cs].

Zhilin Wang, Xuhui Zhou, Rik Koncel-Kedziorski, Alex Marin, and Fei Xia. 2022. Extracting and inferring personal attributes from dialogue. In Proceedings of the 4th Workshop on NLP for Conversational AI, pages 58–69, Dublin, Ireland. Association for Computational Linguistics.
Nathaniel Weir, Ryan Thomas, Randolph D'Amore, Kellie Hill, Benjamin Van Durme, and Harsh Jhamtani. 2023. Ontologically Faithful Generation of Non-Player Character Dialogues. ArXiv:2212.10618 [cs].

Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731–3741, Florence, Italy. Association for Computational Linguistics.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.

Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics.

Wangchunshu Zhou, Qifei Li, and Chenle Li. 2023. Learning to predict persona information for dialogue personalization without explicit persona description. In Findings of the Association for Computational Linguistics: ACL 2023, pages 2979–2991, Toronto, Canada. Association for Computational Linguistics.

Andrew Zhu, Karmanya Aggarwal, Alexander Feng, Lara Martin, and Chris Callison-Burch. 2023a. FIREBALL: A dataset of dungeons and dragons actual-play with structured game state information. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4171–4193, Toronto, Canada. Association for Computational Linguistics.

Luyao Zhu, Wei Li, Rui Mao, Vlad Pandelea, and Erik Cambria. 2023b. PAED: Zero-shot persona attribute extraction in dialogues. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9771–9787, Toronto, Canada. Association for Computational Linguistics.

A Narrative-Specific Datasets

While there are many existing datasets to support narrative analyses, we require a dataset that supports our tasks of persona extraction from dialogue and dialogue agent roleplay. The datasets we considered are shown in Table 9. The datasets vary in size, type of dialogue, and type of persona description. The dialogue from a character can be presented either in prose or in conversational dialogue form, e.g., "as the knight slew the dragon he yelled 'for glory!'" versus "Knight: For glory!". Also, to evaluate our extracted persona graph we need ground-truth persona data in the form of character-level information. This persona information can be absent, structured, or unstructured. Structured persona information is either provided as sentences, as in LIGHT (Urbanek et al., 2019a), or in tabular format (Li et al., 2020; Zhu et al., 2023a).
B PersonaExt-PeaCoK Dataset Details

We modified the PersonaExt dataset (Zhu et al., 2023b) for compatibility with PeaCoK (Gao et al., 2023) by re-labeling the relations and converting the single-word tails to phrases. We refer to the relabeled PersonaExt dataset as PersonaExt-PeaCoK. The train, validation, and test sets for model training were stratified-split according to the new labels (see Table 12 for details).

Conversion to PeaCoK Labels. The 105 PersonaExt labels were mapped to one of four PeaCoK labels: routine or habit, characteristic, goal or plan, and experience. While there is also a "relationship" label, this is a meta-label that can occur between attributes across characters, so we do not include it as a label for the extractor. Two authors manually mapped the labels, and disagreements were discussed until annotations were unanimous. The relation mappings are shown in Table 11.

Tail Phrase Creation. To create the tail phrases, we combine a PersonaExt relation with the tail, e.g., "(have_family, wife)" becomes "have family wife". This can create phrases which are not proper English, and we leave addressing this to future experiments.

C Persona Extraction Model Training Details

We fine-tuned BART (Lewis et al., 2020) with the HuggingFace Seq2SeqTrainer (Wolf et al., 2020). The hyperparameter settings are shown in Table 14.

C.1 BART Fine-tuning Templates

We experimented with variations of the structured input and output templates for fine-tuning BART. The templates are shown in Table 15 and the results are in Table 17 (summarized in Table 16). Our models perform similarly on the overall accuracy (i.e., triplet accuracy) regardless of the input or output template, with scores around 0.55, other than Relation-first. The model with the Relation-first template had a higher overall accuracy score of 0.60, which is comparable to the score from the comparative model PAED (0.61). We use the Relation-first template for all other experiments.

C.2 Comparative Model: PAED

Since we need the model to generate tail phrases, we trained PAED on the modified PersonaExt dataset with tail phrases (Appendix B). We train two versions of the model: zero-shot on PersonaExt (with tail phrases) and fine-tuned on PersonaExt-PeaCoK. We use the authors' released codebase (Zhu et al., 2023b) and their default hyperparameters (we release our copy of their codebase for reproducibility).

Zero-shot. We trained the model in the zero-shot setting with 10 unseen labels and seed 0 (Table 19). The reported results for zero-shot PAED on their PersonaExt dataset are as follows: 0.40 for n = 5, 0.32 for n = 10, and 0.23 for n = 15. Our triplet accuracy score of 0.34 is close to the reported score of 0.32, despite the tail phrase modification of the dataset.

Non-zero-shot (fine-tuned). To evaluate the PAED model on equal footing with our model, we train PAED in a non-zero-shot setting on PersonaExt-PeaCoK. We use the same default settings as in the released PAED codebase.

D Detailed Results

D.1 PeaCoK-PersonaExt. The detailed per-label results are in Table 18.

D.2 LIGHT. The detailed annotations are in Table 21.
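As a compact illustration of the Appendix B conversion, the relabeling and tail phrase construction can be expressed as follows. The mapping fragment and function names here are our own; the full 105-relation mapping is given in Table 11.

```python
# Sketch of the PersonaExt -> PeaCoK conversion (Appendix B).
PEACOK_LABEL = {
    "attend_school":  "routine_habit",
    "favorite":       "characteristic",
    "want":           "goal_plan",
    "place_origin":   "experience",
    "misc_attribute": "not",
    # ... remaining relations as in Table 11
}

def convert(head, relation, tail):
    label = PEACOK_LABEL[relation]
    # Tail phrase = original relation (underscores dropped) + original tail.
    tail_phrase = f"{relation.replace('_', ' ')} {tail}"
    return head, label, tail_phrase

print(convert("i", "attend_school", "auburn"))
# -> ('i', 'routine_habit', 'attend school auburn'), as in Table 10
```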
Name (citation)                                     Source of Data                              World Setting                 N. characters  Dialogue?  Persona?
Dungeons and Dragons (Callison-Burch et al., 2022)  Play-By-Post                                Dungeons and Dragons          7168 (est.)    prose      No
LIGHT (Urbanek et al., 2019a)                       LIGHT crowdsource platform                  LIGHT                         1755           dialogue   structured
TorchLight II (van Stegeren and Theune, 2020)       Torchlight II                               Torchlight                    82             dialogue   No
Fireball (Zhu et al., 2023a)                        Avrae bot on Discord                        Dungeons and Dragons          160K           prose      unstructured
KNUDGE (Weir et al., 2023)                          The Outer Worlds video game                 The Outer Worlds video game   81             dialogue   unstructured
Storium (Akoury et al., 2020)                       Storium                                     Multiple                      25,955         prose      unstructured
TVShowGuess (Sang et al., 2022)                     TVMegaSite                                  Multiple                      556            dialogue   No
Star Wars: KOTOR (van Stegeren and Theune, 2020)    Star Wars: KOTOR                            Star Wars universe                           dialogue   No
HLA-Chat (Li et al., 2020)                          TV Tropes                                   Multiple                      45,821         dialogue   structured
LiSCU (Brahman et al., 2021)                        schmoop, SparkNotes, cliffNotes, LitCharts  Multiple                      9,499          prose      no

Table 9: Datasets from the literature considered in this work for persona knowledge graph narrative adaptation. LIGHT stands for Learning in Interactive Games with Humans and Text. KOTOR stands for Knights of the Old Republic. LiSCU stands for Literature Summary and Character Understanding.

Utterance                                                                      PersonaExt Triplet               PeaCoK Triplet
lol , maybe . i am going to auburn for med school . You going back to school ? (i, attend_school, auburn)       (i, routine_habit, attend school auburn)
i usually wear band shirts and ruffle sleeves , skinny jeans and leggings      (i, favorite, ruffle sleeves)    (i, characteristic, favorite ruffle sleeves)
hmmm . i would travel more if i had the money . you travel or sing ?           (i, want, money)                 (i, goal_plan, want money)
i never flew a plane , but i have flow from france to canada where i live      (i, place_origin, france)        (i, experience, place origin france)
do you have friends here , i have lots here .                                  (i, misc_attribute, friends)     (i, not, misc attribute friends)

Table 10: Example datapoints from PersonaExt modified to fit the PeaCoK format.
Relation        PersonaExt Relations
Characteristic  belief, favorite, favorite_activity, favorite_animal, favorite_book, favorite_color, favorite_drink, favorite_food, favorite_hobby, favorite_movie, favorite_music, favorite_music_artist, favorite_place, favorite_season, favorite_show, favorite_sport, gender, has_ability, has_age, have_chidren, have_children, have_family, have_no, have_no_children, have_no_family, have_no_sibling, have_pet, have_sibling, have_vehicle, health_status, like_activity, like_animal, like_character, like_color, like_drink, like_food, like_general, like_subject, like_watching, name, physical_attribute, race, scared_of, sexual_orientation, weakness
Experience      like_movie, like_music, like_sports, like_sport, like_goto, like_read, has_degree, pre_employed_by_general, previous_profession, raised_by, used_to, international_exp, nationality, place_origin, pre_employed_by_company
Goal plan       want, want_do, want_job, want_no
Not             fall_out, have, industry, misc_attribute, other, work_schedule
Routine habit   allergy_to, attend_school, collect, diet, dislike, dislike_activity, dislike_animal, dislike_color, dislike_drink, dislike_food, dislike_job, dislike_music, dislike_sport, dislike_subject, do_not_do, do_not_drink, do_not_eat, doing, employed_by_company, employed_by_general, get_along, has_hobby, has_profession, job_status, live_in_citystatecountry, live_in_general, live_with, marital_status, member_of, never_do, own, relationship, school_status, teach, worry_about

Table 11: Corresponding PeaCoK relation type for each of the 105 PersonaExt relations. The 'not' category refers to relations not being mapped to a PeaCoK relation type.

                   Train   Dev    Test   Total
Characteristic     0.57    0.57   0.57   0.57
Experience         0.03    0.03   0.03   0.03
Goal or Plan       0.04    0.04   0.04   0.04
Routine or Habit   0.31    0.31   0.31   0.31
Not a Relation     0.04    0.04   0.04   0.04
All (count)        28412   3157   3508   35077

Table 12: Size and label distribution for each split in the PeaCoK-PersonaExt dataset. The Characteristic and Routine or Habit attributes are the most frequent. The train, validation, and test sets for model training were stratified-split according to the labels.

Top Entity Values
Head: i (2920), my (309), me (90), we (33), my dad (27), my mom (25), mine (17), my mother (17), my parents (16), my brother (8), my father (7), dog (5), parents (4), my wife (4), my boyfriend (4), my family (4), family (3), my husband (2), best friend (2), friends (1), our (1), friend (1), mom (1), my girlfriend (1), us (1), cat (1), my dads (1), my son (1), my grandma (1), my best friend (1)
Tail: have pet dog (39), have pet cat (37), favorite food pizza (24), diet vegan (21), have pet cats (18), like animal dogs (18), marital status married (18), like music country (17), has profession teacher (17), have friends (17), attend school high school (17), like activity hiking (15), has profession nurse (15), like activity video games (15), have vehicle car (14), like activity shopping (14), have pet dogs (14), like animal dog (13), like color blue (13), like activity travel (13), has profession artist (11), have sibling sister (11), physical attribute short (11), like sports basketball (10), like goto beach (10), physical attribute tall (10), like drink coffee (10), like activity reading (10), place origin farm (10)

Table 13: The most prevalent values for the Head and Tail entities in the PeaCoK-PersonaExt dataset. Only the tail entities with at least 10 occurrences are shown. There are 30 unique head entities and 1708 unique tail entities.

Param.          Value
Batch size      32
Epochs          20
Seed            42
Optimizer       AdamW
Learning Rate   5e-5
Scheduler       linear
Warmup          1 epoch
weight decay    0
adam_beta1      0.9
adam_beta2      0.999
adam_epsilon    1e-8

Table 14: Hyperparameter settings for the persona extraction model. The best model was saved at the end according to evaluation loss on the validation set. The model was trained on 4 GPUs; the batch size shown is the effective batch size.
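Read together with Appendix C, the Table 14 settings translate into a Seq2SeqTrainer configuration along the following lines. This is a sketch under assumptions: the dataset objects, output directory, and warmup ratio are placeholders, and the tokenizer is the one with added special tokens from the earlier sketch.

```python
# Sketch of the fine-tuning configuration with the Table 14 settings.
from transformers import (BartForConditionalGeneration, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
model.resize_token_embeddings(len(tokenizer))  # account for added special tokens

args = Seq2SeqTrainingArguments(
    output_dir="persona-extractor",        # placeholder path
    per_device_train_batch_size=8,         # 4 GPUs -> effective batch size 32
    num_train_epochs=20,
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,                     # roughly one of the 20 epochs
    weight_decay=0.0,
    adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8,
    seed=42,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,           # best model by evaluation loss
)
trainer = Seq2SeqTrainer(model=model, args=args,
                         train_dataset=train_dataset,   # placeholder datasets
                         eval_dataset=dev_dataset)
trainer.train()
```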
Name                   Input                                                                                   Output
PAED                   Context : {context} Head Entity : <mask> , Tail Entity : <mask> , Relation : <mask> .   Head Entity : {head} , Tail Entity : {tail} , Relation : {relation} .
Tokens                 [CONTEXT] {context} [HEAD] <mask> [TAIL] <mask> [RELATION] <mask>                       [HEAD] {head} [TAIL] {tail} [RELATION] {relation}
Relation-first         [CONTEXT] {context} [RELATION] <mask> [HEAD] <mask> [TAIL] <mask>                       [RELATION] {relation} [HEAD] {head} [TAIL] {tail}
Relation-first-nomask  [CONTEXT] {context}                                                                     [RELATION] {relation} [HEAD] {head} [TAIL] {tail}

Table 15: Evaluated input and output prompts for fine-tuning BART for the persona extraction task.

Model / Template           Prec.  Recall  F1    Rel. Acc.  Head Acc.  Tail Acc.  Triplet Acc.
BART
  Tokens                   0.81   0.81    0.81  0.81       0.97       0.56       0.55
  Relation-first           0.83   0.83    0.83  0.83       0.98       0.61       0.60
  Relation-first-nomask    0.81   0.81    0.81  0.81       0.98       0.55       0.55
  PAED prompt              0.81   0.81    0.81  0.81       0.97       0.56       0.56
PAED                       0.83   0.83    0.83  0.83       0.98       0.62       0.61

Table 16: Summary results for our model (BART) trained with the variety of input and output templates (Table 15) and the comparative model PAED, trained and tested on the PersonaExt-PeaCoK dataset. Results are on the test set.

Template / Label   Prec.  Recall  F1    Rel. Acc.  Head Acc.  Tail Acc.  Triplet Acc.
PAED prompt
  Char.            0.85   0.91    0.88  0.91       0.99       0.57       0.56
  Exp.             0.70   0.64    0.67  0.64       0.91       0.54       0.50
  Goal             0.64   0.76    0.69  0.76       0.99       0.64       0.64
  Routine          0.79   0.70    0.75  0.70       0.96       0.57       0.56
  Not              0.69   0.54    0.61  0.54       0.95       0.46       0.46
  Overall          0.81   0.81    0.81  0.81       0.97       0.56       0.56
Tokens
  Char.            0.84   0.91    0.88  0.91       0.98       0.57       0.57
  Exp.             0.65   0.67    0.66  0.67       0.92       0.57       0.52
  Goal             0.66   0.73    0.70  0.73       0.98       0.58       0.58
  Routine          0.80   0.69    0.74  0.69       0.97       0.54       0.54
  Not              0.67   0.58    0.62  0.58       0.96       0.49       0.49
  Overall          0.81   0.81    0.81  0.81       0.97       0.56       0.55
Relation-first
  Char.            0.86   0.93    0.89  0.93       0.99       0.62       0.62
  Exp.             0.80   0.66    0.73  0.66       0.93       0.61       0.58
  Goal             0.69   0.67    0.68  0.67       0.98       0.57       0.57
  Routine          0.80   0.74    0.77  0.74       0.97       0.60       0.59
  Not              0.84   0.55    0.66  0.55       0.94       0.47       0.45
  Overall          0.83   0.83    0.83  0.83       0.98       0.61       0.60
Relation-first-nomask
  Char.            0.87   0.87    0.87  0.87       0.98       0.54       0.53
  Exp.             0.71   0.68    0.70  0.68       0.93       0.58       0.53
  Goal             0.64   0.70    0.67  0.70       0.98       0.60       0.59
  Routine          0.75   0.76    0.75  0.76       0.97       0.58       0.58
  Not              0.70   0.58    0.64  0.58       0.94       0.48       0.47
  Overall          0.81   0.81    0.81  0.81       0.98       0.55       0.55
PAED
  Char.            0.87   0.91    0.89  0.91       0.99       0.63       0.62
  Exp.             0.70   0.68    0.69  0.68       0.92       0.65       0.60
  Goal             0.73   0.75    0.74  0.75       0.98       0.67       0.67
  Routine          0.81   0.74    0.78  0.74       0.97       0.62       0.60
  Not              0.69   0.65    0.67  0.65       0.97       0.60       0.60
  Overall          0.83   0.83    0.83  0.83       0.98       0.62       0.61

Table 17: Detailed results for our model (BART) trained with the variety of input and output templates (Table 15) and the comparative model PAED, trained and tested on the PersonaExt-PeaCoK dataset. Results are on the test set.
Model / Label      Prec.  Recall  F1    Acc.  Head Acc.  Tail Overlap  Tail Acc.  Triplet Acc.
BART, Greedy Search
  Characteristic   0.93   0.83    0.88  0.83  0.98       0.75          0.62       0.62
  Experience       0.78   0.73    0.75  0.73  0.92       0.77          0.67       0.64
  Goal plan        0.68   0.75    0.71  0.75  0.97       0.77          0.67       0.67
  Routine habit    0.73   0.88    0.80  0.88  0.98       0.83          0.75       0.74
  Not              0.82   0.69    0.75  0.69  0.69       0.69          0.69       0.69
  Overall          0.85   0.84    0.84  0.84  0.96       0.78          0.67       0.66
BART, Beam Search
  Characteristic   0.93   0.83    0.88  0.83  0.98       0.75          0.62       0.62
  Experience       0.78   0.74    0.76  0.74  0.92       0.78          0.69       0.65
  Goal plan        0.65   0.77    0.70  0.77  0.97       0.78          0.69       0.69
  Routine habit    0.73   0.87    0.79  0.87  0.98       0.83          0.75       0.74
  Not              0.83   0.68    0.75  0.68  0.68       0.68          0.68       0.68
  Overall          0.85   0.83    0.84  0.83  0.96       0.78          0.67       0.66
BART, Diverse Beam Search
  Characteristic   0.93   0.83    0.88  0.83  0.98       0.76          0.63       0.62
  Experience       0.78   0.74    0.76  0.74  0.92       0.78          0.67       0.64
  Goal plan        0.67   0.76    0.71  0.76  0.98       0.77          0.67       0.67
  Routine habit    0.73   0.88    0.79  0.88  0.98       0.83          0.75       0.74
  Not              0.85   0.63    0.73  0.63  0.63       0.63          0.63       0.63
  Overall          0.85   0.83    0.83  0.83  0.96       0.78          0.67       0.66
PAED (fine-tuned)
  Characteristic   0.87   0.91    0.89  0.91  0.97       0.77          0.61       0.61
  Experience       0.70   0.68    0.69  0.68  0.88       0.74          0.65       0.60
  Goal plan        0.73   0.75    0.74  0.75  0.97       0.75          0.67       0.67
  Routine habit    0.81   0.74    0.78  0.74  0.96       0.72          0.62       0.60
  Not              0.69   0.65    0.67  0.65  0.66       0.66          0.66       0.65
  Overall          0.83   0.83    0.83  0.83  0.95       0.75          0.62       0.61
PAED (zeroshot)
  Characteristic   0.87   0.01    0.01  0.01  0.02       0.01          0.01       0.00
  Experience       0.00   0.00    0.00  0.00  0.01       0.00          0.00       0.00
  Goal plan        0.48   0.42    0.45  0.42  0.42       0.40          0.39       0.39
  Routine habit    0.00   0.00    0.00  0.00  0.03       0.02          0.02       0.00
  Not              0.04   1.00    0.08  1.00  1.00       1.00          1.00       1.00
  Overall          0.52   0.06    0.03  0.06  0.08       0.07          0.07       0.06

Table 18: Detailed per-label results on the PersonaExt-PeaCoK test split.

Label               Precision  Recall  Triplet Acc.
dislike             0.79       0.57    0.32
favorite_place      0.38       0.50    0.42
get_along           0.00       0.00    0.00
have_no_children    0.00       0.00    0.00
have_no_sibling     0.00       0.00    0.00
have_sibling        0.51       0.89    0.56
like_color          0.00       0.00    0.00
like_general        0.49       0.60    0.25
used_to             0.18       0.16    0.11
want_job            0.84       0.88    0.68
Overall             0.52       0.67    0.34

Table 19: Results for the original PAED model on PersonaExt (with "tail phrases"; see Appendix B) with 10 unseen labels and seed 0. We refer to this model as PAED (zero-shot).

Label              Precision  Recall  F1    Head Acc.  Tail Acc.  Overall Acc.
Characteristic     0.92       0.79    0.85  0.99       0.80       0.65
Experience         0.33       0.90    0.48  0.90       0.83       0.75
Goal or Plan       0.41       0.85    0.55  0.99       0.78       0.77
Routine or Habit   0.77       0.65    0.71  0.97       0.80       0.57
Not                0.46       0.83    0.59  0.97       0.74       0.72
Overall            0.81       0.75    0.79  0.98       0.80       0.63

Table 20: PAED results on the PersonaExt-PeaCoK dataset on the test set without the tail phrase modification.
                                                   |------ Not Accepted (No) ------------------------|  |-- Accepted (Yes) --|
NLI Model / Decoding                               Contradictory  Malformed  Non-specific  Unreasonable  Directly  Reasonable   Ratio Accepted
Neutral Removed  NLI          Beam Search          8              0          12            10            3         36           0.57
Neutral Removed  NLI          Diverse Beam Search  6              0          8             7             2         31           0.61
Neutral Removed  NLI          Greedy Search        2              0          8             8             1         26           0.60
Neutral Removed  Persona-NLI  Beam Search          10             0          51            14            9         75           0.53
Neutral Removed  Persona-NLI  Diverse Beam Search  5              3          32            9             8         61           0.58
Neutral Removed  Persona-NLI  Greedy Search        3              0          14            6             6         44           0.68
Re-ranking       NLI          Beam Search          60             1          85            61            40        320          0.63
Re-ranking       NLI          Diverse Beam Search  61             4          92            62            38        326          0.62
Re-ranking       Persona-NLI  Beam Search          58             1          99            63            36        313          0.61
Re-ranking       Persona-NLI  Diverse Beam Search  58             4          99            62            37        324          0.62
Base                          Beam Search          57             1          86            61            40        320          0.64
Base                          Diverse Beam Search  59             4          92            64            39        326          0.62
Base                          Greedy Search        54             4          86            62            44        325          0.64

Table 21: Detailed results for human evaluation on personas extracted from the LIGHT dataset. IAA is 0.85.
A Mechanistic Explanatory Strategy for XAI Forthcoming in Müller, V. C., Dewey, A. R., Dung, L., & Löhr, G. (Eds.), Philosophy of Artificial Intelligence: The State of the Art, Synthese Library, Berlin: Springer Nature. Please cite the published version. Marcin Rabiza 1,2 1 Institute for Philosophy, Leiden University, Leiden, The Netherlands 2 Institute of Philosophy and Sociology, Polish Academy of Sciences, Warsaw, Poland [email protected] https://orcid.org/0000-0001-6217-6149 Abstract: Despite significant advancements in XAI, scholars note a persistent lack of solid conceptual foundations and integration with broader scientific discourse on explanation. In response, emerging XAI research draws on explanatory strategies from various sciences and philosophy of science literature to fill these gaps. This paper outlines a mechanistic strategy for explaining the functional organization of deep learning systems, situating recent advancements in AI explainability within a broader philosophical context. According to the mechanistic approach, the explanation of opaque AI systems involves identifying mechanisms that drive decision-making. For deep neural networks, this means discerning functionally relevant components—such as neurons, layers, circuits, or activation patterns—and understanding their roles through decomposition, localization, and recomposition. Proof-of-principle case studies from image recognition and language modeling align these theoretical approaches with the latest research from AI labs like OpenAI and Anthropic. This research suggests that a systematic approach to studying model organization can reveal elements that simpler (or “more modest”) explainability techniques might miss, fostering more thoroughly explainable AI. The paper concludes with a discussion on the epistemic relevance of the mechanistic approach positioned in the context of selected philosophical debates on XAI. 1 Introduction In recent years, deep learning has emerged as the dominant approach in artificial intelligence (AI). Its algorithms are characterized by "black box" functions that are too complex to fully understand, making it difficult to determine how specific computations based on certain inputs lead to particular predictions. To restore trust in automated decision-making, researchers focus on the explainable AI (XAI) research program, which aims to achieve transparency, interpretability, and explainability to validate the decision-making processes of opaque AI systems, making model behavior understandable to stakeholders with various epistemic needs.1 ————————— 1 Interpretability is often defined as “the degree to which an observer can understand the cause of a decision” (Miller, 2019, p. 8). Explanation, therefore, is one mode through which an observer may obtain such understanding. In the machine learning literature, the terms “interpretability” 1 Despite significant technical advancements in XAI (e.g., Montavon et al., 2018; Linardatos et al., 2021), scholars highlight various shortcomings in its conceptual foundations and a lack of integration with the broader scientific discourse on explanation. While many current XAI methods excel in producing localized explanations, they often fall short of providing a comprehensive understanding of the complex mechanisms governing AI systems, which is crucial in high-stakes decision-making contexts (Mittelstadt et al., 2019). 
Furthermore, the dominant technology-centered approach in AI explanations has largely ignored substantial theoretical and empirical contributions from fields like philosophy, psychology, and cognitive sciences, leaving a significant research area underexplored (Miller, 2019). In response, an emerging strand of fundamental XAI research is drawing on coordinated explanatory strategies from various scientific disciplines and the philosophy of science literature to address these conceptual gaps (e.g., Miller et al., 2017; Lipton, 2018; Mittelstadt et al., 2019; Páez, 2019; Zednik, 2019; Zerilli et al., 2019; Erasmus et al., 2021; Watson & Floridi, 2020; Beisbart & Räz, 2022). This paper outlines a mechanistic strategy for explaining the functional organization of deep learning systems, situating recent advancements in XAI methods—especially the mechanistic interpretability movement—within a broader philosophical context. To this end, I draw upon the tradition of the new mechanism in the philosophy of science while leveraging examples derived from XAI engineering practices. The structure of the paper is as follows. Section 2 introduces the concept of a mechanism based on a minimal neomechanistic theory of explanation and argues for its applicability to XAI. Subsequently, Section 3 presents a mechanistic interpretation of deep learning, explaining the operations of deep neural networks by identifying decision-making mechanisms through discovery heuristics of decomposition, localization, and recomposition. Section 4 then utilizes proof-of-principle case studies from image recognition and language modeling to align these theoretical approaches with the latest research from leading AI labs like OpenAI and Anthropic. Section 5 concludes with a brief discussion of the epistemic relevance of the mechanistic approach in the context of XAI. ————————— and “explainability” are often used interchangeably (e.g., Molnar, 2022, §3)—a convention I will follow for now until I introduce a specific understanding of interpretable and comprehensible systems from Doran et al. (2017) later in the paper. However, it should be noted that in some contexts, researchers differentiate these two notions (e.g., Dorsch & Moll, Forthcoming). 2 2 Neomechanistic Theory of Explanation In scientific discourse, one prominent approach is characterized by the neomechanistic theory of explanation, which emphasizes the logic of “explaining why by explaining how” (Bechtel & Abrahamsen, 2005, p. 422). According to the new mechanists, explaining why something happens often involves identifying the underlying mechanisms that give rise to observed behavior. Mechanisms are identified by the phenomena they produce (Illari & Williamson, 2012), their functional roles (Machamer, Darden, & Craver, 2000; hereinafter MDC), and by their operating “parts” and “interactions.” (Bechtel & Abrahamsen, 2005). For example, Glennan (1996, p. 52) defines a mechanism as “a complex system which produces that behavior by the interaction of a number of parts according to direct causal laws.” According to MDC (2000, p. 3), mechanisms are identified and individuated by the “activities” and “entities” that constitute them, as well as their functional roles, and setup and termination conditions. Entities are components or parts defined by their properties—such as location, structure, and orientation— that engage in activities based on specific characteristics. 
Activities are temporal producers of change, characterized by aspects such as spatiotemporal location, rate, duration, types of entities involved, and other operational properties. The roles that entities play through their activities are considered their functions within the mechanism. The specific organization of these elements determines how collectively they produce the observed phenomenon (MDC, 2000). Mechanistic explanation begins by characterizing the phenomenon under study and then identifying the mechanisms responsible for it. According to Bechtel and Abrahamsen (2005), to explain a mechanism, scientists must pinpoint its components, understand the functions these parts perform, and determine their organization to produce the phenomenon. This process relies on scientific discovery methods, incorporating heuristic strategies such as decomposition and localization (Bechtel & Richardson, 2010). Decomposition involves breaking the mechanism into its structural or functional components. Structural decomposition focuses on the physical aspects of parts like size or shape, while functional decomposition looks at the parts' roles, causal powers, and overall contributions to the mechanism (Piccinini & Craver, 2011). Functional decomposition assumes that the system’s behavior results from its sub-functions. Structural decomposition further breaks down these functions into their physical components. This process starts with hypothetical component parts, refining the breakdown as more is learned about the system’s 3 operations, with both types of decomposition eventually integrating to form a complete explanation. Localization complements the decomposition process by mapping component operations onto their respective parts. While decomposition involves breaking down the mechanism into parts and operations, localization identifies activities from the task decomposition and links them to component behaviors or capabilities (Wright & Bechtel, 2007). Sometimes, physical components can be directly identified, but often their existence is inferred using functional tools without direct observation. Bechtel and Abrahamsen (2005) note that even inferred components are treated as essential parts of the mechanism. Localization involves a genuine commitment to the functions identified in the task decomposition and the use of appropriate methods to demonstrate that something within the system is performing each of these functions. Mechanistic explanations describe the relevant entities, their properties, and the activities that connect them, by demonstrating how actions at one stage influence those at successive stages. Glennan (2017) notes that mechanistic models can be depicted through diagrams accompanied by linguistic explanations. These diagrams typically illustrate spatial relations and structural features of the mechanism, with related activities depicted as labeled arrows (see Figure 2.1). Although the basic arrangement of a mechanism might be linear, it can also include more complex structures like forks, joins, cycles, and non-linear interactions. Fig. 2.1 A diagrammatic representation of a mechanism (reproduced from Craver, 2007). At the top is the phenomenon: some system S engaged in behavior ψ. Beneath it are the parts (the Xs) and their activities (the φs) organized together. As Figure 2.1 might suggest, mechanisms form nested, multilevel hierarchies, in which lower-level entities and activities serve as components for higher-level phenomena, effectively becoming mechanisms themselves. 
These hierarchies have a finite structure and do not decompose indefinitely; the lowest level of description is determined by practical 4 considerations and stakeholder interests. Although all mechanisms are ultimately based on fundamental, non-causal physical laws, seeking explanations at this level is typically neither practical nor beneficial. Instead, explanations tend to bottom out at components that are fundamental or unproblematic within a specific scientific discipline or explanatory practice (MDC, 2000). In the optional stage of mechanism discovery, as described by Bechtel and Abrahamsen (2013), scientists may recompose what they have learned about the functional parts into an explanatory model, such as a mathematical or computational model. The goal is to create a model from a catalog of entities likely to be causally relevant to a phenomenon based on their identified internal division of labor. The mechanistic explanatory strategy that emerges from this description resembles something like a causal narrative, in the sense that it outlines sequences of events involving entities interacting with each other, illustrating how their spatial and temporal arrangement produces or sustains the explanandum (Glennan, 2017). An important question at hand is: How might this framework be adapted to analyze and explain opaque AI systems? 3 Mechanistic Interpretation of Deep Learning 3.1 Mechanistic Structure of Deep Neural Networks The mechanistic explanatory strategy has been adopted across various scientific fields, being especially prevalent in neuroscience (Kostić & Halffman, 2023). Given the biological inspiration of deep neural networks (DNNs), along with their computational parallels and predictive capabilities akin to the brain's categorization processes (Schyns et al., 2022), it may be promising to apply mechanistic principles to AI systems using opaque deep learning algorithms. Indeed, Kästner and Crook (2024) argue that as AI systems grow in complexity, they should be analyzed similarly to biological organisms, emphasizing their functional organization. To this end, applying the mechanistic approach to DNNs could help us discover how these models work internally, illuminating not only how specific computations—given specific inputs—produce unique predictive patterns, but perhaps also explaining the models themselves in a holistic way. In the neomechanistic framework, explaining AI systems entails identifying the mechanisms behind their decision-making processes using discovery heuristics of decomposition, localization, and recomposition. The goal is to understand the properties driving 5 behavior and how these are orchestrated through component interactions (see Figure 3.1). By dissecting neural networks into comprehensible components, researchers can better grasp each part's function and structure, gaining insights into the network's overall behavior. Fig. 3.1 Schematic representation: internal structure of the mechanism of the deep learning model analyzed in the neomechanistic framework. The first step in this endeavor involves identifying the correct components for analysis— the candidates for the AI system’s epistemically relevant elements (EREs) (Humphreys, 2009; Kästner & Crook, 2024). Neurons, while fundamental computational units of neural networks, often fall short as effective units for human interpretation. 
Despite performing straightforward arithmetic, individual neuron-like units viewed in isolation fail to clearly demonstrate how their functions contribute to the network's overall behavior. Consequently, researchers seek other network components as potential EREs that could offer more comprehensible units of analysis—robust patterns that sustain system behavior and are pertinent to explanatory functions (Kästner & Crook, 2024). In principle, this approach should be compatible with deep learning, as DNNs are built from causally or functionally relevant components. These include entities (medium- independent vehicles) and activities (the manipulations these vehicles undergo) which are central to the computational mechanisms of deep learning. While various types of networks exist, they share common entities such as neurons, connections, filters, circuits, or features. 2 DNNs develop their functional organization through automated training processes. Although models are processed as multidimensional arrays of numbers with mathematical operations ————————— 2 It might not be clear in what way network structures qualify as mechanistic entities. While this issue deserves a more detailed discussion, for the purpose of this paper, I will assume that neurons are entities in the sense that they are mathematical abstractions of certain components ultimately belonging to the underlying physical hardware. Similar reasoning applies to activities, which eventually bottom out at the level of physical phenomena. 6 defined over vectors in programming languages such as Python, computer scientists interested in mechanistic interpretability—a particular approach to XAI focused on deciphering the internal workings of machine learning models—recognize that “neural network parameters are in some sense a binary computer program which runs on one of the exotic virtual machines we call a neural network architecture” (Olah, 2022). DNN components can be identified within such environments. During training, these entities engage in activities such as activation, back- propagation, and error minimization, triggered by properties like incoming signals exceeding certain thresholds. This interaction fosters the development of specialized roles within the system, often unforeseen by programmers, contributing to the emergence of observed behavior and supporting the assumptions of mechanistic interpretability. An example of this “mechanistic compatibility” can be seen in deep convolutional neural networks (CNNs), primarily used in image recognition and computer vision, which somewhat mimic the organization of the animal visual cortex. During training, a CNN processes a two- dimensional labeled image to generate weights that encode extracted data patterns. CNNs employ organized entities—such as neurons, larger neuronal circuits, convolutional kernels (filters), or various kinds of layers—along with activities like convolutions, ReLU and softmax activations, and pooling, orchestrating complex mechanisms of feature extraction and image classification (see Figure 3.2). Fig. 3.2 Architecture of a deep convolutional neural network (reproduced from Shahriar, 2023). The inference phase of image recognition begins with setup conditions that include initializing model parameters. An input image is then introduced and subjected to a series of transformations across multiple layers to extract and refine features. Intermediate activities of this process involve convolution, activation, and pooling. 
Convolutional kernels slide across the input signal, detecting features similar to biological neural networks' receptive fields and generating feature maps for subsequent layers. ReLU activations eliminate negative values, activating only nodes that exceed a certain threshold. Pooling layers reduce data dimensionality by merging outputs from neuron clusters into single neurons, using hierarchical patterns to evolve simpler features into more complex ones. As processing continues, layers represent diverse image features such as edges, lines, and curves, with higher layers capturing more complex, "meaningful" and abstract shapes. The process culminates in fully connected layers in which the softmax function transforms raw outputs into class probability scores, marking the termination condition of the inference phase. This sequence, maintained under stable and consistent conditions, demonstrates the deterministic regularity characteristic of genuine mechanisms.

In this setup, CNN mechanisms form multilevel hierarchies, in which lower-level entities and activities act as enabling components for higher-level phenomena, thus illustrating the mechanistic nature of their internal organization. For example, input convolution is crucial for feature mapping, which, when applied iteratively, integrates into the overarching mechanism of image classification in fully connected layers.

3.2 Implementation of Mechanism Discovery Heuristics

Having explored the mechanistic structure of DNNs, it's important to consider how we can systematically apply discovery heuristics of decomposition and localization to dissect and understand the roles of EREs within these networks. In XAI practice, these strategies can be implemented through established analytical techniques tailored to specific applications. In computer vision, for example, input heatmapping and feature visualization techniques can be used to generate saliency maps that highlight specific pixels or regions in an input image that are highly predictive of the output. Such visualizations can also clarify the operations performed by DNN components across layers, thus aiding the functional decomposition of the network. Concrete examples include activation maximization, regularized optimization, network inversion, deconvolutional neural networks, network dissection-based visualization, or layer-wise relevance propagation (Yosinski et al., 2015; Qin et al., 2018; Montavon et al., 2018). These techniques allow researchers to observe how each network level transforms input into increasingly abstract and meaningful representations, which is crucial for dissecting complex mechanisms like face recognition into simpler components that recognize individual features such as ears, eyes, or noses. An example of hierarchical feature representations processed within a CNN is shown in Figure 3.3.

Fig. 3.3 Visualization of features on various layers of a CNN for input images of faces (reproduced from Karmakharm, 2018).

While saliency methods may not provide complete mechanistic explanations by themselves, they are instrumental in the mechanistic decomposition of DNNs by identifying distributed structures beyond individual neurons as potential EREs of AI decision-making mechanisms.
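To make one of the techniques named above concrete, the following is a minimal sketch of activation maximization, assuming PyTorch: starting from noise, gradient ascent recovers an input pattern that strongly drives a chosen unit. The small, untrained CNN and the unit index are illustrative placeholders; real studies run such loops against trained models (e.g., InceptionV1) with additional regularizers so that the optimized images remain human-recognizable.

# A minimal sketch of activation maximization, assuming PyTorch.
# The untrained CNN and the unit index are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Entities: conv kernels and units. Activities: convolution, ReLU, pooling.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(),
)
model.eval()
unit = 7  # hypothetical unit of interest in the last layer

# Start from noise and ascend the gradient of the unit's mean activation,
# recovering an input pattern the unit is tuned to respond to.
img = torch.randn(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    act = model(img)[0, unit].mean()
    (-act).backward()  # maximizing the activation = minimizing its negative
    opt.step()

print(f"final mean activation of unit {unit}: {model(img)[0, unit].mean().item():.3f}")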
Recent advancements by leading AI organizations such as OpenAI, Anthropic, Google, Redwood Research, ARC, and Conjecture demonstrate a similar interest in such "mechanistic" understanding, particularly within the AI safety community (e.g., Olah et al., 2017; Cammarata et al., 2021; Elhage et al., 2021; Chan et al., 2022; Christiano, 2022; Olsson et al., 2022; Bricken et al., 2023a; Cunningham et al., 2023; Conmy et al., 2023; Schwettmann et al., 2023). This trend is evident in the mechanistic interpretability movement, which aims to go beyond simple input-output analysis and examine the internal workings of AI models to enhance epistemic trust, aid in debugging, remove biases, and prevent models from "going rogue."

This approach is exemplified by the work of Cammarata et al. (2021) at OpenAI, who aim to reverse-engineer image classification neuron families known as curve detectors into understandable explanations and then implement the inferred algorithms into a new DNN from scratch. Their research particularly focuses on analyzing a curve detector circuit within the fifth convolutional layer of the InceptionV1 neural network. These circuits, which are sub-graphs within the network, are not predefined as distinct parts of the DNN's architecture but are integrated into the model's learned structure, emerging as functional units that neurons self-organize into during training.

To understand the functional organization of these circuits (termed the "mechanics of curve detectors" by the authors), the team employs decomposition and localization strategies using decomposed feature visualization to create a grid that illustrates amplified weights from an upstream layer to a downstream neuron of interest. By iteratively applying this technique to each neuron, labeling them, and grouping them, they categorize the neurons in the first five convolutional layers of InceptionV1 into layer-wise "families" that form the curve detection mechanisms. Upon identifying curve detectors, the researchers traced their connections to discern how upstream neurons affected their activities by visualizing connection weights. Extending this visualization back to the input layer provided a detailed view of the interactions within the circuit, enabling them to classify the circuit as a mechanistic ERE distinct from the surrounding network (cf. Kästner & Crook, 2024, p. 10). The team then developed a high-level "circuit schematic" that details the primary components of curve detection and their functional organization, forming a clear narrative: "Gabor filters evolve into proto-lines, which assemble into lines and early curves, ultimately forming curves" (Cammarata et al., 2021).

Leveraging insights from the discovery process, the researchers recomposed a curve detection mechanism of InceptionV1 by manually configuring the weights of a blank DNN to replicate the identified neuron families and circuit interactions. They compared the behavior of the manually designed network with InceptionV1 using identical synthetic stimuli and analyzed responses with feature visualization and other XAI techniques. The experiments showed that the artificially recomposed curve detectors closely resembled those trained naturally. This evidence suggests that the functional decomposition of InceptionV1 was indeed successful and accurately reflects the mechanistic organization of curve detection circuits.
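The recomposition step can be illustrated at toy scale. In the sketch below no weights are learned: hand-set horizontal and vertical edge filters play the role of the upstream neuron families, and a manually wired second stage responds only where both orientations co-occur, i.e., where a contour bends. Every filter, image, and wiring choice here is an illustrative invention, far simpler than the actual InceptionV1 circuit.

# A toy analogue of the recomposition experiment: everything is configured
# by hand rather than trained. Filters, image, and wiring are invented.
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D cross-correlation (the convolution 'activity')."""
    h, w = k.shape
    out = np.zeros((x.shape[0] - h + 1, x.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

relu = lambda a: np.maximum(a, 0.0)

# Layer-1 "families": hand-set horizontal and vertical edge detectors.
k_h = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], float)
k_v = k_h.T

# Test image containing an L-shaped contour (a crude "curve").
img = np.zeros((12, 12))
img[8, 2:9] = 1.0   # horizontal stroke
img[2:9, 8] = 1.0   # vertical stroke

a_h = relu(conv2d_valid(img, k_h))
a_v = relu(conv2d_valid(img, k_v))

# Layer-2 "curve detector", wired manually: it fires only where both
# orientations co-occur within a local window, i.e., where the contour bends.
pool = np.ones((3, 3))
curve = relu(conv2d_valid(a_h, pool) * conv2d_valid(a_v, pool))
print("peak curve response near:", np.unravel_index(curve.argmax(), curve.shape))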
Anthropic, known for developing Claude—a large language model that rivals OpenAI's ChatGPT—has employed decomposition and localization strategies to break down language models into interpretable, structurally distinct functional components. Bricken et al. (2023a; 2023b), recognizing that individual neurons are not the most effective units for analysis, partly due to their polysemanticity, employ a sparse autoencoder—a type of weak dictionary learning algorithm—to identify better candidates for EREs in small transformer models. These units, termed "features," represent distinct patterns within the model and are essentially linear combinations of neuron activations.

In their study of a transformer language model, the researchers decomposed a layer with 512 neurons into over 4,000 features by training sparse autoencoders on multilayer perceptron activations from 8 billion data points. Each feature captured a unique concept, such as DNA sequences, legal language, HTTP requests, Hebrew text, and nutrition statements. To evaluate the interpretability of these features—that is, the degree to which humans can understand them—they conducted an assessment with a blinded human evaluator (Figure 3.4), validating the practical utility and clarity of the decomposed features.

Fig. 3.4 Interpretability scores of the identified features (red) compared to the neurons (teal) (reproduced from Bricken et al., 2023b).

The results showed that the features were significantly more interpretable than individual neurons, revealing functional properties that were not apparent when examining neurons alone. Moreover, the authors conducted "autointerpretability" tests, in which a language model generated concise descriptions of the small model's features. The evaluation of these descriptions was based on another model's ability to predict a feature's activations from its description. The features consistently received higher scores than the neurons, confirming their coherent and stable interpretation and their impact on model behavior.

Decomposing the language model into features offers a targeted method for guiding models, in which the activation of specific features leads to predictable changes in behavior. Researchers also developed a "knob" to adjust the resolution at which the model is viewed and experimented with the number of features learned. They found that decomposing the model into a small set of features offers a coarse but clear view, while a larger set reveals more refined and subtle properties of the model. Additionally, these learned features have proven to be universal across a range of models, demonstrating the enhanced generalizability of this explanatory approach.

Overall, evidence from case studies supports the idea that a coordinated, systematic research agenda focused on uncovering the mechanistic organization of DNNs can provide explanations of the way that systems operate at various levels of their structure. The pursuit of mechanistic explanations through functional decomposition can reveal previously unknown EREs in opaque AI systems, which might remain obscured with non-coordinated individual explainability methods, ultimately leading to more thoroughly explainable AI.
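The core of the dictionary-learning move can be written down compactly. The sketch below trains a sparse autoencoder on activation vectors and then clamps a single learned feature before decoding, the kind of "feature steering" intervention described above. The random activations, dimensions, and feature index are placeholders that loosely echo the 512-neuron case; this is not Anthropic's data, code, or hyperparameters.

# A minimal sparse-autoencoder sketch, assuming PyTorch; all inputs are
# synthetic stand-ins for recorded MLP activations.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_act, d_feat, l1_coef = 512, 4096, 1e-3   # overcomplete feature dictionary

enc = nn.Linear(d_act, d_feat)
dec = nn.Linear(d_feat, d_act, bias=False)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for _ in range(100):
    acts = torch.randn(256, d_act)         # stand-in for recorded activations
    feats = torch.relu(enc(acts))          # sparse, non-negative "features"
    recon = dec(feats)
    loss = ((recon - acts) ** 2).mean() + l1_coef * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Steering "knob": force one feature on, decode the edit back to activations.
with torch.no_grad():
    feats = torch.relu(enc(torch.randn(1, d_act)))
    feats[0, 123] = 10.0                   # hypothetical feature index
    print("steered activation norm:", dec(feats).norm().item())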
4 Are Deep Neural Networks Genuine Mechanisms?

There may be several objections from orthodox mechanists regarding the characterization of DNNs as mechanisms, primarily questioning whether DNNs meet the criteria for genuine mechanisms as defined in neomechanistic literature. Despite possible skepticism, I argue that the discovery strategies proposed by neomechanistic philosophy—decomposition, localization, and recomposition—can help researchers at least partially open the deep learning black box.

A key standard in the general account of mechanistic explanation is the demand for completeness as causal models, necessitating that all causally relevant parts and operations be explicitly detailed without gaps or placeholders (Craver, 2007). A fully adequate mechanistic explanation must provide structural details at all levels of the mechanism, including components and activities that contribute to concrete computations. This requirement conflicts with the abstractness and medium-independence typical for computational explanations (Haimovici, 2013; Coelho Mollo, 2018). Under such scrutiny, AI models like DNNs might not qualify as genuine mechanisms because they typically represent abstract, formal specifications of computation that lack detailed structural information. DNNs are usually simulated through matrix operations rather than being implemented on physical nodes (except in rare cases involving one-to-one mapping, such as on neuromorphic hardware, cf. e.g., Schuman et al., 2022). To be fully mechanistically adequate, they would require additional specifications concerning physically instantiated computers: instantiation blueprints (Miłkowski, 2014).

On the other hand, there are some compelling reasons to treat DNNs as if they were mechanisms. While this paper cannot fully explore the debate over the ontic status of AI models due to space constraints, I will briefly outline two primary arguments defending the idea that DNNs can be meaningfully interpreted through a mechanistic approach.

First, DNNs maintain their mechanistic status through weak structural constraints, typical of functional explanations that require structural properties to realize a functional characterization (Piccinini, 2015). Piccinini and Craver (2011, p. 302) state, "The functional properties of black boxes are specified in terms of their inputs and outputs (plus the algorithm, perhaps), independently of their physical implementation." DNNs exemplify this, as their inputs, outputs, and mapping algorithms can be defined without specific physical ties. However, the functional properties of DNNs impose certain constraints on the structural components regarding the degrees of freedom necessary for implementing an algorithm. This limits the functional analysis of an AI system to explaining how the algorithm, data, and network architecture determine the required degrees of freedom. If the system's functional components are organized and can reliably differentiate between computational vehicles, the same computations can be implemented across various physical media—mechanical, electromechanical, electronic, or magnetic systems—without necessarily being affected by the specific properties of the physical medium. Therefore, deep learning models can be viewed as mathematically defined systems that describe concrete, physically instantiated systems to some degree of approximation (cf. Piccinini, 2015). These models' functional properties specify the necessary degrees of freedom that a concrete system requires to perform computations. As the models are not causally complete enough to be considered typical mechanisms, their functional analyses can be described as "mechanism sketches," in which some structural aspects of a mechanistic explanation are intentionally omitted (Piccinini & Craver, 2011).
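The point about medium-independence can be made with a deliberately trivial example. The sketch below realizes one and the same input-output mapping (XOR) in two different "media": a lookup table and a pair of threshold units. Any substrate offering the required degrees of freedom implements the same computation, which is all that the functional characterization fixes.

# One function, two implementations: a small illustration of
# medium-independence. Both realize XOR over {0, 1} inputs.
TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def xor_table(a, b):
    return TABLE[(a, b)]

def xor_threshold_net(a, b):
    step = lambda z: 1 if z >= 0 else 0
    h1 = step(a + b - 0.5)        # OR-like hidden unit
    h2 = step(a + b - 1.5)        # AND-like hidden unit
    return step(h1 - h2 - 0.5)    # OR AND-NOT = XOR

for a in (0, 1):
    for b in (0, 1):
        assert xor_table(a, b) == xor_threshold_net(a, b)
print("same function, two implementations")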
This level of characterization remains relevant for XAI research, where low-level physical details are not always needed to determine the success or failure of algorithmic decision-making. An exception is when, for instance, the speed of information processing provided by the physical medium is crucial for avoiding miscomputation—for example when running a large language model on a smartphone or using an inadequate processing unit for image recognition in autonomous vehicles. However, case studies from OpenAI and Anthropic show that mechanistic decomposition of an AI system can focus on functional analysis without covering every aspect of computational phenomena in concrete processing systems.

Secondly, deep learning models can be seen as teleofunctional mechanisms, or simply functional mechanisms, which are defined by having teleological functions: specific "purposes" or "ends." Computing systems are generally designed with the teleological function of computing, which involves manipulating variables or specific values of variables according to rules sensitive to their properties (Piccinini, 2015; Coelho Mollo, 2018). In this context, deep learning systems perform digital computations by manipulating digits and strings of digits, which, although eventually translating to physical quantities—intervals of voltage values—are numerical vector representations of input data and internal states manipulated through matrix operations. DNNs operate with medium-independent vehicles, which are the functional components of a mechanism (its entities), and the manipulations that these vehicles undergo (activities), according to mappings from inputs to outputs determined by a transition function set by the learning algorithm. These systems consist of organized components, each with specific functions, embodying the teleofunctional nature of DNN computation. When properly organized and functioning, the coordinated activities of these components define the capabilities of DNN-style computation, provided that physical instantiation details offer the necessary degrees of freedom and do not lead to miscomputations.

Assessing deep learning mechanisms to ascertain whether they fulfill their intended teleological functions involves decomposing the model into its components to understand their contributions. This extends to computing systems defined purely mathematically, which stand to concrete ones in roughly the same relation that the triangles of geometry stand to concrete triangular objects:

A similar notion of functional mechanism applies to computing systems that are defined purely mathematically, such as (unimplemented) Turing machines. Turing machines consist of a tape divided into squares and a processing device. The tape and processing device are explicitly defined as spatiotemporal components. They have functions (storing letters; moving along the tape; reading, erasing, and writing letters on the tape) and an organization (the processing device moves along the tape one square at a time, etc.). Finally, the organized activities of their components explain the computations they perform (Piccinini, 2015, pp. 119–120).

Thus, decomposing functional mechanisms, even if the target system is defined mathematically, results in an elliptical mechanistic explanation of the system's capacities.
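Piccinini's example translates directly into code. The toy machine below (an assumed, minimal design whose teleological function is unary increment) makes the decomposition explicit: the tape and head are the entities, reading, writing, and moving are the activities, and the transition table is their organization.

# A minimal Turing machine mirroring Piccinini's decomposition.
def run_tm(tape, transitions, state="scan", blank="_"):
    """Execute a one-tape Turing machine until it halts."""
    tape, head = dict(enumerate(tape)), 0
    while state != "halt":
        symbol = tape.get(head, blank)           # activity: read
        state, write, move = transitions[(state, symbol)]
        tape[head] = write                       # activity: write
        head += 1 if move == "R" else -1         # activity: move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Organization: scan right past the 1s, replace the first blank with a 1.
transitions = {
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", "_"): ("halt", "1", "R"),
}
print(run_tm("111", transitions))  # -> "1111"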
To summarize, the core concept of the mechanistic approach to deep learning is that despite possible skepticism regarding the ontic status of DNNs, we can still effectively utilize neomechanistic discovery strategies—decomposition, localization, and recomposition—to gain valuable insights into the internal workings of these systems. By pursuing mechanistic analysis augmented by case-specific analytical explainability techniques, researchers can identify functionally relevant components within the system and determine their precise roles, thereby obtaining EREs.

5 Epistemic Relevance of the Mechanistic Approach

While the relevance of the mechanistic approach in the XAI landscape has been addressed in existing technical literature and philosophical works, such as those by Kästner and Crook (2024), this paper advances the discourse by specifically focusing on the epistemic advantages and limitations of the mechanistic explanatory strategy and situating them within selected philosophical debates on XAI.

5.1 Epistemic Advantages

First, mechanistic decompositions of AI systems align with the interpretability criteria articulated by Doran et al. (2017). According to their framework, an interpretable system allows users not only to see, but also to study and understand how inputs are mathematically mapped to outputs. This is said to "help probe the mechanisms of ML systems." The authors cite regression, support vector machines, decision trees, ANOVAs, and data clustering models as examples of interpretable systems. While acknowledging the interpretability challenges posed by DNNs, which autonomously learn and transform input features through nonlinear operations, mechanistic decomposition emerges as a promising method for examining their internal dynamics. By identifying human-understandable functional components, this approach aids technically proficient stakeholders in analyzing algorithmic mechanisms.

Second, mechanistic decomposition offers valuable insights into the internal mappings of AI systems, enhancing their structural transparency. Creel (2020) defines structural transparency as understanding how an algorithm is realized in code, involving knowledge of sub-components and their relationships, typically gained through analyzing system interactions. This understanding allows for modeling the system's structure and behavior using tools like code maps, flowcharts, and diagrams, closely aligning with mechanistic principles. Creel, however, is skeptical about the effectiveness of such decomposition in reducing the structural opacity of DNNs, noting that "when the functional units of the program are tiny, simple, and numerous, as are the neurons of a deep neural network, a subcomponent map would prove insufficient" (Creel, 2020, pp. 19–20). Nevertheless, recent advancements in XAI suggest that deeper insights into model functionality can be achieved through alternative units of analysis beyond individual neurons. Examples include neuron circuits (from the OpenAI case study; Cammarata et al., 2021), and patterns of neuron activations (from the Anthropic case study; Bricken et al., 2023a; 2023b), identified using mechanistic discovery strategies and techniques like feature visualization or dictionary learning. While the decomposition of complex neural networks into functionally relevant parts might obscure some neuron-specific operations, the mechanistic approach can significantly reduce structural opacity.
Third, the mechanistic explanatory strategy can yield highly generalizable and counterfactual explanations for AI decision-making. Drawing on Woodward's (2003) interventionist account of causality, Buijsman (2022) argues that effective AI explanations should demonstrate outcomes in counterfactual scenarios. Counterfactual descriptions compare a model's actual outcome with hypothetical alternatives to reveal input–output correlations. Buijsman suggests these contrasts are explanatory if they show generalizable correlations inferred from counterfactual reasoning, covering a range of "what-if-things-had-been-different" scenarios. Although he doesn't specifically analyze the mechanistic approach to XAI, mechanistic AI explanations can pinpoint some general rules governing model behavior across various models. This meets Buijsman's "reasonable generalizations" criterion, at least to the extent of addressing "what-if" questions.

Although mechanistic approaches typically focus on actual causal processes and may overlook potential counterfactual scenarios, Buijsman's perspective is based on Woodward's (2003) definition of causation, which involves "interventions": altering one variable without changing others that could affect the outcome. This defines counterfactual dependence as "x causes y if an intervention on x changes the value of y" (Buijsman, 2022, p. 566). The adoption of this view integrates causal-mechanistic interventions within "what-if" scenarios, suggesting that mechanistic considerations could fulfill Buijsman's criteria for counterfactual reasoning. Researchers can leverage typically mechanistic knowledge to explore such scenarios, for example, by perturbing input images to assess network resilience or by evaluating the impact of removing specific entities from the model.

Finally, Tomsett et al. (2018) argue that explainability of a machine learning system should be assessed relative to specific stakeholders or tasks. The black box problem arises because developers struggle to explain system behaviors through their learnable parameters; however, other stakeholders may seek different explanations to meet their needs (Zednik, 2019). Stakeholders range from developers responsible for engineering and maintaining the system to end users concerned with fairness, each requiring tailored explanations (Tomsett et al., 2018; Hind, 2019; Kasirzadeh, 2021).[3] In many cases, understanding an opaque system does not involve detailing parameter values, rather it involves comprehending the environmental patterns and abstract representational structures that the system models (Buckner, 2019; Zednik, 2019). This highlights the importance of recognizing diverse epistemic needs among stakeholders, especially regarding complexity and domain-specific language, to effectively fulfill their roles within an AI ecosystem.

The adoption of a mechanistic strategy for XAI, which emphasizes multilevel hierarchies of mechanisms, can address stakeholders' needs by offering understandable explanations at multiple levels. Mechanistic reasoning enables AI developers to use nested hierarchical structures to identify key causal variables, like learnable parameters and representational structures, that transform inputs into outputs. Consequently, detailed lower-level explanations can help developers identify errors or enhance performance, addressing their epistemic needs for model control, manipulation, and prediction.

[3] Specific understanding of stakeholders is brought forward by the EU AI Act, which focuses on deployers: natural or legal persons, including a public authority, agency or other body, using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity (European Union, 2024).
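Both the Woodward-style intervention described above and the developer-level need for manipulation can be illustrated in a few lines of code: hold everything fixed, silence one hidden unit, and measure how the output moves. The network and data below are arbitrary toys standing in for a deployed model, assuming PyTorch.

# A minimal intervention sketch: ablate one hidden unit, keep the rest fixed.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(4, 10)

def forward_with_ablation(x, ablate_unit=None):
    h = torch.relu(net[0](x))
    if ablate_unit is not None:
        h = h.clone()
        h[:, ablate_unit] = 0.0  # the intervention: set one variable, fix the rest
    return net[2](h)

baseline = forward_with_ablation(x)
for unit in range(16):
    delta = (forward_with_ablation(x, unit) - baseline).abs().mean().item()
    print(f"unit {unit:2d}: mean output change {delta:.4f}")
# Units with large deltas are causally relevant to the output under this
# "what-if-this-unit-were-silent" counterfactual.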
Conversely, based on empirical studies, Ribeiro et al. (2016) argue that while machine learning experts can navigate such complex landscapes, laypersons prefer explanations that reduce models to a small number of weighted features. Therefore, high-level, simplified descriptions of mechanisms and abstract schemata may better accommodate end users' epistemic and practical needs related to trust by distilling complex information into comprehensible knowledge. This also aligns with Buijsman's (2022) call for abstract variables in explanations to increase generality and reduce cognitive load. A mechanistic multilevel approach may thus allow researchers to tailor explanations to diverse audiences, assisting also in meeting legal compliance requirements, such as those posed by the EU AI Act (European Union, 2024), by varying abstraction levels. Recall Anthropic's research on language model decomposition, which involved creating a "knob" to adjust the model's visibility resolution and experimenting with the number of features learned (Bricken et al., 2023a). This way, while the mechanistic explanatory strategy aims to provide a detailed understanding of the internal workings of AI models, it also offers a means for crafting human-grounded explanations that align with the epistemic requirements of various stakeholders. Overall, to properly assess the impact of such multilevel approaches, more empirical studies involving multiple AI stakeholders are essential in gaining insights into the way explanations are perceived and understood by various audiences (cf., e.g., review of empirical studies on human-grounded explanations in Dorsch & Moll, Forthcoming).

5.2 Epistemic Limitations

While a theoretically grounded mechanistic explanatory strategy appears promising for the XAI program in terms of its epistemic advantages, its overall utility is also heavily constrained by certain epistemic limitations.

First, adopting Doran et al.'s (2017) XAI typology, mechanistic explanations enhance interpretability by revealing AI systems' inner workings, but they do not necessarily improve the comprehensibility of such systems. Comprehensibility involves users making sense of outputs through interpretable symbols like words or visualizations, regardless of the system's internal opacity. "Auto-interpretability" techniques, such as those by Bricken et al. (2023b) and Schwettmann et al. (2023), generate natural language descriptions of model components to aid comprehension. Yet, understanding these symbols typically depends on the user's implicit knowledge. While visualization techniques might display recognizable features in image recognition systems, XAI methods often identify subtle, complex features that may elude human understanding (Buckner, 2019; Zednik, 2019). Thus, although explanations should ideally present mechanisms in human-understandable terms, the statistical nature of deep learning frequently diverges from intuitive concepts. Moreover, comprehending these explanations requires a certain level of technical proficiency in AI methods, which varies across types of explanations.
Higher-level explanations that abstract away from intricate details generally require less expertise compared to detailed, lower-level explanations. This need for varying levels of expertise was evident in Ribeiro et al.'s (2016) evaluation of the LIME method, which specifically involved trained computer science graduates. This presents a significant challenge, as stakeholders beyond system developers will continue to demand explanations for system behavior, even when they lack the necessary technical background.

Second, Creel's (2020) distinction between types of transparency indicates that while mechanistic treatment may support structural transparency, it may fall short in achieving algorithmic and run transparency. Algorithmic transparency refers to knowledge of the algorithmic functioning of the whole, revealing high-level logical rules governing system transformations, which is not secured by mechanistic function-by-function decomposition. Run transparency, on the other hand, requires knowledge of specific program operations, including hardware specifics and input data. It involves observing how programs execute on particular hardware with real data. Since the mechanistic explanatory strategy presented here focuses on abstract models defined by mathematical constructs and specified degrees of freedom, it may not capture the artifacts of real-time interactions between software and hardware, unexpected data inputs, or the effects of software being converted into machine code.

Finally, there is the issue of complexity. While classical computer systems are relatively transparent, deep learning systems are considered black boxes due to the complex interdependencies among millions of parameters composing their internal states. This complexity enables neural networks to excel in problem solving but complicates the dissection of their causal–mechanical structure. Kostić (2023) points out that the opacity resulting from a model's functional complexity makes achieving a mechanistic explanation—requiring detailed knowledge of its components, activities, and organization—practically unattainable from the start. Even if not epistemically impossible, realistically addressing this challenge is practically daunting, given the limitations of current engineering methods, resources, and the increasing demand for explanations in rapidly evolving AI technologies. Perhaps the best we can hope for with the mechanistic approach is the examination of small, localized mechanisms, akin to that which occurs in neuroscience.

Complex DNNs often resemble non-decomposable systems, in which each component's behavior is heavily influenced by its interactions with many others (cf. Rathkopf, 2018). While decomposition helps in managing the complexity of representing every element, thereby mitigating combinatorial explosion, it often results in representations that are limited in scope and applicable mainly to specific subsystems or simplified toy models. Creel (2020) notes that while some input–output paths of a model might be straightforward, fully understanding all sub-components can be excessively complex. To address this challenge, there is growing interest in scaling microscopic insights from mechanistic interpretability research to a broader understanding of larger models.[4] However, skepticism about such scalability persists due to computational challenges, high costs, and unresolved methodological questions (e.g., Nanda, 2023; Casper, 2023; Greenblatt et al., 2023).
Evidence from small-scale models does not guarantee that real-world DNNs can be effectively decomposed for a thorough mechanistic understanding. When part–whole decomposition isn't feasible, alternative approaches that are fueled by a system's complexity, such as network science and topological explanations (e.g., Rathkopf, 2018; Kostić, 2022), should be considered.

[4] For instance, in OpenAI's research on curve detectors, researchers demonstrated how the first four layers of the InceptionV1 network gradually build towards curve detectors in the fifth layer, reverse engineering the operation of a family of 10 curve-detecting neurons (Cammarata et al., 2021). However, the complete InceptionV1 model consists of 22 layers (27 layers if counting pooling) with between 5 and 6 million parameters. Similarly, in an Anthropic study on a transformer language model, researchers chose to examine a small, one-layer transformer with a 512-neuron layer (Bricken et al., 2023a). In comparison, GPT-3, the immediate predecessor of GPT-3.5 used in ChatGPT, features 175 billion parameters and operates within 96 layers.

6 Conclusions

The mechanistic explanatory strategy for XAI focuses on identifying the mechanisms that drive automated decision-making. In the case of deep neural networks, this requires discerning functionally relevant components—such as neurons, layers, circuits, or activation patterns—and understanding their exact roles through heuristic discovery strategies of decomposition, localization, and recomposition. Research suggests that such a coordinated, systematic approach to studying the functional organization of models can expose previously unrecognized elements that simple explainability techniques might miss, ultimately fostering more explainable AI. In this spirit, supported by real-world examples from image recognition and language modeling, this philosophical analysis underscores the value of mechanistic reasoning in XAI.

The mechanistic approach offers significant epistemic benefits: it enhances AI interpretability and structural transparency, and it enables the crafting of counterfactual and highly generalizable explanations. This approach aids in prediction and system manipulation, as understanding internal dynamics allows for effective interventions and forecasting future states in new contexts. Additionally, it leverages multilevel hierarchical explanations, making complex AI systems more accessible and manageable for diverse stakeholders. Deepening our understanding of AI mechanisms and their causal relationships can improve performance evaluation and identify areas for improvement. Consequently, the advancement of a mechanistic framework in XAI may be crucial for overcoming trust and transparency challenges in high-stakes algorithmic decision-making.

However, despite its theoretical promise, the mechanistic strategy's utility faces significant epistemic limitations. These challenges include ambiguous effects on algorithmic and run transparency, the limited comprehensibility of explanations due to the lack of suitable concepts, the necessity for recipients to have technical proficiency in AI, and difficulties in decomposing complex, real-world systems, which are not evident in simpler toy model examples.

Several areas of future research stem from these considerations. Primarily, it is important to assess the applicability and scalability of the mechanistic approach beyond deep learning toy models to more complex AI systems.
Further, investigating the way that individual stakeholders comprehend mechanistic AI explanations through user studies complemented by a suitable epistemological theory of understanding would be a reasonable next step. Given the identified limitations of the mechanistic approach, it is also vital to explore other philosophically informed explanatory strategies. These should be ones that thrive on—rather than are hindered by—system complexity, and that also address other contexts of opacity.

Funding

This work was supported by the National Science Centre, Poland, under PRELUDIUM grant no. 2023/49/N/HS1/02461, and by the Polish National Agency for Academic Exchange under NAWA STER project no. BPI/STE/2021/1/00030/U/00001.

Acknowledgements

I would like to thank Marcin Miłkowski, Dimitri Coelho Mollo, Lena Kästner, Barnaby Crook, Kristian González Barman, and John Dorsch for their constructive comments that helped improve this manuscript.

References

Bechtel, W., & Abrahamsen, A. (2005). Explanation: a mechanist alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 421–441. https://doi.org/10.1016/j.shpsc.2005.03.010

Bechtel, W., & Abrahamsen, A. A. (2013). Thinking dynamically about biological mechanisms: Networks of coupled oscillators. Foundations of Science, 18, 707–723. https://doi.org/10.1007/s10699-012-9301-z

Bechtel, W., & Richardson, R.C. (2010). Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research (2nd ed.). Cambridge, MA: MIT Press/Bradford Books. https://doi.org/10.7551/mitpress/8328.001.0001

Beisbart, C., & Räz, T. (2022). Philosophy of science at sea: Clarifying the interpretability of machine learning. Philosophy Compass. https://doi.org/10.1111/phc3.12830

Bricken, T., Templeton, A., Batson, J., Olah, C., Henighan, T., Carter, S., Hume, T., Burke, J. E., McLean, B., Nguyen, K., Tamkin, A., Joseph, N., Maxwell, T., Schiefer, N., Kravec, S., Wu, Y., Lasenby, R., Askell, A., Denison, C., … Chen, B. (2023a, October 4). Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread, Anthropic. Retrieved from https://transformer-circuits.pub/2023/monosemantic-features/index.html

Bricken, T., Templeton, A., Batson, J., Olah, C., Henighan, T., Carter, S., Hume, T., Burke, J. E., McLean, B., Nguyen, K., Tamkin, A., Joseph, N., Maxwell, T., Schiefer, N., Kravec, S., Wu, Y., Lasenby, R., Askell, A., Denison, C., … Chen, B. (2023b, October 5). Decomposing language models into understandable components. Transformer Circuits Thread, Anthropic. Retrieved from https://www.anthropic.com/index/decomposing-language-models-into-understandable-components

Buckner, C. (2019). Deep learning: A philosophical introduction. Philosophy Compass, 14(10), e12625. https://doi.org/10.1111/phc3.12625

Buijsman, S. (2022). Defining explanation and explanatory depth in XAI. Minds and Machines, 32(3), 563–584. https://doi.org/10.1007/s11023-022-09607-9

Cammarata, N., Goh, G., Carter, S., Voss, C., Schubert, L., & Olah, C. (2021). Curve circuits. Distill, 6(1), e00024.006. https://doi.org/10.23915/distill.00024.006

Casper, S. (2023, February 17). EIS VI: Critiques of mechanistic interpretability work in AI safety. AI Alignment Forum. Retrieved from https://www.alignmentforum.org/posts/wt7HXaCWzuKQipqz3/eis-vi-critiques-of-mechanistic-interpretability-work-in-ai
Chan, L., Garriga-Alonso, A., Goldowsky-Dill, N., Greenblatt, R., Nitishinskaya, J., Radhakrishnan, A., Shlegeris, B., & Thomas, N. (2022, December 3). Causal scrubbing: A method for rigorously testing interpretability hypotheses. AI Alignment Forum. Retrieved from https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing

Christiano, P. (2022, November 25). Mechanistic anomaly detection and ELK. AI Alignment / Medium. Retrieved from https://ai-alignment.com/mechanistic-anomaly-detection-and-elk-fb84f4c6d0dc

Coelho Mollo, D. (2018). Functional individuation, mechanistic implementation: the proper way of seeing the mechanistic view of concrete computation. Synthese, 195, 3477–3497. https://doi.org/10.1007/s11229-017-1380-5

Conmy, A., Mavor-Parker, A.N., Lynch, A., Heimersheim, S., & Garriga-Alonso, A. (2023). Towards automated circuit discovery for mechanistic interpretability. ArXiv, abs/2304.14997. https://doi.org/10.48550/arXiv.2304.14997

Craver, C.F. (2007). Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience. Oxford: Clarendon Press.

Creel, K. A. (2020). Transparency in Complex Computational Systems. Philosophy of Science, 87(4), 568–589. https://doi.org/10.1086/709729

Cunningham, H., Ewart, A., Riggs, L., Huben, R., & Sharkey, L. (2023). Sparse autoencoders find highly interpretable features in language models. ArXiv, abs/2309.08600. https://doi.org/10.48550/arXiv.2309.08600

Doran, D., Schulz, S.C., & Besold, T.R. (2017). What Does Explainable AI Really Mean? A New Conceptualization of Perspectives. In CEUR Workshop Proceedings, 2071. https://doi.org/10.48550/arXiv.1710.00794

Dorsch, J., & Moll, M. (Forthcoming). Explainable and Human-Grounded AI for Decision Support Systems: The Theory of Epistemic Quasi-Partnerships. In Müller, V. C., Dewey, A. R., Dung, L., & Löhr, G. (Eds.), Philosophy of Artificial Intelligence: The State of the Art. Synthese Library, Berlin: Springer Nature. https://doi.org/10.48550/arXiv.2409.14839

Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., ... & Olah, C. (2021). A mathematical framework for transformer circuits. Transformer Circuits Thread. Retrieved from https://transformer-circuits.pub/2021/framework/index.html

Erasmus, A., Brunet, T.D.P., & Fisher, E. (2021). What is Interpretability? Philosophy & Technology, 34, 833–862. https://doi.org/10.1007/s13347-020-00435-2

European Union. (2024). EU Artificial Intelligence Act. Retrieved from https://artificialintelligenceact.eu

Glennan, S.S. (1996). Mechanisms and the Nature of Causation. Erkenntnis, 44, 49–71. https://doi.org/10.1007/BF00172853

Glennan, S.S. (2017). The New Mechanical Philosophy. Oxford: Oxford University Press.

Greenblatt, R., Nanda, N., Shlegeris, B., & Habryka, O. (2023, December 1). How useful is mechanistic interpretability? LessWrong. Retrieved from https://www.lesswrong.com/posts/tEPHGZAb63dfq2v8n/how-useful-is-mechanistic-interpretability

Haimovici, S. (2013). A problem for the mechanistic account of computation. Journal of Cognitive Science, 14(2), 151–181. http://doi.org/10.17791/jcs.2013.14.2.151

Hind, M. (2019). Explaining Explainable AI. XRDS: Crossroads, The ACM Magazine for Students, 25, 16–19. https://doi.org/10.1145/3313096

Humphreys, P. (2009). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626. https://doi.org/10.1007/s11229-008-9435-2
Illari, P., & Williamson, J. (2012). What is a mechanism? Thinking about mechanisms across the sciences. European Journal for Philosophy of Science, 2(1), 119–135. http://dx.doi.org/10.1007/s13194-011-0038-2

Karmakharm, T. (2018). Image classification with DIGITS. NVIDIA Deep Learning Institute. Retrieved from https://rse.shef.ac.uk/assets/slides/2018-07-19-dl-cv/image-classification.pdf

Kasirzadeh, A. (2021). Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), Association for Computing Machinery, New York, NY, USA, 14. https://doi.org/10.1145/3442188.3445866

Kästner, L., & Crook, B. (2024). Explaining AI through mechanistic interpretability. European Journal for Philosophy of Science, 14, 52. https://doi.org/10.1007/s13194-024-00614-4

Kostić, D. (2022). Topological explanations: An opinionated appraisal. In I. Lawler, E. Shech, & K. Khalifa (Eds.), Scientific Understanding and Representation: Mathematical Modeling in the Life and Physical Sciences (pp. 96–115). Routledge. https://doi.org/10.4324/9781003202905-9

Kostić, D. (2023). Pragmatics of Explainability Relevance in XAI. Manuscript.

Kostić, D., & Halffman, W. (2023). Mapping explanatory language in neuroscience. Synthese, 202, 112. https://doi.org/10.1007/s11229-023-04329-6

Linardatos, P., Papastefanopoulos, V., & Kotsiantis, S.B. (2021). Explainable AI: A review of machine learning interpretability methods. Entropy, 23(1), 18. https://doi.org/10.3390/e23010018

Lipton, Z.C. (2018). The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3), 31–57. https://doi.org/10.1145/3236386.3241340

Machamer, P.K., Darden, L., & Craver, C.F. (2000). Thinking about Mechanisms. Philosophy of Science, 67(1), 1–25. https://doi.org/10.1086/392759

Miłkowski, M. (2014). Computational mechanisms and models of computation. Philosophia Scientiæ, 18(3), 215–228. https://doi.org/10.4000/philosophiascientiae.1019

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007

Miller, T., Howe, P.D., & Sonenberg, L. (2017). Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences. ArXiv, abs/1712.00547. https://doi.org/10.48550/arXiv.1712.00547

Mittelstadt, B.D., Russell, C., & Wachter, S. (2019). Explaining Explanations in AI. Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT* '19), Association for Computing Machinery, New York, NY, USA, 279–288. https://dl.acm.org/doi/10.1145/3287560.3287574

Molnar, C. (2022). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (2nd ed.). Retrieved from https://christophm.github.io/interpretable-ml-book/

Montavon, G., Samek, W., & Müller, K.R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73, 1–15. https://doi.org/10.1016/j.dsp.2017.10.011

Nanda, N. (2023, July 6). Concrete open problems in mechanistic interpretability: A technical overview. Effective Altruism Forum. Retrieved from https://forum.effectivealtruism.org/posts/EMfLZXvwiEioPWPga/concrete-open-problems-in-mechanistic-interpretability-a
Olah, C. (2022). Mechanistic interpretability, variables, and the importance of interpretable bases. Transformer Circuits Thread. Retrieved from https://transformer-circuits.pub/2022/mech-interp-essay/index.html

Olah, C., Mordvintsev, A., & Schubert, L. (2017). Feature visualization. Distill. https://doi.org/10.23915/distill.00007

Olsson, C., Elhage, N., Nanda, N., Joseph, N., DasSarma, N., Henighan, T.J., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., Drain, D., Ganguli, D., Hatfield-Dodds, Z., Hernandez, D., Johnston, S., Jones, A., Kernion, J., Lovitt, L., … Olah, C. (2022). In-context Learning and Induction Heads. ArXiv, abs/2209.11895. https://doi.org/10.48550/arXiv.2209.11895

Páez, A. (2019). The pragmatic turn in explainable artificial intelligence (XAI). Minds and Machines, 29, 441–459. https://doi.org/10.1007/s11023-019-09502-w

Piccinini, G. (2015). Physical Computation: A Mechanistic Account. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199658855.001.0001

Piccinini, G., & Craver, C. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183, 283–311. https://doi.org/10.1007/s11229-011-9898-4

Qin, Z., Yu, F., Liu, C., & Chen, X. (2018). How convolutional neural network see the world — A survey of convolutional neural network visualization methods. Mathematical Foundations of Computing, 1(2), 149–180. https://doi.org/10.3934/mfc.2018008

Rathkopf, C. (2018). Network representation and complex systems. Synthese, 195, 55–78. https://doi.org/10.1007/s11229-015-0726-0

Ribeiro, M., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. https://doi.org/10.1145/2939672.2939778

Schuman, C.D., Kulkarni, S.R., Parsa, M., et al. (2022). Opportunities for neuromorphic computing algorithms and applications. Nature Computational Science, 2, 10–19. https://doi.org/10.1038/s43588-021-00184-y

Schwettmann, S., Shaham, T.R., Materzynska, J., Chowdhury, N., Li, S., Andreas, J., Bau, D., & Torralba, A. (2023). FIND: A function description benchmark for evaluating interpretability methods. ArXiv, abs/2309.03886. https://doi.org/10.48550/arXiv.2309.03886

Schyns, P.G., Snoek, L., & Daube, C. (2022). Degrees of algorithmic equivalence between the brain and its DNN models. Trends in Cognitive Sciences, 26, 1090–1102. https://doi.org/10.1016/j.tics.2022.09.003

Shahriar, N. (2023, February 1). What is Convolutional Neural Network — CNN (Deep Learning). Retrieved from https://nafizshahriar.medium.com/what-is-convolutional-neural-network-cnn-deep-learning-b3921bdd82d5

Tomsett, R., Braines, D., Harborne, D., Preece, A., & Chakraborty, S. (2018). Interpretable to whom? A role-based model for analyzing interpretable machine learning systems. ArXiv, abs/1806.07552. https://doi.org/10.48550/arXiv.1806.07552

Watson, D.S., & Floridi, L. (2020). The explanation game: A formal framework for interpretable machine learning. Synthese, 198(10), 9211–9242. https://doi.org/10.1007/s11229-020-02629-9

Woodward, J. (2003). Making Things Happen: A Theory of Causal Explanation. Oxford University Press.

Wright, C., & Bechtel, W. (2007). Mechanisms and psychological explanation. In Thagard, P. (Ed.), Philosophy of psychology and cognitive science. Elsevier.

Yosinski, J., Clune, J., Nguyen, A.M., Fuchs, T.J., & Lipson, H. (2015). Understanding neural networks through deep visualization. ArXiv, abs/1506.06579. https://doi.org/10.48550/arXiv.1506.06579
Zednik, C. (2019). Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy & Technology, 34(2), 265–288. https://doi.org/10.1007/s13347-019-00382-7

Zerilli, J., Knott, A., MacLaurin, J., & Gavaghan, C. (2019). Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? Philosophy & Technology, 32, 661–683. https://doi.org/10.1007/s13347-018-0330-6
ai_researcher
3
Multi-Agent_Software_Development_through_Cross-Team_Collaboration.pdf
A Review of Evolutionary Multi-modal Multi-objective Optimization

Ryoji Tanabe, Member, IEEE, and Hisao Ishibuchi, Fellow, IEEE

Abstract—Multi-modal multi-objective optimization aims to find all Pareto optimal solutions including overlapping solutions in the objective space. Multi-modal multi-objective optimization has been investigated in the evolutionary computation community since 2005. However, it is difficult to survey existing studies in this field because they have been independently conducted and do not explicitly use the term "multi-modal multi-objective optimization". To address this issue, this paper reviews existing studies of evolutionary multi-modal multi-objective optimization, including studies published under names that are different from "multi-modal multi-objective optimization". Our review also clarifies open issues in this research area.

Index Terms—Multi-modal multi-objective optimization, evolutionary algorithms, test problems, performance indicators

(R. Tanabe and H. Ishibuchi are with Shenzhen Key Laboratory of Computational Intelligence, University Key Laboratory of Evolving Intelligent Systems of Guangdong Province, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China. E-mail: [email protected], [email protected]. Corresponding author: Hisao Ishibuchi.)

I. INTRODUCTION

A multi-objective evolutionary algorithm (MOEA) is an efficient optimizer for a multi-objective optimization problem (MOP) [1]. MOEAs aim to find a non-dominated solution set that approximates the Pareto front in the objective space. The set of non-dominated solutions found by an MOEA is usually used in an "a posteriori" decision-making process [2]. A decision maker selects a final solution from the solution set according to her/his preference. Since the quality of a solution set is usually evaluated in the objective space, the distribution of solutions in the solution space has not received much attention in the evolutionary multi-objective optimization (EMO) community. However, the decision maker may want to compare the final solution to other dissimilar solutions that have an equivalent quality or a slightly inferior quality [3], [4]. Fig. 1 shows a simple example. In Fig. 1, the four solutions xa, xb, xc, and xd are far from each other in the solution space but close to each other in the objective space. xa and xb have the same objective vector. xc and xa are similar in the objective space. xd is dominated by these solutions. This kind of situation can be found in a number of real-world problems, including functional brain imaging problems [3], diesel engine design problems [5], distillation plant layout problems [6], rocket engine design problems [7], and game map generation problems [8].

Fig. 1: Illustration of a situation where the four solutions are identical or close to each other in the objective space but are far from each other in the solution space (a minimization problem).

If multiple diverse solutions with similar objective vectors like xa, xb, xc, and xd in Fig. 1 are obtained, the decision maker can select the final solution according to her/his preference in the solution space. For example, if xa in Fig. 1 becomes unavailable for some reason (e.g., material shortages, mechanical failures, traffic accidents, and law revisions), the decision maker can select a substitute from xb, xc, and xd.
A practical example is given in [4], which deals with two-objective space mission design problems. In [4], Schütze et al. considered two dissimilar solutions x1 = (782, 1288, 1788)^T and x2 = (1222, 1642, 2224)^T for a minimization problem, whose objective vectors are f(x1) = (0.462, 1001.7)^T and f(x2) = (0.463, 1005.3)^T, respectively. Although x1 dominates x2, the difference between f(x1) and f(x2) is small enough. The first design variable is the departure time from the Earth (in days). Thus, the departure times of x2 and x1 differ by 440 days (= 1222 − 782). If the decision maker accepts x2 with a slightly inferior quality in addition to x1, the two launch plans can be considered. If x1 is not realizable for some reason, x2 can be the final solution instead of x1. As explained here, multiple solutions with almost equivalent quality support a reliable decision-making process. If these solutions have a large diversity in the solution space, they can provide insightful information for engineering design [3], [5].

A multi-modal multi-objective optimization problem (MMOP) involves finding all solutions that are equivalent to Pareto optimal solutions [3], [9], [10]. Below, we explain the difference between MOPs and MMOPs using the two-objective and two-variable Two-On-One problem [11]. Figs. 2 (a) and (b) show the Pareto front F and the Pareto optimal solution set O of Two-On-One, respectively. Two-On-One has two equivalent Pareto optimal solution subsets O1 and O2 that are symmetrical with respect to the origin, where O = O1 ∪ O2. Figs. 2 (c) and (d) show O1 and O2, respectively. In Two-On-One, the three solution sets O, O1, and O2 (Figs. 2 (b), (c) and (d)) are mapped to F (Fig. 2 (a)) by the objective functions. On the one hand, the goal of MOPs is generally to find a solution set that approximates the Pareto front F in the objective space. Since O1 and O2 are mapped to the same F in the objective space, it is sufficient for MOPs to find either O1 or O2. On the other hand, the goal of MMOPs is to find the entire equivalent Pareto optimal solution set O = O1 ∪ O2 in the solution space. In contrast to MOPs, it is necessary to find both O1 and O2 in MMOPs. Since most MOEAs (e.g., NSGA-II [12] and SPEA2 [13]) do not have mechanisms to maintain the solution space diversity, it is expected that they do not work well for MMOPs. Thus, multi-modal multi-objective evolutionary algorithms (MMEAs) that handle the solution space diversity are necessary for MMOPs.

Fig. 2: (a) The Pareto front F and (b) the Pareto optimal solution set O of Two-On-One [11]. Figs. (c) and (d) show the two Pareto optimal solution subsets O1 and O2, respectively.
This paper presents a review of evolutionary multi-modal multi-objective optimization. This topic is not new and has been studied for more than ten years. Early studies include [3], [5], [11], [14]–[16]. Unfortunately, most existing studies were independently conducted and did not use the term "MMOPs" (i.e., they are not tagged). For this reason, it is difficult to survey existing studies of MMOPs despite their significant contributions. In this paper, we review related studies of MMOPs including those published under names that were different from "multi-modal multi-objective optimization". We also clarify open issues in this field. Multi-modal single-objective optimization problems (MSOPs) have been well studied in the evolutionary computation community [10]. Thus, useful clues to address some issues in studies of MMOPs may be found in studies of MSOPs. We discuss what can be learned from the existing studies of MSOPs.

This paper is organized as follows. Section II gives definitions of MMOPs. Section III describes MMEAs. Section IV presents test problems for multi-modal multi-objective optimization. Section V explains performance indicators for benchmarking MMEAs. Section VI concludes this paper.

II. DEFINITIONS OF MMOPS

1) Definition of MOPs: A continuous MOP involves finding a solution x ∈ S ⊆ R^D that minimizes a given objective function vector f : S → R^M. Here, S is the D-dimensional solution space, and R^M is the M-dimensional objective space. A solution x1 is said to dominate x2 iff f_i(x1) ≤ f_i(x2) for all i ∈ {1, ..., M} and f_i(x1) < f_i(x2) for at least one index i. If x* is not dominated by any other solutions, it is called a Pareto optimal solution. The set of all x* is the Pareto optimal solution set, and the set of all f(x*) is the Pareto front. The goal of MOPs is generally to find a non-dominated solution set that approximates the Pareto front in the objective space.

2) Definitions of MMOPs: The term "MMOP" was first coined in [3], [14] in 2005. However, "MMOP" was not used in most studies from 2007 to 2012. Terms that represent MMOPs were not explicitly defined in those studies. For example, MMOPs were referred to as problems of obtaining a diverse solution set in the solution space in [17]. It seems that "multi-modal multi-objective optimization" has been used again as of 2016. Apart from these instances, MMOPs were denoted as "Multi-objective multi-global optimization" and "Multi-modal multi-objective wicked problems" in [18] and [19], respectively.

Although MMOPs have been addressed for more than ten years, the definition of an MMOP is still controversial. In this paper, we define an MMOP using a relaxed equivalency introduced by Rudolph and Preuss [17] as follows:

Definition 1. An MMOP involves finding all solutions that are equivalent to Pareto optimal solutions.

Definition 2. Two different solutions x1 and x2 are said to be equivalent iff ‖f(x1) − f(x2)‖ ≤ δ, where ‖a‖ is an arbitrary norm of a, and δ is a non-negative threshold value given by the decision maker.

If δ = 0, the MMOP should find all equivalent Pareto optimal solutions. If δ > 0, the MMOP should find all equivalent Pareto optimal solutions and dominated solutions with acceptable quality. The main advantage of our definition of an MMOP is that the decision maker can adjust the goal of the MMOP by changing the δ value. Most existing studies (e.g., [9], [20], [21]) assume MMOPs with δ = 0. MMOPs with δ > 0 were discussed in [3], [4], [19], [22]. For example, xa, xb, and xc in Fig. 1 should be found for MMOPs with δ = 0. In addition, the non-Pareto optimal solution xd should be found for MMOPs with δ > 0 if ‖f(xa) − f(xd)‖ ≤ δ.

Although there is room for discussion, MMOPs with δ > 0 may be more practical in real-world applications. This is because the set of solutions of an MMOP with δ > 0 can provide more options for the decision maker than that of an MMOP with δ = 0. While it is usually assumed in the EMO community that the final solution is selected from non-dominated solutions, the decision maker may also be interested in some dominated solutions in practice [3], [4]. Below, we use the term "MMOP" regardless of the δ value for simplicity.
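As a small illustration of Definition 2, the following sketch checks δ-equivalence under the Euclidean norm, using the objective vectors of the space mission example above; the choice of norm and of δ is up to the decision maker.

# Definition 2 in code, assuming the Euclidean norm; delta values are
# arbitrary illustrative choices.
import numpy as np

def equivalent(f1, f2, delta):
    """True iff ||f(x1) - f(x2)|| <= delta."""
    return np.linalg.norm(np.asarray(f1) - np.asarray(f2)) <= delta

f_x1 = np.array([0.462, 1001.7])
f_x2 = np.array([0.463, 1005.3])

print(equivalent(f_x1, f_x2, delta=0.0))   # False: only exact matches count
print(equivalent(f_x1, f_x2, delta=5.0))   # True: x2 is acceptable as well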
III. MMEAS

This section describes 12 dominance-based MMEAs, 3 decomposition-based MMEAs, 2 set-based MMEAs, and a post-processing approach. MMEAs need the following three abilities: (1) the ability to find solutions with high quality, (2) the ability to find diverse solutions in the objective space, and (3) the ability to find diverse solutions in the solution space. MOEAs need the abilities (1) and (2) to find a solution set that approximates the Pareto front in the objective space. Multi-modal single-objective optimizers need the abilities (1) and (3) to find a set of global optimal solutions. In contrast, MMEAs need all abilities (1)–(3). Here, we mainly describe mechanisms of each type of MMEA to handle (1)–(3).

Fig. 2: (a) The Pareto front F and (b) the Pareto optimal solution set O of Two-On-One [11]. Figs. (c) and (d) show the two Pareto optimal solution subsets O1 and O2, respectively.

1) Pareto dominance-based MMEAs: The most representative MMEA is Omni-optimizer [9], [14], which is an NSGA-II-based generic optimizer applicable to various types of problems. The differences between Omni-optimizer and NSGA-II are fourfold: the Latin hypercube sampling-based population initialization, the so-called restricted mating selection, the ε-dominance-based non-dominated sorting, and the alternative crowding distance. In the restricted mating selection, an individual xa is randomly selected from the population. Then, xa and its nearest neighbor xb in the solution space are compared based on their non-domination levels and crowding distance values. The winner among xa and xb is selected as a parent.

The crowding distance measure in Omni-optimizer takes into account both the objective and solution spaces. For the i-th individual xi in each non-dominated front R, the crowding distance in the objective space c_i^obj is calculated in a similar manner to NSGA-II. In contrast, the crowding distance value of xi in the solution space c_i^sol is calculated in a different manner. First, for each j ∈ {1, ..., D}, a "variable-wise" crowding distance value of xi in the j-th decision variable c_{i,j}^sol is calculated as follows:

c_{i,j}^sol = 2 (x_{i+1,j} − x_{i,j}) / (x_j^max − x_j^min)   if x_{i,j} = x_j^min,
c_{i,j}^sol = 2 (x_{i,j} − x_{i−1,j}) / (x_j^max − x_j^min)   if x_{i,j} = x_j^max,
c_{i,j}^sol = (x_{i+1,j} − x_{i−1,j}) / (x_j^max − x_j^min)   otherwise,   (1)

where we assume that all individuals in R are sorted based on their j-th decision variable values in descending order. In (1), x_j^min = min_{x∈R} {x_j} and x_j^max = max_{x∈R} {x_j}. Unlike the crowding distance in the objective space, an infinitely large value is not given to a boundary individual.

Then, an "individual-wise" crowding distance value c_i^sol is calculated as follows: c_i^sol = (Σ_{j=1}^D c_{i,j}^sol) / D. The average value c_avg^sol of all individual-wise crowding distance values is also calculated as follows: c_avg^sol = (Σ_{i=1}^{|R|} c_i^sol) / |R|. Finally, the crowding distance value c_i of xi is obtained as follows:

c_i = max{c_i^obj, c_i^sol}   if c_i^obj > c_avg^obj or c_i^sol > c_avg^sol,
c_i = min{c_i^obj, c_i^sol}   otherwise,   (2)

where c_avg^obj is the average value of all crowding distance values in the objective space. As shown in (2), c_i in Omni-optimizer is the combination of c_i^obj and c_i^sol. Due to its alternative crowding distance, the results presented in [9] showed that Omni-optimizer finds more diverse solutions than NSGA-II.
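The following sketch transcribes the solution-space part of this crowding distance, Eqs. (1) and (2), into Python. It reflects our own reading of the formulas above (individuals are sorted variable-by-variable here, and the objective-space values c_obj are assumed to be precomputed as in NSGA-II); it is not reference code from [9].

def solution_space_crowding(front_x):
    # front_x: decision vectors of one non-dominated front R (|R| >= 2 assumed).
    n, dim = len(front_x), len(front_x[0])
    c_sol = [0.0] * n
    for j in range(dim):
        order = sorted(range(n), key=lambda i: front_x[i][j])
        x_min, x_max = front_x[order[0]][j], front_x[order[-1]][j]
        span = (x_max - x_min) or 1.0
        for rank, i in enumerate(order):
            if rank == 0:        # boundary at x_min: twice the gap to the next one
                d = 2.0 * (front_x[order[1]][j] - front_x[i][j]) / span
            elif rank == n - 1:  # boundary at x_max: twice the gap to the previous one
                d = 2.0 * (front_x[i][j] - front_x[order[-2]][j]) / span
            else:                # interior: gap between the two neighbours, Eq. (1)
                d = (front_x[order[rank + 1]][j] - front_x[order[rank - 1]][j]) / span
            c_sol[i] += d / dim  # individual-wise value: average over the D variables
    return c_sol

def combined_crowding(c_obj, c_sol):
    # Eq. (2): reward individuals that are above average in either space.
    avg_obj, avg_sol = sum(c_obj) / len(c_obj), sum(c_sol) / len(c_sol)
    return [max(o, s) if (o > avg_obj or s > avg_sol) else min(o, s)
            for o, s in zip(c_obj, c_sol)]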
In addition to Omni-optimizer, two extensions of NSGA-II for MMOPs have been proposed. DNEA [23] is similar to Omni-optimizer but uses two sharing functions in the objective and solution spaces. DNEA requires fine-tuning of two sharing niche parameters for the objective and solution spaces. The secondary criterion of DN-NSGA-II [24] is based on the crowding distance only in the solution space. DN-NSGA-II uses a solution distance-based mating selection.

The following are other dominance-based MMEAs. An MMEA proposed in [25] utilizes DBSCAN [26] and the rake selection [27]. DBSCAN, which is a clustering method, is used for grouping individuals based on the distribution of individuals in the solution space. The rake selection, which is a reference vector-based selection method similar to NSGA-III [28], is applied to individuals belonging to each niche for the environmental selection. SPEA2+ [5], [15] uses two archives A^obj and A^sol to maintain diverse non-dominated individuals in the objective and solution spaces, respectively. While the environmental selection in A^obj is based on the density of individuals in the objective space similar to SPEA2 [13], that in A^sol is based on the density of individuals in the solution space. For the mating selection in SPEA2+, neighborhood individuals in the objective space are selected only from A^obj.

PQ,ε-MOEA [4], 4D-Miner [3], [29], and MNCA [19] are capable of handling dominated solutions for MMOPs with δ > 0. PQ,ε-MOEA uses the ε-dominance relation [30] so that an unbounded archive can maintain individuals with acceptable quality according to the decision maker. Unlike other MMEAs, PQ,ε-MOEA does not have an explicit mechanism to maintain the solution space diversity. 4D-Miner was specially designed for functional brain imaging problems [3]. The population is initialized by a problem-specific method. 4D-Miner maintains dissimilar individuals in an external archive, whose size is ten times larger than the population size. The environmental selection in 4D-Miner is based on a problem-specific metric. Similar to DIOP [22] (explained later), MNCA simultaneously evolves multiple subpopulations P^1, ..., P^S, where S is the number of subpopulations. In MNCA, the primary subpopulation P^1 aims to find an approximation of the Pareto front that provides a target front for the other subpopulations P^2, ..., P^S. While the update of P^1 is based on the same selection mechanism as in NSGA-II, the update of P^2, ..., P^S is performed with a complicated method that takes into account both the objective and solution spaces.

Although the above-mentioned MMEAs use genetic variation operators (e.g., the SBX crossover and the polynomial mutation [12]), the following MMEAs are based on other approaches. Niching-CMA [20] is an extension of CMA-ES [31] for MMOPs by introducing a niching mechanism. The number of niches and the niche radius are adaptively adjusted in Niching-CMA. An aggregate distance metric in the objective and solution spaces is used to group individuals into multiple niches. For each niche, individuals with better non-domination levels survive to the next iteration. MO_Ring_PSO_SCD [21], a PSO algorithm for MMOPs, uses a diversity measure similar to Omni-optimizer. However, MO_Ring_PSO_SCD handles the boundary individuals in the objective space in an alternative manner. In addition, an index-based ring topology is used to create niches. Two extensions of artificial immune systems [32] have been proposed for MMOPs: omni-aiNet [18] and cob-aiNet [33]. These two methods use a modified version of the polynomial mutation [12].
The primary and secondary criteria of omni-aiNet are based on ε-nondomination levels [30] and a grid operation, respectively. In addition, omni-aiNet uses suppression and insertion operations. While the suppression operation deletes an inferior individual, the insertion operation adds new individuals to the population. The population size is not constant due to these two operations. The primary and secondary criteria of cob-aiNet are based on the fitness assignment method in SPEA2 [13] and a diversity measure with a sharing function in the solution space, respectively. The maximum population size is introduced in cob-aiNet.

2) Decomposition-based MMEAs: A three-phase multi-start method is proposed in [16]. First, (1, λ)-ES is carried out on each of the M objective functions K times to obtain M × K best-so-far solutions. Then, an unsupervised clustering method is applied to the M × K solutions to detect the number of equivalent Pareto optimal solution subsets s. Finally, s runs of (1, λ)-ES are performed on each of the N single-objective subproblems decomposed by the Tchebycheff function. The initial individual of each run is determined in a chained manner. The best solution found in the j-th subproblem becomes an initial individual of (1, λ)-ES for the (j + 1)-th subproblem (j ∈ {1, ..., N − 1}). It is expected that s equivalent solutions are found for each of the N decomposed subproblems.

Two variants of MOEA/D [34] for MMOPs are proposed in [35], [36]. MOEA/D decomposes an M-objective problem into N single-objective subproblems using a set of weight vectors, assigning a single individual to each subproblem. Then, MOEA/D simultaneously evolves the N individuals. Unlike MOEA/D, the following two methods assign one or more individuals to each subproblem to handle the equivalency. The MOEA/D algorithm presented in [35] assigns K individuals to each subproblem. The selection is conducted based on a fitness value combining the PBI function value [34] and two distance values in the solution space. K dissimilar individuals are likely to be assigned to each subproblem. The main drawback of the above methods [16], [35] is the difficulty in setting a proper value for K, because it is problem dependent. MOEA/D-AD [36] does not need such a parameter but requires a relative neighborhood size L. For each iteration, a child u is assigned to the j-th subproblem whose weight vector is closest to f(u), with respect to the perpendicular distance. Let X be a set of individuals already assigned to the j-th subproblem. If x in X is within the L nearest individuals from the child u in the solution space, x and u are compared based on their scalarizing function values g(x) and g(u). If g(u) ≤ g(x), x is deleted from the population and u enters the population. u also enters the population when no x in X is in the L neighborhood of u in the solution space.
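A rough sketch of this environmental selection is given below. It follows our reading of the description of MOEA/D-AD above, with hypothetical helpers dist_x for the solution-space distance and g for the scalarizing function; it is not reference code from [36].

import math

def perpendicular_distance(f_u, w):
    # Distance from the objective vector f(u) to the line spanned by weight w.
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    proj = sum(a * b for a, b in zip(f_u, w)) / norm_w
    return math.sqrt(max(sum(a * a for a in f_u) - proj * proj, 0.0))

def moead_ad_selection(u, f_u, weights, pop, g, dist_x, L):
    # pop: subproblem index -> list of assigned individuals (updated in place).
    j = min(range(len(weights)),
            key=lambda k: perpendicular_distance(f_u, weights[k]))
    X = pop[j]
    everyone = [x for members in pop.values() for x in members]
    l_nearest = {id(x) for x in sorted(everyone, key=lambda x: dist_x(x, u))[:L]}
    rivals = [x for x in X if id(x) in l_nearest]
    if not rivals:             # no assigned individual is in u's L-neighbourhood
        X.append(u)
        return
    entered = False
    for x in rivals:           # compare scalarizing values on subproblem j
        if g(u, weights[j]) <= g(x, weights[j]):
            X.remove(x)        # u is at least as good, so x is deleted
            entered = True
    if entered:
        X.append(u)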
3) Set-based MMEAs: DIOP [22] is a set-based MMEA that can maintain dominated solutions in the population. In the set-based optimization framework [37], a single solution in the upper level represents a set of solutions in the lower level (i.e., a problem). DIOP simultaneously evolves an archive A and a target population T. While A approximates only the Pareto front and is not shown to the decision maker, T obtains diverse solutions with acceptable quality by maximizing the following G indicator: G(T) = w_obj D_obj(T) + w_sol D_sol(T), where w_obj + w_sol = 1. D_obj is a performance indicator in the objective space, and D_sol is a diversity measure in the solution space. In [22], D_obj and D_sol were specified by the hypervolume indicator [38] and the Solow-Polasky diversity measure [39], respectively. Meta-individuals in T that are ε-dominated by any meta-individuals in A are excluded for the calculation of the G metric. At the end of the search, T is likely to contain meta-individuals (i.e., solution sets of a problem) ε-nondominated by meta-individuals in A. Another set-based MMEA is presented in [40]. Unlike DIOP, the proposed method evolves only a single population. Whereas DIOP maximizes the weighted sum of values of D_obj and D_sol, the proposed method treats D_obj and D_sol as meta two-objective functions. NSGA-II is used to simultaneously maximize D_obj and D_sol in [40].

TABLE I: Properties of 18 MMEAs. µ and n_max denote the population size and the maximum number of evaluations used in each paper, respectively. "δ > 0" indicates whether each method can handle MMOPs with δ > 0. "U" means whether each method has an unbounded population/archive. Initial µ values are reported for omni-aiNet, cob-aiNet, PQ,ε-MOEA, and MOEA/D-AD. µ and n_max used in the post-processing step are shown for a method in [17].

MMEAs                    | Category        | Year | µ     | n_max   | δ > 0 | U
SPEA2+ [5], [15]         | Dominance       | 2004 | 100   | 50 000  |       |
Omni-optimizer [9], [14] | Dominance       | 2005 | 1 000 | 500 000 |       |
4D-Miner [3], [29]       | Dominance       | 2005 | 200   | 8 000   | ✓     |
omni-aiNet [18]          | Dominance       | 2006 | 400   | 40 000  |       | ✓
Niching-CMA [20]         | Dominance       | 2009 | 50    | 50 000  |       |
A method in [25]         | Dominance       | 2010 | not clearly reported |  |
PQ,ε-MOEA [4]            | Dominance       | 2011 | 200   | 5 000   | ✓     | ✓
cob-aiNet [33]           | Dominance       | 2011 | 100   | 40 000  |       |
MNCA [19]                | Dominance       | 2013 | 100   | 100 000 | ✓     |
DN-NSGA-II [24]          | Dominance       | 2016 | 800   | 80 000  |       |
MO_Ring_PSO_SCD [21]     | Dominance       | 2017 | 800   | 80 000  |       |
DNEA [23]                | Dominance       | 2018 | 210   | 63 000  |       |
A method in [16]         | Decomposition   | 2007 | 10    | 20 000  |       |
A method in [35]         | Decomposition   | 2018 | 1 120 | 89 600  |       |
MOEA/D-AD [36]           | Decomposition   | 2018 | 100   | 30 000  |       | ✓
DIOP [22]                | Set             | 2010 | 50    | 100 000 | ✓     |
A method in [40]         | Set             | 2012 | 200   | 400 000 |       |
A method in [17]         | Post-processing | 2009 | 20    | 2 000   |       |

4) A post-processing approach: As pointed out in [17], it is not always necessary to locate all Pareto optimal solutions. Suppose that a set of non-dominated solutions A has already been obtained by an MOEA (e.g., NSGA-II) but not an MMEA (e.g., Omni-optimizer). After the decision maker has selected the final solution x_final from A according to her/his preference in the objective space, it is sufficient to search solutions whose objective vectors are equivalent to f(x_final). A post-processing approach is proposed in [17] to handle this problem. First, the proposed approach formulates a meta constrained two-objective minimization problem where f_1^meta(x) = ‖f(x) − f(x_final)‖_2, f_2^meta(x) = −‖x − x_final‖_2, and g^meta(x) = f_1^meta(x) − θ < 0. The meta objective functions f_1^meta and f_2^meta represent the distance between x and x_final in the objective and solution spaces. Thus, smaller f_1^meta(x) and f_2^meta(x) indicate that x is similar to x_final in the objective space and far from x_final in the solution space, respectively. The constraint g^meta with θ > 0 prevents f_2^meta(x) from becoming an infinitely small value in unbounded problems. NSGA-II is used as a meta-optimizer in [17].
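The meta problem reconstructed above can be written compactly as follows. This is a sketch under our reading of [17]; the exact norms and formulation details in [17] may differ.

import numpy as np

def make_meta_problem(f, x_final, theta):
    f_final = np.asarray(f(x_final))
    def f1_meta(x):  # closeness to f(x_final) in the objective space (minimized)
        return np.linalg.norm(np.asarray(f(x)) - f_final)
    def f2_meta(x):  # negated distance to x_final in the solution space (minimized)
        return -np.linalg.norm(np.asarray(x) - np.asarray(x_final))
    def g_meta(x):   # feasible iff f1_meta(x) - theta < 0
        return f1_meta(x) - theta
    return f1_meta, f2_meta, g_meta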
5) Open issues: Table I summarizes the properties of the 18 MMEAs reviewed in this section. While some MMEAs require an extra parameter (e.g., L in MOEA/D-AD), Omni-optimizer does not require such a parameter. This parameter-less property is an advantage of Omni-optimizer. However, Omni-optimizer is a Pareto dominance-based MMEA. Since dominance-based MOEAs perform poorly on most MOPs with more than three objectives [28], Omni-optimizer is unlikely to handle many objectives.

In addition to MMEAs, some MOEAs handling the solution space diversity have been proposed, such as GDEA [41], DEMO [42], DIVA [43], "MMEA" [44], DCMMMOEA [45], and MOEA/D-EVSD [46]. Note that solution space diversity management in these MOEAs aims to efficiently approximate the Pareto front for MOPs. Since these methods were not designed for MMOPs, they are likely to perform poorly for MMOPs. For example, "MMEA", which stands for a model-based multi-objective evolutionary algorithm, cannot find multiple equivalent Pareto optimal solutions [44]. Nevertheless, helpful clues for designing an efficient MMEA can be found in these MOEAs.

The performance of MMEAs has not been well analyzed. The post-processing method may perform better than MMEAs when the objective functions of a real-world problem are computationally expensive. However, an in-depth investigation is necessary to determine which approach is more practical. Whereas the population size µ and the maximum number of evaluations n_max were set to large values in some studies, they were set to small values in other studies. For example, Table I shows that µ = 1 000 and n_max = 500 000 for Omni-optimizer, while µ = 50 and n_max = 50 000 for Niching-CMA. It is unclear whether an MMEA designed with large µ and n_max values works well with small µ and n_max values. While MMOPs with four or more objectives appear in real-world applications (e.g., five-objective rocket engine design problems [7]), most MMEAs have been applied to only two-objective MMOPs. A large-scale benchmarking study is necessary to address the above-mentioned issues.

The decision maker may want to examine diverse dominated solutions. As explained in Section I, dominated solutions found by PQ,ε-MOEA support the decision making in space mission design problems [4]. The results presented in [29] showed that diverse solutions found by 4D-Miner help neuroscientists analyze brain imaging data. Although most MMEAs assume MMOPs with δ = 0 as shown in Table I, MMEAs that can handle MMOPs with δ > 0 may be more practical. Since most MMEAs (e.g., Omni-optimizer) remove dominated individuals from the population, they are unlikely to find diverse dominated solutions. Some specific mechanisms are necessary to handle MMOPs with δ > 0 (e.g., the multiple subpopulation scheme in DIOP and MNCA).

As explained at the beginning of this section, MMEAs need the three abilities (1)–(3). While the abilities (1) and (2) are needed to approximate the Pareto front, the ability (3) is needed to find equivalent Pareto optimal solutions. Most existing studies (e.g., [9], [20], [21], [36]) report that the abilities (1) and (2) of MMEAs are worse than those of MOEAs. For example, the results presented in [36] showed that Omni-optimizer, MO_Ring_PSO_SCD, and MOEA/D-AD perform worse than NSGA-II in terms of IGD [47] (explained in Section V). If the decision maker is not interested in the distribution of solutions in the solution space, it would be better to use MOEAs rather than MMEAs. The poor performance of MMEAs for multi-objective optimization is mainly due to the ability (3), which prevents MMEAs from directly approximating the Pareto front. This undesirable performance regarding the abilities (1) and (2) is an issue in MMEAs.
What to learn from MSOPs: An online data repository (https://github.com/mikeagn/CEC2013) that provides results of optimizers on the CEC2013 problem suite [48] is available for MSOPs. This repository makes the comparison of optimizers easy, facilitating constructive algorithm development. A similar data repository is needed for studies of MMOPs.

The number of maintainable individuals in the population/archive strongly depends on the population/archive size. However, it is usually impossible to know the number of equivalent Pareto optimal solutions of an MMOP a priori. The same issue can be found in MSOPs. To address this issue, the latest optimizers (e.g., dADE [49] and RS-CMSA [50]) have an unbounded archive that maintains solutions found during the search process. Unlike modern optimizers for MSOPs, Table I shows that only three MMEAs have such a mechanism. The adaptive population sizing mechanisms in omni-aiNet, PQ,ε-MOEA, and MOEA/D-AD are advantageous. A general strategy of using an unbounded (external) archive could improve the performance of MMEAs.

IV. MULTI-MODAL MULTI-OBJECTIVE TEST PROBLEMS

This section describes test problems for benchmarking MMEAs. Unlike multi-objective test problems (e.g., the DTLZ [51] test suite), multi-modal multi-objective test problems were explicitly designed such that they have multiple equivalent Pareto optimal solution subsets. The two-objective and two-variable SYM-PART1 problem [16] is one of the most representative test problems for benchmarking MMEAs: f1(y) = (y1 + a)^2 + y2^2 and f2(y) = (y1 − a)^2 + y2^2. Here, y1 and y2 are translated values of x1 and x2 as follows: y1 = x1 − t1 (c + 2a) and y2 = x2 − t2 b. In SYM-PART1, a controls the region of Pareto optimal solutions, and b and c specify the positions of the Pareto optimal solution subsets. The so-called tile identifiers t1 and t2 are randomly selected from {−1, 0, 1}. Fig. 3(a) shows the shape of the Pareto optimal solutions of SYM-PART1 with a = 1, b = 10, and c = 8. As shown in Fig. 3(a), the equivalent Pareto optimal solution subsets are on nine lines in SYM-PART1.
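Under our reading of [16], SYM-PART1 can be sketched as below (a = 1, b = 10, c = 8). Recovering the tile identifiers by rounding and clamping is an implementation choice for this illustration, not the reference formulation.

def sym_part1(x1, x2, a=1.0, b=10.0, c=8.0):
    clamp = lambda t: max(-1, min(1, t))
    t1 = clamp(round(x1 / (c + 2.0 * a)))  # horizontal tile identifier in {-1, 0, 1}
    t2 = clamp(round(x2 / b))              # vertical tile identifier in {-1, 0, 1}
    y1 = x1 - t1 * (c + 2.0 * a)           # translate into the central tile
    y2 = x2 - t2 * b
    return (y1 + a) ** 2 + y2 ** 2, (y1 - a) ** 2 + y2 ** 2

# (0, 0) and its copy (10, 10) in a neighbouring tile map to the same
# objective vector, illustrating the equivalent Pareto optimal subsets:
print(sym_part1(0.0, 0.0))    # (1.0, 1.0)
print(sym_part1(10.0, 10.0))  # (1.0, 1.0)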
Other test problems include the Two-On-One problem [11], the Omni-test problem [9], the SYM-PART2 and SYM-PART3 problems [16], the Superspheres problem [52], the EBN problem [53], the two SSUF problems [24], and the Polygon problems [54]. Fig. 3 also shows the distribution of their Pareto optimal solutions. Since there are an infinite number of Pareto optimal solutions in the EBN problem, we do not show them. Source codes of the ten problems can be downloaded from the supplementary website (https://sites.google.com/view/emmo/). In Omni-test, equivalent Pareto optimal solution subsets are regularly located. SYM-PART2 is a rotated version of SYM-PART1. SYM-PART3 is a transformed version of SYM-PART2 using a distortion operation. The Superspheres problem with D = 2 has six equivalent Pareto optimal solution subsets. However, its number of subsets P is unknown for D > 2. EBN can be considered as a real-coded version of the so-called binary one-zero max problem. All solutions in the solution space are Pareto optimal solutions. SSUF1 and SSUF3 are extensions of the UF problems [55] to MMOPs. There are two symmetrical Pareto optimal solution subsets in SSUF1 and SSUF3. Polygon is an extension of the distance minimization problems [56] to MMOPs, where P equivalent Pareto optimal solution subsets are inside of P regular M-sided polygons.

In addition, the eight MMF problems are presented in [21]. Similar to SSUF1 and SSUF3, the MMF problems are derived from the idea of designing a problem that has multiple equivalent Pareto optimal solution subsets by mirroring the original one. A bottom-up framework for generating scalable test problems with any D is proposed in [57]. P equivalent Pareto optimal solution subsets are located in P hyper-rectangles in the solution space, similar to the SYM-PART problems. While the first k variables play the role of "position" parameters in the solution space, the other D − k variables represent "distance" parameters. The six HPS problem instances were constructed using this framework in [57].

If a given problem has a multi-modal fitness landscape, it may have multiple non-Pareto fronts whose shapes are similar to the true Pareto front. Such a problem (e.g., ZDT4 [58]) is referred to as a multi-frontal test problem [59]. If the δ value (defined in Subsection II-2) is sufficiently large, a multi-frontal test problem can be regarded as a multi-modal multi-objective test problem. In fact, ZDT4 was used in [19] as a test problem. The Kursawe problem [60] is a multi-modal and nonseparable test problem with a disconnected Pareto front. The Kursawe problem has two fronts in the objective space similar to multi-frontal problems. Thus, the Kursawe problem can be used as a multi-modal multi-objective test problem.

Fig. 3: Distribution of the Pareto optimal solutions for the eight problems: (a) SYM-PART1, (b) SYM-PART2, (c) SYM-PART3, (d) Two-On-One, (e) Omni-test, (f) Superspheres, (g) SSUF1, (h) SSUF3, and (i) Polygon. Only x1 and x2 are shown on Omni-test.

TABLE II: Properties of multi-modal multi-objective test problems, where M, D, and P denote the number of objectives, design variables, and equivalent Pareto optimal solution subsets, respectively. If a problem has irregularity, the shapes of its multiple equivalent Pareto optimal solution subsets differ from each other.

Test problems             | M   | D   | P       | Irregularity
SYM-PART problems [16]    | 2   | 2   | 9       | ✓
Two-On-One problem [11]   | 2   | 2   | 2       |
Omni-test problem [9]     | 2   | Any | 3^D     |
Superspheres problem [52] | 2   | Any | Unknown |
EBN problem [53]          | 2   | Any | ∞       |
Polygon problems [54]     | Any | 2   | Any     |
SSUF problems [24]        | 2   | 2   | 2       |
MMF suite [21]            | 2   | 2   | 2 or 4  |
HPS suite [57]            | 2   | Any | Any     |

1) Open issues: Table II summarizes the properties of the multi-modal multi-objective test problems reviewed here. In Table II, P of Omni-test adheres to [22]. Table II indicates that scalable test problems do not exist, in terms of M, D, and P. Although the SYM-PART problems have some desirable properties (e.g., their adjustable and straightforward Pareto optimal solution shapes), M, D, and P are constant in these problems. Only Polygon is scalable in M. While most test problems have only two design variables, Omni-test and HPS are scalable in D. Unfortunately, P increases exponentially with increased D in Omni-test due to the combinatorial nature of variables. Although the idea of designing SYM-PART and Polygon problems scalable in D is presented in [61], [62], they have similar issues to Omni-test. Although the HPS problems do not have such an issue, it is questionable whether there exists a real-world problem with design variables affecting only the distance between the objective vectors and the Pareto front.
Only SYM-PART3 has irregularity. Since the shapes of the Pareto optimal solution subsets may be different from each other in real-world problems, we believe that test problems with irregularity are necessary to evaluate the performance of MMEAs. The performance of an MMEA with an absolutely defined niching radius (e.g., DNEA) is likely to be overestimated on test problems without irregularity. In addition, the relation between synthetic test problems and real-world problems has not been discussed. The idea of designing a Polygon problem based on a real-world map is presented in [63]. However, this does not mean that such a Polygon problem is an actual real-world problem.

What to learn from MSOPs: Some construction methods for multi-modal single-objective test problems are available, such as the software framework proposed in [64], the construction method for various problems [65], and Ahrari and Deb's method [66]. Borrowing ideas from such sophisticated construction methods is a promising way to address the above-mentioned issues of multi-modal multi-objective test problems. In [64], Rönkkönen et al. present eight desirable properties for multi-modal single-objective problem generators, such as scalability in D, control of the number of global and local optima, and regular and irregular distributions of optima. These eight properties can be a useful guideline for designing multi-modal multi-objective problem generators.

V. PERFORMANCE INDICATORS FOR MMEAS

Performance indicators play an important role in quantitatively evaluating the performance of MOEAs as well as MMEAs. Since performance indicators for MOEAs consider only the distribution of objective vectors (e.g., the hypervolume, GD, and IGD indicators [38], [47]), they cannot be used to assess the ability of MMEAs to find multiple equivalent Pareto optimal solutions. For this reason, some indicators have been specially designed for MMEAs. Performance indicators for MMEAs can be classified into two categories: simple extensions of existing performance indicators for MOEAs and specific indicators based on the distributions of solutions. IGDX [4], [44] is a representative example of the first approach. The IGD and IGDX indicators are given as follows:

IGD(A) = (1 / |A∗|) Σ_{z∈A∗} min_{x∈A} { ED(f(x), f(z)) },   (3)
IGDX(A) = (1 / |A∗|) Σ_{z∈A∗} min_{x∈A} { ED(x, z) },   (4)

where A is a set of solutions obtained by an MMEA, A∗ is a set of reference solutions in the Pareto optimal solution set, and ED(x1, x2) denotes the Euclidean distance between x1 and x2.
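Eqs. (3) and (4) translate directly into code. The sketch below is our own illustration, with each solution represented as a pair of a decision vector x and an objective vector f.

import math
from collections import namedtuple

Solution = namedtuple("Solution", ["x", "f"])

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def igd(A, A_star):
    # Eq. (3): average, over reference points, of the nearest objective vector in A.
    return sum(min(euclidean(a.f, z.f) for a in A) for z in A_star) / len(A_star)

def igdx(A, A_star):
    # Eq. (4): the same averaging, but measured in the solution space.
    return sum(min(euclidean(a.x, z.x) for a in A) for z in A_star) / len(A_star)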
While A with a small IGD value is a good approximation of the Pareto front, A with a small IGDX value approximates the Pareto optimal solutions well. Other indicators in the first category include GDX [4], the Hausdorff distance indicator [67] in the solution space [4], CR [21], and PSP [21]. GDX is a GD indicator in the solution space similar to IGDX. CR is an alternative version of the maximum spread [38] to measure the spread of A. PSP is a combination of IGDX and CR. Performance indicators in the second category include the mean of the pairwise distance between two solutions [20], CS [16], SPS [16], the Solow-Polasky diversity measure [39] used in [22], [40], and PSV [57]. CS is the number of Pareto optimal solution subsets covered by at least one individual. SPS is the standard deviation of the number of solutions close to each Pareto optimal solution subset. PSV is the percentage of the volume of A in the volume of A∗ in the solution space.

TABLE III: Properties of performance indicators for MMEAs (convergence to Pareto optimal solution subsets, diversity, uniformity, spread, the use of reference solution sets, and possibility to compare solution sets with different sizes). Indicators: GDX [4], IGDX [4], [44], Hausdorff distance [4], CR [21], PSP [21], Pairwise distance [20], CS [16], SPS [16], Solow-Polasky [39], and PSV [57].

1) Open issues: Table III shows the properties of performance indicators for MMEAs reviewed in this section, where the properties are assessed based on the description of each indicator. While the properties of the performance indicators for MOEAs have been examined (e.g., [38], [67]), those for MMEAs have not been well analyzed. Performance indicators for MMEAs should be able to evaluate the three abilities (1)–(3) explained in Section III.

Although IGDX is frequently used, it should be noted that IGDX does not evaluate the distribution of solutions in the objective space. Fig. 4 shows the distribution of two solution sets A1 and A2 for SYM-PART1 in the solution and objective spaces, where |A1| and |A2| are 27. While the solutions in A1 are evenly distributed on one of the nine Pareto optimal solution subsets, the solutions in A2 are evenly distributed on all of them. Although A1 has 27 objective vectors that cover the Pareto front, A2 has only 3 equivalent objective vectors. The IGDX and IGD values of A1 and A2 are as follows: IGDX(A1) = 15.92, IGDX(A2) = 0.25, IGD(A1) = 0.06, and IGD(A2) = 0.81. We used 5 000 Pareto optimal solutions for A∗. Although A2 has a worse distribution in the objective space than A1, IGDX(A2) is significantly better than IGDX(A1). As demonstrated here, IGDX can evaluate the abilities (1) and (3) but cannot evaluate the ability (2) to find diverse solutions in the objective space. Since the other indicators in Table III do not take into account the distribution of objective vectors, similar to IGDX, they are likely to have the same undesirable property. For a fair performance comparison, it is desirable to use the indicators

Fig. 4: Comparison of solution sets A1 and A2 for SYM-PART1: (a) A1 in the solution space, (b) A2 in the solution space, (c) A1 in the objective space, and (d) A2 in the objective space.
Since the archive size in modern multi-modal single- objective optimizers is unbounded in order to store a number of local optima [10], most indicators in this field can handle solution sets with different sizes (e.g., the peak ratio and the success rate [48]). For the same reason, it is desirable that indicators for MMEAs evaluate solution sets with different sizes in a fair manner. However, it is difficult to directly use indicators for multi-modal single-objective optimizers to evaluate MMEAs. VI. CONCLUSION The contributions of this paper are threefold. The first contribution is that we reviewed studies in this field in terms of definitions of MMOPs, MMEAs, test problems, and perfor- mance indicators. It was difficult to survey the existing studies of MMOPs for the reasons described in Section I. Our review helps to elucidate the current progress on evolutionary multi- modal multi-objective optimization. The second contribution is that we clarified open issues in this field. In contrast to multi-modal single-objective optimization, multi-modal multi- objective optimization has not received much attention despite its practical importance. Thus, some critical issues remain. The third contribution is that we pointed out an issue as- sociated with performance indicators for MMEAs. Reliable performance indicators are necessary for the advancement of MMEAs. We hope that this paper will encourage researchers to work in this research area, which is not well explored. ACKNOWLEDGMENT This work was supported by the Program for Guang- dong Introducing Innovative and Enterpreneurial Teams (Grant No. 2017ZT07X386), Shenzhen Peacock Plan (Grant No. KQTD2016112514355531), the Science and Technol- ogy Innovation Committee Foundation of Shenzhen (Grant No. ZDSYS201703031748284), the Program for Univer- sity Key Laboratory of Guangdong Province (Grant No. 2017KSYS008), and National Natural Science Foundation of China (Grant No. 61876075). REFERENCES [1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons, 2001. [2] K. Miettinen, Nonlinear Multiobjective Optimization. Springer, 1998. [3] M. Sebag, N. Tarrisson, O. Teytaud, J. Lef`evre, and S. Baillet, “A Multi-Objective Multi-Modal Optimization Approach for Mining Stable Spatio-Temporal Patterns,” in IJCAI, 2005, pp. 859–864. [4] O. Sch¨utze, M. Vasile, and C. A. C. Coello, “Computing the Set of Epsilon-Efficient Solutions in Multiobjective Space Mission Design,” JACIC, vol. 8, no. 3, pp. 53–70, 2011. 8 [5] T. Hiroyasu, S. Nakayama, and M. Miki, “Comparison study of SPEA2+, SPEA2, and NSGA-II in diesel engine emissions and fuel economy problem,” in IEEE CEC, 2005, pp. 236–242. [6] M. Preuss, C. Kausch, C. Bouvy, and F. Henrich, “Decision Space Diversity Can Be Essential for Solving Multiobjective Real-World Problems,” in MCDM, 2008, pp. 367–377. [7] F. Kudo, T. Yoshikawa, and T. Furuhashi, “A study on analysis of design variables in Pareto solutions for conceptual design optimization problem of hybrid rocket engine,” in IEEE CEC, 2011, pp. 2558–2562. [8] J. Togelius, M. Preuss, and G. N. Yannakakis, “Towards multiobjective procedural map generation,” in PCGames, 2010. [9] K. Deb and S. Tiwari, “Omni-optimizer: A generic evolutionary algo- rithm for single and multi-objective optimization,” EJOR, vol. 185, no. 3, pp. 1062–1087, 2008. [10] X. Li, M. G. Epitropakis, K. Deb, and A. P. Engelbrecht, “Seeking Multiple Solutions: An Updated Survey on Niching Methods and Their Applications,” IEEE TEVC, vol. 21, no. 4, pp. 
[11] M. Preuss, B. Naujoks, and G. Rudolph, "Pareto Set and EMOA Behavior for Simple Multimodal Multiobjective Functions," in PPSN, 2006, pp. 513–522.
[12] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE TEVC, vol. 6, no. 2, pp. 182–197, 2002.
[13] E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: Improving the Strength Pareto Evolutionary Algorithm," ETHZ, Tech. Rep., 2001.
[14] K. Deb and S. Tiwari, "Omni-optimizer: A Procedure for Single and Multi-objective Optimization," in EMO, 2005, pp. 47–61.
[15] M. Kim, T. Hiroyasu, M. Miki, and S. Watanabe, "SPEA2+: Improving the Performance of the Strength Pareto Evolutionary Algorithm 2," in PPSN, 2004, pp. 742–751.
[16] G. Rudolph, B. Naujoks, and M. Preuss, "Capabilities of EMOA to Detect and Preserve Equivalent Pareto Subsets," in EMO, 2007, pp. 36–50.
[17] G. Rudolph and M. Preuss, "A multiobjective approach for finding equivalent inverse images of pareto-optimal objective vectors," in MCDM, 2009, pp. 74–79.
[18] G. P. Coelho and F. J. V. Zuben, "omni-aiNet: An Immune-Inspired Approach for Omni Optimization," in ICARIS, 2006, pp. 294–308.
[19] E. M. Zechman, M. H. G., and M. E. Shafiee, "An evolutionary algorithm approach to generate distinct sets of non-dominated solutions for wicked problems," Eng. Appl. of AI, vol. 26, no. 5-6, pp. 1442–1457, 2013.
[20] O. M. Shir, M. Preuss, B. Naujoks, and M. T. M. Emmerich, "Enhancing Decision Space Diversity in Evolutionary Multiobjective Algorithms," in EMO, 2009, pp. 95–109.
[21] C. Yue, B. Qu, and J. Liang, "A Multi-objective Particle Swarm Optimizer Using Ring Topology for Solving Multimodal Multi-objective Problems," IEEE TEVC, 2018 (in press).
[22] T. Ulrich, J. Bader, and L. Thiele, "Defining and Optimizing Indicator-Based Diversity Measures in Multiobjective Search," in PPSN, 2010, pp. 707–717.
[23] Y. Liu, H. Ishibuchi, Y. Nojima, N. Masuyama, and K. Shang, "A Double-Niched Evolutionary Algorithm and Its Behavior on Polygon-Based Problems," in PPSN, 2018, pp. 262–273.
[24] J. J. Liang, C. T. Yue, and B. Y. Qu, "Multimodal multi-objective optimization: A preliminary study," in IEEE CEC, 2016, pp. 2454–2461.
[25] O. Kramer and H. Danielsiek, "DBSCAN-based multi-objective niching to approximate equivalent pareto-subsets," in GECCO, 2010, pp. 503–510.
[26] M. Ester, H. Kriegel, J. Sander, and X. Xu, "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise," in KDD, 1996, pp. 226–231.
[27] O. Kramer and P. Koch, "Rake Selection: A Novel Evolutionary Multi-Objective Optimization Algorithm," in KI, 2009, pp. 177–184.
[28] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints," IEEE TEVC, vol. 18, no. 4, pp. 577–601, 2014.
[29] V. Krmicek and M. Sebag, "Functional Brain Imaging with Multi-objective Multi-modal Evolutionary Optimization," in PPSN, 2006, pp. 382–391.
[30] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining Convergence and Diversity in Evolutionary Multiobjective Optimization," Evol. Comput., vol. 10, no. 3, pp. 263–282, 2002.
[31] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evol. Comput., vol. 9, no. 2, pp. 159–195, 2001.
[32] D. Dasgupta, S. Yu, and F. Niño, "Recent Advances in Artificial Immune Systems: Models and Applications," Appl. Soft Comput., vol. 11, no. 2, pp. 1574–1587, 2011.
[33] G. P. Coelho and F. J. V. Zuben, "A Concentration-Based Artificial Immune Network for Multi-objective Optimization," in EMO, 2011, pp. 343–357.
[34] Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE TEVC, vol. 11, no. 6, pp. 712–731, 2007.
[35] C. Hu and H. Ishibuchi, "Incorporation of a decision space diversity maintenance mechanism into MOEA/D for multi-modal multi-objective optimization," in GECCO (Companion), 2018, pp. 1898–1901.
[36] R. Tanabe and H. Ishibuchi, "A Decomposition-Based Evolutionary Algorithm for Multi-modal Multi-objective Optimization," in PPSN, 2018, pp. 249–261.
[37] E. Zitzler, L. Thiele, and J. Bader, "On Set-Based Multiobjective Optimization," IEEE TEVC, vol. 14, no. 1, pp. 58–79, 2010.
[38] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. da Fonseca, "Performance assessment of multiobjective optimizers: an analysis and review," IEEE TEVC, vol. 7, no. 2, pp. 117–132, 2003.
[39] A. R. Solow and S. Polasky, "Measuring biological diversity," Environ. Ecol. Stat., vol. 1, no. 2, pp. 95–103, 1994.
[40] H. Ishibuchi, M. Yamane, N. Akedo, and Y. Nojima, "Two-objective solution set optimization to maximize hypervolume and decision space diversity in multiobjective optimization," in SCIS, 2012, pp. 1871–1876.
[41] A. Toffolo and E. Benini, "Genetic Diversity as an Objective in Multi-Objective Evolutionary Algorithms," Evol. Comput., vol. 11, no. 2, pp. 151–167, 2003.
[42] T. Robič and B. Filipič, "DEMO: differential evolution for multiobjective optimization," in EMO, 2005, pp. 520–533.
[43] T. Ulrich, J. Bader, and E. Zitzler, "Integrating decision space diversity into hypervolume-based multiobjective search," in GECCO, 2010, pp. 455–462.
[44] A. Zhou, Q. Zhang, and Y. Jin, "Approximating the Set of Pareto-Optimal Solutions in Both the Decision and Objective Spaces by an Estimation of Distribution Algorithm," IEEE TEVC, vol. 13, no. 5, pp. 1167–1189, 2009.
[45] H. Xia, J. Zhuang, and D. Yu, "Combining Crowding Estimation in Objective and Decision Space With Multiple Selection and Search Strategies for Multi-Objective Evolutionary Optimization," IEEE Trans. Cyber., vol. 44, no. 3, pp. 378–393, 2014.
[46] J. C. Castillo, C. Segura, A. H. Aguirre, G. Miranda, and C. León, "A multi-objective decomposition-based evolutionary algorithm with enhanced variable space diversity control," in GECCO (Companion), 2017, pp. 1565–1571.
[47] C. A. C. Coello and M. R. Sierra, "A Study of the Parallelization of a Coevolutionary Multi-objective Evolutionary Algorithm," in MICAI, 2004, pp. 688–697.
[48] X. Li, A. Engelbrecht, and M. G. Epitropakis, "Benchmark Functions for CEC'2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization," RMIT Univ., Tech. Rep., 2013.
[49] M. G. Epitropakis, X. Li, and E. K. Burke, "A dynamic archive niching differential evolution algorithm for multimodal optimization," in IEEE CEC, 2013, pp. 79–86.
[50] A. Ahrari, K. Deb, and M. Preuss, "Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations," Evol. Comput., vol. 25, no. 3, pp. 439–471, 2017.
[51] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable Test Problems for Evolutionary Multi-Objective Optimization," in Evolutionary Multiobjective Optimization. Theoretical Advances and Applications. Springer, 2005, pp. 105–145.
[52] M. T. M. Emmerich and A. H. Deutz, "Test problems based on lamé superspheres," in EMO, 2006, pp. 922–936.
[53] N. Beume, B. Naujoks, and M. T. M. Emmerich, "SMS-EMOA: multiobjective selection based on dominated hypervolume," EJOR, vol. 181, no. 3, pp. 1653–1669, 2007.
[54] H. Ishibuchi, Y. Hitotsuyanagi, N. Tsukamoto, and Y. Nojima, "Many-Objective Test Problems to Visually Examine the Behavior of Multiobjective Evolution in a Decision Space," in PPSN, 2010, pp. 91–100.
[55] Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari, "Multiobjective optimization Test Instances for the CEC 2009 Special Session and Competition," Univ. of Essex, Tech. Rep., 2008.
[56] M. Köppen and K. Yoshida, "Substitute Distance Assignments in NSGA-II for Handling Many-objective Optimization Problems," in EMO, 2007, pp. 727–741.
[57] B. Zhang, K. Shafi, and H. A. Abbass, "On Benchmark Problems and Metrics for Decision Space Performance Analysis in Multi-Objective Optimization," IJCIA, vol. 16, no. 1, pp. 1–18, 2017.
[58] E. Zitzler, K. Deb, and L. Thiele, "Comparison of Multiobjective Evolutionary Algorithms: Empirical Results," Evol. Comput., vol. 8, no. 2, pp. 173–195, 2000. [Online]. Available: http://dx.doi.org/10.1162/106365600568202
[59] S. Huband, P. Hingston, L. Barone, and R. L. While, "A review of multiobjective test problems and a scalable test problem toolkit," IEEE TEVC, vol. 10, no. 5, pp. 477–506, 2006.
[60] F. Kursawe, "A Variant of Evolution Strategies for Vector Optimization," in PPSN, 1990, pp. 193–197.
[61] V. L. Huang, A. K. Qin, K. Deb, E. Zitzler, P. N. Suganthan, J. J. Liang, M. Preuss, and S. Huband, "Problem Definitions for Performance Assessment on Multi-objective Optimization Algorithms," NTU, Tech. Rep., 2007.
[62] H. Ishibuchi, M. Yamane, N. Akedo, and Y. Nojima, "Many-objective and many-variable test problems for visual examination of multiobjective search," in IEEE CEC, 2013, pp. 1491–1498.
[63] H. Ishibuchi, N. Akedo, and Y. Nojima, "A many-objective test problem for visually examining diversity maintenance behavior in a decision space," in GECCO, 2011, pp. 649–656.
[64] J. Rönkkönen, X. Li, V. Kyrki, and J. Lampinen, "A framework for generating tunable test functions for multimodal optimization," Soft Comput., vol. 15, no. 9, pp. 1689–1706, 2011.
[65] B. Y. Qu, J. J. Liang, Z. Y. Wang, Q. Chen, and P. N. Suganthan, "Novel benchmark functions for continuous multimodal optimization with comparative results," SWEVO, vol. 26, pp. 23–34, 2016.
[66] A. Ahrari and K. Deb, "A Novel Class of Test Problems for Performance Evaluation of Niching Methods," IEEE TEVC, vol. 22, no. 6, pp. 909–919, 2018.
[67] O. Schütze, X. Esquivel, A. Lara, and C. A. C. Coello, "Using the Averaged Hausdorff Distance as a Performance Measure in Evolutionary Multiobjective Optimization," IEEE TEVC, vol. 16, no. 4, pp. 504–522, 2012.
[68] J. Mwaura, A. P. Engelbrecht, and F. V. Nepocumeno, "Performance measures for niching algorithms," in IEEE CEC, 2016, pp. 4775–4784.
ai_researcher
4
Exploring_the_Potential_of_Large_Language_Models_in_Graph_Generation.pdf
arXiv:2403.04780v2 [cs.CL] 13 Mar 2024

MuseGraph: Graph-oriented Instruction Tuning of Large Language Models for Generic Graph Mining

Yanchao Tan, College of Computer and Data Science, Fuzhou University, China ([email protected])
Hang Lv, College of Computer and Data Science, Fuzhou University, China ([email protected])
Xinyi Huang, College of Computer and Data Science, Fuzhou University, China ([email protected])
Jiawei Zhang, College of Computer and Data Science, Fuzhou University, China ([email protected])
Shiping Wang, College of Computer and Data Science, Fuzhou University, China ([email protected])
Carl Yang*, Department of Computer Science, Emory University, USA ([email protected])

*Carl Yang is the corresponding author.

ABSTRACT

Graphs with abundant attributes are essential in modeling interconnected entities and improving predictions in various real-world applications. Traditional Graph Neural Networks (GNNs), which are commonly used for modeling attributed graphs, need to be re-trained every time they are applied to different graph tasks and datasets. Although the emergence of Large Language Models (LLMs) has introduced a new paradigm in natural language processing, the generative potential of LLMs in graph mining remains largely under-explored. To this end, we propose a novel framework MuseGraph, which seamlessly integrates the strengths of GNNs and LLMs and facilitates a more effective and generic approach for graph mining across different tasks and datasets. Specifically, we first introduce a compact graph description via the proposed adaptive input generation to encapsulate key information from the graph under the constraints of language token limitations. Then, we propose a diverse instruction generation mechanism, which distills the reasoning capabilities from LLMs (e.g., GPT-4) to create task-specific Chain-of-Thought-based instruction packages for different graph tasks. Finally, we propose a graph-aware instruction tuning with a dynamic instruction package allocation strategy across tasks and datasets, ensuring the effectiveness and generalization of the training process. Our experimental results demonstrate significant improvements in different graph tasks, showcasing the potential of our MuseGraph in enhancing the accuracy of graph-oriented downstream tasks while keeping the generation powers of LLMs.

KEYWORDS

Generic graph mining, Large language models, Instruction tuning

1 INTRODUCTION

Graphs with abundant attributes are widely used to model interconnected entities, and they are pivotal for enhancing downstream predictions across various real-world applications. The complexity and diversity of tasks across different datasets necessitate advanced models capable of harnessing graph information effectively.

Recently, Graph Neural Networks (GNNs) have become widely used in modeling attributed graphs [9, 16, 60]. However, they need to be re-trained whenever applied to different graph tasks and datasets. Inspired by the great success of Large Language Models (LLMs), the combination of GNNs and LLMs aims to enhance the processing of text-attributed graphs, which can be categorized into two main approaches. The first line tries to train GNNs with LLM-enhanced features (e.g., OFA [34], All-in-One [50], GIMLET [77], ExpAsFeat [18], LLMForGraph [5] and PRODIGY [22]). The second line tunes LLMs for graph applications (e.g., InstructGLM [71], NLGraph [57], LMMoL [45], and GPT4GRAPH [15]). Despite the promising direction of integrating LLMs with GNNs, the full exploration of LLMs' generative capabilities has yet to be harnessed for a wide array of graph-oriented tasks and datasets.

Figure 1: An illustrative example of the need for a generic graph model that can be directly applied to various graph tasks and datasets.

Figure 1 illustrates the necessity for a generic graph framework, which shows the multitude of task and dataset combinations possible. Such a generic graph model can not only capture the semantic and structural information of graphs for tasks like node classification and link prediction but also retain the rich generative power inherent in LLMs for tasks like graph-to-text applications. Such a model can also be adept across a spectrum of datasets (e.g., from MIMIC's clinical data to arXiv's academic texts), without sacrificing the generative abilities that are central to language models. However, three obstacles stand in the way of achieving this goal.

Challenge I: How to extract informative graph descriptions under the limitation of input tokens? To harness the full potential of Large Language Models for generic graph mining, an essential hurdle is translating the graph with abundant semantics and complex structures into a format that LLMs can process effectively, especially under strict input token limitations. Without a compact graph description that extracts key information from graphs within the LLMs' token limitation, the model's ability to grasp and utilize the graph's semantic and structural richness is severely limited, potentially leading to suboptimal performance in graph applications.

Challenge II: How to automatically generate diverse instructions? Creating a diverse set of high-quality instructions for fine-tuning LLMs is essential for generic graph mining tasks. However, these instructions are often neither readily available nor economically feasible to produce manually. While advanced language models like GPT-4 possess the capability to generate diverse instructions, it remains unknown how to effectively leverage these generative abilities to produce relevant and task-specific instructions.

Challenge III: How to properly allocate instructions for graph-oriented instruction tuning? The effectiveness of instruction tuning for LLMs is significantly influenced by the selection of instructions, which enables the LLMs to comprehend and execute generic graph mining tasks effectively. However, maintaining broad task and dataset coverage while averting catastrophic forgetting poses a significant challenge, especially when dealing with a variety of tasks and datasets in one generic graph model.

To tackle these challenges, we propose Graph-oriented Instruction Tuning of Large Language Models for Generic Graph Mining (MuseGraph), which consists of three pivotal steps: (i) Development of Compact Graph Descriptions, where we introduce a novel "node energy" metric to textualize graphs with essential semantic and structural details under limited tokens; (ii) Generation of Diverse Instructions, which invokes the advanced generative abilities of GPT-4 to create CoT-based instruction packages tailored for various graph tasks, thus enriching LLMs' capabilities in understanding and analyzing graph data without the expense of manual instruction crafting; (iii) Graph-aware Instruction Tuning, which introduces a dynamic instruction package allocation strategy based on the specific needs of each graph mining task, ensuring comprehensive and effective LLM tuning. Our overall contributions in this work are summarized as follows:

• Formulation of generic graph mining. We establish a generic graph framework that effectively transforms graph structures into LLM-friendly formats while preserving the generative capabilities of LLMs necessary for diverse graph-oriented tasks.
• Effective Model Designs. We design and implement a set of models and mechanisms including the development of compact graph descriptions, automatically generating diverse, task-specific instructions, and graph-aware instruction tuning, targeting a generic graph model across different tasks and datasets.
• Extensive Experiments Across Graph Tasks and Datasets. We conduct thorough experiments to validate our approach, demonstrating its superiority over existing methods in a variety of graph tasks and datasets, highlighting its effectiveness and interpretability in enhancing LLMs for graph mining.

2 RELATED WORK

2.1 Semantic-rich Graph Representation Learning

Graph representation learning has emerged as a key technique for the complex structures of networks with abundant attributes [4, 17, 59, 66, 78]. Among existing node embedding methods, many have analyzed and utilized the great promise of random walks (RWs) in capturing the topological structures of graphs [11, 14, 23, 43]. However, the above methods ignore the abundant attribute information surrounding the nodes and edges [36]. Recently, Graph neural networks (GNNs) learn node representations through aggregating information from neighboring nodes on graphs [16, 28, 70]. However, most existing GNNs are established in a supervised learning setting, which requires abundant task-specific labeled data that may not be available in real-world applications [8, 80], and the embeddings that they learned are not generalizable across different downstream tasks [73], so the models have to be re-trained whenever applied to different graph tasks and datasets.

Table 1: A comparison between MuseGraph and related methods along four properties (GNN*, LLM*, Cross-task, and Cross-dataset), where GNN* denotes a method with GNN training and LLM* denotes one with LLM fine-tuning. The partial checkmark ✓∼ for some methods means that they can be extended to achieve the targets, but this is not their main research focus. Methods compared: ExpAsFeat [18], PRODIGY [22], LLMForGraph [5], All-in-One [50], OFA [34], NLGraph [57], GPT4GRAPH [15], GraphGPT [52], InstructGLM [71], and MuseGraph.

Although some studies tried to reduce the labeling effort by pre-training an expressive GNN model on unlabeled data with self-supervision methods (e.g., contrastive learning) [21, 24, 79], their performances in specific downstream tasks still relied much on the properly chosen self-supervision tasks and attribute encoders [49, 76]. Therefore, there is still a lack of a uniform framework for generic graph mining across different tasks and datasets.

2.2 Leveraging LLMs for Graph Mining

Motivated by the impressive strides in Large Language Models (LLMs), combining Graph Neural Networks (GNNs) and LLMs is creating substantial progress in handling complex text-attributed graphs [5, 6, 32]. Existing methods leveraging this combination mainly fall into two broad categories (shown in Table 1).

The first category is training GNNs with LLM-enhanced features (labeled GNN* in Table 1). For example, ExpAsFeat [18] and PRODIGY [22] utilized LLMs to encode the textual information associated with nodes on the graph. LLMForGraph [5] enabled LLMs to improve text-attributed graph processing by deepening semantic understanding and leveraging extensive knowledge bases. All-in-One [50] combined NLP prompting concepts into graph downstream tasks. OFA [34] prompted LLMs to translate diverse textual attributes into uniform feature vectors across domains. These models primarily utilized LLMs to process textual content within graphs and tend to be tailored to specific domains; they often failed to be applied to cross-task and cross-dataset applications and lost the generation capability of LLMs.

The second category, denoted as LLM* in Table 1, mainly focuses on tuning LLMs for graph-related tasks. For example, InstructGLM [71], NLGraph [57], and GPT4GRAPH [15] leveraged LLMs to understand and analyze graph data through textual descriptions. GraphGPT [52], with one checkmark on GNN* and a partial checkmark on LLM*, jointly trained a task-specific GNN and one projector on top of LLMs, which performed well across different datasets. Note that the partial checkmark ✓∼ for some methods means that they can be extended to cross-dataset targets, but this is not their main research focus and they have no designed mechanism to achieve it. Compared with GNN*-based methods, LLM*-based methods have the potential to perform generic graph mining with powerful cross-task and cross-dataset generalization. However, such power remains largely under-explored. Table 1 summarizes the characteristics of different models. We can see that our MuseGraph is the most generic model among existing works.

3 THE MUSEGRAPH FRAMEWORK

3.1 Overview of Our Framework

Objective: In this paper, we aim to pursue an ideal graph model that can seamlessly integrate the strengths of GNNs and LLMs via graph-oriented instruction tuning of LLMs, through which we wish to facilitate a more effective and generic approach for graph mining across different tasks and datasets.

Overview: To achieve this goal, we propose the MuseGraph framework, which comprises three major components. Firstly, we develop a compact graph description mechanism that captures critical semantic and structural details within the constraints of language token limitations. Secondly, we generate a diverse range of instructions through the reasoning capabilities of LLMs, thus facilitating task-specific and Chain-of-Thought-based instruction packages for various graph tasks. Thirdly, we adopt a graph-aware instruction tuning, utilizing a dynamic allocation strategy for instruction packages tailored to the unique demands of each graph task. The overall model architecture is shown in Figure 2, and we elaborate on the three main components.

Figure 2: The overall framework of MuseGraph, consisting of (1) compact graph description, (2) diverse instruction generation, and (3) graph-aware instruction tuning.

3.2 Compact Graph Description

Leveraging the capabilities of Large Language Models (LLMs) for graphs presents a unique set of challenges, primarily due to LLMs' inherent limitations in directly inputting graph structures under the token limitation. Therefore, it is non-trivial to automatically extract key information from graphs under the token limitations, which requires a compact description including complex node and edge attributes along with structural details.

Inspired by common graph analysis techniques such as walks and neighbors, we propose a novel method of textualization to describe graphs via these concepts. In this way, neighbors are helpful to understand local connectivity and feature distribution [16, 28, 70], providing a granular view of node attributes, while walks offer a dynamic method to explore the graph's structure and the relationships between nodes, highlighting the diversity of connectivity and paths [23, 39]. The integration of neighbors and walks can achieve a holistic understanding of graph structures.

Given LLM token limitations and the varied contribution of neighbors and walks to node understanding, we further develop an adaptive input generation mechanism to ensure the compactness of the description. Specifically, we first design a "node energy" metric H(v_i), assessing node information from two perspectives: token count in node attributes and node degree count. This metric enables us to effectively filter and select neighbor nodes and walk nodes, prioritizing those that are abundant in semantic information and possess a significant number of neighboring nodes, thus enhancing

Algorithm 1: Adaptive Input Generation
Input: Attributed graph G with N nodes, token count set T, node energy set H, target node v*, token limitation L(v*)
Output: Key neighbor set N(v*), key walk set W(v*)
1: Initialize N(v*), W(v*) as empty
2: Select v_i ∈ G(v*) with H(v_i) ≥ H(v*) and L(v*) ≥ T(v_i) for N(v*), where G(v*) is v*'s one-hop neighbors
3: Expand W(v*) starting from v* based on G within the L(v*) and H(v*) constraints

Figure 4: A process showing how to distill capabilities from GPT-4 via Chain-of-Thought with node classification.

for v_2. The H metric thus finely balances neighbor and walk inclusion for each node, marrying information-rich and token-efficient characteristics. The process culminates in the textualization of each node's key information, producing a tailored and compact graph description as depicted in the upper right of Figure 2. Note that a graph-related task can involve multiple nodes. In this case, we further allocate each node to satisfy the token limit requirement by calculating the ratio of all node energies using the softmax function.

3.3 Diverse Instruction Generation

With the compact graph descriptions of each node, another necessary step in fine-tuning LLMs for generic graph mining lies in crafting diverse and high-quality instructions [25, 41]. However, such well-constructed instructions are not readily available in every situation, and manual construction is often very costly.

To address these problems, we propose to distill generation capabilities from advanced LLMs [48] (e.g., GPT-4 [1] with over 200 billion parameters) for graph-related tasks. Our approach, inspired by the Chain-of-Thought (CoT) processing methodology [63], prompts GPT-4 via a flexible template based on our compact graph descriptions for different tasks, and then constructs task-specific CoT-based instruction packages. Different from the existing methods that leverage CoT in the prompting stage [63, 67], we directly construct CoT-based instruction packages, which can leverage their diversity and reasoning ability to facilitate instruction tuning with graph data.

Specifically, we first design diverse CoT templates for targeted popular graph tasks, such as node classification, link prediction, and graph-to-text (shown in Appendix A.3 Table 7). Then, we prompt GPT-4 to generate a small number of CoT-based instructions across diverse tasks. This approach distills GPT-4's vast knowledge base to augment the reasoning and analytical abilities of our MuseGraph, as depicted in Figure 4.

To optimize the cost-effectiveness of querying GPT-4, we introduce the CoT-based instruction package. For every set of 1,000 standard instructions, we integrate 100 CoT-based instructions tailored to the same graph task. This approach not only proves to be economical but also broadens the diversity and flexibility of instructions, accommodating an array of graph-related tasks.
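To make Algorithm 1 concrete, a minimal Python sketch is given below. The excerpt does not specify the exact form of the node energy H(v), so a simple sum of the attribute token count and the node degree is assumed here, and the walk is grown greedily by energy; both are illustrative assumptions, not the paper's definitions.

import networkx as nx

def node_energy(G, v, tokens):
    return tokens[v] + G.degree(v)  # assumed H(v): token count plus degree

def adaptive_input(G, v_star, tokens, limit, walk_len=4):
    H = {v: node_energy(G, v, tokens) for v in G}
    budget, neighbors = limit, []
    for u in sorted(G.neighbors(v_star), key=H.get, reverse=True):
        if H[u] >= H[v_star] and tokens[u] <= budget:  # line 2 of Algorithm 1
            neighbors.append(u)
            budget -= tokens[u]
    walk, cur = [v_star], v_star                        # line 3: expand a key walk
    while len(walk) < walk_len:
        candidates = [w for w in G.neighbors(cur)
                      if w not in walk and tokens[w] <= budget]
        if not candidates:
            break
        cur = max(candidates, key=H.get)
        walk.append(cur)
        budget -= tokens[cur]
    return neighbors, walk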
The second line tunes LLMs for graph applications (e.g., InstructGLM [71], NLGraph [57], LMMoL [45], and GPT4GRAPH [15]). Despite the promising direction of integrating LLMs with GNNs, the full exploration of LLMs’ generative capabilities has yet to be harnessed for a wide array of graph-oriented tasks and datasets. Figure 1 illustrates the necessity for a generic graph framework, which shows the multitude of task and dataset combinations possi- ble. Such a generic graph model can not only capture the semantic and structural information of graphs for tasks like node classifica- tion and link prediction but also retain the rich generative power inherent in LLMs for tasks like graph-to-text applications. Such a model can also be adept across a spectrum of datasets (e.g., from MIMIC’s clinical data to arXiv’s academic texts), without sacrific- ing the generative abilities that are central to language models. However, three obstacles stand in the way of achieving this goal. Challenge I: How to extract informative graph descriptions under the limitation of input token? To harness the full potential of Large Language Models for generic graph mining, an essential hurdle is translating the graph with abundant semantics and complex struc- tures into a format that LLMs can process effectively, especially under strict input token limitations. Without a compact graph de- scription that extracts key information from graphs within the LLMs’ token limitation, the model’s ability to grasp and utilize the graph’s semantic and structural richness is severely limited, poten- tially leading to suboptimal performance in graph applications. 1 Toy???Node classificationLink predictionGraph-to-text………?Different tasksDifferent datasets Conference’17, July 2017, Washington, DC, USA Yanchao Tan, Hang Lv, Xinyi Huang, Jiawei Zhang, Shiping Wang, and Carl Yang Challenge II: How to automatically generate diverse instructions? Creating a diverse set of high-quality instructions for fine-tuning LLMs is essential for generic graph mining tasks. However, these instructions are often neither readily available nor economically feasible to produce manually. While advanced language models like GPT-4 possess the capability to generate diverse instructions, it remains unknown how to effectively leverage these generative abilities to produce relevant and task-specific instructions. Challenge III: How to properly allocate instructions for graph- oriented instruction tuning? The effectiveness of instruction tuning for LLMs is significantly influenced by the selection of instruc- tions, which enables the LLMs to comprehend and execute generic graph mining tasks effectively. However, maintaining broad task and dataset coverage while averting catastrophic forgetting poses a significant challenge, especially when dealing with a variety of tasks and datasets in one generic graph model. 
To tackle these challenges, we propose Graph-oriented Instruc- tion Tuning of Large Language Models for Generic Graph Mining (MuseGraph), which consists of three pivotal steps: (i) Development of Compact Graph Descriptions, where we introduce a novel “node energy” metric to textualize graphs with essential semantic and structural details under limited tokens; (ii) Generation of Diverse Instructions, which invokes the advanced generative abilities of GPT-4 to create CoT-based instruction packages tailored for vari- ous graph tasks, thus enriching LLMs’ capabilities in understanding and analyzing graph data without the expense of manual instruc- tion crafting; (iii) Graph-aware Instruction Tuning, which introduces a dynamic instruction package allocation strategy based on the spe- cific needs of each graph mining task, ensuring comprehensive and effective LLM tuning. Our overall contributions in this work are summarized as follows: • Formulation of generic graph mining. We establish a generic graph framework that effectively transforms graph structures into LLM- friendly formats while preserving the generative capabilities of necessary for diverse graph-oriented tasks. • Effective Model Designs. We design and implement a set of models and mechanisms including the development of compact graph descriptions, automatically generating diverse, task-specific in- structions, and graph-aware instruction tuning, targeting a generic graph model across different tasks and datasets. • Extensive Experiments Across Graph Tasks and Datasets. We con- duct thorough experiments to validate our approach, demon- strating its superiority over existing methods in a variety of graph tasks and datasets, highlighting its effectiveness and in- terpretability in enhancing LLMs for graph mining. 2 RELATED WORK 2.1 Semantic-rich Graph Representation Learning Graph representation learning has emerged as a key technique for the complex structures of networks with abundant attributes [4, 17, 59, 66, 78]. Among existing node embedding methods, many have analyzed and utilized the great promise of random walks (RWs) in capturing the topological structures of graphs [11, 14, 23, 43]. However, the above methods ignore the abundant attribute informa- tion surrounding the nodes and edges [36]. Recently, Graph neural networks (GNNs) learn node representations through aggregating 2 Table 1: A comparison between MuseGraph and related meth- ods, where GNN* denotes the method with GNN training and LLM* denotes the one with LLM fine-tuning. The partial checkmark ✓∼ for some methods means that they can be extended to achieve the targets, but this is not their main research focus. GNN* ✓ ✓ ✓ ✓ ✓ ✓ ExpAsFeat [18] PRODIGY [22] LLMForGraph [5] All-in-One [50] OFA [34] NLGraph [57] GPT4GRAPH [15] GraphGPT [15] InstructGLM [71] MuseGraph LLM* Cross-task Cross-dataset ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓∼ ✓ ✓ ✓ ✓∼ ✓∼ ✓ ✓∼ ✓ information from neighboring nodes on graphs [16, 28, 70]. How- ever, most existing GNNs are established in a supervised learning setting, which requires abundant task-specific labeled data that may not be available in real-world applications [8, 80], and the embeddings that they learned are not generalizable across differ- ent downstream tasks [73], which has to be re-trained whenever applied to different graph tasks and datasets. 
Although some studies tried to reduce the labeling effort by pre-training an expressive GNN model on unlabeled data with self- supervision methods (e.g., contrastive learning) [21, 24, 79], their performances in specific downstream tasks still relied much on the properly chosen self-supervision tasks and attribute encoders [49, 76]. Therefore, there is still a lack of a uniform framework for generic graph mining across different tasks and datasets. 2.2 Leveraging LLMs for Graph Mining Motivated by the impressive strides in Large Language Models (LLMs), combining Graph Neural Networks (GNNs) and LLMs is creating substantial progress in handling complex text-attributed graphs [5, 6, 32]. Existing methods leveraging this combination mainly fall into two broad categories (shown in Table 1). The first category is training GNNs with LLM-enhanced fea- tures, labeled GNN* in Table 1). For example, ExpAsFeat [50] and PRODIGY [22] utilized LLMs to encode the textual information as- sociated with nodes on the graph. LLMForGraph [5] enabled LLMs to improve text-attributed graph processing by deepening semantic understanding and leveraging extensive knowledge bases. All-in- One [50] combined NLP prompting concepts into graph down- stream tasks. OFA [34] prompted LLMs to translate diverse textual attributes into uniform feature vectors across domains. These models primarily utilized LLMs to process textual content within graphs and tend to be tailored to specific domains, while they often failed to be applied for cross-task and cross-dataset applica- tions and lost the generation capability from LLMs. Although these models effectively focus on leveraging LLMs’ textual prowess, most of them are domain-specific graph models, which fail to perform cross-task and cross-dataset. MuseGraph: Graph-oriented Instruction Tuning of Large Language Models for Generic Graph Mining Conference’17, July 2017, Washington, DC, USA Figure 2: The overall framework of MuseGraph. The second category denoted as LLM* in Table 1, mainly focuses on tuning LLMs for graph-related tasks. For example, Instruct- GLM [71], NLGraph [57], and GPT4GRAPH [15] leveraged LLMs to understand and analyze graph data through textual descriptions. GraphGPT [52] with one checkmark on GNN* and a partial check- mark on LLM* jointly trained task-specific GNN and one projector on the top of LLMs, which performed well across different datasets. Note that, the partial checkmark ✓∼ for some methods means that they can be extended to cross-dataset targets, but this is not their main research focus and they have no design mechanism to achieve it. Compared with GNN*-based methods, LLM*-based methods have the potential to perform generic graph mining with powerful cross-task and cross-dataset. However, such power remains largely under-explored. Table 1 summarizes the characteristics of different models. We can see that our MuseGraph is the most generic model among existing works. 3 THE MUSEGRAPH FRAMEWORK 3.1 Overview of Our Framework Objective: In this paper, we aim to pursue an ideal graph that can seamlessly integrate the strengths of GNNs and LLMs via graph- oriented instruction tuning of LLMs, through which we wish to facilitate a more effective and generic approach for graph mining across different tasks and datasets. Overview: To achieve this goal, we propose MuseGraph framework, which comprises three major components. 
Firstly, we develop a compact graph description mechanism that captures critical semantic and structural details within the constraints of language token limitations. Secondly, we generate a diverse range of instructions through the reasoning capabilities of LLMs, thus facilitating task-specific and Chain-of-Thought-based instruction packages for various graph tasks. Thirdly, we adopt graph-aware instruction tuning, utilizing a dynamic allocation strategy for instruction packages tailored to the unique demands of each graph task. The overall model architecture is shown in Figure 2, and we elaborate on the three main components below.

3.2 Compact Graph Description
Leveraging the capabilities of Large Language Models (LLMs) for graphs presents a unique set of challenges, primarily due to LLMs' inherent limitations in directly inputting graph structures under the token limitation. Therefore, it is non-trivial to automatically extract key information from graphs under the token limitations, which requires a compact description including complex node and edge attributes along with structural details.
Inspired by common graph analysis techniques such as walks and neighbors, we propose a novel method of textualization to describe graphs via these concepts. In this way, neighbors are helpful for understanding local connectivity and feature distribution [16, 28, 70], providing a granular view of node attributes, while walks offer a dynamic method to explore the graph's structure and the relationships between nodes, highlighting the diversity of connectivity and paths [23, 39]. The integration of neighbors and walks can achieve a holistic understanding of graph structures.
Given LLM token limitations and the varied contribution of neighbors and walks to node understanding, we further develop an adaptive input generation mechanism to ensure the compactness of the description. Specifically, we first design a "node energy" metric, H(v_i), assessing node information from two perspectives: the token count of the node's attributes and the node's degree count. This metric enables us to effectively filter and select neighbor nodes and walk nodes, prioritizing those that are abundant in semantic information and possess a significant number of neighboring nodes, thus enhancing the expressiveness of the graph description. The calculation of node energy H(v_i) is formulated as follows:

\[ H(v_i) = T(v_i) \cdot \lceil \log(D(v_i) + 1) \rceil, \tag{1} \]

where v_i is the target node, T(v_i) is the number of node tokens processed by a tokenizer, and D(v_i) is the number of node degrees.
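To make Eq. (1) concrete, the following is a minimal sketch of the node-energy computation, assuming a networkx graph whose nodes carry a `text` attribute and a generic `tokenizer` object with an `encode` method (both stand-ins of ours, not the paper's released code; the text does not specify the logarithm base, so the natural log is assumed):

```python
import math
import networkx as nx

def node_energy(graph: nx.Graph, node, tokenizer) -> int:
    """Eq. (1): H(v) = T(v) * ceil(log(D(v) + 1)).

    T(v) is the token count of the node's textual attributes and
    D(v) is the node's degree; the log base is assumed natural.
    """
    text = graph.nodes[node].get("text", "")
    t_v = len(tokenizer.encode(text))   # T(v): attribute token count
    d_v = graph.degree[node]            # D(v): node degree
    return t_v * math.ceil(math.log(d_v + 1))
```

Under this formula, a node with no neighbors has ceil(log(1)) = 0 and therefore zero energy, so isolated nodes can never pass the energy threshold used in the selection step below.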
Based on H(v_i), our method strategically incorporates neighbors and walks, as depicted in Algorithm 1.

Algorithm 1: Adaptive Input Generation
Input: Attributed graph G with N nodes, token count set T, node energy set H, target node v*, token limitation L(v*)
Output: Key neighbor set N(v*), key walk set W(v*)
1: Initialize N(v*) and W(v*) as empty sets
2: Select v_i ∈ G(v*) with H(v_i) ≥ H(v*) and L(v*) ≥ T(v_i) for N(v*), where G(v*) denotes v*'s one-hop neighbors
3: Expand W(v*) starting from v* based on G within the L(v*) and H(v*) constraints

A neighbor v_i is chosen to describe the target v* if its H(v_i) surpasses H(v*), ensuring that the included node provides supplemental information. The process of including walks concludes when encountering a node whose H(v_i) does not meet the threshold set by the target's H(v*), thus refining the input to maximize relevance within the constraints of the token limit. To further demonstrate the adaptive input generation for neighbors and walks tailored to each node, we present an example in Figure 3.

Figure 3: An example illustrating how to extract the key information of nodes from neighbors and walks based on the node energy H(v_i) and the limitation of language tokens L(v_i).

Node v_1, with a relatively low H(v_1) value, includes a wide range of neighbors to capture more context, excluding v_4 due to its even lower H(v_4). Given the token limitations, only two walks are sampled for v_1 to maintain a compact input. Conversely, v_2, with its higher H(v_2), inherently carries more information, prompting the selection of fewer neighbors; v_1 is excluded since H(v_1) < H(v_2). This frees up tokens to detail more walks for v_2. The H metric thus finely balances neighbor and walk inclusion for each node, marrying information-rich and token-efficient characteristics. The process culminates in the textualization of each node's key information, producing a tailored and compact graph description as depicted in the upper right of Figure 2. Note that a graph-related task can involve multiple nodes. In this case, we further allocate a token budget to each node, satisfying the overall token limit by splitting it according to the softmax over all node energies.

3.3 Diverse Instruction Generation
With the compact graph descriptions of each node, another necessary step in fine-tuning LLMs for generic graph mining lies in crafting diverse and high-quality instructions [25, 41]. However, such well-constructed instructions are not readily available in every situation, and manual construction is often very costly.
To address these problems, we propose to distill generation capabilities from advanced LLMs [48] (e.g., GPT-4 [1] with over 200 billion parameters) for graph-related tasks. Our approach, inspired by the Chain-of-Thought (CoT) processing methodology [63], prompts GPT-4 via a flexible template based on our compact graph descriptions for different tasks, and then constructs task-specific CoT-based instruction packages. Different from existing methods that leverage CoT in the prompting stage [63, 67], we directly construct CoT-based instruction packages, which can leverage their diversity and reasoning ability to facilitate instruction tuning with graph data.
Specifically, we first design diverse CoT templates for targeted popular graph tasks, such as node classification, link prediction, and graph-to-text (shown in Appendix A.3, Table 7). Then, we prompt GPT-4 to generate a small number of CoT-based instructions across diverse tasks. This approach distills GPT-4's vast knowledge base to augment the reasoning and analytical abilities of our MuseGraph, as depicted in Figure 4.

Figure 4: A process showing how to distill capabilities from GPT-4 via Chain-of-Thought for node classification.

To optimize the cost-effectiveness of querying GPT-4, we introduce the CoT-based instruction package: for every set of 1,000 standard instructions, we integrate 100 CoT-based instructions tailored to the same graph task. This approach not only proves to be economical but also broadens the diversity and flexibility of instructions, accommodating an array of graph-related tasks, as sketched below.
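The 1,000:100 mixing can be pictured with the short sketch below; the helper and its arguments are our own illustrative construction, assuming the standard and CoT-based instructions for a task are already available as lists:

```python
import random

def build_instruction_package(standard, cot, ratio=0.1, seed=0):
    """Mix roughly 100 CoT-based instructions into every 1,000
    standard instructions for the same graph task (a 10:1 ratio)."""
    rng = random.Random(seed)
    n_cot = min(len(cot), max(1, round(len(standard) * ratio)))
    package = list(standard) + rng.sample(cot, n_cot)
    rng.shuffle(package)          # interleave standard and CoT items
    return package
```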
3.4 Graph-aware Instruction Tuning
The effectiveness of instruction tuning for LLMs is significantly influenced by the selection of instructions, which enables the LLMs to comprehend and execute generic graph mining tasks effectively [61, 62]. However, maintaining broad task and dataset coverage while averting catastrophic forgetting [38, 40, 56], where LLMs might regress in their generative abilities, poses a significant challenge. To this end, we propose a dynamic instruction package allocation strategy to adjust the volume of task-specific CoT-based instruction packages based on the complexities of tasks and datasets, which ensures that more complex tasks and datasets receive a proportionally larger set of instructions for detailed guidance. The calculation process unfolds in two phases. For task complexity, we count the average output tokens for each task, reflecting the task-specific demands within a dataset. For dataset complexity, we compute the total node energy H(v_i) (cf. Eq. 1) of each graph, using this metric to further fine-tune the instruction distribution. By performing a precise, context-aware allocation of instructions, we can enhance the LLMs' ability to learn and retain abundant knowledge across a diverse range of graph mining challenges.
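As an illustration of this allocation strategy, the sketch below splits a package budget proportionally to a complexity score per (task, dataset) pair. The text specifies the two complexity signals but not how they are combined, so multiplying them and normalizing the products is an assumption of ours:

```python
def allocate_packages(budget, avg_output_tokens, total_node_energy):
    """Dynamic instruction package allocation (illustrative sketch).

    avg_output_tokens:  {task: mean output token count}      (task complexity)
    total_node_energy:  {dataset: sum of H(v_i) over nodes}  (dataset complexity)
    Returns {(task, dataset): number of instruction packages}.
    """
    scores = {
        (task, ds): avg_output_tokens[task] * total_node_energy[ds]
        for task in avg_output_tokens
        for ds in total_node_energy
    }
    z = sum(scores.values())
    return {pair: round(budget * s / z) for pair, s in scores.items()}
```

Because of rounding, the shares may sum slightly off the budget; a real implementation would redistribute the remainder to the most complex pairs.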
To achieve a balance between effectiveness and efficiency, we adopt a graph-aware instruction tuning mechanism, which can sufficiently utilize diverse and high-quality instructions for further fine-tuning LLMs [41]. Specifically, we adopt a general LLM, LLaMA-V1-7B (abbr. LLaMA) [53], with LoRA [19] as our starting point for fine-tuning. Then, taking the diverse instruction set I = {I_1, I_2, ..., I_D} as model input, we adopt the negative log-likelihood loss as the fine-tuning objective [1, 71], which is formulated as follows:

\[ p_\theta\big(Y_{j,k} \mid I_j, Y_{j,<k}\big) = \mathrm{LLM}_\theta\big(I_j, Y_{j,<k}\big), \tag{2} \]
\[ \mathcal{L}_\theta = -\sum_{k=1}^{|Y_j|} \log p_\theta\big(Y_{j,k} \mid I_j, Y_{j,<k}\big), \tag{3} \]

where θ denotes the learnable parameters of our graph-aware LLM, I_j ∈ I is the input sentence of the LLM, and Y_j is the corresponding output sentence. After obtaining the fine-tuned LLM for generic graph mining (i.e., MuseGraph), we can apply it to various downstream tasks, such as node classification, link prediction, and graph-to-text.
Model Extension. Our framework is a fundamental approach for integrating LLMs and graph data into a generic graph model, and it can be based on different LLMs (e.g., Baichuan-7B [68] and Vicuna-7B-v1.1 [7]). Moreover, we can adopt different parameter-efficient training approaches (e.g., QLoRA [10] and AdaLoRA [74]). A full exploration of different LLMs is orthogonal to the main contributions of this work and is left as future work.
Datasets Extension. Our method introduces a generic graph framework to simultaneously capture the semantic and structural information of attributed graphs across tasks and datasets. This allows us to leverage the capabilities of LLMs for generic graph learning. Note that our method only requires some meaningful attributes on the graphs, which are available in most real-world graphs such as biological networks, social networks, and knowledge graphs.

Table 2: Statistics of the datasets for node classification.
Dataset   | # Nodes | # Edges   | # Label nodes | # Classes
Arxiv     | 169,343 | 1,116,243 | 169,343       | 40
MIMIC-III | 32,267  | 559,290   | 4,880         | 19
Cora      | 25,120  | 182,280   | 17,093        | 70

Table 3: Statistics of the datasets for graph-to-text.
Dataset | # Graphs | # Relations | Avg. # Nodes | Avg. # Triples | Avg. Length
AGENDA  | 40,720   | 7           | 12.37        | 4.48           | 140.36
WebNLG  | 19,945   | 373         | 5            | 4              | 24.21

4 EXPERIMENT
In this section, we undertake multiple experiments to confirm the MuseGraph framework's efficiency under diverse conditions, addressing the following essential research questions:
• RQ1: How does MuseGraph perform in comparison to state-of-the-art graph-oriented methods for generic graph mining?
• RQ2: What are the effects of different model components?
• RQ3: Can MuseGraph provide the generation capability and in- terpretability for generic graph mining? 4.1 Experimental Setup 4.1.1 Datasets. To comprehensively verify the effectiveness of our method, we use three real-world datasets for node classification, i.e., OGB-arxiv (abbr. Arxiv) [20], MIMIC-III [26] and Cora [37], and two for graph-to-text, i.e., AGENDA [29] and WebNLG [13]. The detailed statistics are shown in Table 2 and Table 3. To verify the generalizability and adaptability of models in node classification, we perform dataset splits for node classification following different ratios of 5:1:4 for Arxiv, 7:1:2 for MIMIC-III, and 2:1:7 for Cora. AGENDA and WebNLG are partitioned into training, validation, and testing sets with a 7:1:2 ratio. For more detailed descriptions of datasets, please refer to Appendix A.1. 4.1.2 Evaluation Protocols. We evaluate the node classification performance using three commonly adopted evaluation metrics: Macro-F1 [51], Micro-F1 [51], and Weighted-F1 [12]; while eval- uating graph-to-text performance using four evaluation metrics: BLEU-4 [42], METEOR [3], ROUGE-L [33], and CHRF++ [44]. The F1 score is a metric of the model’s accuracy in binary and multi- class classification tasks, which considers both precision and re- call. BLEU-4 and ROUGE-L compute the ratios of overlapping and matching between generated and real text. METEOR computes the harmonic mean of precision and recall. CHRF++ computes F-score averaged on both character and word-level n-grams. Implementation Details. All the compared baselines are opti- 4.1.3 mized through the Adam optimizer and the learning rate is searched in [1e-4, 1e-2]. The hyper-parameters of baselines are chosen care- fully based on either grid search or their official source codes. The training process of instruction-tuning is carried out for one epoch. Conference’17, July 2017, Washington, DC, USA Yanchao Tan, Hang Lv, Xinyi Huang, Jiawei Zhang, Shiping Wang, and Carl Yang Table 4: Experimental results on three benchmark datasets for node classification. The best performances are highlighted in boldface and the second runners are underlined. MuseGraph achieves the best performance on both datasets. Method Dataset MLP GraphSAGE GCN GAT RevGNN DGI GKD GLNN NodeFormer DIFFormer BART T5 LLaMA MuseGraph Arxiv Macro-F1 Micro-F1 Weighted-F1 Macro-F1 Micro-F1 Weighted-F1 Macro-F1 Micro-F1 Weighted-F1 MIMIC-III 0.3996 0.5861 0.5768 0.6452 0.5733 0.4931 0.4826 0.5765 0.5861 0.5954 0.5896 0.5983 0.5989 0.7275 Cora 0.1213 0.1518 0.1479 0.1480 0.1488 0.1278 0.1577 0.1729 0.1542 0.1518 0.1581 0.1567 0.1584 0.2013 0.3480 0.5614 0.5446 0.6331 0.5571 0.4748 0.4671 0.5527 0.5795 0.5907 0.5822 0.5873 0.5881 0.7168 0.3987 0.5169 0.5009 0.4878 0.5071 0.4348 0.4402 0.5889 0.5203 0.5252 0.5191 0.5226 0.5182 0.6773 0.0854 0.0927 0.0902 0.0879 0.0913 0.0785 0.0450 0.1059 0.0938 0.0946 0.0989 0.0955 0.0986 0.1197 0.2376 0.3025 0.3158 0.2997 0.3049 0.2792 0.2843 0.3542 0.3082 0.3195 0.3285 0.3147 0.3159 0.4245 0.4048 0.5252 0.5119 0.4983 0.5182 0.4443 0.4513 0.6025 0.5321 0.5374 0.5267 0.5309 0.5274 0.6920 0.3079 0.5227 0.5396 0.6193 0.5419 0.4657 0.4593 0.5346 0.5612 0.5792 0.5787 0.5681 0.5624 0.6875 0.1173 0.1418 0.1459 0.1432 0.1469 0.1256 0.1358 0.1628 0.1539 0.1436 0.1543 0.1522 0.1545 0.2109 Table 5: Experimental results on two benchmark datasets for graph-to-text. The best performances are highlighted in boldface and the second runners are underlined. 
Method      | AGENDA: BLEU-4 / METEOR / ROUGE-L / CHRF++ | WebNLG: BLEU-4 / METEOR / ROUGE-L / CHRF++
GraphWriter | 0.1413 / 0.1892 / 0.2761 / 0.3835 | 0.4584 / 0.4021 / 0.6062 / 0.5543
CGE-LW      | 0.1801 / 0.2234 / 0.2562 / 0.4679 | 0.4860 / 0.4345 / 0.6252 / 0.5856
BART        | 0.2365 / 0.2519 / 0.2876 / 0.5044 | 0.5249 / 0.4223 / 0.6561 / 0.7204
T5          | 0.2215 / 0.2373 / 0.3068 / 0.4872 | 0.5878 / 0.4418 / 0.6824 / 0.7439
GPT-3.5     | 0.1057 / 0.1702 / 0.2522 / 0.4586 | 0.1108 / 0.2389 / 0.3587 / 0.4875
MuseGraph   | 0.2231 / 0.2457 / 0.3098 / 0.4913 | 0.5359 / 0.4386 / 0.6973 / 0.7276

4.1.4 Methods for Comparison. In our experiments, we consider various state-of-the-art methods for comprehensive evaluation. For node classification, we compare our proposed MuseGraph with the following 13 representative baselines from two main perspectives:
• GNN-based methods: MLP [71], GraphSAGE [16], GCN [27], GAT [54], RevGNN [31], DGI [55], GKD [69], GLNN [75], NodeFormer [65], and DIFFormer [64].
• LLM-based methods: BART-large (abbr. BART) [30], T5-large (abbr. T5) [46], and LLaMA-V1-7B (abbr. LLaMA) [53].
For graph-to-text, we compare MuseGraph with the following 5 baselines from two perspectives:
• GNN-based methods: GraphWriter [29] and CGE-LW [47].
• LLM-based methods: BART-large (abbr. BART) [30], T5-large (abbr. T5) [46], and GPT-3.5 [72].
For more details of the compared baselines, refer to Appendix A.2.

4.2 Overall Performance Comparison (RQ1)
To evaluate our MuseGraph framework, we conduct extensive testing across node classification and graph-to-text tasks using five real-world datasets.
4.2.1 Main Results Compared with GNN-based Methods. MuseGraph stands out with its single model finely tuned across the Arxiv, MIMIC-III, Cora, AGENDA, and WebNLG datasets. This answers RQ1, showcasing remarkable cross-task and cross-dataset generalization capabilities. Moreover, our generic MuseGraph is in contrast to most GNN-based methods that typically require dataset-specific training, such as GAT and GLNN for node classification and GraphWriter and CGE-LW for graph-to-text.
In the node classification task, MuseGraph demonstrates significant performance gains, ranging from reasonably large (12.33% over GAT on MIMIC-III) to significantly large (19.67% over GLNN on Cora).
In the graph-to-text task, the performance gains of MuseGraph over all GNN-based baselines range from 1.02% (achieved in METEOR on WebNLG) to 57.89% (achieved in BLEU-4 on AGENDA). Note that GNN-based methods (e.g., GLNN and GAT) only achieve second-runner positions in the node classification task. Since the generative and reasoning abilities of GNN-based methods are weaker than those of LLM-based ones in generative tasks, they cannot achieve second-runner positions in the graph-to-text task. In particular, GLNN is optimized for attribute-rich datasets (e.g., Arxiv and Cora) through its knowledge distillation mechanisms, while GAT is tailored for datasets with basic attributes (e.g., MIMIC-III) through its ability to weigh nodes differently. Compared with GLNN and GAT, our MuseGraph, with graph-aware instruction tuning and diverse CoT-based instruction packages, can effectively comprehend graph data and achieve accurate predictions on various downstream tasks.
Going one step further, since a key challenge of node classification lies in the generalizability and adaptability of models, we design a few-shot setting on Cora to evaluate models on extending knowledge to unseen scenarios and adapting to new tasks with limited training data. As shown in Table 4, our MuseGraph stays strong with a small amount of training data, achieving gains over the second-best performance ranging from 13.03% in Macro-F1 to 29.55% in Weighted-F1. Through the dynamic instruction package allocation strategy across tasks and datasets, our proposed MuseGraph can understand and analyze the meaningful graph attributes and structures, which helps MuseGraph adapt to different graphs. Consequently, MuseGraph maintains superior performance even in the few-shot setting.
4.2.2 Main Results Compared with LLM-based Methods. To fairly compare our MuseGraph with the other LLM-based methods, we fine-tune LLMs like BART, T5, and LLaMA with LoRA to better adapt them to our target graph mining tasks. In node classification tasks, these LLM-based methods demonstrate stable performance across various tasks and datasets. However, their predictive capabilities are primarily confined to text-based information. With the designed compact graph description, diverse instruction generation, and graph-aware fine-tuning mechanism, MuseGraph surpasses these LLM-based methods by harnessing both textual and structural graph information, leading to a more comprehensive understanding and enhanced performance (by up to 38.57% over the second-runner).
Compared with GNN-based methods, BART and T5 exhibit superior performance in graph-to-text, reaching the state of the art on certain metrics (e.g., BART on AGENDA and T5 on WebNLG). The success of BART and T5 is attributed to their sophisticated pre-training on diverse text corpora, which equips them with a comprehensive understanding of complex knowledge graph structures and precise word usage. MuseGraph, with less pre-training data for the graph-to-text task and mainly targeting generic graph mining, still retains competitive performance against these models by effectively leveraging our graph-oriented instruction tuning of LLMs.
Note that MuseGraph can sometimes exceed all baselines on the ROUGE-L metric, achieving an average performance gain of 1.58% over the second-runner T5. By adaptively allocating the neighbors and walks under the input token limitation, MuseGraph adeptly extracts a compact graph description. This enables it to accurately grasp both the semantic and structural nuances of graphs through the graph-aware fine-tuning mechanism. Given that ROUGE-L prioritizes the longest common subsequence between generated and reference texts, MuseGraph's focus on key graph information translates into texts that are both accurate and coherent. This capability is particularly rewarded by ROUGE-L, which does not require strict adherence to the sequence of words and phrases at the word or phrase level. As a result, MuseGraph's generated texts can achieve high scores even when their structure is more flexible.
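To make the classification protocol of Section 4.1.2 reproducible, the three F1 variants reported above can be computed with scikit-learn as follows (the label arrays here are purely illustrative):

```python
from sklearn.metrics import f1_score

y_true = [0, 2, 1, 1, 0]   # gold class ids (illustrative)
y_pred = [0, 1, 1, 1, 0]   # predicted class ids

macro = f1_score(y_true, y_pred, average="macro", zero_division=0)        # unweighted mean over classes
micro = f1_score(y_true, y_pred, average="micro", zero_division=0)        # global TP/FP/FN counts
weighted = f1_score(y_true, y_pred, average="weighted", zero_division=0)  # class mean weighted by support
print(f"Macro-F1={macro:.4f} Micro-F1={micro:.4f} Weighted-F1={weighted:.4f}")
```

The three variants differ only in how per-class scores are aggregated, which is why they diverge most on the highly imbalanced, many-class datasets such as Cora.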
4.3 Ablation Studies (RQ2)
To study the effectiveness of different allocations of CoT-based instruction packages across various tasks and datasets, we conduct ablation studies for node classification on Arxiv, based on our proposed compact graph description for inputting graphs. Using LLaMA as our baseline model, the studies evaluate our MuseGraph across various configurations, while maintaining an equal total count of instruction packages. We study our MuseGraph as follows:
• 1 Task+1 Dataset: utilizes our proposed CoT-based instructions for node classification on Arxiv.
• 2 Task+1 Dataset: incorporates instructions from both node classification and link prediction on Arxiv.
• 2 Task+2 Dataset: expands to include node classification and link prediction instructions from both Arxiv and Cora.
• 3 Task+3 Dataset: engages with instructions across node classification and link prediction on Arxiv and Cora, and graph-to-text on AGENDA.

Figure 5: Comparison of results for node classification on Arxiv with different allocations of CoT-based instruction packages across various tasks and datasets.

As shown in Figure 5, we have the following observations. Compared with 1 Task+1 Dataset and LLaMA, 2 Task+1 Dataset leads to performance gains ranging from 3.22% (achieved in Micro-F1) to 18.30% (achieved in Macro-F1). Such results show the effectiveness of diverse task-specific CoT-based instruction packages within the token limitation for instruction tuning, which can enable the LLMs to comprehend and execute generic graph mining tasks.
Furthermore, when increasing the number of datasets, the performance gains of 2 Task+2 Dataset over 2 Task+1 Dataset range from 4.96% (achieved in Micro-F1) to 7.24% (achieved in Weighted-F1). This indicates that a reasonable allocation of different volumes of CoT-based instruction packages across datasets can help LLMs effectively and efficiently perform predictions on graph-related tasks.
When adding more types of tasks and datasets, our proposed MuseGraph achieves further improvements, where 3 Task+3 Dataset outperforms 2 Task+2 Dataset by 3.03% in Macro-F1 to 3.94% in Weighted-F1. This observation strongly indicates that the dynamic instruction package allocation strategy can enhance the accuracy of graph tasks and avert catastrophic forgetting, where LLMs might regress in their generative abilities.

4.4 Case Studies (RQ3)
To demonstrate the advantages of MuseGraph over the LLM-based methods on generation tasks, we provide the abstracts for the graph-to-text task produced by LLaMA, GPT-3.5, and our MuseGraph on AGENDA. The provided graph structure and its ground-truth abstract are displayed at the top of Table 6. We standardize the process by employing our compact graph description as the input across all models.

Table 6: Example graph-to-text results generated from LLaMA, GPT-3.5 and MuseGraph on AGENDA. The output includes the generated abstract and the corresponding Chain-of-Thought. Best viewed in color.
Input: The compact graph description of this PAPER is listed as follows: Ego graph nodes: {1. METHOD: [Incremental fuzzy Learning Algorithm, Fuzzy Intelligent System]; 2. OTHERSCIENTIFICTERM: [Priority Values, Fuzzy Rule]; 3. TASK: [Automotive Engineering Diagnosis]} Random walks: {1.
Incremental fuzzy Learning Algorithm USED-FOR Automotive Engineering Diagnosis. 2. Fuzzy Intelligent System USED-FOR Automotive Engineering Diagnosis. 3. Priority Values USED-FOR Incremental fuzzy Learning Algorithm USED-FOR Fuzzy Intelligent System USED-FOR Automotive Engineering Diagnosis. 4. Fuzzy Rule USED-FOR Fuzzy Intelligent System USED-FOR Automotive Engineering Diagnosis.} Instruction: Given the target PAPER Incremental Learning in a Fuzzy Intelligent System with the compact graph description. Please generate the target PAPER’s abstract and the Chain-of-Thought process that could lead to this generation. LLaMA: In this paper, we propose an incremental fuzzy learning algorithm for a fuzzy intelligent system. The incremental fuzzy learning algorithm is based on the priority values of fuzzy rules and the fuzzy belief values of the fuzzy rules. The incremental fuzzy learning algorithm is applied to an automotive engineering diagnosis. The CoT for this generation is as follows: 1. Focus: Fuzzy learning algorithm for diagnosis. 2. Function: Uses priority values for decision-making. 3. Incrementality: Adjusts based on new data, and maintains old knowledge. 4. Usage: Part of a fuzzy system for vehicle diagnostics. 5. Outcome: Aimed at better diagnosis through incremental learning and data emphasis. GPT-3.5: This paper presents a novel approach to incremental learning within a fuzzy intelligent system framework tailored for automotive engineering diagnosis. We introduce an Incremental Fuzzy Learning Algorithm designed to enhance the adaptability and efficiency of diagnostic systems in automotive engineering contexts. Leveraging fuzzy logic principles, our approach allows for continuous learning and refinement of diagnostic capabilities over time. Central to our methodology is the integration of priority values and fuzzy rules, which facilitate the dynamic adjustment of learning priorities and the refinement of fuzzy inference mechanisms. Through extensive experimentation and evaluation, we demonstrate the effectiveness and practical utility of our approach in enhancing the accuracy and reliability of automotive engineering diagnosis systems. Our findings underscore the importance of incremental learning techniques within fuzzy intelligent systems for advancing diagnostic capabilities in complex engineering domains. The CoT for this generation is as follows: 1. Main method: Incremental fuzzy learning algorithm. 2. Purpose: Improve automotive engineering diagnosis. 3. Components: Prioritizes fuzzy belief values, priority values, and rules. 4. Learning process: Algorithm learns incrementally, adapting to new data. 5. System integration: Used within a fuzzy intelligent system for diagnostics. MuseGraph: This paper describes an incremental fuzzy learning algorithm for a fuzzy intelligent system that is used in automotive engineering diagnosis. The incremental fuzzy learning algorithm uses priority values to determine the fuzzy rules that should be updated. The incremental fuzzy learning algorithm also uses fuzzy belief values to determine which fuzzy rules should be updated. The incremental fuzzy learning algorithm is incremental in that incremental fuzzy learning algorithm only updates the fuzzy rules that have changed. The incremental fuzzy learning algorithm is also incremental in that the incremental fuzzy learning algorithm only updates the fuzzy rules that are necessary to maintain the fuzzy intelligent system’s performance. The CoT for this generation is as follows: 1. 
Identify main concept: Incremental fuzzy learning algorithm within a fuzzy intelligent system. 2. Clarify task relevance: The algorithm is used for automotive engineering diagnosis. 3. Detail key attributes: The algorithm utilizes fuzzy belief values and priority values. 4. Define algorithm benefits: Detects and updates from new data, retains useful knowledge, emphasizes new learning. 5. Apply to system: The fuzzy intelligent system incorporates the algorithm for enhanced diagnostic capabilities.

As shown in Table 6, the abstract generated by LLaMA offers a basic summary while overlooking some key details, such as the specific incremental learning aspect of the fuzzy algorithm (shown in the blue text). Compared with LLaMA, GPT-3.5 generates content that is three times longer, with an elaborate description. However, the model inevitably includes some fabricated sentences, such as "Through extensive experimentation and evaluation, ..." in red, which is irrelevant to the provided ground-truth abstract.
Since our method performs graph-aware instruction tuning with compact graph descriptions, we can produce precise abstracts by extracting the key information from the graph. Moreover, by incorporating the distilled reasoning and generative capacity from CoT-based instructions, MuseGraph can not only produce an abstract that aligns with the core details of the ground truth but also provide fluent and precise CoT inherited from the advanced large language models.

5 CONCLUSION
In this paper, we introduce MuseGraph, an effective and generic approach for graph mining across different tasks and datasets, which can also alleviate the challenges posed by language token limitations. Through the innovative design of compact graph descriptions with adaptive input generation, the generation of diverse, task-specific instructions, and the implementation of graph-aware instruction tuning, our MuseGraph integrates the strengths of Graph Neural Networks (GNNs) and Large Language Models (LLMs). Our comprehensive experimental results demonstrate MuseGraph's superior performance against state-of-the-art baselines across graph tasks and datasets, illustrating its ability not only to enhance the precision of graph-oriented downstream tasks but also to preserve the generative capabilities of LLMs, which is further consolidated by our real case study results.

Graph structure (top of Table 6): Incremental fuzzy Learning Algorithm; Automotive Engineering Diagnosis; Fuzzy Intelligent System; Priority Values; Fuzzy Rule.
Abstract (ground truth): This paper presents an incremental fuzzy learning algorithm within the framework of a fuzzy intelligent system. The incremental fuzzy learning algorithm is based on priority values attached to fuzzy rules. The priority value of a fuzzy rule is generated based on the fuzzy belief values of the fuzzy rule derived from the training data. The incremental fuzzy learning algorithm has three important properties. It can detect and recover from incorrect knowledge once new knowledge is available; it will not lose the useful knowledge generated from the old data while it attempts to learn from new data; and it provides a mechanism allowing to emphasize knowledge learnt from the new data. The incremental fuzzy learning algorithm has been implemented in a fuzzy intelligent system for automotive engineering diagnosis. Its performance is presented in the paper.
The compact graph description of this PAPER is listed as follows: Ego graph nodes: {1. METHOD: [Incremental fuzzy Learning Algorithm, Fuzzy Intelligent System]; 2.
OTHERSCIENTIFICTERM: [Priority Values, Fuzzy Rule];3. TASK: [Automotive EngineeringDiagnosis]} Random walks: {1. Incremental fuzzy Learning Algorithm USED-FOR Automotive EngineeringDiagnosis. 2. Fuzzy Intelligent System USED-FOR Automotive EngineeringDiagnosis. 3. Priority Values USED-FOR Incremental fuzzy Learning Algorithm USED-FOR Fuzzy Intelligent SystemUSED-FOR Automotive EngineeringDiagnosis.} MuseGraph: Graph-oriented Instruction Tuning of Large Language Models for Generic Graph Mining Conference’17, July 2017, Washington, DC, USA REFERENCES [1] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. [2] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives. Dbpedia: A nucleus for a web of open data, in ‘the semantic web’, vol. 4825 of lecture notes in computer science, 2007. [3] S. Banerjee and A. Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72, 2005. [4] H. Cai, V. W. Zheng, and K. C. Chang. A comprehensive survey of graph em- bedding: Problems, techniques, and applications. IEEE Trans. Knowl. Data Eng., 30(9):1616–1637, 2018. [5] Z. Chen, H. Mao, H. Li, W. Jin, H. Wen, X. Wei, S. Wang, D. Yin, W. Fan, H. Liu, et al. Exploring the potential of large language models (llms) in learning on graphs. arXiv preprint arXiv:2307.03393, 2023. [6] Z. Chen, H. Mao, H. Wen, H. Han, W. Jin, H. Zhang, H. Liu, and J. Tang. Label-free node classification on graphs with large language models (llms). arXiv preprint arXiv:2310.04668, 2023. [7] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna. lmsys. org (accessed 14 April 2023), 2023. [8] T. Cooray and N.-M. Cheung. Graph-wise common latent factor extraction In Proceedings of the AAAI for unsupervised graph representation learning. Conference on Artificial Intelligence, volume 36, pages 6420–6428, 2022. [9] G. Cui, J. Zhou, C. Yang, and Z. Liu. Adaptive graph encoder for attributed graph embedding. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pages 976–985, 2020. [10] T. Dettmers, A. Pagnoni, A. Holtzman, and L. Zettlemoyer. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023. [11] Y. Dong, N. V. Chawla, and A. Swami. metapath2vec: Scalable representation learning for heterogeneous networks. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 135–144, 2017. [12] X. Fu, Y. Wei, Q. Sun, H. Yuan, J. Wu, H. Peng, and J. Li. Hyperbolic geometric graph representation learning for hierarchy-imbalance node classification. In Proceedings of the ACM Web Conference 2023, pages 460–468, 2023. [13] C. Gardent, A. Shimorina, S. Narayan, and L. Perez-Beltrachini. Creating training corpora for nlg micro-planners. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 179–188. Association for Computational Linguistics, 2017. [14] A. Grover and J. Leskovec. node2vec: Scalable feature learning for networks. 
In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855–864, 2016. [15] J. Guo, L. Du, and H. Liu. Gpt4graph: Can large language models understand graph structured data? an empirical evaluation and benchmarking. arXiv preprint arXiv:2305.15066, 2023. [16] W. Hamilton, Z. Ying, and J. Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017. [17] W. L. Hamilton, R. Ying, and J. Leskovec. Representation learning on graphs: Methods and applications. IEEE Data Eng. Bull., 40(3):52–74, 2017. [18] X. He, X. Bresson, T. Laurent, and B. Hooi. Explanations as features: Llm-based features for text-attributed graphs. arXiv preprint arXiv:2305.19523, 2023. [19] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. [20] W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118–22133, 2020. [21] Z. Hu, Y. Dong, K. Wang, K.-W. Chang, and Y. Sun. Gpt-gnn: Generative pre- training of graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1857– 1867, 2020. [22] Q. Huang, H. Ren, P. Chen, G. Kržmanc, D. Zeng, P. Liang, and J. Leskovec. arXiv preprint learning over graphs. Prodigy: Enabling in-context arXiv:2305.12600, 2023. [23] S. Ivanov and E. Burnaev. Anonymous walk embeddings. conference on machine learning, pages 2186–2195. PMLR, 2018. In International [24] X. Jiang, T. Jia, Y. Fang, C. Shi, Z. Lin, and H. Wang. Pre-training on large-scale heterogeneous graph. In Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining, pages 756–766, 2021. [25] Y. Jiao, M. Zhong, S. Li, R. Zhao, S. Ouyang, H. Ji, and J. Han. Instruct and extract: Instruction tuning for on-demand information extraction. arXiv preprint arXiv:2310.16040, 2023. [26] A. E. Johnson, T. J. Pollard, L. Shen, H. L. Li-Wei, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. A. Celi, and R. G. Mark. Mimic-iii, a freely accessible critical care 9 database. Scientific data, 2016. [27] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolu- tional networks. arXiv preprint arXiv:1609.02907, 2016. [28] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolu- tional networks. In International Conference on Learning Representations, 2017. [29] R. Koncel-Kedziorski, D. Bekal, Y. Luan, M. Lapata, and H. Hajishirzi. Text generation from knowledge graphs with graph transformers. arXiv preprint arXiv:1904.02342, 2019. [30] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019. [31] G. Li, M. Müller, B. Ghanem, and V. Koltun. Training graph neural networks with 1000 layers. In International conference on machine learning, pages 6437–6449. PMLR, 2021. [32] Y. Li, Z. Li, P. Wang, J. Li, X. Sun, H. Cheng, and J. X. Yu. A survey of graph meets large language model: Progress and future directions, 2024. [33] C.-Y. Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81, 2004. [34] H. Liu, J. Feng, L. 
Kong, N. Liang, D. Tao, Y. Chen, and M. Zhang. One for all: Towards training one graph model for all classification tasks. ICLR, 2024. [35] Y. Luan, L. He, M. Ostendorf, and H. Hajishirzi. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. arXiv preprint arXiv:1808.09602, 2018. [36] Q. Mao, Z. Liu, C. Liu, and J. Sun. Hinormer: Representation learning on het- erogeneous information networks with graph transformer. In Proceedings of the ACM Web Conference 2023, pages 599–610, 2023. [37] A. K. McCallum, K. Nigam, J. Rennie, and K. Seymore. Automating the construc- tion of internet portals with machine learning. Information Retrieval, 3:127–163, 2000. [38] M. McCloskey and N. J. Cohen. Catastrophic interference in connectionist net- works: The sequential learning problem. In Psychology of learning and motivation, volume 24, pages 109–165. Elsevier, 1989. [39] S. Micali and Z. A. Zhu. Reconstructing markov processes from independent and anonymous experiments. Discrete Applied Mathematics, 200:108–122, 2016. [40] M. Mosbach, M. Andriushchenko, and D. Klakow. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. In International Confer- ence on Learning Representations, 2020. [41] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instruc- tions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. [42] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318, 2002. [43] B. Perozzi, R. Al-Rfou, and S. Skiena. Deepwalk: Online learning of social repre- sentations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710, 2014. [44] M. Popović. chrf++: words helping character n-grams. In Proceedings of the second conference on machine translation, pages 612–618, 2017. [45] C. Qian, H. Tang, Z. Yang, H. Liang, and Y. Liu. Can large language models empower molecular property prediction? arXiv preprint arXiv:2307.07443, 2023. [46] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020. [47] L. F. Ribeiro, Y. Zhang, C. Gardent, and I. Gurevych. Modeling global and local node contexts for text generation from knowledge graphs. Transactions of the Association for Computational Linguistics, 8:589–604, 2020. [48] K. Shridhar, A. Stolfo, and M. Sachan. Distilling reasoning capabilities into smaller language models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7059–7073, 2023. [49] M. Sun, K. Zhou, X. He, Y. Wang, and X. Wang. Gppt: Graph pre-training and prompt tuning to generalize graph neural networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1717– 1727, 2022. [50] X. Sun, H. Cheng, J. Li, B. Liu, and J. Guan. All in one: Multi-task prompting for graph neural networks. Proceedings of the 29rd ACM SIGKDD international conference on knowledge discovery and data mining, 2023. [51] Y. Tan, Z. Zhou, H. Lv, W. Liu, and C. Yang. 
Walklm: A uniform language model fine-tuning framework for attributed graph embedding. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. [52] J. Tang, Y. Yang, W. Wei, L. Shi, L. Su, S. Cheng, D. Yin, and C. Huang. Graphgpt: Graph instruction tuning for large language models. arXiv preprint arXiv:2310.13023, 2023. [53] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. Conference’17, July 2017, Washington, DC, USA Yanchao Tan, Hang Lv, Xinyi Huang, Jiawei Zhang, Shiping Wang, and Carl Yang [54] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Lio, Y. Bengio, et al. Graph ICLR, 2024. attention networks. stat, 1050(20):10–48550, 2017. [55] P. Veličković, W. Fedus, W. L. Hamilton, P. Liò, Y. Bengio, and R. D. Hjelm. Deep Graph Infomax. In International Conference on Learning Representations, 2019. [56] S. Vijay and A. Priyanshu. Nerda-con: Extending ner models for continual learning–integrating distinct tasks and updating distribution shifts. International Conference on Machine Learning, 2022. [57] H. Wang, S. Feng, T. He, Z. Tan, X. Han, and Y. Tsvetkov. Can language models solve graph problems in natural language?, 2024. [58] K. Wang, Z. Shen, C. Huang, C.-H. Wu, Y. Dong, and A. Kanakia. Microsoft academic graph: When experts are not enough. Quantitative Science Studies, 1(1):396–413, 2020. [59] X. Wang, D. Bo, C. Shi, S. Fan, Y. Ye, and P. S. Yu. A survey on heterogeneous graph embedding: Methods, techniques, applications and sources. IEEE Trans. Big Data, 9(2):415–436, 2023. [68] A. Yang, B. Xiao, B. Wang, B. Zhang, C. Bian, C. Yin, C. Lv, D. Pan, D. Wang, D. Yan, et al. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305, 2023. [69] C. Yang, Q. Wu, and J. Yan. Geometric knowledge distillation: Topology com- pression for graph neural networks. Advances in Neural Information Processing Systems, 35:29761–29775, 2022. [70] C. Yang, J. Zhang, H. Wang, S. Li, M. Kim, M. Walker, Y. Xiao, and J. Han. Relation learning on social networks with multi-modal graph edge variational autoen- coders. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 699–707, 2020. [71] R. Ye, C. Zhang, R. Wang, S. Xu, and Y. Zhang. Natural language is all a graph needs. arXiv preprint arXiv:2308.07134, 2023. [72] S. Yuan and M. Färber. Evaluating generative models for graph-to-text generation. arXiv preprint arXiv:2307.14712, 2023. [60] X. Wang, H. Ji, C. Shi, B. Wang, Y. Ye, P. Cui, and P. S. Yu. Heterogeneous graph [73] M. Zhang and Y. Chen. Link prediction based on graph neural networks. Advances attention network. In The world wide web conference, pages 2022–2032, 2019. in neural information processing systems, 31, 2018. [61] Y. Wang, H. Ivison, P. Dasigi, J. Hessel, T. Khot, K. R. Chandu, D. Wadden, K. MacMillan, N. A. Smith, I. Beltagy, et al. How far can camels go? exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023. [62] Y. Wang, Y. Kordi, S. Mishra, A. Liu, N. A. Smith, D. Khashabi, and H. Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022. [63] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. 
Advances in Neural Information Processing Systems, 35:24824–24837, 2022. [64] Q. Wu, C. Yang, W. Zhao, Y. He, D. Wipf, and J. Yan. Difformer: Scalable (graph) transformers induced by energy constrained diffusion. arXiv preprint arXiv:2301.09474, 2023. [65] Q. Wu, W. Zhao, Z. Li, D. P. Wipf, and J. Yan. Nodeformer: A scalable graph struc- ture learning transformer for node classification. Advances in Neural Information Processing Systems, 35:27387–27401, 2022. [66] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu. A comprehensive survey on graph neural networks. IEEE Trans. Neural Networks Learn. Syst., 32(1):4–24, 2021. [67] Z. Xiang, F. Jiang, Z. Xiong, B. Ramasubramanian, R. Poovendran, and B. Li. Badchain: Backdoor chain-of-thought prompting for large language models. [74] Q. Zhang, M. Chen, A. Bukharin, P. He, Y. Cheng, W. Chen, and T. Zhao. Adaptive budget allocation for parameter-efficient fine-tuning. arXiv preprint arXiv:2303.10512, 2023. [75] S. Zhang, Y. Liu, Y. Sun, and N. Shah. Graph-less neural networks: Teaching old mlps new tricks via distillation. arXiv preprint arXiv:2110.08727, 2021. [76] W. Zhang, Y. Zhu, M. Chen, Y. Geng, Y. Huang, Y. Xu, W. Song, and H. Chen. Struc- ture pretraining and prompt tuning for knowledge graph transfer. In Proceedings of the ACM Web Conference 2023, pages 2581–2590, 2023. [77] H. Zhao, S. Liu, C. Ma, H. Xu, J. Fu, Z.-H. Deng, L. Kong, and Q. Liu. Gimlet: A unified graph-text model for instruction-based molecule zero-shot learning. bioRxiv, pages 2023–05, 2023. [78] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun. Graph neural networks: A review of methods and applications. AI open, 1:57–81, 2020. [79] Q. Zhu, C. Yang, Y. Xu, H. Wang, C. Zhang, and J. Han. Transfer learning of graph neural networks with ego-graph information maximization. Advances in Neural Information Processing Systems, 34:1766–1779, 2021. [80] Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang. Graph contrastive learning with adaptive augmentation. In Proceedings of the Web Conference 2021, pages 2069–2080, 2021. 10 MuseGraph: Graph-oriented Instruction Tuning of Large Language Models for Generic Graph Mining Conference’17, July 2017, Washington, DC, USA A APPENDIX A.1 Detailed Descriptions of Datasets This section provides detailed descriptions of each graph dataset used in our experiment. • OGB-arxiv (abbr. Arxiv) represents a directed graph that cap- tures the citation network among computer science arXiv papers indexed by MAG [58]. Each paper in the dataset is associated with a research category, manually labeled by the authors and arXiv moderators. These research categories are selected from a set of 40 subject areas. • MIMIC-III is a graph of diseases, patients, and visits, where nodes and relations are extracted from clinical records [26]. Dis- eases are classified into 19 categories according to ICD-9-CM. • Cora is known as the “Cora Research Paper Classification” dataset. It is a collection of research papers that are linked to each other through citations. The abstract of a paper is deemed a text doc- ument. The papers are classified into a topic hierarchy with 73 leaves. We utilize an expanded version, which is larger and has more classes (70 in total) compared to previous versions [27]. • AGENDA (Abstract Generation Dataset) is a dataset that pairs knowledge graphs with paper abstracts from scientific domains. The graphs in AGENDA were automatically extracted from the SciIE information extraction system [35]. 
Each instance in AGENDA includes the title, entities, graph, and abstract of a paper. We use the entities and graphs as input for the models.
• WebNLG is a crowd-sourced RDF triple-to-text dataset manually crafted by human annotators. The dataset contains graphs from DBpedia [2] with up to 7 triples paired with one or more reference texts. We take data from WebNLG v2.1 for our experiments.

A.2 Details of Compared Baselines
• MLP [71] employs a Multi-layer Perceptron for node representation.
• GraphSAGE [16] is a general, inductive framework that leverages node feature information, such as text attributes, to efficiently generate node embeddings.
• GCN [27] scales linearly in the number of graph edges and learns hidden-layer representations that encode both local graph structure and node features.
• GAT [54] introduces novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
• RevGNN [31] is designed to capture long-range interactions in graph-structured data without suffering from the vanishing gradient issue, thanks to its reversible computation framework that preserves information flow throughout the layers.
• DGI [55] is a general approach for learning node representations within graph-structured data in an unsupervised manner.
• GKD [69] is a new paradigm of knowledge transfer that aims at encoding graph topological information into graph neural networks (GNNs) by distilling knowledge from a teacher GNN model trained on a complete graph to a student GNN model operating on a smaller or sparser graph.
• GLNN [75] is a combination of GNNs and MLPs through knowledge distillation (KD).
• NodeFormer [65] is a scalable graph structure learning transformer for node classification.
• DIFFormer [64] is a scalable graph transformer induced by energy-constrained diffusion.
• GraphWriter [29] uses Graph Neural Networks (GNNs) and transformer-based models to produce coherent texts from structured graph inputs.
• CGE-LW [47] proposes a graph-to-text model combining both global and local node aggregation strategies.
• BART-large [30] linearizes the KG into a sequence and applies BART-large to generate text.
• T5-large [46] linearizes the KG into a triple sequence and employs T5-large to generate text.
• LLaMA-V1-7B [53] is an open-sourced Large Language Model (LLM) designed for natural language understanding and generation tasks.
• GPT-3.5 [41] is a proprietary LLM prompted to generate coherent text from the linearized input graphs.

A.3 Instruction Templates across Different Tasks and Datasets
We provide instruction templates for various graph-related tasks (i.e., node classification, link prediction, and graph-to-text) across datasets. These templates are shown in Table 7.

Table 7: Instruction templates across different tasks and datasets.
Diverse Instructions for Graph Learning
Node Classification (Arxiv)
Input: The compact graph description of this PAPER is listed as follows: Title: {title of the PAPER}. Abstract: {abstract of the PAPER}. Ego graph nodes: {ego graph node list}. One-hop neighbors: {1-hop neighbor list}. Random walks: {random walk paths}.
Standard Instruction: Given the target PAPER with the compact graph description in the Arxiv dataset, which of the following subcategories of computer science does this PAPER belong to {the category list}. Directly give the most likely category of this PAPER.
Standard Output: {<category>}.
CoT Instruction: Given the classification of target PAPER title with <category> in the Arxiv dataset, give your explanation based on the provided compact graph description. Focus your analysis on elucidating the reasons behind this classification in a clear Chain of Thought. Keep the analysis brief and to the point.
CoT Output: Considering the PAPER's compact graph description, its classification is valid under <category>, because {reasoning process and answers}.

Link Prediction (Arxiv)
Input: The compact graph description of this PAPER 1 is listed as follows: Title: {title of the PAPER 1}. Abstract: {abstract of the PAPER 1}. Ego graph nodes: {ego graph node list of the PAPER 1}. One-hop neighbors: {1-hop neighbor list of the PAPER 1}. Random walks: {random walk paths of the PAPER 1}. The compact graph description of this PAPER 2 is listed as follows: Title: {title of the PAPER 2}. Abstract: {abstract of the PAPER 2}. Ego graph nodes: {ego graph node list of the PAPER 2}. One-hop neighbors: {1-hop neighbor list of the PAPER 2}. Random walks: {random walk paths of the PAPER 2}.
Standard Instruction (pair-wise): Given the compact graph descriptions of PAPER 1 and PAPER 2 in the Arxiv dataset. If the connection between the PAPERs represents the relationship between them, are they connected? Give me a direct answer of "yes" or "no".
Standard Output (pair-wise): {yes} or {no}.
CoT Instruction (pair-wise): Given the established link between PAPER 1 and PAPER 2 in the Arxiv dataset, give your explanation based on the provided compact graph description. Focus your analysis on elucidating the reasons behind this link in a clear Chain of Thought. Keep the analysis brief and to the point.
CoT Output (pair-wise): The connection between PAPER 1 and PAPER 2 seems to be grounded on {reasoning process and answers}.

Graph-to-Text (AGENDA)
Input: The compact graph description of this PAPER is listed as follows: Title: {title of the PAPER}. Ego graph nodes: {ego graph node list}. One-hop neighbors: {1-hop neighbor list}. Random walks: {random walk paths}.
Standard Instruction: Given the target PAPER in the AGENDA dataset with the compact graph description. Please generate the target PAPER abstract from the compact graph description.
Standard Output: {abstract of the PAPER}.
CoT Instruction: Given the generated abstract of the target PAPER in the AGENDA dataset. Please use the provided compact graph description to examine how these elements influenced the generation of the abstract with a clear Chain of Thought (CoT). Keep the CoT brief and to the point.
CoT Output: The CoT for this generation is as follows: {the content of the CoT}.
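To show how such a template could be instantiated programmatically, here is a small, hypothetical sketch that renders the node-classification template from a compact graph description; the field names mirror the placeholders in Table 7, while the constant and function are our own illustration, not released code:

```python
NODE_CLS_TEMPLATE = (
    "Input: The compact graph description of this PAPER is listed as follows: "
    "Title: {title}. Abstract: {abstract}. Ego graph nodes: {ego_nodes}. "
    "One-hop neighbors: {neighbors}. Random walks: {walks}.\n"
    "Standard Instruction: Given the target PAPER with the compact graph "
    "description in the Arxiv dataset, which of the following subcategories "
    "of computer science does this PAPER belong to {categories}. Directly "
    "give the most likely category of this PAPER."
)

def render_instruction(description: dict, categories: list) -> str:
    """Fill the Table 7 node-classification template for one node."""
    return NODE_CLS_TEMPLATE.format(categories=categories, **description)
```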
ai_researcher
2
I_Need_Help_Evaluating_LLM’s_Ability_to_Ask_for_Users’_Support_A_Case_Study_on_Text-to-SQL_Generation.pdf
I Need Help! Evaluating LLM's Ability to Ask for Users' Support: A Case Study on Text-to-SQL Generation
Cheng-Kuang Wu1,2*, Zhi Rui Tam1*, Chao-Chung Wu1, Chieh-Yen Lin1, Hung-yi Lee2†, Yun-Nung Chen2†
1Appier AI Research 2National Taiwan University
arXiv:2407.14767v2 [cs.CL] 30 Sep 2024
Abstract
This study explores the proactive ability of LLMs to seek user support. We propose metrics to evaluate the trade-off between performance improvements and user burden, and investigate whether LLMs can determine when to request help under varying information availability. Our experiments show that without external feedback, many LLMs struggle to recognize their need for user support. The findings highlight the importance of external signals and provide insights for future research on improving support-seeking strategies. Source code: https://github.com/appier-research/i-need-help.
1 Introduction
The impressive instruction-following (Wei et al., 2021) abilities of large language models (LLMs) have enabled their out-of-the-box usage to solve problems. However, these models generate hallucinated content (Rawte et al., 2023) or incorrect predictions in their efforts to fulfill user instructions, which undermines their reliability.
When LLMs generate incorrect outputs for a given instruction, the issue can be examined from multiple perspectives. One is that the model simply lacks the competence to satisfy the instruction, suggesting a straightforward solution: enhancing the model's capabilities, which is the focus of most previous research. Another is that the model could actually solve the task with additional support. For instance, Pourreza and Rafiei (2023) found that models often fail due to underspecified natural language queries. Similarly, Li et al. (2024) showed that while GPT-4 struggles initially, its performance can improve by up to 20.01% with human-annotated external knowledge. In such cases, models should proactively seek help rather than attempting to satisfy instructions with insufficient information.
*Equal contribution †Equal advisorship
Figure 1: Overview of our experiments on text-to-SQL. LLMs struggle to determine when they need help based solely on the instruction (x) or their output (ŷ). They require external feedback, such as the execution results (r̂) from the database, to outperform random baselines.
Motivated by these considerations, we aim to investigate whether LLMs can identify when to ask for user support. Since providing such support requires additional effort from users, there is an inherent trade-off between "LLM performance improvement from user support" and "user burden". Therefore, we seek to answer the following research questions: RQ1: How can we design evaluation metrics to quantify this trade-off? RQ2: How effectively do LLMs manage this trade-off, and what strategies are effective in improving it?
In this work, we focus on the text-to-SQL task as a case study to empirically investigate the aforementioned research questions. We chose the text-to-SQL task for several reasons: (1) Its promising applicability, empowering lay users to retrieve data with natural language queries. (2) The inherent ambiguity in some natural language queries, leading to uncertainty in the generation of SQL code (Pourreza and Rafiei, 2023), making it suitable for scenarios where additional user support is beneficial.
(3) There exists a large-scale BIRD dataset (Li et al., 2024) with human-annotated external knowledge, providing a valuable source of user support for our empirical investigation.
Our contributions can be summarized as follows:
1. We propose metrics for evaluating the trade-off between performance improvement from user support and the associated user burden.
2. We conduct experiments using various methods to balance this trade-off, providing insights into LLMs' capabilities in seeking user support and identifying effective strategies for enhancing their performance.
2 Formulation for Seeking Support
2.1 General Setup
Consider an LLM f parameterized by θ, along with a prompt template p(·). Given a natural language instruction x, we use z to represent support, which should enhance the LLM's ability to fulfill x. Formally, ŷz = f(p(x, z) | θ) is more likely to satisfy x than ŷ = f(p(x) | θ). We denote the "ask for support" signal emitted by the LLM as â, defined as a confidence score in the range [0, 1], where 1 indicates an absolute need for support. A threshold τ is then used to determine whether to request z. In practice, â could also be a natural language request specifying the type of support needed by the LLM, which we leave for future work.
2.2 Evaluation
To measure the trade-off between performance improvement from user support and user burden, we need a two-dimensional evaluation. One dimension is the user burden (B), which we define as the proportion of instances where the LLM asks for support:
$$B = \frac{N_{ask}}{N}$$
where $N_{ask}$ is the number of instances where the LLM asks for support, and $N$ denotes the total number of instances in the test set. The other dimension is the performance improvement (Δ, Delta):
$$\Delta = \frac{1}{N} \sum_{i=1}^{N_{ask}} \left( h(y_i, \hat{y}_{i,z}) - h(y_i, \hat{y}_i) \right)$$
where h(·) is the evaluation function of a given task, which takes the ground truth yi and the model output ŷi as arguments (ŷi,z is an output produced with the help of z). Inspired by the idea behind the ROC curve (Majnik and Bosnić, 2013), we illustrate this trade-off with a graph, where the performance curve is plotted by adjusting the threshold τ from high to low along the x-axis. We refer to this curve as the Delta-Burden Curve (DBC) (see the leftmost subplot of Figure 2).
2.3 Methods for Seeking Support
We design a prompt template pask(·) to enable LLMs to request support via â = s(f(pask(w) | θ)). Here, w represents the textual information that the LLM f uses to determine whether it needs to seek support, and s is the scoring function that converts the probability distribution of output tokens into a confidence score â ∈ [0, 1]. We propose methods with varying compositions of w to explore the information LLMs require to achieve a better trade-off under the DBC. Note that pask remains the same across all methods to minimize prompt engineering. An overview of these methods is shown in Figure 1.
Direct Ask (DA): w = (db, x), composed of the database schema db and the user data requirement x.
Write then Ask (WA): w = (db, x, ŷ), where the LLM first generates the SQL code ŷ = f(p(db, x) | θ) and then uses this self-generated output as additional information in w.
Execute then Ask (EA): w = (db, x, ŷ, r̂), where the execution results r̂ are returned by the database after executing the LLM-generated SQL ŷ.
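As a concrete illustration of Section 2.2, here is a minimal sketch (not the authors' released code) that computes B, Δ, and an area under the DBC from per-instance records; the record layout and the normalization detail in the comment are assumptions.

```python
# Illustrative sketch of the Section 2.2 metrics; the record layout
# (ask_scores, h_base, h_support) is an assumption, not the paper's code.
import numpy as np

def delta_burden_curve(ask_scores, h_base, h_support, thresholds):
    """For each threshold tau, the model asks when ask_score >= tau."""
    ask_scores = np.asarray(ask_scores)
    gains = np.asarray(h_support) - np.asarray(h_base)  # h(y, y_z) - h(y, y_hat)
    N = len(ask_scores)
    burdens, deltas = [], []
    for tau in thresholds:
        asked = ask_scores >= tau
        burdens.append(asked.sum() / N)        # B = N_ask / N
        deltas.append(gains[asked].sum() / N)  # Delta over the asked subset
    return np.array(burdens), np.array(deltas)

def audbc(burdens, deltas):
    # One simple normalization choice for the sketch; the paper divides by
    # the maximum square area, which depends on the scale of Delta-EX.
    order = np.argsort(burdens)
    area = np.trapz(deltas[order], burdens[order])
    return area / max(deltas.max(), 1e-12)
```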
3 Experiments
3.1 Dataset
We use BIRD (Li et al., 2024), which includes human-annotated external knowledge that serves as z. For example, z might be domain-specific knowledge, such as how to calculate financial indicators from database values. The instruction x represents the users' data requirements, paired with the ground truth SQL y. BIRD uses Execution Accuracy (EX) as the evaluation metric, where h(yi, ŷi) is defined as 1(ri = r̂i). Here, ri is the SQL execution result of yi, and r̂i is the execution result of ŷi. Simply put, EX is the proportion of testing instances where ri and r̂i are identical.
3.2 Implementation
For open-weight LLMs, we use WizardCoder-34B (Luo et al., 2023), Llama-3-70b-chat, DeepSeek-Coder-33B (Guo et al., 2024), and Mixtral-8x22B (Jiang et al., 2024) for diversity across LLM families. For closed-source LLMs, we use gpt-3.5-turbo-0125, gpt-4-turbo-2024-04-09, and gpt-4o-2024-05-13 (OpenAI, 2023). The prompt pask(w) (included in Appendix A) instructs the model to output a single token Yes/No to indicate whether it needs support. We define the scoring function s as the softmax of Yes over the log probabilities of Yes and No to derive â ∈ [0, 1].

Table 1: Area Under Delta-Burden Curve (AUDBC) across different methods and LLMs. Text in bold denotes the method with the best performance, while underlined text means better than random (uniform sampling of â ∈ [0, 1]).

Methods/LLMs     | Wizard | Llama3 | DPSeek | GPT-3.5 | Mixtral | GPT-4t | GPT-4o
Random Baseline  | 0.5000 | 0.5000 | 0.5000 | 0.5000  | 0.5000  | 0.5000 | 0.5000
Direct Ask       | 0.4915 | 0.4834 | 0.4976 | 0.4390  | 0.5301  | 0.5758 | 0.5479
Write then Ask   | 0.4759 | 0.4497 | 0.4857 | 0.4735  | 0.5677  | 0.5807 | 0.5740
Execute then Ask | 0.5096 | 0.4987 | 0.5848 | 0.6313  | 0.6242  | 0.6641 | 0.5989

Table 2: Execution accuracy (EX) at different support levels. Full user support means B = 1 (see Section 2.2).

Support/LLMs         | Wizard | Llama3 | DPSeek | GPT-3.5 | Mixtral | GPT-4t | GPT-4o
w/o user support     | 0.1721 | 0.1767 | 0.2360 | 0.3064  | 0.2419  | 0.3142 | 0.3096
w/ full user support | 0.2764 | 0.3475 | 0.4185 | 0.4668  | 0.4126  | 0.4889 | 0.5117

4 Main Results
Using the formulation in Section 2.2, we quantify the performance of different methods with the Area Under Delta-Burden Curve (AUDBC) in Table 1. Visualized DBCs are available in the leftmost subplots of Figure 2. Note that AUDBC should only be compared between methods under the same LLM, as it is normalized to the range [0, 1] by dividing the area under the curve by the maximum square area, which depends on the scale of ΔEX and differs across LLMs, as shown in Table 2.
There are three major findings: (1) Execute then Ask consistently improves the performance-burden trade-off for LLMs, although Llama-3-70b-chat fails to outperform the random baseline. (2) The leftmost four LLMs in Table 1 do not surpass the random baseline without the assistance of r̂, indicating that many current LLMs still struggle to determine the need for support based on x and ŷ alone. (3) Despite this, the rightmost three LLMs outperform the random baseline with the Write then Ask (x, ŷ) or even Direct Ask (x) methods. Nevertheless, the inclusion of r̂ remains beneficial for further enhancing the trade-off between performance improvement and user burden. Practical implications of the third point include potential cost savings by trading off the execution of ŷ to obtain r̂ in certain resource-constrained scenarios.
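The scoring function s from Section 3.2 can be made concrete with a short sketch; how the two token log probabilities are fetched from a given serving API is an assumption, and only the softmax computation itself is from the paper.

```python
# Sketch of the scoring function s from Section 3.2: softmax of "Yes"
# over the log probabilities of "Yes" and "No". Fetching logprobs from a
# particular API is an assumption; only the math below follows the paper.
import math

def ask_confidence(logprob_yes: float, logprob_no: float) -> float:
    """Return a_hat in [0, 1]: softmax of Yes over {Yes, No} logprobs."""
    m = max(logprob_yes, logprob_no)  # stabilize the exponentials
    p_yes = math.exp(logprob_yes - m)
    p_no = math.exp(logprob_no - m)
    return p_yes / (p_yes + p_no)

# Example: the model slightly prefers "No", so confidence falls below 0.5.
print(ask_confidence(-1.2, -0.9))  # ~0.4256
```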
5 Discussion
5.1 Analysis on the Delta-Burden Curves
The Delta-Burden Curves (DBCs) plotted in Figure 2 quantify the following practical question: under the same user burden, which method can achieve a larger performance boost? To further analyze how this performance boost is achieved, we decompose the concept into two abilities:
1. The ability to ask for support when the LLM cannot satisfy the instruction originally.
2. The ability to utilize support effectively to flip an incorrect output into a correct output.
For the first ability, we introduce the following metrics, inspired by the precision-recall trade-off:
Precision of Asking for Support (Pask): When the LLM asks for support, it should be the case that the LLM cannot satisfy the instruction originally, or it would cause unnecessary user burden:
$$P_{ask} = \frac{\#(\text{AskForSupport} \;\&\; \text{OriginallyWrong})}{\#\text{AskForSupport}}$$
Recall of Asking for Support (Rask): When the LLM is not able to satisfy the instruction originally, it should identify this need and ask for support:
$$R_{ask} = \frac{\#(\text{AskForSupport} \;\&\; \text{OriginallyWrong})}{\#\text{OriginallyWrong}}$$
PR Curve of Asking for Support: Similar to how the DBC is plotted, one can adjust the threshold τ ∈ [0, 1] from high to low along the x-axis to plot the Precision-Recall Curve of Asking for Support.
For the second ability, we introduce the Flip Rate. This metric is calculated as the proportion of instances where the LLM's initially incorrect answers were corrected after receiving support, divided by the total number of instances where support was requested. Formally, it is defined as:
$$FR = \frac{1}{N_{ask}} \sum_{i=1}^{N_{ask}} \left( h(y_i, \hat{y}_{i,z}) - h(y_i, \hat{y}_i) \right)$$
Figure 2: Performance curves of gpt-3.5-turbo-0125. Curves of other LLMs are shown in Appendix B.

Table 3: Area Under Delta-Burden Curve (AUDBC) with the verbalized token log probabilities approach. Text in bold denotes the method with the best performance, while underlined text means better than random.

Methods/LLMs       | Wizard | Llama3 | DPSeek | GPT-3.5 | Mixtral | GPT-4t | GPT-4o | Gemini | Claude
Random Baseline    | 0.5000 | 0.5000 | 0.5000 | 0.5000  | 0.5000  | 0.5000 | 0.5000 | 0.5000 | 0.5000
EA (real logprobs) | 0.5096 | 0.4987 | 0.5848 | 0.6313  | 0.6242  | 0.6641 | 0.5989 | -      | -
EA (verbalized)    | 0.5011 | 0.5333 | 0.4964 | 0.5945  | 0.6226  | 0.4850 | 0.5152 | 0.5624 | 0.6174

Different from Δ defined in Section 2.2, this metric emphasizes the efficiency of leveraging support instead of the total improvement on the test set. Like the DBC, one may adjust the threshold τ to plot the Flip Rate Curve (FRC).
With these two abilities defined, we plot the DBC, PR Curve, and FRC in Figure 2. Although the Write then Ask method shows near-random performance on the DBC, the PR Curve indicates it achieves better-than-random performance in identifying when support is needed. However, its lower Flip Rate suggests it is less efficient in utilizing the support to correct mistakes. These two abilities, represented by the PR Curve and FRC respectively, balance each other out, resulting in near-random performance on the DBC. This finding shows that the ability to identify the need for support and the ability to utilize that support are distinct. In future work, it is worth exploring how to further enhance each of these abilities.
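For completeness, here is a minimal sketch (an illustration, not the authors' released code) of how Pask, Rask, and the Flip Rate can be computed from per-instance booleans; the variable layout is an assumption, and the Flip Rate shown matches the formal definition under the 0/1 Execution Accuracy scoring.

```python
# Illustrative computation of P_ask, R_ask, and Flip Rate from Section 5.1.
# `asked`, `wrong_before`, `correct_after` are per-instance booleans; this
# layout is an assumption for the sketch.
import numpy as np

def ask_metrics(asked, wrong_before, correct_after):
    asked = np.asarray(asked, bool)
    wrong_before = np.asarray(wrong_before, bool)
    correct_after = np.asarray(correct_after, bool)

    hit = asked & wrong_before                      # asked when originally wrong
    p_ask = hit.sum() / max(asked.sum(), 1)         # precision of asking
    r_ask = hit.sum() / max(wrong_before.sum(), 1)  # recall of asking
    # Flip Rate: wrong-to-correct flips among the asked instances,
    # i.e., the FR formula under 0/1 scoring.
    flips = asked & wrong_before & correct_after
    flip_rate = flips.sum() / max(asked.sum(), 1)
    return p_ask, r_ask, flip_rate
```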
5.2 LLMs without Access to Log Probabilities
Given that not all LLMs provide access to token log probabilities, we discuss how our method can be adapted for these "black-box" models. We modify the prompt template pask to pverb, which instructs the LLM to output the verbalized confidence score â directly by specifying the range and meaning of â ∈ [0, 1] in pverb (attached in Appendix A.2). In addition to the seven LLMs mentioned in Section 3.2, we also include two black-box models: gemini-1.0-pro-001 and claude-3-haiku-20240307. The results, shown in Table 3, indicate that using verbalized confidence scores generally degrades performance for most LLMs. However, it remains a promising alternative for black-box LLMs such as Gemini and Claude to surpass the random baseline.
6 Related Work
The ability of LLMs to identify the need for support relies on their well-calibratedness (Kadavath et al., 2022), which refers to their capacity to recognize uncertainty. Previous studies focus on enhancing the calibration of predictions (Xiao et al., 2022; Kuhn et al., 2023), or on using verbalized token probabilities to achieve better calibration (Tian et al., 2023). Our work extends this line of research by exploring how LLMs can effectively seek user support by leveraging their well-calibrated property. The major distinction between this and existing calibration studies lies in extending the focus from identifying uncertainty to utilizing support.
7 Conclusion
We propose a framework for LLMs to seek support, and evaluate methods on text-to-SQL generation. Our findings suggest the importance of external signals, such as SQL execution results, in helping LLMs better manage the performance-burden trade-off. We further decompose the DBC into the ability to identify the need for support and the ability to utilize that support. Future work may explore a broader range of tasks or develop methods to improve both the identification and utilization of support.
8 Limitations
8.1 Task Coverage
The scope of our experiments is limited to the text-to-SQL task. While this task provides a useful case study for evaluating LLMs' ability to seek and utilize support, it does not encompass the full range of potential applications for LLMs. Future work should extend the evaluation to a broader set of tasks to ensure the generalizability of our findings.
8.2 Types of Support
In this study, we primarily focus on a single type of support: human-annotated external knowledge. However, there are many other types of support that LLMs might require. Future work could explore how LLMs can request and utilize these various forms of support to enhance their performance.
8.3 Dependence on External Feedback
Our findings indicate that LLMs significantly benefit from external signals, such as SQL execution results. However, this reliance on external feedback may not always be feasible in practical applications, where immediate execution or access to external data might be limited. Developing methods that enable LLMs to better manage without such feedback remains an important area for future exploration.
References
Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Yu Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. 2024.
Deepseek-coder: When the large language model meets programming - the rise of code intelligence. ArXiv, abs/2401.14196.
Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2024. Mixtral of experts. ArXiv, abs/2401.04088.
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221.
Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. arXiv preprint arXiv:2302.09664.
Jinyang Li, Binyuan Hui, Ge Qu, Jiaxi Yang, Binhua Li, Bowen Li, Bailin Wang, Bowen Qin, Ruiying Geng, Nan Huo, et al. 2024. Can llm already serve as a database interface? a big bench for large-scale database grounded text-to-sqls. Advances in Neural Information Processing Systems, 36.
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. Wizardcoder: Empowering code large language models with evol-instruct. ArXiv, abs/2306.08568.
Matjaž Majnik and Zoran Bosnić. 2013. Roc analysis of classifiers in machine learning: A survey. Intelligent Data Analysis, 17(3):531–558.
OpenAI. 2023. Gpt-4 technical report.
Mohammadreza Pourreza and Davood Rafiei. 2023. Evaluating cross-domain text-to-sql models and benchmarks. arXiv preprint arXiv:2310.18538.
Vipula Rawte, Amit Sheth, and Amitava Das. 2023. A survey of hallucination in large foundation models. arXiv preprint arXiv:2309.05922.
Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D Manning. 2023. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5433–5442.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
Yuxin Xiao, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2022. Uncertainty quantification with pre-trained language models: A large-scale empirical analysis. arXiv preprint arXiv:2210.04714.
A Prompt Templates
We include the prompt templates used in this work.
A.1 Prompt for Seeking Support
The prompt template pask(w) used to instruct LLMs to seek support is as follows:
You are currently doing the text-to-SQL task. Based on the information provided ({items}), you have to determine whether additional hints are required for you to generate the SQL correctly to answer the user's question. You should only ask for additional hints when you actually need them, since you will also be evaluated based on the number of times you ask for hints, which would be provided by the user.
information provided (enclosed by triple backticks):
```
{information}
```
Answer a single word Yes if you need hints (since the information provided is not enough to generate SQL correctly). Answer a single word No if hints are not required (since you are already confident to generate SQL).
Do you need additional hints? Answer (Yes / No):
In this template, the actual contents of {items} and {information} depend on the method used. The contents are summarized in Table 4. For example, w = (db, x, ŷ, r̂) in Execute then Ask (EA), so {items} will be filled with the four item names and {information} will be replaced by the actual information of the four items. The same applies to Write then Ask (w = (db, x, ŷ)) and Direct Ask (w = (db, x)).

Table 4: Contents in the prompt pask(w). {items} is filled with the words in the "Item Name" column, while {information} is replaced with the actual information of the corresponding placeholders.

Item | Item Name              | Information
db   | Database schema        | {db_schema}
x    | User's question        | {question}
ŷ    | Generated SQL          | {gen_sql}
r̂    | SQL execution results  | {exe_results}

A.2 Prompt for Seeking Support (Verbalized)
The prompt template for generating verbalized probabilities in LLMs without access to token log probabilities (e.g., the Gemini and Claude families):
You are currently doing the text-to-SQL task. Based on the information provided ({items}), you have to determine whether additional hints are required for you to generate the SQL correctly to answer the user's question. You should only ask for additional hints when you actually need them, since you will also be evaluated based on the number of times you ask for hints, which would be provided by the user.
information provided (enclosed by triple backticks):
```
{information}
```
Do you need additional hints? Provide the precise probability that you need hints (closer to 0 means you don't need hints, closer to 1 means you need hints). Give ONLY the precise probability to five decimal places (format: 0.abcde, where abcde can be different digits), no other words or explanations are needed.
The prompt template is similar to the original template shown in A.1, except that the last few sentences are modified.
A.3 Prompt for Generating SQL Code
The prompt template p(·) for converting the user data requirement x into SQL code is as follows:
{db_schema}
-- Using valid SQLite, answer the following questions for the tables provided above.
-- Question: {question}
Now, generate the correct SQL code directly in the format of ```sql\n<your_SQL_code>\n```:
If user support z is provided (i.e., when LLMs ask for support), the prompt template is slightly modified as follows:
{db_schema}
-- External Knowledge: {support}
-- Using valid SQLite, answer the following questions for the tables provided above. You can use the provided External Knowledge to help you generate valid and correct SQLite.
-- Question: {question}
Now, generate the correct SQL code directly in the format of ```sql\n<your_SQL_code>\n```:
In these two templates, {db_schema} is db, {question} is the user data requirement x, and {support} is the user support z, i.e., the human-annotated external knowledge in BIRD (Li et al., 2024).
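As an illustration of how pask(w) is assembled from the items in Table 4, here is a small sketch; the helper name and the condensed template wording are assumptions paraphrased from A.1, not the released code.

```python
# Hypothetical assembly of p_ask(w) following Table 4; the helper name and
# the exact template wording here are illustrative, not the released code.
ITEM_NAMES = {
    "db": "Database schema",
    "x": "User's question",
    "y_hat": "Generated SQL",
    "r_hat": "SQL execution results",
}

def build_ask_prompt(w: dict) -> str:
    items = ", ".join(ITEM_NAMES[k] for k in w)  # fills {items}
    information = "\n".join(f"{ITEM_NAMES[k]}: {v}" for k, v in w.items())
    return (
        "You are currently doing the text-to-SQL task. Based on the information "
        f"provided ({items}), you have to determine whether additional hints are "
        "required for you to generate the SQL correctly.\n"
        "information provided (enclosed by triple backticks):\n"
        f"```\n{information}\n```\n"
        "Do you need additional hints? Answer (Yes / No):"
    )

# Execute then Ask: w = (db, x, y_hat, r_hat)
w = {"db": "CREATE TABLE ...", "x": "Total sales in 2020?",
     "y_hat": "SELECT ...", "r_hat": "[(42,)]"}
print(build_ask_prompt(w))
```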
B Performance Curves
We present visualizations of all performance curves in Figures 3, 4, 5, 6, 7, 8, and 9. Each figure shows the Delta-Burden Curve (DBC), the PR Curve of Asking for Support, and the Flip Rate Curve (FRC) for the Random, Direct Ask, Write then Ask, and Execute then Ask methods.
Figure 3: Performance curves of WizardCoder-Python-34B-V1.0.
Figure 4: Performance curves of Llama-3-70b-chat-hf.
Figure 5: Performance curves of deepseek-coder-33b-instruct.
Figure 6: Performance curves of gpt-3.5-turbo-0125.
Figure 7: Performance curves of Mixtral-8x22B-Instruct-v0.1.
Figure 8: Performance curves of gpt-4-turbo-2024-04-09.
Figure 9: Performance curves of gpt-4o-2024-05-13.
ai_researcher
1
Re-imagining_health_and_well-being_in_low_resource_African_settings_using_an_augmented_AI_system_and_a_3D_digital_twin.pdf
Person Re-Identification using Deep Learning Networks: A Systematic Review
Ankit Yadav1, Dinesh Kumar Vishwakarma2
Biometric Research Laboratory, Department of Information Technology, Delhi Technological University, Bawana Road, Delhi-110042, India
[email protected], [email protected]
Abstract: Person re-identification has received a lot of attention from the research community in recent times. Due to its vital role in security based applications, person re-identification lies at the heart of research relevant to tracking robberies, preventing terrorist attacks and other security critical events. While the last decade has seen tremendous growth in re-id approaches, very little review literature exists to comprehend and summarize this progress. This review deals with the latest state-of-the-art deep learning based approaches for person re-identification. While the few existing re-id review works have analysed re-id techniques from a singular aspect, this review evaluates numerous re-id techniques from multiple deep learning aspects such as deep architecture types, common Re-Id challenges (variation in pose, lighting, view, scale, partial or complete occlusion, background clutter), multi-modal Re-Id, cross-domain Re-Id challenges, metric learning approaches and video Re-Id contributions. This review also includes several re-id benchmarks collected over the years, describing their characteristics, specifications and top re-id results obtained on them. The inclusion of the latest deep re-id works makes this a significant contribution to the re-id literature. Lastly, the conclusion and future directions are included.
Keywords: Person Re-Identification, Deep Learning, Convolutional Neural Network, Feature Extraction & Fusion
1. Introduction
Security and surveillance based applications in computer vision have gained tremendous popularity in recent times. Currently, security surveillance systems record videos and images whose analysis requires manual human interaction. Looking into yesterday's robbery can be challenging, as it may involve a manual search through twenty hours of surveillance videos by humans prone to fatigue and error. The problem quickly becomes infeasible as the time span of the recorded media under analysis increases. The development of machine learning and, later, deep learning approaches has opened a wide range of advanced possibilities that could lead to safer homes, offices, neighbourhoods, bus stops, airports etc. The idea that machines can be taught to identify individuals of interest is a promising step towards a more secure environment.
Person re-identification means finding a person of interest in a query image/video from a large collection of recorded images and/or videos. While machine learning algorithms played a crucial part in the early days, re-id approaches have made significant improvements with the rise of deep learning based systems [1]. Several deep learning based approaches proposed in recent years have boosted matching accuracy results, significantly outperforming handcrafted feature based machine learning algorithms [2], [3]. Since deep learning models require a large number of training samples, recent years have also witnessed the collection of several medium to large-scale re-id datasets for training and testing different deep learning based re-id approaches.
2. Research Methodology
This section details the major contributions of this review, the techniques followed in preparing it, and a comparison with some of the existing deep Re-Id reviews/surveys.
2.1 Contributions of this Review
This review studies deep learning based person re-identification. While person re-identification is not a new topic in the research community, deep learning based methods have become increasingly popular due to their tremendous success in various computer vision domains. Hence, it is natural that deep learning based methods take the lead in the Re-Id problem due to their superior feature learning capability when compared to hand-crafted feature based methods.
Fig. 1 Organization of this Review (Introduction; Research Methodology: Comparison to Previous Reviews, Review Techniques, Contributions of this Review; ReID Datasets: Image Based Datasets, Video Based Datasets; Image Based ReID: ReID Architectures, ReID Challenges, ReID Modality, Cross-Domain ReID, ReID Metric Learning; Video Based ReID: Advantages, Challenges; Conclusion and Future Directions; References)
The focus of this review is to conduct an exhaustive study of the recent deep learning based Re-Id methods, specifying the various benchmark datasets and the classification of various image and video based deep methods as shown in Fig. 1, which describes the organization of this review. Recent years have witnessed a sharp increase in the number of deep learning based Re-Id approaches, and this review cites the latest developments in deep learning based Re-Id research. Fig. 2 describes the taxonomy of deep learning based Re-Id methods included in this review. Re-Id methods have been categorized into image-based and video-based methods. Image-based contributions have been explored from numerous aspects such as the architectures involved, common visual challenges, modality specific methods, cross-domain approaches and metric learning methods.
Fig. 2 Taxonomy of Re-Id methods (Image based ReID: Deep ReID Architectures: Classification, Verification, Triplet based, Part based and Attention models; Methods based on ReID Challenges: Pose Variation, Lighting Variation, Scale Variation, View Variation, Misalignment, Occlusion, Background Clutter; Modality based ReID: RGB Image based methods and Cross-Modality Based ReID methods: RGB + IR, RGB + Depth, RGB + Text; Cross-Domain ReID; Metric Learning based ReID: DTML, DHSL, RDML, DLML, MvDML, PIDML. Video based ReID: Advantages, Challenges)
The major contributions of this review are as follows:
• Provides a comprehensive review of deep learning based person re-identification methods.
• Conducts an exhaustive study of deep Re-Id methods (Tables 3-14) by describing the "key ideas" behind the numerous approaches mentioned.
• Since deep learning methods have gained popularity in recent years, this exhaustive review automatically incorporates the most recent contributions to deep learning based Re-Id approaches.
• Details several image based and video based benchmark datasets, specifying their technical specifications, challenges posed by samples and top results reported on them.
• Analyses deep learning based methods from several crucial aspects like architecture types, loss functions, Re-Id challenges, data modalities, cross-domain approaches and metric learning methods, helping readers to understand and appreciate deep Re-Id from multiple perspectives.
• Explores the growing popularity of video based deep Re-Id methods that combine temporal data, providing important motion cues, with the usual visual characteristics.
• Helps readers gain a comprehensive and exhaustive understanding of the recent deep Re-Id contributions by categorically analysing them from numerous perspectives like architecture, challenges, modality, cross-domain approaches, metric learning and video based methods.
2.2 Review Techniques
This review includes journal papers, conference and workshop papers from several well-known repositories including IEEE Xplore, ScienceDirect, Springer, ACM and Google Scholar. The keywords used to search for relevant contributions include "person re-identification", "Re-Id", "deep", "deep learning", "review" and "survey". Due to the growing popularity of deep approaches, this initial search resulted in a comprehensive list of contributions. Higher priority was given to publications from high quality journals such as IEEE Transactions, Pattern Recognition, Neurocomputing etc. and top conferences such as CVPR, ECCV and ICCV. A separate search was conducted to include Re-Id dataset contributions. Finally, the included papers were analysed and a taxonomy of deep Re-Id methods was formulated, as demonstrated in Fig. 2. Fig. 3 shows the graph of year-wise deep Re-Id contributions cited in this review, clearly depicting a high percentage of recent deep Re-Id works.
Fig. 3 Count of cited articles (year-wise); the y-axis shows the number of papers (0 to 70) for the years up to 2015, 2016, 2017, 2018, 2019 and 2020.
2.3 Comparison with Existing Reviews
While the number of deep Re-Id implementations has grown exponentially in recent years, there are very few reviews/surveys keeping pace with the growth of deep Re-Id research. This section presents a comparative analysis of this review against some existing deep learning based Re-Id surveys. Compared to other surveys, this review includes the latest research publications (up to 2020). Table 1 presents Re-Id comparisons on the basis of several deep learning aspects such as architecture types, Re-Id datasets, challenges, modality, cross-domain approaches and metric learning methods. The green rows indicate the presence of theoretical analysis while the orange rows indicate the presence of tabular information.
TABLE 1 COMPARISON OF THIS CURRENT REVIEW WITH THE EXISTING DEEP LEARNING BASED RE-ID REVIEWS/SURVEYS. ROWS IN GREEN INDICATE THEORETICAL ANALYSIS WHILE ORANGE ROWS DEMONSTRATE TABULAR INFORMATION

Deep Re-Id Reviews/Surveys | Year of Publication | Year of Latest Citation | Deep Re-Id Analysis (Theory/Table ✓ marks across: Re-Id Datasets; Image & Video Re-Id Methods; Architecture Types; Challenges; Modality; Cross Domain; Metric Learning)
Wang et al. [4]     | 2018 | 2018 | ✓ ✓
Almasawa et al. [5] | 2019 | 2019 | ✓ ✓ ✓ ✓ ✓ ✓
Wu et al. [6]       | 2019 | 2018 | ✓ ✓ ✓ ✓ ✓ ✓
Islam [7]           | 2020 | 2020 | ✓ ✓ ✓ ✓ ✓
Current Review      | -    | 2020 | ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓

Table 1 clearly shows the comprehensive nature of this review when compared to other existing surveys, presenting both theoretical and tabular analysis of various deep Re-Id aspects.
3. Benchmark Datasets for Person Re-identification
Several benchmark datasets have been collected over the years to train and test the robustness of person re-identification systems. These datasets are useful in validating different re-identification approaches in terms of recognition accuracy. Table 2 gives detailed information about the various re-id datasets.

TABLE 2 RE-ID BENCHMARK DATASETS

Image Re-Id Datasets
Dataset | Release Year | Cameras | Identity Count | Images/Videos Count | Challenges | Top Results - Rank 1 Accuracy (%)
VIPeR [8] | 2007 | 2 | 632 | 1264 images | Viewpoint variation | 67.21 [9]
iLIDS [10] | 2009 | 2 | 119 | 476 images | Illumination variation and occlusion | 82.20 [11]
GRID [12] | 2010 | 8 | 1800 | - | Viewpoint, lighting and color variations, occlusion | 28.00 [13]
CAVIAR4RE-ID [14] | 2011 | 2 | 72 | 720 images | Resolution, light and pose variations, occlusion | 53.60 [15]
CUHK01 [16] | 2012 | 2 | 971 | 1942 images | Pose, viewpoint and lighting variations | 98.73 [17]
RGBD-ID [18] | 2012 | 1 (RGB-D) | 79 | 316 images | View variation and same clothing for different identities | 76.70 [19]
CUHK02 [20] | 2013 | 10 | 1816 | 7264 images | Illumination and pose variations, partial occlusion | -
CUHK03 [21] | 2014 | 2 | 1360 | 13164 images | Alignment variation, occlusion, missing body parts | 96.43 [2]
Market-1501 [22] | 2015 | 6 | 1501 | 32668 images | Illumination, scale and pose variations, partial occlusion | 95.34 [17]
Kinect-Re-Id [23] | 2015 | 1 (RGB-D) | 71 | 483 videos | Viewpoint and lighting variations | 99.40 [19]
DukeMTMC-Re-Id [24] | 2017 | 8 | 1852 | - | Illumination, view and pose variations, background clutter, occlusion | 88.19 [17]
RegDB [25] | 2017 | 1 RGB + 1 IR | 412 | 4120 images + 4120 thermal images | Pose, distance and lighting variation | 48.43 [26]
SYSU-MM01 [27] | 2017 | 4 RGB + 2 IR | 491 | 287628 images + 15792 IR images | Color and exposure variation | 65.10 [28]
Airport [29] | 2019 | 6 | 9651 | 3.13 images per person on average | Viewpoint and illumination variation, detection error, occlusion, background clutter | -

Video Re-Id Datasets
Dataset | Release Year | Cameras | Identity Count | Images/Videos Count | Challenges | Top Results - Rank 1 Accuracy (%)
ETHZ [30] | 2007 | 2 | - | 100 to 150 images per individual | Partial occlusion | 93.00 [31]
PRID2011 [32] | 2011 | 2 | 983 | 500 videos | Viewpoint, pose, lighting and background variations | 98.80 [33]
3DPES [34] | 2011 | 6 | 200 | - | - | 72.23 [35]
iLIDS-VID [36] | 2014 | 2 | 300 | 600 videos, 73 frames per video on average | Viewpoint variation, lighting, occlusion, cluttered background, similar clothing for different identities | 88.00 [37]
MARS [38] | 2016 | 6 | 1261 | 20715 videos | Viewpoint and pose variations, partial occlusion, detection error, small inter-class and large intra-class variation | 87.30 [39]
DukeMTMC-VideoRe-Id [40] | 2019 | - | 1812 | 12 frames per second per tracklet; 2196 training tracklets with 369656 frames; 2636 testing tracklets with 445764 frames | Lighting, pose and viewpoint variation, noisy background, occlusion | -

Re-Id datasets can be broadly classified into two types: image-based and video-based datasets.
3.1. Image Based Re-Id Datasets
The VIPeR [8] dataset was created for viewpoint invariant pedestrian recognition. It contains 1264 images of 632 identities captured from 2 cameras. The iLIDS [10] dataset was acquired at an airport arrival hall and has 476 images of 119 identities from 2 cameras. The GRID [12] dataset was acquired from a busy underground train station and contains images of 800 identities from 8 cameras. CUHK03 [21] is another large scale re-id dataset having 13164 images of 1360 identities collected using 2 cameras.
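Since Table 2 reports rank-1 accuracy, a brief sketch of how rank-k matching accuracy (a point on the CMC curve) is typically computed may help; this is a generic illustration, not code from any of the cited works.

```python
# Generic rank-k (CMC) accuracy sketch: for each query, rank gallery
# samples by feature distance and check whether the true identity appears
# in the top k. Illustrative only; not from any cited paper.
import numpy as np

def rank_k_accuracy(query_feats, query_ids, gallery_feats, gallery_ids, k=1):
    gallery_ids = np.asarray(gallery_ids)
    hits = 0
    for feat, qid in zip(query_feats, query_ids):
        dists = np.linalg.norm(gallery_feats - feat, axis=1)  # Euclidean
        top_k_ids = gallery_ids[np.argsort(dists)[:k]]
        hits += int(qid in top_k_ids)
    return hits / len(query_ids)
```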
Datasets like Market-1501 [22], DukeMTMC-Re-Id [24] and CUHK02 [20] have used 6, 8 and 10 cameras respectively, thereby increasing the number of camera views used to collect people images. A few multi-modal re-id datasets also exist, such as RGBD-ID [18] and Kinect-Re-Id [23] based on RGB-depth images, and RegDB [25] and SYSU-MM01 [27] containing both RGB and infrared images. The largest image based re-id dataset is Airport [29], collected from 6 cameras of a mid-sized airport and containing images of 9651 identities. Fig. 4 shows some sample images from the Market-1501 dataset.
Fig. 4 Sample Images from Market-1501 dataset
3.2. Video Based Re-Id Datasets
The PRID2011 [32] dataset contains 983 identities collected from 2 cameras. iLIDS-VID [36] contains 600 videos of 300 people from 2 cameras. Another popular video based re-id dataset is the large scale MARS [38], which contains 20715 videos of 1261 identities from 6 cameras. The most recent addition to the video re-id datasets is DukeMTMC-VideoRe-Id [40], containing 1812 identities.
Considering the growth of re-id datasets over the years, several inferences can be made. Firstly, re-id datasets have grown both in number and scale, which is a great benefit since deep models require a large amount of samples for effective training. Secondly, the variety of samples within these datasets presents numerous re-id challenges such as variations in pose, lighting and scale, occlusion, background clutter, and same people wearing different clothing (large intra-class disparity) or different people wearing similar clothing (small inter-class disparity), thereby allowing deep models to learn effective generalization of appearance. Thirdly, very few datasets are multi-modal [23], [18], leading to an over-reliance on RGB image and video based approaches. Fourthly, in a supervised environment, deep models require labelled samples for learning. As the size of datasets grows, it becomes less feasible to annotate them manually. While most of the datasets discussed above have manually annotated samples, datasets like Market-1501 or CUHK03 have used the Deformable Part Model (DPM) [41] for sample labelling.
4. Image Based Deep Re-Id Contributions
This section details the recent image based deep Re-Id contributions. These contributions have been categorized according to the following aspects: 1) Architecture types for Re-Id 2) Re-Id challenges 3) Data Modality for Re-Id 4) Cross-Domain Re-Id 5) Metric Learning for Re-Id. These categories are not exclusive and often overlap in various implementations, but each has a distinct conceptual aspect.
4.1 Deep Re-Id Architecture Types
This section talks about the different kinds of architectures used for deep learning based re-identification. Specifically, the Re-Id contributions have been categorized as 1) Classification Models 2) Verification Models 3) Triplet Based Models 4) Part-Based Models 5) Attention Based Models, as shown in Fig. 5.
Fig. 5 Different Kinds of Deep Architectures used for Re-Id methods (Classification, Verification, Triplet Based, Part Based and Attention Models)
4.1.1 Classification Models for Re-Id
Classification models (also known as identification models) consider Re-Id as a multi-class classification problem [42]. Given a dataset with a finite number of identities, each identity having a number of samples, these models are trained using identity labels from samples. Classification models can be formally described as follows.
Let there be an image gallery of κ people, P = {p1, p2, p3, ..., pκ}, with identity labels L = {l1, l2, l3, ..., lκ}. The model is trained with labels L from various sample identities of people P. After training, given a query sample px having identity lx, the classification model tries to output a high score for label x and low scores for all other identity labels.
Since Re-Id models are trained on samples from various identities, they require a large number of samples per identity to capture diverse features of each individual. A lack of diverse samples often leads to over-fitting. The softmax loss is usually employed in classification models, as it encourages the separation of different identity samples [43]. However, Re-Id presents large intra-class variations such as pose variations, view variations, lighting variations, occlusion, background clutter etc., for which the softmax loss has performed poorly. Several improvements have been suggested over the softmax loss to handle these intra-class challenges.
Zhu et al. [43] aim to overcome the inability of the softmax loss to handle intra-class variations by using it in conjunction with the center loss [44], which was originally used for facial recognition. The authors train a convolutional neural network (CNN) with the proposed combination of softmax and center loss to extract discriminative features and obtain better Re-Id results, as shown in Fig. 6.
Fig. 6 CNN training based on combination of Softmax and Center loss [43]
Zhong et al. [42] enhance the Re-Id classification performance by using a multi-loss training setup having a combination of the softmax loss, the center loss and a novel "inter-center loss". While the softmax loss differentiates between different identity samples, the center loss brings same-class identities closer to their center and the inter-center loss maximizes the distance between centers of different identities, as shown in Fig. 7.
Fig. 7 a) Softmax loss separates different identity samples. b) Center loss pulls same class samples closer to their center. c) Inter-Center loss pushes different identity centers further apart [42]
Fan et al. [45] propose a novel "Sphere Softmax Loss" by modifying the softmax loss. Instead of mapping sample images to a Euclidean space embedding, the sphere loss maps sample images to the surface of a hypersphere manifold, thereby limiting the spatial distribution of data points to angular variations. The proposed loss trains using the angle between the sample vector and the target class vector.
4.1.2 Verification Models for Re-Id
Verification models consider Re-Id to be a binary classification problem. Given a pair of images, these models classify them as either same or different. These models implement a pair of CNNs to extract features from the input pair and compare their similarity. Verification models use the contrastive loss [46], which was first used for dimensionality reduction. In Re-Id, the contrastive loss tries to pull same-identity pairs to zero distance in the feature space while pushing different-identity pairs beyond a given margin.
Verification models suffer from the class imbalance problem. Consider a dataset having K identities, each of which has M image samples. The dataset is balanced with respect to different identity samples. However, if we consider the number of positive and negative samples available with respect to a single identity, there are M positive samples and (K-1)M negative samples. This leads to class imbalance when training verification based models.
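As a concrete illustration of the softmax-plus-center-loss combination described above [43], [44], here is a minimal PyTorch sketch; the weighting factor lam, the embedding size and the module layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of the softmax + center loss combination [43], [44].
# The weighting factor `lam` and the feature dimension are illustrative choices.
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        # One learnable center per identity.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        # Pull each sample toward the center of its identity.
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

num_ids, feat_dim, lam = 751, 256, 0.005
backbone_out = torch.randn(32, feat_dim)   # CNN features for a batch
labels = torch.randint(0, num_ids, (32,))
classifier = nn.Linear(feat_dim, num_ids)
center_loss = CenterLoss(num_ids, feat_dim)

loss = nn.functional.cross_entropy(classifier(backbone_out), labels) \
       + lam * center_loss(backbone_out, labels)
loss.backward()
```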
Zhang et al. [47] combine the verification and classification Re-Id models to learn "deep features from body and parts" (DFBP). Specifically, the verification model is implemented using two neural networks trained by comparing body parts from image sub-regions of the input pair, while global-region features are extracted from a classification model to learn body-based features. The concatenated body-based and part-based features form the final representation.
Zhong et al. [48] propose a novel "Feature Aggregation Network" (FAN) which also combines the classification and verification tasks. FAN extracts multi-level CNN features from input image pairs. Then, a Recurrent Comparative Network (RCN) containing an attention module compares the appearance of input image pairs for the verification loss. The CNN features are pooled directly using Global Average Pooling (GAP) for the classification loss. Fig. 8 shows the proposed model.
Fig. 8 Overview of proposed hybrid model. CNN features are extracted from input image pair using FAN networks. RCN learns joint representation for verification task while GAP is used for recognition task [48].
4.1.3 Triplet Based Re-Id Models
Triplet models for Re-Id take triplet input units. Each triplet unit contains three image samples: the anchor, a positive sample (having the same identity as the anchor) and a negative sample (a different identity from the anchor). The triplet loss [49] is trained to keep the Euclidean distance between the anchor and the positive sample smaller than that between the anchor and the negative sample. Let $T_i = \langle T_i^a, T_i^p, T_i^n \rangle$ represent the i-th triplet, having anchor image $T_i^a$, positive sample $T_i^p$ and negative sample $T_i^n$. Let $\Gamma(I)$ represent the extracted CNN features for image I, and let $\|x\|$ represent the $\mathcal{L}_2$ norm. The proposed triplet loss enforces the following condition:
$$\|\Gamma(T_i^a) - \Gamma(T_i^p)\| < \|\Gamma(T_i^a) - \Gamma(T_i^n)\| \quad (1)$$
Fig. 9 demonstrates the preservation of condition (1).
Fig. 9 a) Three triplet units having anchor identity (yellow triangle), positive sample (also yellow triangle) and negative sample (green/purple/red shapes). b) The Triplet Loss forces the features of positive samples closer while pushing the negative samples away [49]
The main drawback of triplet models is that they only use weak annotations from a triplet to learn discriminative features, unlike a classification model learning from the numerous samples of a given identity available in the dataset [50]. The traditional triplet loss has faced the issue of slow convergence, and hence several improvements have been formulated to improve the discrimination ability of triplet based models. Table 3 demonstrates these novel improved triplet losses.

TABLE 3 TRIPLET LOSS IMPROVEMENTS FOR RE-IDENTIFICATION

Reference | Year | Triplet Loss Improvement | Benefit
Ding et al. [49] | 2015 | Triplet Loss | Works on relative distance among intra-class and inter-class identities.
Cheng et al. [51] | 2016 | Improved Triplet Loss | Ensures intra-class compactness.
Hermans et al. [52] | 2017 | Batch Hard Triplet Loss | Removes the triplet mining step overhead; learns from similar inter-class and varying intra-class samples.
Zhu et al. [53] | 2017 | Hash Based Triplet Loss | Ensures that the Hamming distance of hash codes from the anchor and positive sample is less than that of the anchor and negative sample.
Su et al. [54] | 2018 | Attribute Triplet Loss | Learns a large number of human attributes considering their contextual cues.
Wu et al. [50] | 2019 | OIM + Improved Triplet Loss | Learns a similarity metric and fully utilizes the label information of samples.
Yuan et al. [55] | 2019 | Mini-Cluster Loss | Reduces intra-class and enlarges inter-class differences.
Si et al. [56] | 2019 | Compact Triplet Loss | Reduces intra-class and enlarges inter-class differences.
Yang et al. [57] | 2019 | Adaptive Nearest Neighbour Loss (ANN) | Solves slow convergence and local optima for triplet based losses.
Choe et al. [31] | 2019 | Mixed Distance Maximization Loss | Maximizes intra-distance and keeps the triplet distance larger than the sample distance.
Zhou et al. [1] | 2019 | Symmetric Triplet Loss | Gradients derived for positive samples are symmetric, allowing consistent minimization of intra-class distances.
Fan et al. [45] | 2019 | Sphere Softmax Loss | Maps image samples to a hypersphere manifold.
Zhang et al. [58] | 2020 | Hybrid Triplet Loss (HTL) | Learns domain-invariant and camera-invariant properties.
Jiang et al. [59] | 2020 | Weighted Triple-Sequence Loss (WTSL) | Reduces the impact of outlier frames in video sequences.
Zhang et al. [60] | 2020 | Wasserstein Triplet Loss | Uses the Wasserstein distance to rearrange the global distance between samples.
Sikdar et al. [61] | 2020 | Batch Adaptive Triplet Loss | Exponential learning from hard positives compared to easier positives in the triplet scheme.
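To make condition (1) concrete before moving to part-based models, here is a short PyTorch sketch of the margin-based triplet loss from [49]; the margin value and batch shapes are illustrative assumptions.

```python
# Sketch of the margin-based triplet loss enforcing condition (1) [49]:
# the anchor-positive distance should stay below the anchor-negative
# distance by at least `margin`. The margin value is illustrative.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    d_ap = F.pairwise_distance(anchor, positive)  # ||Gamma(T_a) - Gamma(T_p)||
    d_an = F.pairwise_distance(anchor, negative)  # ||Gamma(T_a) - Gamma(T_n)||
    return F.relu(d_ap - d_an + margin).mean()

# Embeddings Gamma(.) for a batch of 16 triplets with 256-d features.
a, p, n = (torch.randn(16, 256) for _ in range(3))
print(triplet_loss(a, p, n))
```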
4.1.4 Part-Based Re-Id Models
In the initial years, deep Re-Id methods focussed mostly on extracting global image-level features to identify individuals. However, this approach quickly became ineffective in handling small inter-class variations, such as distinguishing different people wearing same-color clothes. This has led to a gain in popularity of part-based Re-Id methods due to their superior discrimination capability based on finer part-level cues, which are usually suppressed while extracting global features [2]. Part-based Re-Id methods extract different image regions to find discriminative part-level features.
Fig. 10 Multiple feature attention networks used to generate different levels of part features. Extracted feature vectors are concatenated to obtain global representations [62]
Yan et al. [62] propose a feature attention block for part-based Re-Id. The authors slice feature maps into spatial features and assign them weights, thereby highlighting the important part regions, as demonstrated in Fig. 10. Tian et al. [63] propose a novel Strong Part Based Network (SP_Net) that divides feature maps into N parts, thereby learning part-level features using N part losses combined to obtain a local loss. The local loss is combined with the global loss in a weighted manner to obtain discriminative capabilities. The major challenges faced by part-based models are the variations in pose, alignment and scale of the corresponding image regions (parts) under comparison.
4.1.5 Attention-Based Re-Id Models
Attention based Re-Id models aim to selectively choose regions of high interest from the input information. The proposed "attention modules" focus on extracting regions containing highly discriminative features while ignoring other regions having little or no discriminative capability. Such an approach of targeting specific regions helps to overcome Re-Id challenges like background clutter, misalignment etc. [64]. Attention models have proven their superior performance in several computer vision applications including Re-Id with the growth of Recurrent Neural Networks (RNN) based on Long Short Term Memory (LSTM) [65]. Various Re-Id implementations have incorporated the attention mechanism to enhance their performance. Table 4 presents an analysis of the contributions of attention modules in Re-Id methods.
TABLE 4 CONTRIBUTIONS OF ATTENTION MODULES IN RE-ID METHODS

Reference | Year | Region of Attention | Benefit
Yang et al. [2] | 2019 | Whole body and body parts | Discriminative feature extraction
Bao et al. [66] | 2019 | Body parts | Robust to part misalignment and background clutter
Zhou et al. [1] | 2019 | Image foreground | Robust to background clutter
Wu et al. [67] | 2019 | Spatial regions in video frames, temporal pooling over entire video | Extracts discriminative Re-Id features from essential frames within a video
Li et al. [68] | 2019 | Convolutional features | Learns interdependence of channels within convolutional features
Zhang et al. [37] | 2019 | Video frames | Selects informative frames for dimensionality reduction of features
Wan et al. [69] | 2019 | Spatial region of input images | Local parts discovery
Tay et al. [70] | 2019 | Physical appearance attributes like gender, hair, upper clothing color, lower clothing color, pants etc. | Discriminative representation of identity based on appearance attributes (Fig. 11)
Hou et al. [71] | 2019 | Generated frames created by the GAN generator | Generates occluded regions in video frames using a temporal attention module to remove occlusion
Subramaniam et al. [72] | 2019 | Common visual features across frames of a video | Robust to background noise; extracts common features in video frames
Chen et al. [73] | 2019 | Spatial and channel-wise attention | Improved attention quality due to self-critical attention learning
Zhang et al. [74] | 2020 | Video frames focussed at multiple scales | Spatio-temporal video representations
Li et al. [75] | 2020 | Global image features at multiple scales | Robust to spatial misalignment; captures local feature dependencies
Qian et al. [17] | 2020 | Spatial regions of input, multi-scale features | Discriminative feature extraction
Zhang et al. [60] | 2020 | Global and local features | Robust to misalignment; helps to distinguish between important and misleading parts

Fig. 11 Attribute Attention Map (AAM) generated from six heat maps corresponding to attributes such as gender, hair, clothing etc. [70]
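Several of the channel-attention entries in Table 4 follow a squeeze-and-excitation style of gating; the sketch below is a generic PyTorch illustration of such a module, not the implementation of any specific cited method.

```python
# Generic channel-attention (squeeze-and-excitation style) sketch, as an
# illustration of the attention modules surveyed in Table 4. Not the
# implementation of any specific cited method.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (B, C, H, W) feature map
        w = x.mean(dim=(2, 3))            # squeeze: global average pool
        w = self.gate(w)                  # excite: per-channel weights
        return x * w[:, :, None, None]    # re-weight the channels

feats = torch.randn(8, 64, 24, 12)        # batch of pedestrian feature maps
print(ChannelAttention(64)(feats).shape)  # torch.Size([8, 64, 24, 12])
```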
Table 5 gives a detailed overview of various deep Re-Id contributions based on their architecture types.

TABLE 5 RE-ID CONTRIBUTIONS BASED ON DIFFERENT ARCHITECTURES
Reference | Year | Architecture | Key Idea | Loss Function
Ding et al. [49] | 2015 | Triplet | Propose the "triplet loss", which utilizes three samples to learn discriminative features, along with a triplet generation scheme | Triplet loss
Zhu et al. [43] | 2017 | Classification | Combine softmax and center loss [44] to improve the discriminative capability of CNN features | Softmax + Center loss
Zhu et al. [53] | 2017 | Triplet + Part | Propose a "Part-Based Deep Hashing" (PDH) network that integrates hashing for higher efficiency of large-scale Re-Id; a triplet loss reduces the Hamming distance between positive sample parts and increases that between negative sample parts | Triplet loss
Huang et al. [76] | 2017 | Part | Propose DeepDiff, a novel method to evaluate similarity between corresponding parts using original data, feature maps and spatial variations from three deep subnets | Softmax loss
Zhu et al. [43] | 2018 | Classification | Combine softmax and center loss to overcome intra-class variations in samples | Softmax loss + Center loss
Koo et al. [77] | 2018 | Part | Use information from face and body to obtain discriminative representations in indoor camera surveillance environments | Softmax loss
Tao et al. [78] | 2018 | Triplet | Propose a "Deep Multi-View Feature Learning" (DMVFL) method that fuses handcrafted and deeply learned features to obtain robust representations | Triplet loss
Su et al. [54] | 2018 | Triplet | Propose a three-stage "Weakly Supervised Multi-Type Attribute Learning Framework" using a novel "Attribute Triplet Loss" to predict visually invariant features containing contextual cues | Attribute triplet loss
Zhang et al. [47] | 2018 | Classification + Verification + Part | Learn deep features from body parts and the entire body using verification and classification models respectively to obtain the final representation | Softmax loss
Zhong et al. [42] | 2019 | Classification | Propose a novel "Inter-Center Loss" to improve the discriminative capability of CNN features | Softmax + Center + Inter-Center loss
Fan et al. [45] | 2019 | Classification | Propose a novel CNN-based network, SphereReID, adopting a novel sphere loss that maps sample images to a hypersphere manifold, with a balanced sampling strategy to address class imbalance | Sphere Softmax loss
Bao et al. [66] | 2019 | Classification + Attention | Propose a dual-branch CNN with a global branch processing features from the overall human body and an attention branch selectively focusing on the attentional parts of the input | Softmax loss
Yang et al. [2] | 2019 | Part-based + Attention | Introduce a novel attention-driven multi-branch network to learn discriminative representations from the whole body and body parts, focusing on spatial and channel-wise information | Softmax loss
Bao et al. [66] | 2019 | Classification + Attention | Combine global features with attention-focused discriminative features to reduce the impact of misalignment and background clutter | Softmax loss
Yan et al. [62] | 2019 | Part + Attention | Propose a feature attention block that attends to sliced part features in a weighted manner, highlighting the most discriminative part regions, which are robust to misalignment | Softmax loss
Zhou et al. [1] | 2019 | Triplet + Part + Attention | Propose a novel "Foreground Attentive Neural Network" (FANN) that utilizes a foreground attentive subnetwork and a novel regression loss function to learn foreground regions, which are then fed to body part features using a novel symmetric triplet loss | Regression loss + Symmetric triplet loss
Wu et al. [3] | 2019 | Classification + Triplet | Combine classification loss, triplet loss and center loss to constrain the Euclidean distance of same-identity samples to be smaller than that of different identities | Softmax loss + Triplet loss + Center loss
Zhang et al. [79] | 2019 | Classification + Triplet | Introduce a dual-branch "Multi-Branch Slice-Based Network" (MSN) learning multi-level local and global features using a novel "triplet-center loss" | Triplet-Center loss
Zhao et al. [80] | 2019 | Triplet | Introduce a multi-level triplet model, MT-net, extracting multi-level features which are both global and local | Triplet loss
Yuan et al. [81] | 2019 | Classification + Triplet | Introduce a deep joint embedding learning framework that uses classification and an improved triplet loss, which works on generated hard triplets | Softmax loss + Improved triplet loss
Wu et al. [50] | 2019 | Classification + Triplet | Combine classification and triplet loss to make full use of labels and simultaneously learn a similarity measure | Online Instance Matching loss + Triplet loss
Tian et al. [63] | 2019 | Part | Propose a two-branch CNN that learns global and local part-level features simultaneously | Softmax loss
Li et al. [68] | 2019 | Classification + Verification + Attention | Propose a novel network with an attention module to highlight essential features and a multi-loss function to reduce intra-class distance and increase inter-class distance | Cross-entropy loss + novel verification loss
Yuan et al. [55] | 2019 | Triplet | Propose a novel "mini-cluster" loss ensuring that the largest distance among same-identity samples (inner divergence) is less than the smallest distance to different-identity samples (outer divergence) | Mini-cluster loss
Fan et al. [45] | 2019 | Classification | Propose a novel classification CNN called SphereReID that uses a novel "Sphere Softmax loss" mapping samples to a hypersphere manifold | Sphere Softmax loss
Ling et al. [82] | 2019 | Classification + Verification | Introduce MTNet with four losses for identification and verification of person identity and person attributes | Softmax loss
Si et al. [56] | 2019 | Classification + Triplet | Propose a novel "Compact Triplet Loss" that improves the batch-hard triplet loss to reduce intra-class variations and increase inter-class differences, combined with a classification loss for better discrimination capability | Compact triplet loss + Softmax loss
Tian et al. [83] | 2019 | Verification + Triplet | Propose a novel "Adaptive Verification Loss" (ADV loss) that learns only from meaningful hard sample pairs mined using a weighted triplet loss | ADV loss + Triplet loss
Zhong et al. [48] | 2019 | Classification + Verification | Introduce a novel "Feature Aggregation Network" (FAN) to learn features from various layers of a deep network along with multiple losses | Softmax loss
Wang et al. [84] | 2019 | Part | Propose a novel "Part-Based Pyramid Loss" that takes quadruplet input samples and learns body part features using the relationship of distance and angle among samples | Part-based pyramid loss
Yao et al. [85] | 2019 | Classification + Part | Propose a "Part Loss Network" (PL-net) that trains on body part and global features | Classification loss + Part loss
He et al. [86] | 2019 | Classification + Verification | Adopt the "lifted structured loss" due to its superiority over contrastive and triplet losses, combining it with an identification loss to learn relative identity information from pairs as well as true identity information | Softmax loss + Lifted structured loss
Choe et al. [31] | 2019 | Triplet | Consider the intra-distance between positive samples of a triplet and the distance between triplets using a novel "mixed distance" function to improve Re-Id performance | Mixed distance loss
Quispe et al. [87] | 2019 | Classification + Triplet | Propose a novel "Saliency Semantic Parsing Re-Id" (SSP-ReID) network: the saliency-guided subnetwork learns from essential parts of the image, while the semantic-parsing-guided subnetwork deals with Re-Id challenges | Softmax loss + Triplet loss
Wan et al. [69] | 2019 | Part + Attention | Propose a novel "Concentrated SPR network" (CSPR-Net) with a "constrained attention module" to find discriminative local parts that work better than body parts, and a novel "statistical-positional-relational" (SPR) descriptor that performs better than global features | Classification loss
Zhang et al. [60] | 2020 | Classification + Triplet + Attention | Propose a novel triplet loss based on the Wasserstein (Earth Mover's) distance to handle the part misalignment problem, using part probabilities obtained from attention maps and part features | Softmax loss + Wasserstein triplet loss
Li et al. [88] | 2020 | Part | Propose a novel "Attributes-Aided Part Detection and Refinement Network" (APDR) that uses attribute learning for part localization, handling the misalignment problem; attribute features are fused to obtain discriminative features | Softmax loss + Triplet loss
Qian et al. [17] | 2020 | Classification + Triplet + Attention | Propose a novel "Multi-Scale Deep Architecture" (MuDeep) with a "multi-scale deep learning layer" to learn features at different scales and a "leader-based attention learning layer" to determine optimal weighting for the features from each scale | Softmax + Triplet loss
Bai et al. [89] | 2020 | Classification + Triplet + Part | Propose a three-branch "Deep Person" framework that learns contextual body part information using LSTM and learns discriminative features using identification (global and part-level) and triplet losses | Softmax loss + Triplet loss
Zhang et al. [90] | 2020 | Part | Propose a dual-branch "Heterogeneous Part-Based Deep Network" (HPDN) to learn part-based features | Batch-hard triplet loss + Cross-entropy loss
Li et al. [75] | 2020 | Classification + Triplet + Attention | Introduce a self-attention guided model that learns weighted features from different regions of the human image | Softmax loss + Triplet loss

The highlights of deep Re-Id architectures are:
• Deep learning based Re-Id architectures can be classification models, verification models, triplet-based models, part-based models and attention-based models.
• Classification models treat Re-Id as a multi-class classification problem and use the softmax loss to predict the class of an input query. The softmax loss encourages the separation of different classes but struggles with large intra-class variations; several Re-Id works combine classification with other model types to overcome this limitation [44], [91], [82].
• Verification models treat Re-Id as a binary classification problem, taking a pair of inputs and categorizing them as same or different. These models suffer from the class imbalance problem, since the number of positive pair combinations is far smaller than that of negative pairs when each identity contributes the same number of samples to the dataset.
• Triplet models take triplets of images containing an anchor, a positive and a negative sample. They are trained on the triplet loss, which aims to pull the positive sample closer and push the negative sample away in feature space (see the sketch after this list). Several improvements have been suggested over the traditional triplet loss and its convergence issues, including solving slow convergence [57], removing the triplet mining overhead [52] and maintaining intra-class compactness [51].
• Part-based Re-Id models focus on sub-regions within the input to extract finer feature representations, crucial in differentiating samples with small inter-class variations that are usually missed in global image-level representations. Focusing on different parts of feature maps [62], [63], face and body regions [77] and attribute-guided body parts [88] has yielded discriminative feature representations.
• Attention models highlight regions of high interest within the input holding highly discriminative information. Gaining popularity since the growth of RNNs and LSTMs, attention modules have helped implement spatial attention within image/frame regions such as human body parts [66], the foreground [1] and physical attributes [70], as well as channel attention within deep features [68].
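The sketch below illustrates the batch-hard variant of the triplet loss in the spirit of [52], which removes the offline triplet-mining overhead by mining the hardest positive and hardest negative for each anchor inside a batch; the margin, batch size and embedding dimension are illustrative.

```python
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard triplet loss in the spirit of [52]: for every anchor,
    mine the hardest positive and hardest negative inside the batch."""
    dist = torch.cdist(embeddings, embeddings)            # (B, B) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # (B, B) identity mask
    pos = dist.clone()
    pos[~same] = -1.0                                     # ignore negatives
    hardest_pos = pos.max(dim=1).values                   # furthest same-identity sample
    neg = dist.clone()
    neg[same] = float('inf')                              # ignore positives (and self)
    hardest_neg = neg.min(dim=1).values                   # closest other-identity sample
    return torch.relu(hardest_pos - hardest_neg + margin).mean()

emb = torch.nn.functional.normalize(torch.randn(16, 128), dim=1)
ids = torch.randint(0, 4, (16,))
loss = batch_hard_triplet_loss(emb, ids)
```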
4.2 Methods Based on Re-Id Challenges
The task of person re-identification faces several challenges, such as sample variations in view, pose, lighting and scale, partial or complete occlusion, and background clutter. Fig. 12 demonstrates some of these Re-Id challenges with samples from the Re-Id datasets detailed in Table 2. Any Re-Id system striving to achieve competent recognition rates must be able to counter these challenges effectively, and numerous research works have been conducted with this motivation.
Fig. 12 Re-Id challenges posed by dataset samples. Each column shows samples from a unique identity. Columns (i) and (ii) show variations in pose, (iii) and (iv) show scale variations, (v) and (vi) display occlusion examples, (vii) and (viii) illustrate background clutter, and (ix) and (x) show view variations. All samples are taken from the Market-1501 dataset.
Feng et al. [28] attempt to overcome the challenge of large intra-class disparity caused by view variations in images captured by cameras placed at different viewpoints. The authors propose a framework capable of learning view-specific features consistent with each camera view, utilizing a cross-view Euclidean constraint (CV-EC) and a cross-view center loss to decrease the distance between features of the same person from different views. Qi et al. [92] tackle lighting and viewpoint variation at two levels: first, they extract deep features trained on multiple datasets, which are robust to differences in illumination and view; second, they use the learned features to find an optimal ensemble of metrics, including the cosine distance metric, that reduces the intra-class disparity even further. Sikdar et al. [61] achieve scale invariance by modifying the convolution operation within a deep network: instead of learning a kernel on a fixed-scale input, the input is first transformed into a pyramid of multiple resolutions; the network then learns multiple scaled feature maps, which are re-scaled to the original size before max pooling is applied. Such an operation has proven to produce scale-invariant results for Re-Id systems.
Input image misalignment can seriously hamper the feature learning and matching process. To handle the misalignment problem, Zheng et al. [93] introduce pose invariant embedding (PIE), which aligns identities within sample images to a standard pose using pose estimation. The transformed standard pose promotes discriminative feature extraction and matching and is alignment invariant.
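A minimal sketch of the scale-invariance idea described for [61]: the same fixed-size kernel is convolved over a resolution pyramid of the input, the resulting maps are rescaled to a common size, and max pooling is applied across scales. The scale factors and channel counts are illustrative assumptions, not values from the cited paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidConv(nn.Module):
    """Run one shared fixed-size kernel over a resolution pyramid of the
    input, rescale the resulting maps back to a common size, then take
    the max across scales to obtain scale-tolerant features."""
    def __init__(self, in_ch=3, out_ch=32, scales=(1.0, 0.75, 0.5)):
        super().__init__()
        self.scales = scales
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)  # shared kernel

    def forward(self, x):                    # x: (B, C, H, W)
        h, w = x.shape[2:]
        maps = []
        for s in self.scales:
            xs = x if s == 1.0 else F.interpolate(
                x, scale_factor=s, mode='bilinear', align_corners=False)
            fs = self.conv(xs)                               # same kernel, new scale
            maps.append(F.interpolate(fs, size=(h, w), mode='bilinear',
                                      align_corners=False))  # back to original size
        return torch.stack(maps, dim=0).max(dim=0).values    # max over scales

layer = PyramidConv()
out = layer(torch.randn(2, 3, 256, 128))    # -> (2, 32, 256, 128)
```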
Table 6 presents some novel Re-Id contributions that are robust to these challenges.

TABLE 6 CONTRIBUTIONS ROBUST TO RE-ID CHALLENGES: VARIATIONS IN POSE (P), SCALE (S), LIGHTING (L) AND VIEW (V), OCCLUSION (O), BACKGROUND CLUTTER (B) AND MISALIGNMENT (M)
Reference | Year | Key Idea to Avert Re-Id Challenges | Robust to
Yu et al. [94] | 2018 | Use skeleton joint information and cloth-color type features to achieve pose and lighting invariance | P, L
Feng et al. [28] | 2018 | Use a Cross-View Euclidean Constraint (CV-EC) to reduce the distance between deep features of an identity from multiple views | V
Zhou et al. [35] | 2018 | Propose a "self-paced learning" (SPL) method to isolate noisy samples in a weighted way using model age and iteration, training the CNN model with faithful samples while gradually moving from easy to hard samples | O, B
Chen et al. [95] | 2018 | Fuse attribute features learned from a part-specific CNN with Local Maximal Occurrence (LOMO) features to obtain robust features | P, L, V, M
Fu et al. [96] | 2019 | Introduce a two-stream spatial segmentation network that derives spatial and fine local features | O, B, M
Zheng et al. [97] | 2019 | Use a novel "alignment branch" to learn the affine transformation of high-level convolutional features, utilizing a spatial transformer network that crops images with too much background and zero-pads missing part features, thereby solving scale variation and misalignment | S, B, M
Chen et al. [98] | 2019 | Study correlation among cross-view visual data from multiple camera views to compose view-specific representations | V
Luo et al. [99] | 2019 | Propose a novel "Dynamically Matching Local Information" (DMLI) method that automatically aligns horizontal stripes from input samples without any labelling supervision or pose estimation | M
Zhou et al. [1] | 2019 | Use a novel "foreground attentive subnetwork" with a decoder trained on a novel "local regression loss" to create a binary mask suppressing the background | B
Qi et al. [92] | 2019 | Train a CNN on six Re-Id datasets to learn robust features | L, V
Yang et al. [57] | 2019 | Learn spatial dependencies among local regions of pedestrians in both horizontal and vertical directions using LSTM to overcome occlusion | O
Zheng et al. [93] | 2019 | Propose a novel "pose invariant embedding" (PIE) by constructing a novel PoseBox through pose estimation and training a two-stream PoseBox Fusion network | P
Wei et al. [100] | 2019 | Estimate four human key points that are robust to pose and view variations; head, upper-body and lower-body features are generated from these key points, and a four-stream CNN is trained to generate the novel "Global-Local Alignment Descriptor" (GLAD) features from both global and local regions | P, V
Wu et al. [101] | 2020 | Use adversarial learning to learn asymmetric transformations that map view-specific distributions to a generic view-invariant feature space | V
Li et al. [88] | 2020 | Use attribute learning to detect body parts, thereby solving the part misalignment problem | P, O, M
Tang et al. [102] | 2020 | Propose a novel "Gradual Background Suppression" method that extracts CNN features based on different weights assigned to body parts and background, thereby suppressing the background | B
Sikdar et al. [61] | 2020 | Resize the input to different scales and convolve with a fixed-size filter to obtain a multi-resolution pyramid, which is re-scaled back to a fixed size to obtain scale-invariant features | S
Li et al. [75] | 2020 | Propose a "multi-scale attention" model evaluating important person regions in a weighted fashion, trained on features fused globally and locally using cross-entropy and triplet loss | S, B, M
The highlights of contributions based on Re-Id challenges are:
• Finding a person of interest is challenging due to visual variations in pose, view, lighting and scale, partial or complete occlusion, background clutter and misalignment.
• Several deep Re-Id contributions have aimed to develop methodologies robust to these challenges.
• Skeleton joint data and clothing colors have produced pose and lighting invariance [94]; other efforts include learning view-specific representations for view invariance [98], utilizing a foreground attentive network to suppress noisy backgrounds [1], convolving over multi-scale inputs to obtain scale-invariant features [61] and using pose estimation to achieve pose invariance [93].

4.3 Re-Id Methods Based on Modality
Visible images have proven to be the most common source of discriminative information crucial for identifying individuals. Hence, the research literature is filled with visible-image-based Re-Id methods due to their superior identification power. Despite their popularity, visible-image-based methods are prone to the challenges discussed in Section 4.2. Hence, some cross-modal Re-Id methods have also been proposed to further enhance the ability of Re-Id systems. This section discusses various visible-light-based and cross-modality-based Re-Id methods.

4.3.1 Visible Image Based Re-Id Methods
Numerous visible-image-based Re-Id contributions have focused on extracting discriminative deep features to achieve high recognition rates for re-identification [103], [104]. Table 7 details some novel image-based Re-Id contributions. Since the dataset names and Rank-1 scores of this table were extracted as separate runs, the per-row pairing below follows row order.

TABLE 7 IMAGE BASED RE-ID CONTRIBUTIONS
Reference | Year | Key Idea | Benefit | Dataset (Rank-1 %)
Wu et al. [105] | 2018 | Introduce a deep embedding approach using optimized robust features, positive mining and local adaptive similarity learning | Discriminative features | VIPeR (49.04), CUHK01 (71.60), CUHK03 (73.02), Market-1501 (84.14)
Wu et al. [106] | 2018 | Introduce a multiplicative integration gating function combined with the Hadamard product | Cross-view feature alignment | VIPeR (49.11), CUHK03 (73.23), Market-1501 (67.15)
Ke et al. [107] | 2018 | Introduce ID-AdaptNet to adapt "seen" identity features to "unseen" identities | Discriminative features | Market-1501 (93.91), DukeMTMC (83.35)
Zhang et al. [108] | 2019 | Use group symmetry theory to extract and utilize information from the middle layers of ResNet50 (ResGroupNet) | Discriminative features | CUHK03 (30.36), Market-1501 (81.59), DukeMTMC (67.77)
Jiang et al. [109] | 2019 | Introduce the PH-GCN network to learn spatial relations among body parts using hierarchical graphs | Context-aware discriminative features | CUHK03 (71.20), Market-1501 (92.80), DukeMTMC (86.20)
Wu et al. [110] | 2019 | Introduce a five-branch deep model that learns body features in horizontal and vertical directions, relationships between feature channels, and part features | Discriminative features | CUHK03 (64.90), Market-1501 (93.50), DukeMTMC-reID (85.00)
Liu et al. [111] | 2019 | Fuse Gaussian features with deep features | Fused discriminative features | CUHK03 (95.00), Market-1501 (94.70), DukeMTMC-reID (86.70)
Zhang et al. [112] | 2019 | Introduce a PAAN network using a layered partition strategy to fully utilize part-level and global attributes | Fused discriminative features | Market-1501 (84.40), VIPeR (57.20)
Tian et al. [63] | 2019 | Introduce a two-branch RJLN network to jointly learn global and local features | Fused discriminative features | Market-1501 (91.86), DukeMTMC-reID (81.73), CUHK03 (66.60)
Wu et al. [113] | 2019 | Introduce a multi-branch MFML network to represent features from multiple layers | Weighted multi-layered fused features | Market-1501 (93.70), DukeMTMC-reID (85.50), CUHK03 (94.40)
Zhao et al. [80] | 2019 | Introduce a multi-level triplet model, MT-net, extracting multi-level features which are both global and local | Fused discriminative features | Market-1501 (92.50), DukeMTMC-reID (84.00), CUHK03 (79.34)
Wang et al. [114] | 2020 | Propose a novel exclusively regularized softmax objective function | Multi-scale multi-patch features | Market-1501 (81.95), CUHK03 (70.40)
Wang et al. [115] | 2020 | Fuse handcrafted features from local and global regions with deep features | Fused discriminative features | Market-1501 (93.70), DukeMTMC (84.40), VIPeR (52.22), CUHK01 (71.91)
Wu et al. [110] propose a five-branch deep model capable of learning features not only in the usual horizontal direction but also in the vertical direction. The model scans for spatial information of body parts from left to right and from head to foot, thereby learning discriminative information. Working with one type of feature is often limiting when searching for discriminative capability in Re-Id systems; an obvious direction of improvement is therefore to fuse different features together to obtain higher differentiation. Zhao et al. [80] introduce a novel deep triplet model (MT-net) performing multi-level feature extraction: both detailed and global features from each layer are combined in an optimal proportion through training, and the fused features prove to have high discriminative capability. While most deep learning based Re-Id approaches extract features only from the top layer, middle-layer features can also contribute discriminative capability in certain situations. Wu et al. [113] introduce a multi-level feature network with multiple losses (MFML), a multi-branch architecture in which multiple middle-layer representations are trained on the triplet loss and the top-layer representation on a hybrid loss; the representations from the various layers are fused in a weighted manner based on their importance in obtaining differentiating characteristics. Based on the idea that color features hold key information for the Re-Id task, Liu et al. [111] fuse traditional Gaussian of Gaussian (GOG) features from four color channels (RGB, Lab, HSV, RnG) with deep features to obtain highly discriminative features achieving state-of-the-art performance.
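The sketch below illustrates the weighted multi-layer fusion idea behind MFML-style methods [113]: pooled descriptors from several network depths are projected to a common dimension and combined with learned importance weights. The stage dimensions (ResNet50-like) and embedding size are assumptions for illustration, not values from the cited paper.

```python
import torch
import torch.nn as nn

class WeightedLayerFusion(nn.Module):
    """Fuse descriptors taken from several network depths with learned
    importance weights, echoing the weighted fusion idea of MFML [113]."""
    def __init__(self, dims=(512, 1024, 2048), embed_dim=256):
        super().__init__()
        self.projs = nn.ModuleList([nn.Linear(d, embed_dim) for d in dims])
        self.logits = nn.Parameter(torch.zeros(len(dims)))   # one weight per layer

    def forward(self, feats):                # list of (B, d_i) pooled features
        w = torch.softmax(self.logits, dim=0)
        fused = sum(w[i] * self.projs[i](f) for i, f in enumerate(feats))
        return nn.functional.normalize(fused, dim=1)

fusion = WeightedLayerFusion()
mid1, mid2, top = torch.randn(4, 512), torch.randn(4, 1024), torch.randn(4, 2048)
descriptor = fusion([mid1, mid2, top])       # (4, 256) fused representation
```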
4.3.2 Cross-Modality Re-Id Methods
While image-based methods have proven most popular in the Re-Id research community, these visual-feature-based methods have several limitations. They face the challenges of variations in pose, view, lighting and scale, partial or complete occlusion, and background clutter, as discussed in Section 4.2. Visual Re-Id methods are also highly ineffective in dark environments, such as at night, where poor illumination suppresses most of the visual cues [26]. This has led to the development of multi-modal Re-Id methods that combine data from multiple modalities to reduce the dependency on visual information. Table 8 lists novel multi-modal Re-Id methods combining RGB, depth, text and infrared (thermal) data modalities.

TABLE 8 CROSS MODALITY RE-ID CONTRIBUTIONS
Reference | Year | Modality | Key Idea | Device
Ren et al. [116] | 2017 | RGB + Depth | Fuse anthropometric features from depth images and visual features from RGB images using a novel "multi-modal fusion layer" to obtain discriminative features | RGB camera + Kinect V1 sensor
Feng et al. [26] | 2019 | RGB + Infrared | Extract "modality-specific representations" (MSR) from modality-specific networks trained on a modality-specific loss to learn discriminative features from each domain; use a cross-modality Euclidean constraint to learn modality-invariant features | RGB + infrared cameras
Xiang et al. [117] | 2019 | RGB + Infrared | Propose a dual-branch neural network that fuses modality-specific information from two branches using a multiple granularity network (MGN) to obtain modality-shared features | RGB + infrared cameras
Ren et al. [19] | 2019 | RGB + Depth | Propose a novel "uniform and variational deep learning" (UVDL) method that uses an auto-encoder to map visible and depth features extracted from two neural nets into a common feature space | RGB-D camera (Kinect)
Chang et al. [118] | 2020 | RGB + Text | Combine visual CNN features with textual CNN features from textual descriptions, including gender, clothes color and bag type information, to obtain a generalized feature embedding | RGB camera
Gohar et al. [119] | 2020 | Gait data | Propose a novel "non-visual gait based" Re-Id method that uses gait data captured from an accelerometer and gyroscope, learning discriminative temporal information from input sequences using Gated Recurrent Units (GRU) | Accelerometer, gyroscope
Wang et al. [120] | 2020 | RGB + Infrared | Propose a novel "multi-patch networking network" (MPMN) that utilizes a single deep neural net to process both RGB and thermal images; a novel "multi-patch modality alignment" loss mines hard subspaces, a novel "cross-patch correlation distillation" (CPCD) loss enforces cross-patch similarity to boost the cross-modality embedding, and a novel "patch-aware priority attention" (PAPA) method prioritizes training of hard patch tasks | RGB + infrared cameras

Chang et al. [118] learn a similarity metric for visual and textual representations as demonstrated in Fig. 13. The authors use the ResNet architecture to extract visual features from image samples, while a 2500-dimensional textual feature embedding is extracted from the textual description of each sample (gender, clothes color, etc.) using tokenization, lemmatization and stemming. The model is trained end to end using a triplet architecture on both visual and textual representations.
Fig. 13 Triplet setup to reduce the distance of a positive textual description (top blue box) from an image sample identity (center blue box) and increase the distance from the negative textual description (bottom blue box) [118]
Gait information has proven to be highly discriminative for Re-Id tasks. Gohar et al. [119] formulate a multi-modal Re-Id method performing non-visual gait analysis on information extracted using wearable sensors, namely an accelerometer and a gyroscope. The proposed method learns discriminative information integrating the temporal aspect of the gait data using Gated Recurrent Units (GRU).
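As a concrete illustration of the non-visual gait idea in [119], the sketch below encodes a sequence of 6-axis inertial readings (accelerometer plus gyroscope) with a GRU and projects the final hidden state to a gait embedding that can then be compared with a metric or triplet loss. All dimensions are illustrative assumptions, not values from the cited paper.

```python
import torch
import torch.nn as nn

class GaitGRUEncoder(nn.Module):
    """Summarize a sequence of 6-axis inertial readings into one gait
    embedding with a GRU, loosely following the temporal modelling
    idea in [119]."""
    def __init__(self, in_dim=6, hidden=128, embed_dim=64):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, embed_dim)

    def forward(self, seq):                  # seq: (B, T, 6) sensor readings
        _, h_n = self.gru(seq)               # h_n: (1, B, hidden), final state
        return self.head(h_n.squeeze(0))     # (B, embed_dim) gait embedding

enc = GaitGRUEncoder()
imu = torch.randn(8, 200, 6)                 # 8 sequences of 200 samples each
gait_embedding = enc(imu)                    # compare with a metric / triplet loss
```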
The highlights of modality based deep Re-Id contributions are:
• Visible RGB images are the most widely used data modality for deep Re-Id methods due to the rich variety of visual information they contain and the growth of several image-based Re-Id datasets over the years.
• Visible RGB image methods aim to extract discriminative features by using multi-branch networks [110], fusing multiple features together [80], proposing novel loss functions [114], etc.
• Visible RGB image methods face several visual challenges, as discussed in Section 4.2, and lose most of their discriminative capability in dark/night environments.
• Multi-modal Re-Id methods reduce the dependency on visual information for extracting discriminative features.
• Several multi-modal Re-Id contributions have extracted anthropometric features from depth data [116], modality-specific representations from thermal images [117] and gait information from an accelerometer and gyroscope [119], combining them with visual information to formulate multi-modal representations.

4.4 Cross-Domain Re-Id Methods
Depending on the kinds of samples present, different Re-Id datasets hold different generalizations of human appearance. Hence, a Re-Id model trained on one dataset performs poorly on a different dataset. Several works have addressed this issue with domain adaptation techniques attempting to bridge the gap between the learned source domain and the unknown target domain, as demonstrated in Table 9. Zhang et al. [58] introduce a novel Dual Generation Learning (DGL) method for unsupervised domain adaptation, such that a Re-Id model shows acceptable recognition results when evaluated on any relevant dataset: the DGL method generates target-style images for samples from the source dataset and camera-style images for those from the target dataset, thereby expanding them to cover varying domain styles. Ganin et al. [121] propose to augment deep architectures with a few standard layers and a novel gradient reversal layer to learn features trained on both the labelled source domain and the unlabelled target domain; such features cannot discriminate between the source and target domains and hence are suitable for domain adaptation. Wang et al. [122] propose a Deep Multi-Task Transfer Network (DMTNet) to transfer discriminative features learned from the source domain to the target domain by utilizing a cluster estimation algorithm, attribute attention importance learning and multi-task learning. Other domain adaptation based Re-Id works involve refining learned augmented attribute features according to the target domain [123], image-to-image translation using generative adversarial networks [124], and viewpoint transfer using generative adversarial networks [125].
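The gradient reversal layer of [121] admits a compact implementation: it acts as the identity on the forward pass and negates (and optionally scales) the gradient on the backward pass, so features trained through it learn to fool a domain classifier. A minimal PyTorch sketch, with the scaling factor and dimensions chosen for illustration:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal in the spirit of [121]: identity forward,
    sign-flipped (scaled) gradient backward."""
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # flip the gradient sign

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

feat = torch.randn(4, 256, requires_grad=True)
domain_logits = torch.nn.Linear(256, 2)(grad_reverse(feat))
domain_logits.sum().backward()               # feat.grad now carries the reversed signal
```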
TABLE 9 CROSS-DOMAIN RE-ID CONTRIBUTIONS
Reference | Year | Cross-Domain Approach
Xiao et al. [126] | 2016 | Learn generic CNN feature embeddings from multiple dataset domains and use a novel "Domain Guided Dropout" to mute neurons learning domain-specific information, thereby improving Re-Id performance
Ganin et al. [121] | 2016 | Use a few standard layers and a novel "gradient reversal layer" to learn from labelled source-domain samples and unlabelled target-domain samples
Xu et al. [123] | 2019 | Propose a novel "Deep Augmented Attribute Network" (DAAN) to learn augmented feature representations using augmented features and labels from the source dataset, refining the learned features for the unlabelled target dataset
Zhou et al. [124] | 2019 | Propose a novel "Multi-Camera GAN" (CTGAN) to transfer source dataset samples to the multi-camera styles of the target dataset
Genc et al. [127] | 2019 | Perform domain adaptation by training on different dataset combinations, learning part-specific features and features from multiple layers, and use a CycleGAN to perform camera view adaptations
Wang et al. [122] | 2020 | Propose a novel "Deep Multi-Task Transfer Network" (DMTNet) for unsupervised cross-domain Re-Id, including a cluster number estimation algorithm, learning of attribute attention importance, and transfer of specific multi-task learning across domains
Sun et al. [125] | 2020 | Propose a novel "Conditional Transfer Network" (cTransNet) implementing conditional viewpoint transfer using StarGAN and obtaining hybrid feature embeddings from original and transformed images for similarity ranking
Zhang et al. [58] | 2020 | Propose a novel "Dual Generation Learning" (DGL) method to transfer source dataset images to the target style domain and target dataset images to the source camera styles, obtaining better Re-Id results

The highlights of cross-domain Re-Id methods are:
• Most deep Re-Id methods trained and tested on a few datasets perform poorly on other datasets.
• Several cross-domain Re-Id approaches have been developed to either learn better generalization across multiple datasets or transfer the learned characteristics of a source domain to a target domain.
• Cross-domain approaches have achieved sample style transfer from source to target dataset [58], training on multiple dataset combinations [127] and transfer of learned features from a labelled source to an unlabelled target dataset [123], thereby improving the generalizability of deep Re-Id methods.

4.5 Metric Learning Methods for Re-Id
Metric learning has proven to be a significant step in computer vision problems such as person re-identification and face recognition. Metric learning aims to find a similarity function on extracted features that decreases positive-pair distances while increasing negative-pair distances [11]. Table 10 highlights the different kinds of metric learning contributions reviewed in this article. Since the underlying data distribution varies with the nature of the computer vision task, the ideal similarity function is mostly task-specific [128]. Traditional metric learning methods focused on learning linear Mahalanobis-based metrics, which utilize the sample distribution centroid and standard deviation to evaluate similarity with a given sample point. However, such linear metrics often fail to capture nonlinear relationships among the samples.

TABLE 10 DEEP METRIC LEARNING METHODS
Reference | Year | Metric Learning Method | Short Form | Key Idea
Hu et al. [128] | 2016 | Deep Transfer Metric Learning | DTML | Cross-domain metric learning
Lin et al. [129] | 2017 | Generic Similarity Metric | - | Robust to translation and shearing deformation
Zhu et al. [130] | 2018 | Deep Hybrid Similarity Learning | DHSL | More discriminative than Euclidean or cosine distance based similarity
Duan et al. [131] | 2018 | Deep Localized Metric Learning | DLML | Metric learning over locally varying data
Hu et al. [132] | 2018 | Sharable and Individual Multi-View Deep Metric Learning | MvDML | View-invariant metric learning
Chen et al. [133] | 2018 | Pose Invariant Deep Metric Learning | PIDML | Pose-invariant metric learning
Ren et al. [134] | 2019 | Deep Structured Metric Learning | - | Robust person re-identification
Ding et al. [135] | 2019 | Robust Discriminative Metric Learning | RDML | Robust to noise
Xiong et al. [136] | 2019 | Multiple Deep Metric Learning | - | Instead of feature extraction, utilize a stacked auto-encoder and recognize individuals using multiple similarity probabilities from softmax regression models

Recently, deep metric learning techniques have become increasingly popular due to their ability to capture hierarchical nonlinear relationships. Hu et al. [128] propose a deep transfer metric learning (DTML) method that transfers discriminative information from a labelled source domain to an unlabelled target domain. DTML learns hierarchical nonlinear transformations that maximize the inter-class disparity, minimize the intra-class differences and restrict the divergence between the source and target domains. Ding et al. [135] increase the generalizability of metric learning by introducing robust discriminative metric learning (RDML), which, unlike most metric learning techniques, is insensitive to noise; a fast low-rank model is also used to discover global structure within the data and ensure scalability to larger datasets. Lin et al. [129] argue that similarity transformations cannot capture deformations like translation and shearing in cross-domain visual matching tasks, and propose a generic similarity metric that generalizes the similarity transformation to an affine transformation capable of capturing such complex deformations. While most metric learning techniques focus on the overall similarity of samples, Duan et al. [131] propose deep localized metric learning (DLML) to learn multiple metrics fine-grained over numerous local subspaces, specializing in locally varying data. Hu et al. [132] introduce sharable and individual multi-view deep metric learning (MvDML) to utilize multi-view data in the best possible manner: MvDML learns the best combination of distance metrics by focusing on both individual view-specific metrics and a combined multi-view representation. Chen et al. [133] propose a novel pose invariant deep metric learning (PIDML) method that utilizes pose invariant embedding [93] and an improved triplet loss to achieve pose invariance for metric learning. Ren et al. [134] propose a deep structured metric learning method that utilizes a novel structured loss function, skipping positive sample pairs with small distances and negative sample pairs with large distances, to achieve robust person re-identification.

The highlights of deep metric learning based Re-Id methods are:
• Metric learning aims to learn a similarity function that pulls features of same-identity samples closer and pushes different-identity features apart (see the sketch after this list).
• Deep metric learning has gained popularity in recent years due to its ability to learn nonlinear sample associations, unlike traditional Mahalanobis metric learning, which captures only linear relationships.
• Several novel deep metric learning approaches have been proposed that give better discrimination capability than the usual Euclidean or cosine measures [130], can perform cross-domain learning [128], and are robust to noise [135], deformation [129], etc.
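To make the contrast between linear and deep metrics concrete, the sketch below implements a Mahalanobis-style metric, parameterized as M = L^T L to stay positive semi-definite, next to a simple nonlinear metric that first embeds samples through an MLP, the kind of hierarchical nonlinear transformation DTML-style methods [128] learn. It is a generic illustration under assumed dimensions, not any cited paper's model.

```python
import torch
import torch.nn as nn

class MahalanobisMetric(nn.Module):
    """Linear metric d(x, y) = (x - y)^T M (x - y) with M = L^T L."""
    def __init__(self, dim=128):
        super().__init__()
        self.L = nn.Parameter(torch.eye(dim))

    def forward(self, x, y):
        d = (x - y) @ self.L.t()             # project the difference by L
        return (d * d).sum(dim=1)            # squared linear metric

class DeepMetric(nn.Module):
    """Nonlinear metric: embed through an MLP, then compare distances
    in the learned space."""
    def __init__(self, dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))

    def forward(self, x, y):
        ex, ey = self.net(x), self.net(y)
        return ((ex - ey) ** 2).sum(dim=1)   # distance after nonlinear embedding

x, y = torch.randn(4, 128), torch.randn(4, 128)
print(MahalanobisMetric()(x, y), DeepMetric()(x, y))
```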
5. Video Based Deep Re-Id Contributions
This section explores video based deep person re-identification methods.
Fig. 14 Advantages and challenges of video based person re-identification methods. Advantages: temporal information provides motion cues; more samples per identity; combination of visual and temporal features. Challenges: 2D convolutions miss temporal information; variable video lengths and frame rates; outlier frames.
Video Re-Id methods received less attention than image-based techniques in the initial years of Re-Id research [67], a major reason being the lack of large-scale video datasets. However, recent years have witnessed the growth of several video-based Re-Id datasets, such as [32], [36] and [38], encouraging the development of video Re-Id methods. Compared to images, video sequences have advantageous characteristics that can be exploited for Re-Id tasks. Firstly, a video sequence comprising multiple frames holds essential temporal information across frames, giving crucial motion cues that are absent from image samples. Secondly, the multiple frames of each video provide numerous visual examples per identity, adding diversity to the samples. Thirdly, while image-based methods mostly exploit visual features, video Re-Id methods target a combination of visual and temporal features, which proves more robust to Re-Id challenges.
While video-based methods give rise to more discriminative feature embeddings, they also add to the existing Re-Id challenges, as demonstrated in Fig. 14. Firstly, the simplest way of generating video features is to fuse frame-level features together, but since frame-level features are based on 2D convolutions that neglect the temporal dimension of frame sequences, the fusion misses the temporal cues crucial for video Re-Id. Secondly, videos have different frame rates and durations, which makes comparisons among samples difficult. Thirdly, not all frames provide discriminative information, and some outlier frames prove misleading when learning robust video representations.
Recent developments in Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) models [65] have boosted video Re-Id by providing the capability to extract motion cues from temporal information for robust video representations. McLaughlin et al. [137] propose a video re-identification system based on a recurrent neural network for wide-area tracking: a recurrent layer combines frame-level details from all frames into a single appearance feature representing the entire video, showing competent recognition results.
Attention modules have played a key role in video Re-Id in identifying and isolating outlier frames; they also help in extracting discriminative regions within video frames. Wu et al. [67] propose a Siamese attention network that learns which regions (where) from which frames (when) are relevant for comparing identities; the attention mechanism learns the most relevant features by focusing on distinct regions that help identify a given identity. Zhang et al. [37] propose a self-and-collaborative attention network (SCAN) for video Re-Id: the model takes a pair of videos as input, aligns and compares their discriminative frames using a generalized similarity measurement module, and refines intra-sequence and inter-sequence features using a non-parametric attention module.
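A minimal sketch of the recurrent temporal-pooling idea of [137]: frame-level CNN features (assumed precomputed here) pass through a recurrent layer, and mean pooling over all time steps yields one appearance vector for the whole clip. Dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TemporalPoolReID(nn.Module):
    """Combine per-frame CNN features with a recurrent layer and
    temporal (mean) pooling into one clip-level descriptor, in the
    spirit of [137]."""
    def __init__(self, frame_dim=512, hidden=256):
        super().__init__()
        self.rnn = nn.RNN(frame_dim, hidden, batch_first=True)

    def forward(self, frame_feats):          # (B, T, frame_dim) from a CNN
        out, _ = self.rnn(frame_feats)       # (B, T, hidden) per-step states
        return out.mean(dim=1)               # temporal pooling -> (B, hidden)

model = TemporalPoolReID()
clip_feats = torch.randn(2, 16, 512)         # 16 frames per clip
video_vec = model(clip_feats)                # single descriptor per video
```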
Wu et al. [138] argue that merely combining frame-wise features to obtain overall video features is ineffective, as the temporal cues get lost in 2D convolutions. The authors propose a novel 3D PersonVLAD aggregation layer that helps extract the appearance and motion characteristics of the entire video; the model also handles occlusions and misalignment through soft attention modules. Zhang et al. [74] propose a multi-scale spatial-temporal attention network (MSTA) that focuses attention on regions within video frames at different scales. It contains a ResNet50-based encoder responsible for extracting frame-level features from discriminative regions and an aggregator to fuse features from different scales (Fig. 15).
Fig. 15 Architecture of the proposed MSTA model. The encoder extracts frame-wise features and the aggregator fuses the frame-level features to obtain video representations [74]
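The following short sketch makes the "2D convolutions miss temporal information" point concrete: a 2D convolution applied frame by frame never mixes adjacent frames, whereas a 3D convolution slides over the time axis as well, which is why 3D blocks (as in D3DNet [142] or the 3D aggregation of [138]) can capture short-term motion cues directly. Shapes are illustrative.

```python
import torch
import torch.nn as nn

clip = torch.randn(1, 3, 16, 128, 64)        # (B, C, T, H, W) video clip

# 2D convolution: each frame is processed independently, so the
# resulting per-frame features carry no motion information.
conv2d = nn.Conv2d(3, 32, kernel_size=3, padding=1)
per_frame = torch.stack([conv2d(clip[:, :, t]) for t in range(16)], dim=2)
print(per_frame.shape)                       # (1, 32, 16, 128, 64), frames independent

# 3D convolution: the kernel also spans the time axis, mixing
# adjacent frames and capturing short-term motion cues.
conv3d = nn.Conv3d(3, 32, kernel_size=(3, 3, 3), padding=1)
spatio_temporal = conv3d(clip)
print(spatio_temporal.shape)                 # (1, 32, 16, 128, 64)
```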
Table 11 dives into several contributions highlighting the diverse approaches towards video person re-identification using deep learning. Since the dataset names and Rank-1 scores of this table were extracted as separate runs, the per-row pairing below follows row order.

TABLE 11 VIDEO BASED DEEP RE-ID CONTRIBUTIONS
Reference | Year | Key Idea | Dataset (Rank-1 %)
Sun et al. [139] | 2018 | Extract visual features using a "Two-Branch Appearance Feature sub-structure" (TAF) and temporal features using an "Optical Flow Temporal Feature sub-structure" (OTF), with a pair of Siamese networks learning a similarity measure over pairwise visual and temporal features; a saliency-learning fusion layer learns the fusion of global and local appearances | MARS (73.00), iLIDS-VID (59.00), PRID2011 (79.00)
Wang et al. [140] | 2018 | Perform image-to-video Re-Id using a novel end-to-end "Point-to-Set Network" (P2SNet) that takes both image and video as input and jointly learns their features using a point-to-set distance metric; a kNN-triplet module helps focus only on relevant video frames | MARS (55.25), iLIDS-VID (73.31), PRID2011 (40.00)
Meng et al. [141] | 2019 | Learn view-specific and feature-specific transformations using a novel "Deep Asymmetric Metric Learning" from a two-stream neural net to counter view-specific and feature-specific bias in appearance and motion features caused by variations in view, lighting, background clutter, etc. | MARS (74.65), iLIDS-VID (77.33), PRID2011 (87.00)
Wu et al. [67] | 2019 | Propose a novel "Siamese attention architecture" to jointly learn feature representations and a similarity measure for video Re-Id, using an attention mechanism focusing on relevant frames (when) and regions of interest within frames (where) | MARS (86.60), iLIDS-VID (86.50), PRID2011 (98.80)
Ksibi et al. [33] | 2019 | Propose a "Deep Spatio-Temporal Appearance" (DSTA) descriptor that uses a "Deep Salient-Gaussian Weighted Fisher Vector" (SGFV) to exploit trajectory information, handling misalignment of person tracklets and eliminating background noise using Gaussian and saliency maps | MARS (76.70), iLIDS-VID (80.00), PRID2011 (92.70)
Zhang et al. [37] | 2019 | Propose a novel "Self-and-Collaborative Attention Network" (SCAN) that utilizes two attention subnetworks (self-attention and collaborative attention) to select features from informative video frames and align discriminative frames from probe and gallery videos, finally using a generalized similarity measure to compare video pair representations | MARS (87.20), iLIDS-VID (88.00), PRID2011 (95.30)
Wu et al. [138] | 2019 | Utilize a novel "3D PersonVLAD aggregation layer" based on the vector of locally aggregated descriptors to capture motion-based features along with appearance (usually missed by 2D ConvNets), and learn global representations for a full-length video robust to occlusion and misalignment using a soft attention module to learn 3D part alignment | MARS (80.80), iLIDS-VID (69.40), PRID2011 (87.60)
Liu et al. [142] | 2019 | Propose a novel "Dense 3D Convolutional Network" (D3DNet) that uses numerous three-dimensional dense blocks and transition layers to extract discriminative features from the appearance and temporal domains, capturing visual and motion cues (both short and long term) without additional modules; implement a combination of identification and center loss to reduce intra-class disparity and increase inter-class disparity | MARS (76.00), iLIDS-VID (65.40)
McLaughlin et al. [137] | 2019 | Extract frame-wise CNN features and use a temporal-pooling recurrent layer to combine all time-step features into a feature representation for the entire video sequence | iLIDS-VID (58.00), PRID2011 (70.00)
Zhang et al. [74] | 2020 | Propose a novel "Multi-Scale Spatial-Temporal Attention" (MSTA) model that attends to different regions within each video frame at different scales, incorporating essential regions into whole-video spatio-temporal representations; MSTA contains an encoder to extract frame-wise features and an aggregator to fuse them | MARS (82.28), iLIDS-VID (70.10), PRID2011 (91.20)
Wu et al. [143] | 2020 | Use variational recurrent neural networks (VRNNs) to conduct deep few-shot adversarial learning, extracting discriminative features that are view invariant | MARS (54.60), iLIDS-VID (60.10), PRID2011 (79.20)
Avola et al. [144] | 2020 | Propose a novel "LSTM based Re-Id hashing model" that exploits bone proportion, gait and movement features of 2D skeletons extracted from RGB video frames; LSTM learns the temporal correlation between frames, while two dense layers implement bodyprint hashing via binary coding, allowing unbounded labelling of individuals | MARS (86.50), iLIDS-VID (73.40), PRID2011 (82.70)
Wu et al. [145] | 2020 | Propose a novel "Adaptive Graph Representation Learning" method that uses adaptive structure-aware adjacency graphs via graph neural networks (GNN), highlighting two kinds of relations: pose-alignment connections to capture human part relations and feature-affinity connections to model semantic relationships among features from various regions across frames; a novel regularization captures temporal-resolution-invariant features for the entire sequence | MARS (89.80), iLIDS-VID (83.70), PRID2011 (93.10), DukeMTMC-VideoReID (96.70)
Jiang et al. [59] | 2020 | Propose a novel framework, "Spatial Transformed Partial Network" (STPN), that aligns frames to extract robust regional features; a novel "Weighted Triple-Sequence Loss" (WTSL) excludes outlier frames from video-level features, and the model is jointly optimized for frame-level and video-level features | MARS (85.90), iLIDS-VID (82.20), PRID2011 (95.20)

6. Conclusion and Future Directions
This review provides a comprehensive analysis of deep learning based person re-identification methods. The objective of this work is to give readers a thorough understanding of the different approaches towards deep Re-Id. The reviewed literature has been divided into several logical categories, as demonstrated in the taxonomy diagram of Fig. 2. These approaches have been classified on the basis of adopted architecture types (classification, verification, triplet-based, part-based and attention models) integrating different kinds of losses (softmax, triplet); the common Re-Id challenges faced (variations in pose, lighting, view and scale, partial or complete occlusion, background clutter); image-based methods and multi-modal Re-Id methods reducing the dependency on visible RGB approaches; cross-domain methods improving the generalizability of approaches across different datasets; metric learning approaches learning ideal similarity functions; and deep video Re-Id methods exploiting both spatial and temporal cues from the multiple frames of a video sequence. Each category, presented as a separate section, provides an extensive look into these contribution types, and the highlights at the end of each section give a quick overview of the reviewed methods. Tables 3-11 provide the key ideas behind numerous deep Re-Id methods across the various categories.
Part-based and attention architectures are the more popular architecture types in recent times, owing to their ability to find regions of rich information and extract finer visual cues. Re-Id challenges have been conquered using different techniques, such as pose estimation for pose invariance [93] and multi-scale input convolutions for scale invariance [61]. Multi-modal approaches have helped to reduce the over-dependence on visible RGB based methods, making them more viable for darker environments with fewer visual cues [117]. Deep video Re-Id methods provide more discriminative information than image-based approaches due to the motion cues present in the temporal information across frames; however, video Re-Id presents its own challenges of finding discriminative regions both spatially (within frames) and temporally (across frames), as demonstrated in Fig. 14.
Based on this review of deep Re-Id methods, the following research gaps can guide future research:
• Several Re-Id datasets have been collected under controlled environments like research labs and college campuses. While numerous Re-Id approaches have attained high performance on these datasets, their results suffer tremendously in realistic scenes. Datasets containing more realistic scenarios, like [29], are needed to enhance the potential of proposed Re-Id methods.
• Very few multi-modal datasets exist, which seriously limits the growth of multi-modal Re-Id approaches. Preparation of large-scale multi-modal datasets could greatly contribute to deep Re-Id research.
• Gohar et al. [119] use gait data collected from wearable sensors fixed on the chests of test subjects. Multi-modal methods could be developed to process sensor data regardless of the positioning or orientation of the recording devices.
• There are few end-to-end Re-Id research contributions that combine person detection and re-identification in a single framework. Since most datasets are collected under controlled environments, person detection is usually assumed to be performed beforehand. End-to-end Re-Id is a promising research direction.
• GAN-based Re-Id works have greatly supported the style transfer requirements of cross-domain Re-Id approaches. However, the low-to-medium quality of generated samples has limited the performance of these approaches; an improvement in sample generation quality could significantly boost cross-domain Re-Id.
• Most part-based Re-Id contributions have focused on systematic comparison between corresponding part regions of input pairs, but the contextual relationship among different regions is mostly ignored. Preserving the semantic relationship among different parts, as in [89], is a potential way of further improving part-based methods.
• Attribute-based methods have become increasingly popular in finding finer visual cues. These methods can be extended further to applications like part localization, as in [88], to achieve better Re-Id performance.

7. References
[1] S. Zhou, J. Wang, D. Meng, Y. Liang, Y. Gong and N. Zheng, "Discriminative Feature Learning With Foreground Attention for Person Re-Identification," IEEE Transactions on Image Processing, vol. 28, no. 9, pp. 4671-4684, 2019.
[2] F. Yang, K. Yan, S. Lu, H. Jia, X. Xie and W. Gao, "Attention driven person re-identification," Pattern Recognition, vol. 86, pp. 143-155, 2019.
[3] D. Wu, S.-J. Zheng, W.-Z. Bao, X.-P. Zhang, C.-A. Yuan and D.-S. Huang, "A novel deep model with multi-loss and efficient training for person re-identification," Neurocomputing, vol. 324, pp. 69-75, 2019.
[4] K. Wang, H. Wang, M. Liu, X. Xing and T. Han, "Survey on person re-identification based on deep learning," CAAI Transactions on Intelligence Technology, vol. 3, no. 4, pp. 219-227, 2018.
[5] M. O. Almasawa, L. A. Elrefaei and K. Moria, "A Survey on Deep Learning-Based Person Re-Identification Systems," IEEE Access, vol. 7, pp. 175228-175247, 2019.
[6] D. Wu, S.-J. Zheng, X.-P. Zhang, C.-A. Yuan, F. Cheng, Y. Zhao, Y.-J. Lin, Z.-Q. Zhao, Y.-L. Jiang and D.-S. Huang, "Deep learning-based methods for person re-identification: A comprehensive review," Neurocomputing, vol. 337, pp. 354-371, 2019.
[7] K. Islam, "Person search: New paradigm of person re-identification: A survey and outlook of recent works," Image and Vision Computing, vol. 101, 2020.
[8] D. Gray, S. Brennan and H. Tao, "Evaluating appearance models for recognition, reacquisition, and tracking," in IEEE International Workshop on Performance Evaluation for Tracking and Surveillance, Rio de Janeiro, 2007.
[9] J. C. J. Junior, X. Baró and S. Escalera, "Exploiting feature representations through similarity learning, post-ranking and ranking aggregation for person re-identification," Image and Vision Computing, vol. 79, pp. 76-85, 2018.
[10] W.-S. Zheng, S. Gong and T. Xiang, "Associating Groups of People," in British Machine Vision Conference, London, UK, 2009.
[11] S. Bąk and P. Carr, "Deep Deformable Patch Metric Learning for Person Re-Identification," IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 10, pp. 2690-2702, 2018.
[12] C. C. Loy, T. Xiang and S. Gong, "Time-Delayed Correlation Analysis for Multi-Camera Activity Understanding," International Journal of Computer Vision, vol. 90, pp. 106-129, 2010.
[13] N.
Perwaiz, M. M. Fraz and M. Shahzad, "Person Re-Identification Using Hybrid Representation Reinforced by Metric Learning," IEEE Access, vol. 6, pp. 77334-77349, 2018.
[14] D. S. Cheng, M. Cristani, M. Stoppa, L. Bazzani and V. Murino, "Custom Pictorial Structures for Re-identification," in British Machine Vision Conference, Dundee, UK, 2011.
[15] S.-Z. Chen, C.-C. Guo and J.-H. Lai, "Deep Ranking for Person Re-Identification via Joint Representation Learning," IEEE Transactions on Image Processing, vol. 25, no. 5, pp. 2353-2367, 2016.
[16] W. Li, R. Zhao and X. Wang, "Human Re-Identification with Transferred Metric Learning," in Asian Conference on Computer Vision, Daejeon, Korea, 2012.
[17] X. Qian, Y. Fu, T. Xiang, Y.-G. Jiang and X. Xue, "Leader-Based Multi-Scale Attention Deep Architecture for Person Re-Identification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 2, pp. 371-385, 2020.
[18] I. B. Barbosa, M. Cristani, A. D. Bue, L. Bazzani and V. Murino, "Re-identification with RGB-D Sensors," in European Conference on Computer Vision (ECCV) Workshop, Florence, Italy, 2012.
[19] L. Ren, J. Lu, J. Feng and J. Zhou, "Uniform and Variational Deep Learning for RGB-D Object Recognition and Person Re-Identification," IEEE Transactions on Image Processing, vol. 28, no. 10, pp. 4970-4983, 2019.
[20] W. Li and X. Wang, "Locally Aligned Feature Transforms across Views," in IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 2013.
[21] W. Li, R. Zhao, T. Xiao and X. Wang, "DeepReID: Deep Filter Pairing Neural Network for Person Re-identification," in IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 2014.
[22] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang and Q. Tian, "Scalable Person Re-identification: A Benchmark," in IEEE International Conference on Computer Vision, Santiago, Chile, 2015.
[23] F. Pala, R. Satta, G. Fumera and F. Roli, "Multimodal Person Re-Identification Using RGB-D Cameras," IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 4, pp. 788-799, 2016.
[24] M. Gou, S. Karanam, W. Liu, O. Camps and R. J. Radke, "DukeMTMC4ReID: A Large-Scale Multi-camera Person Re-identification Dataset," in IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 2017.
[25] D. T. Nguyen, H. G. Hong, K. W. Kim and K. R. Park, "Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras," Sensors, vol. 17, no. 3, 2017.
[26] Z. Feng, J. Lai and X. Xie, "Learning Modality-Specific Representations for Visible-Infrared Person Re-Identification," IEEE Transactions on Image Processing, vol. 29, pp. 579-590, 2019.
[27] A. Wu, W.-S. Zheng, H.-X. Yu, S. Gong and J. Lai, "RGB-Infrared Cross-Modality Person Re-identification," in IEEE International Conference on Computer Vision, Venice, Italy, 2017.
[28] Z. Feng, J. Lai and X. Xie, "Learning View-Specific Deep Networks for Person Re-Identification," IEEE Transactions on Image Processing, vol. 27, no. 7, pp. 3472-3483, 2018.
[29] S. Karanam, M. Gou, Z. Wu, A. R. Borras, O. Camps and R. J. Radke, "A Systematic Evaluation and Benchmark for Person Re-Identification: Features, Metrics, and Datasets," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 3, pp. 523-536, 2019.
[30] A. Ess, B. Leibe and L. V.
Gool, “Depth and Appearance for Mobile Scene Analysis,” in International Conference on Computer Vision, Rio de Janeiro, Brazil, 2007. [31] C. Choe, G. Choe, T. Wang, S. Han and C. Yuan, “Deep feature learning with mixed distance maximization for person Re-identification,” Multimedia Tools and Applications, vol. 78, p. 27719–27741, 2019. [32] M. Hirzer, C. Beleznai, P. M. Roth and H. Bischof, “Person Re-identification by Descriptive and Discriminative Classification,” in Scandinavian Conference on Image Analysis, Ystad, Sweden, 2011. [33] S. Ksibi, M. Mejdoub and C. B. Amar, “Deep salient-Gaussian Fisher vector encoding of the spatio- temporal trajectory structures for person re-identification,” Multimedia Tools and Applications, vol. 78, p. 1583–1611, 2019. [34] D. Baltieri, R. Vezzani and R. Cucchiara, “3DPeS: 3D people dataset for surveillance and forensics,” in Proceedings of the 2011 joint ACM workshop on Human gesture and behavior understanding, Scottsdale, Arizona, USA, 2011. [35] S. Zhou, J. Wang, D. Meng, X. Xin, Y. Li, Y. Gong and N. Zheng, “Deep self-paced learning for person re-identification,” Pattern Recognition, vol. 76, pp. 739-751, 2018. [36] T. Wang, S. Gong, X. Zhu and S. Wang, “Person Re-identification by Video Ranking,” in European Conference on Computer Vision, Zurich, Switzerland, 2014. [37] R. Zhang, J. Li, H. Sun, Y. Ge, P. Luo, X. Wang and L. Lin, “SCAN: Self-and-Collaborative Attention Network for Video Person Re-Identification,” IEEE Transactions on Image Processing, vol. 28, no. 10, pp. 4870 - 4882, 2019. [38] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang and Q. Tian, “MARS: A Video Benchmark for Large- Scale Person Re-Identification,” in European Conference on Computer Vision, Amsterdam, The Netherlands, 2016. [39] W. Song, S. Li, T. Chang, A. Hao, Q. Zhao and H. Qin, “Context-Interactive CNN for Person Re- Identification,” IEEE Transactions on Image Processing , vol. 29, pp. 2860 - 2874, 2020. [40] Y. Wu, Y. Lin, X. Dong , Y. Yan , W. Ouyang and Y. Yang , “Exploit the Unknown Gradually: One-Shot Video-Based Person Re-identification by Stepwise Learning,” in IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018. [41] P. F. Felzenszwalb, . R. B. Girshick , D. McAllester and D. Ramanan, “Object Detection with Discriminatively Trained Part-Based Models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627-1645, 2010. [42] W. Zhong, T. Zhang, L. Jiang, J. Ji, Z. Zhang and H. Xiong, “Discriminative representation learning for person re-identification via multi-loss training,” Journal of Visual Communication and Image Representation, vol. 62, pp. 267-278, 2019. [43] F. Zhu, X. Kong, Q. Wu, H. Fu and M. Li, “A loss combination based deep model for person re- identification,” Multimedia Tools and Applications, vol. 77, 2018. [44] Y. Wen, K. Zhang, Z. Li and Y. Qiao, “A Discriminative Feature Learning Approach for Deep Face Recognition,” in European Conference on Computer Vision, Amsterdam, The Netherlands, 2016. [45] X. Fan, W. Jiang, H. Luo and M. Fei, “SphereRe-Id: Deep hypersphere manifold embedding for person re-identification,” Journal of Visual Communication and Image Representation, vol. 60, pp. 51-58, 2019. [46] R. Hadsell, S. Chopra and Y. LeCun, “Dimensionality Reduction by Learning an Invariant Mapping,” in IEEE Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 2006. [47] Z. Zhang and T. 
Si, “Learning deep features from body and parts for person re-identification in camera networks,” EURASIP Journal on Wireless Communications and Networking, 2018. [48] W. Zhong, L. Jiang, T. Zhang, J. Ji and H. Xiong, “Combining multilevel feature extraction and multi-loss learning for person re-identification,” Neurocomputing, vol. 334, pp. 68-78, 2019. [49] S. Ding, L. Lin, G. Wang and H. Chao, “Deep feature learning with relative distance comparison for person re-identification,” Pattern Recognition, vol. 48, no. 10, pp. 2993-3003, 2015. [50] D. Wu, S.-J. Zheng, C.-A. Yuan and D.-S. Huang, “A deep model with combined losses for person re- identification,” Cognitive Systems Research, vol. 54, pp. 74-82, 2019. [51] D. Cheng, Y. Gong, S. Zhou, J. Wang and N. Zheng, “Person Re-identification by Multi-Channel Parts- Based CNN with Improved Triplet Loss Function,” in IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016. [52] A. Hermans, L. Beyer and B. Leibe, “In Defense of the Triplet Loss for Person Re-Identification,” arXiv:1703.07737, 2017. [53] F. Zhu, X. Kong, L. Zheng, H. Fu and Q. Tian, “Part-Based Deep Hashing for Large-Scale Person Re- Identification,” IEEE Transactions on Image Processing , vol. 26, no. 10, pp. 4806-4817, 2017. [54] C. Su, S. Zhang, J. Xing, W. Gao and Q. Tian, “Multi-type attributes driven multi-camera person re- identification,” Pattern Recognition, vol. 75, pp. 77-89, 2018. [55] C. Yuan, J. Guo, P. Feng, Z. Zhao, Y. Luo, C. Xu, T. Wang and K. Duan, “Learning deep embedding with mini-cluster loss for person re-identification,” Multimedia Tools and Applications, vol. 78, p. 21145– 21166, 2019. [56] T. Si, Z. Zhang and S. Liu, “Compact Triplet Loss for person re-identification in camera sensor networks,” Ad Hoc Networks, vol. 95, 2019. [57] W. Yang, Y. Yan and S. Chen, “Adaptive deep metric embeddings for person re-identification under occlusions,” Neurocomputing, vol. 340, pp. 125-132, 2019. [58] Z. Zhang, Y. Wang and S. Liu, “Cross-domain person re-identification using Dual Generation Learning in camera sensor networks,” Ad Hoc Networks, vol. 97, no. 102019, 2020. [59] M. Jiang, B. Leng, G. Song and Z. Meng, “Weighted triple-sequence loss for video-based person re- identification,” Neurocomputing, vol. 381, pp. 314-321, 2020. [60] Z. Zhang, Y. Xie, D. Li, W. Zhang and Q. Tian, “Learning to Align via Wasserstein for Person Re- Identification,” IEEE Transactions on Image Processing, vol. 29, pp. 7104-7116, 2020. [61] A. Sikdar and A. S. Chowdhury, “Scale-invariant batch-adaptive residual learning for person re- identification,” Pattern Recognition Letters, vol. 129, pp. 279-286, 2020. [62] Y. Yan, B. Ni, J. Liu and X. Yang, “Multi-level attention model for person re-identification,” Pattern Recognition Letters, vol. 127, pp. 156-164, 2019. [63] Y. Tian, Q. Li, D. Wang and B. Wan, “Robust joint learning network: improved deep representation learning for person re-identification,” Multimedia Tools and Applications, vol. 78, p. 24187–24203, 2019. [64] H. Liu, J. Feng, M. Qi, J. Jiang and S. Yan, “End-to-End Comparative Attention Networks for Person Re- Identification,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3492 - 3506, 2017. [65] S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Computation, vol. 9, no. 8, 1997. [66] T. Bao, B. Wang, S. Karmoshi , C. Liu and M. 
Zhu , “Learning Discriminative Features through an Individual's Entire Body and The Visual Attentional Parts for Person Re-Identification,” International Journal of Innovative Computing, Information and Control, vol. 15, no. 3, pp. 1037-1048, 2019. [67] L. Wu, Y. Wang, J. Gao and X. Li, “Where-and-When to Look: Deep Siamese Attention Networks for Video-Based Person Re-Identification,” IEEE Transactions on Multimedia , vol. 21, no. 6, pp. 1412-1424, 2019. [68] R. Li, B. Zhang, D.-J. Kang and Z. Teng, “Deep attention network for person re-identification with multi- loss,” Computers & Electrical Engineering, vol. 79, 2019. [69] C. Wan, Y. Wu, X. Tian, J. Huang and X.-S. Hua, “Concentrated Local Part Discovery With Fine-Grained Part Representation for Person Re-Identification,” IEEE Transactions on Multimedia, vol. 22, no. 6, pp. 1605-1618, 2019. [70] C.-P. Tay, S. Roy and K.-H. Yap, “AANet: Attribute Attention Network for Person Re-Identifications,” in Conference on Computer Vision and Pattern Recognition, California, 2019. [71] R. Hou, B. Ma, H. Chang, X. Gu, S. Shan and X. Chen, “VRSTC: Occlusion-Free Video Person Re- Identification,” in Conference on Computer Vision and Pattern Recognition, California, 2019. [72] A. Subramaniam, A. Nambiar and A. Mittal, “Co-Segmentation Inspired Attention Networks for Video- Based Person Re-Identification,” in International Conference on Computer Vision , Seoul, 2019. [73] G. Chen, C. Lin, L. Ren, J. Lu and J. Zhou, “Self-Critical Attention Learning for Person Re-Identification,” in International Conference on Computer Vision, Seoul, 2019. [74] W. Zhang, X. He, X. Yu, W. Lu, Z. Zha and Q. Tian, “A Multi-Scale Spatial-Temporal Attention Model for Person Re-Identification in Videos,” IEEE Transactions on Image Processing , vol. 29, pp. 3365 - 3373, 2020. [75] Y. Li, X. Jiang and J.-N. Hwang, “Effective person re-identification by self-attention model guided feature learning,” Knowledge-Based Systems, vol. 187, 2020. [76] Y. Huang, H. Sheng, Y. Zheng and Z. Xiong, “DeepDiff: Learning deep difference features on human body parts for person re-identification,” Neurocomputing, vol. 241, pp. 191-203, 2017. [77] J. H. Koo, S. W. Cho, N. R. Baek, M. C. Kim and K. R. Park, “CNN-Based Multimodal Human Recognition in Surveillance Environments,” Sensors, vol. 18, no. 9, 2018. [78] D. Tao, Y. Guo, B. Yu, J. Pang and Z. Yu, “Deep Multi-View Feature Learning for Person Re- Identification,” IEEE Transactions on Circuits and Systems for Video Technology , vol. 28, no. 10, pp. 2657 - 2666, 2018. [79] Y. ZHANG, Z. ZHOU, B. LI, Y. HUANG, J. HUANG and Z. CHEN , “Improving Slice-Based Model for Person Re-ID with Multi-Level Representation and Triplet-Center Loss,” IEICE TRANSACTIONS on Information and Systems, Vols. 102-D, no. 11, pp. 2230-2237, 2019. [80] C. Zhao, K. Chen, Z. Wei, Y. Chen, D. Miao and W. Wang, “Multilevel triplet deep learning model for person re-identification,” Pattern Recognition Letters, vol. 117, pp. 161-168, 2019. [81] C. Yuan, J. Guo, P. Feng, Z. Zhao, C. Xu, T. Wang, G. Choe and K. Duan, “A jointly learned deep embedding for person re-identification,” Neurocomputing, vol. 330, pp. 127-137, 2019. [82] H. Ling, Z. Wang, P. Li, Y. Shi, J. Chen and F. Zou, “Improving person re-identification by multi-task learning,” Neurocomputing, vol. 347, pp. 109-118, 2019. [83] H. Tian, X. Zhang, L. Lan and Z. Luo, “Person re-identification via adaptive verification loss,” Neurocomputing, vol. 359, pp. 93-101, 2019. [84] Y. Wang, Z. Wang and M. 
Jiang, “Part-based pyramid loss for person re-identification,” International Journal of Information and Communication Technology, vol. 15, no. 4, 2019. [85] H. Yao, S. Zhang, R. Hong, Y. Zhang, C. Xu and Q. Tian, “Deep Representation Learning With Part Loss for Person Re-Identification,” IEEE Transactions on Image Processing , vol. 28, no. 6, pp. 2860-2871, 2019. [86] Z. He, C. Jung, Q. Fu and Z. Zhang, “Deep feature embedding learning for person re-identification based on lifted structured loss,” Multimedia Tools and Applications, vol. 78, p. 5863–5880, 2019. [87] R. Quispe and H. Pedrini, “Improved person re-identification based on saliency and semantic parsing with deep neural network models,” Image and Vision Computing, vol. 92, 2019. [88] S. Li, H. Yu and R. Hu, “Attributes-aided part detection and refinement for person re-identification,” Pattern Recognition, vol. 97, no. 107016, 2020. [89] X. Bai, M. Yang, T. Huang, Z. Dou, R. Yu and Y. Xu, “Deep-Person: Learning discriminative deep features for person Re-Identification,” Pattern Recognition, vol. 98, 2020. [90] Z. Zhang and M. Huang, “Person Re-Identification Based on Heterogeneous Part-Based Deep Network in Camera Networks,” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 4, no. 1, pp. 51-60, 2020. [91] C. Shen, Z. Jin, W. Chu, R. Jiang, Y. Chen, G.-J. Qi and X.-S. Hua, “Multi-level Similarity Perception Network for Person Re-identification,” ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 15, no. 2, 2019. [92] M. Qi, J. Han, J. Jiang and H. Liu, “Deep feature representation and multiple metric ensembles for person re-identification in security surveillance system,” Multimedia Tools and Applications, vol. 78, p. pages27029–27043, 2019. [93] L. Zheng, Y. Huang, H. Lu and Y. Yang, “Pose-Invariant Embedding for Deep Person Re-Identification,” IEEE Transactions on Image Processing , vol. 28, no. 9, pp. 4500-4509, 2019. [94] T. Yu, H. Jin, W.-T. Tan and K. Nahrstedt, “SKEPRID: Pose and Illumination Change-Resistant Skeleton- Based Person Re-Identification,” ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 14, no. 4, 2018. [95] Y. Chen, S. Duffner, A. Stoian, J.-Y. Dufour and A. Baskurt, “Deep and low-level feature based attribute learning for person re-identification,” Image and Vision Computing, vol. 79, pp. 25-34, 2018. [96] M. Fu, S. Sun, N. Chen, D. Wang and X. Tong, “Deep Fusion Feature Presentations for Nonaligned Person Re-Identification,” IEEE Access, vol. 7, pp. 73253 - 73261, 2019. [97] Z. Zheng, L. Zheng and Y. Yang, “Pedestrian Alignment Network for Large-scale Person Re- Identification,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 10, pp. 3037 - 3045, 2019. [98] Y.-C. Chen, X. Zhu, W.-S. Zheng and J.-H. Lai , “Person Re-Identification by Camera Correlation Aware Feature Augmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence , vol. 40, no. 2, pp. 392-408, 2018. [99] H. Luo, W. Jiang, X. Zhang, X. Fan, J. Qian and C. Zhang, “AlignedRe-Id++: Dynamically matching local information for person re-identification☆,” Pattern Recognition, vol. 94, pp. 53-61, 2019. [100] L. Wei, S. Zhang, H. Yao, W. Gao and Q. Tian, “GLAD: Global–Local-Alignment Descriptor for Scalable Person Re-Identification,” IEEE Transactions on Multimedia, vol. 21, no. 4, pp. 986 - 999, 2019. [101] L. Wu, R. Hong, Y. Wang and M. 
Wang, “Cross-Entropy Adversarial View Adaptation for Person Re- Identification,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 7, pp. 2081 - 2092, 2020. [102] Y. Tang, X. Yang, N. Wang, B. Song and X. Gao, “Person Re-Identification with Feature Pyramid Optimization and Gradual Background Suppression,” Neural Networks, vol. 124, pp. 223-232, 2020. [103] Z. Wang, J. Jiang, Y. Wu, M. Ye, X. Bai and S. Satoh, “Learning Sparse and Identity-Preserved Hidden Attributes for Person Re-Identification,” IEEE Transactions on Image Processing, vol. 29, pp. 2013 - 2025, 2020. [104] D. K. Vishwakarma and S. Upadhyay, “A Deep Structure of Person Re-Identification Using Multi-Level Gaussian Models,” IEEE Transactions on Multi-Scale Computing Systems, vol. 4, no. 4, pp. 513-521, 2018. [105] L. Wu, Y. Wang, J. Gao and X. Li, “Deep adaptive feature embedding with local sample distributions for person re-identification,” Pattern Recognition, vol. 73, pp. 275-288, 2018. [106] L. Wu, Y. Wang, X. Li and J. Gao, “What-and-where to match: Deep spatially multiplicative integration networks for person re-identification,” Pattern Recognition, vol. 76, pp. 727-738, 2018. [107] Q. Ke, M. Bennamoun, H. Rahmani, S. An, F. Sohel and F. Boussaid, “Identity Adaptation for Person Re- Identification,” IEEE Access, vol. 6, pp. 48147 - 48155, 2018. [108] J. Zhang, X. Hu, M. Wang, H. Qiao, X. Li and T. Sun, “Person Re-Identification via Group Symmetry Theory,” IEEE Access, vol. 7, pp. 133686 - 133693, 2019. [109] B. Jiang, X. Wang and B. Luo, “PH-GCN: Person Re-identification with Part-based Hierarchical Graph Convolutional Network,” arXiv:1907.08822, 2019. [110] D. Wu, H.-W. Yang, D.-S. Huang, C.-A. Yuan, X. Qin, Y. Zhao, X.-Y. Zhao and J.-H. Sun, “Omnidirectional Feature Learning for Person Re-Identification,” IEEE Access, vol. 7, pp. 28402 - 28411, 2019. [111] Y. Liu, N. Song and Y. Han, “Multi-cue fusion: Discriminative enhancing for person re-identification,” Journal of Visual Communication and Image Representation, vol. 58, pp. 46-52, 2019. [112] Y. Zhang, X. Gu, J. Tang, K. Cheng and S. Tan, “Part-Based Attribute-Aware Network for Person Re- Identification,” IEEE Access, vol. 7, pp. 53585 - 53595, 2019. [113] H. Wu, M. Xin, W. Fang, H.-M. Hu and Z. Hu, “Multi-Level Feature Network With Multi-Loss for Person Re-Identification,” IEEE Access, vol. 7, pp. 91052 - 91062, 2019. [114] C. Wang, L. Song, G. Wang, Q. Zhang and X. Wang, “Multi-scale multi-patch person re-identification with exclusivity regularized softmax,” Neurocomputing, vol. 382, pp. 64-70, 2020. [115] F. Wang, C. Zhang, S. Chen, G. Ying and J. Lv, “Engineering Hand-designed and Deeply-learned features for person Re-identification,” Pattern Recognition Letters, vol. 130, pp. 293-298, 2020. [116] L. Ren, J. Lu, J. Feng and J. Zhou, “Multi-modal uniform deep learning for RGB-D person re- identification,” Pattern Recognition, vol. 72, pp. 446-457, 2017. [117] X. Xiang, N. Lv, Z. Yu, M. Zhai and A. E. Saddik, “Cross-Modality Person Re-Identification Based on Dual-Path Multi-Branch Network,” IEEE Sensors Journal, vol. 19, no. 23, pp. 11706 - 11713, 2019. [118] Y.-S. Chang, M.-Y. Wang, L. He, W. Lu, H. Su, N. Gao and X.-A. Yang, “Joint deep semantic embedding and metric learning for person re-identification,” Pattern Recognition Letters, vol. 130, pp. 306-311, 2020. [119] I. Gohar, Q. Riaz, M. Shahzad, M. Z. Ul, H. Hashmi, H. Tahir and M. E. 
Ul Haq, “Person Re-Identification Using Deep Modeling of Temporally Correlated Inertial Motion Patterns,” Sensors, vol. 20, no. 3, 2020. [120] P. Wang, Z. Zhao, F. Su, Y. Zhao, H. Wang, L. Yang and Y. Li, “Deep Multi-Patch Matching Network for Visible Thermal Person Re-Identification,” IEEE Transactions on Multimedia, 2020. [121] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand and . V. . S. Lempitsky, “Domain-adversarial training of neural networks,” The Journal of Machine Learning Research, vol. 17, no. 1, 2016. [122] H. Wang and J. Hu, “Deep Multi-Task Transfer Network for Cross Domain Person Re-Identification,” IEEE Access, vol. 8, pp. 5339 - 5348, 2020. [123] B. Xu, J. Liu, X. Hou, K. Sun and G. Qiu, “Cross Domain Person Re-Identification With Large Scale Attribute Annotated Datasets,” IEEE Access, vol. 7, 2019. [124] S. Zhou, M. Ke and P. Luo, “Multi-camera transfer GAN for person re-identification,” Journal of Visual Communication and Image Representation, vol. 59, pp. 393-400, 2019. [125] R. Sun, W. Lu, Y. Zhao, J. Zhang and C. Kai, “A Novel Method for Person Re-Identification: Conditional Translated Network Based on GANs,” IEEE Access, vol. 8, pp. 3677 - 3686, 2020. [126] T. Xiao, H. Li, W. Ouyang and X. Wang, “Learning Deep Feature Representations with Domain Guided Dropout for Person Re-identification,” in IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016. [127] A. Genç and H. K. Ekenel, “Cross-dataset person re-identification using deep convolutional neural networks: effects of context and domain adaptation,” Multimedia Tools and Applications, vol. 78, p. 5843– 5861, 2019. [128] J. Hu, J. Lu, Y.-P. Tan and J. Zhou, “Deep Transfer Metric Learning,” IEEE Transactions on Image Processing, vol. 25, no. 12, pp. 5576 - 5588, 2016. [129] L. Lin, G. Wang, W. Zuo, X. Feng and L. Zhang, “Cross-Domain Visual Matching via Generalized Similarity Measure and Feature Learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence , vol. 39, no. 6, pp. 1089 - 1102, 2017. [130] J. Zhu, H. Zeng, S. Liao, Z. Lei, C. Cai and L. Zheng, “Deep Hybrid Similarity Learning for Person Re- Identification,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 11, pp. 3183 - 3193, 2018. [131] Y. Duan, J. Lu, J. Feng and J. Zhou, “Deep Localized Metric Learning,” IEEE Transactions on Circuits and Systems for Video Technology , vol. 28, no. 10, pp. 2644 - 2656, 2018. [132] J. Hu, J. Lu and Y.-P. Tan, “Sharable and Individual Multi-View Metric Learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence , vol. 40, no. 9, pp. 2281 - 2288, 2018. [133] M. Chen, Y. Ge, X. Feng, C. Xu and D. Yang , “Person Re-Identification by Pose Invariant Deep Metric Learning With Improved Triplet Loss,” IEEE Access, vol. 6, pp. 68089 - 68095, 2018. [134] C.-X. Ren, X.-L. Xu and Z. Lei, “A Deep and Structured Metric Learning Method for Robust Person Re- Identification,” Pattern Recognition, vol. 96, 2019. [135] Z. Ding, M. Shao, W. Hwang, S. Suh, J.-J. Han, C. Choi and Y. Fu, “Robust Discriminative Metric Learning for Image Representation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 11, pp. 3173 - 3183, 2019. [136] M. Xiong, D. Chen, J. Chen, J. Chen, B. Shi, C. Liang and R. Hu, “Person re-identification with multiple similarity probabilities using deep metric learning for efficient smart security applications,” Journal of Parallel and Distributed Computing, vol. 132, pp. 
230-241, 2019. [137] N. McLaughlin, J. M. d. Rincon and P. Miller, “Video Person Re-Identification for Wide Area Tracking Based on Recurrent Neural Networks,” IEEE Transactions on Circuits and Systems for Video Technology , vol. 29, no. 9, pp. 2613 - 2626, 2019. [138] L. Wu, Y. Wang, L. Shao and M. Wang, “3-D personVLAD: learning deep global representations for video-based person Re-Identification,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 11, pp. 3347-3359, 2019. [139] R. Sun, Q. Huang, M. Xia and J. Zhang, “Video-Based Person Re-Identification by an End-To-End Learning Architecture with Hybrid Deep Appearance-Temporal Feature,” Sensors, vol. 18, no. 11, 2018. [140] G. Wang, J. Lai and X. Xie , “P2SNet: Can an Image Match a Video for Person Re-Identification in an End-to-End Way?,” IEEE Transactions on Circuits and Systems for Video Technology , vol. 28, no. 10, pp. 2777 - 2787, 2018. [141] J. Meng, A. Wu and W.-S. Zheng, “Deep asymmetric video-based person re-identification,” Pattern Recognition, vol. 93, pp. 430-441, 2019. [142] J. Liu, Z.-J. Zheng-Jun Zha, X. Chen, Z. Wang and Y. Zhang, “Dense 3D-Convolutional Neural Network for Person Re-Identification in Videos,” ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 15, no. 15, 2019. [143] L. Wu, Y. Wang, H. Yin, M. Wang and L. Shao, “Few-Shot Deep Adversarial Learning for Video-Based Person Re-Identification,” IEEE Transactions on Image Processing, vol. 29, pp. 1233 - 1245, 2020. [144] D. Avola, L. Cinque, A. Fagioli, G. L. Foresti, D. Pannone and C. Piciarelli, “Bodyprint—A Meta-Feature Based LSTM Hashing Model for Person Re-Identification,” Sensors, vol. 20, no. 18, 2020. [145] Y. Wu, O. E. F. Bourahla, X. Li, F. Wu, Q. Tian and X. Zhou, “Adaptive Graph Representation Learning for Video Person Re-Identification,” IEEE Transactions on Image Processing, vol. 29, pp. 8821 - 8830, 2020.
arXiv:1406.3506v1 [cs.AI] 13 Jun 2014

Eigenspace Method for Spatiotemporal Hotspot Detection

Hadi Fanaee-T and João Gama
Laboratory of Artificial Intelligence and Decision Support (LIAAD), University of Porto
INESC TEC, Rua Dr. Roberto Frias, Porto, Portugal
[email protected] and [email protected]

Abstract

Hotspot detection aims at identifying subgroups in the observations that are unexpected with respect to some baseline information. For instance, in disease surveillance, the purpose is to detect sub-regions of the spatiotemporal space where the count of reported diseases (e.g. cancer) is higher than expected with respect to the population. The state-of-the-art method for this kind of problem is Space-Time Scan Statistics (STScan), which exhaustively searches the whole space through a sliding window looking for significant spatiotemporal clusters. STScan makes some restrictive assumptions about the distribution of the data, the shape of the hotspots and the quality of the data, which can be unrealistic for some nontraditional data sources. A novel methodology called EigenSpot is proposed that, instead of performing an exhaustive search over the space, tracks changes in the space-time correlation structure. Not only is the new approach much more computationally efficient, but it also makes no assumptions about the data distribution, hotspot shape or data quality. The principal idea is that, by jointly combining the abnormal elements of the principal spatial and temporal singular vectors, the location of hotspots in the spatiotemporal space can be approximated. A comprehensive experimental evaluation, on both simulated and real data sets, reveals the effectiveness of the proposed method.

Keywords: Hotspot Detection, Spatiotemporal Data, Eigenspace, SVD, Outbreak Detection

1 Introduction

Eigenspace techniques are very popular and encompass many applications in data mining, signal processing, information retrieval and other domains. A famous instance of such an application relates to the success of the Google search engine. As described in an article entitled “$25,000,000,000 Eigenvector” [7], the Google search engine is largely attributed to eigenspace techniques. Another success story is the application of the Singular Value Decomposition (SVD) [13] to collaborative filtering. In 2008, the BellKor team won the $1,000,000 prize for improving the Netflix movie recommendation system; their later report [5] stated that SVD was among the key data analysis tools they used. However, the application of eigenspace techniques is not restricted to the above instances. There are several examples in other areas and sciences. For instance, in face recognition [31], a specific set of the largest eigenvectors can be used to approximate images of the human face. In structural engineering, both eigenvalues and eigenvectors are used to estimate the vibration of structures. In control engineering, the eigenvalues of a linear system are used to assess its stability and response [32]. Nevertheless, despite their merits, eigenspace techniques have not yet been applied to some promising problems, such as hotspot detection. Hotspot detection, which appears under different terminologies such as outbreak detection, cluster detection or event detection, is related to clustering and anomaly detection, yet it is distinct from both.
In clustering, the entire data set is partitioned into groups, whereas in anomaly detection, anomalous points are sought out. Hotspot detection addresses a similar problem, with the difference that anomalous instances are recognized given some baseline information. In other words, looking at the dataset alone, everything might seem normal; however, when the cases are considered alongside the baseline, some points might be considered unexpected. A realistic application scenario for hotspot detection is disease surveillance. Suppose that we have the population of different postal codes, over a range of years, as the baseline information, and the count of reported diseases in the same postal codes and years as the cases dataset. The goal is to detect those spatiotemporal regions that contain unexpected counts. For instance, an output like “zones S1, S2 and S3 during the years T1 to T5” might be considered a spatiotemporal hotspot. The detection of such hotspots enables officials to better target essential medical care and preventive measures.

The current methods for hotspot detection are twofold: clustering-based techniques and scan statistics-based ones. Clustering-based techniques, such as [21], infer thresholds from the population data and then apply these thresholds to cluster the data points in the cases set. Their prominent benefit, as opposed to the other methods, is that they provide the exact shape of the clusters. However, handling complex data, such as spatiotemporal data, is not straightforward for these techniques. Besides, clustering methods do not account for chance and randomness, which are very important in sensitive applications such as security and public health. Moreover, there is no standard clustering method for hotspot detection that is widely accepted by the community; the methods are scattered and diverse in their technical details. Clustering methods also require restrictive input parameters, which limits their use in automatic settings. The second group of techniques, which relies on scan statistics, is widely used and accepted by the epidemiological community. These techniques exhaustively scan the whole space to find interesting spatial and spatiotemporal clusters. A specific statistic is computed for each possible window, the potential clusters are sorted based on the obtained statistics, and the statistical significance of the top-k clusters is then assessed via Monte Carlo simulation. Since these methods scan the entire space, they are extremely computationally expensive. Spatial scan statistics [14] requires computation time of O(N^3), and space-time scan statistics (STScan) [14, 15, 18, 19] requires O(N^4). Some recent efforts have been made to reduce this complexity. For instance, [2] propose a method that requires O((1/ε)N^2 log^2 N) for the spatial scan, which is more efficient than O(N^3). However, even in the best case, the complexity of space-time scan statistics has not been reduced below O(N^3). This high computational cost has practically restricted their use in real-time applications and on large-scale data sets. Besides, scan statistics-based techniques rely on strong parametric model assumptions (e.g. Poisson or Gaussian counts) [28]. These assumptions degrade performance when the models are incorrect for nontraditional data sources.
Additionally, scan statistics-based methods are not efficient at detecting irregularly shaped clusters [9, 30], beyond circles (spatial scan) and cylinders (space-time scan). They also assume that the data is of high quality, and are hence vulnerable to noise and outliers [27]. Our proposed method is a solution to some of the above-mentioned issues in scan statistics-based methods. We propose an efficient method (linear in both the space and time dimensions) for approximating hotspots in the spatiotemporal space, without the need for exhaustive search. Instead of looking for deviations from an assumed parametric model, we track changes in the space-time correlation structure using eigenspace techniques. This approach enables us to detect irregularly shaped hotspots even in noisy data sets, without any prior knowledge about the nature of the data or the hotspot characteristics. To the best of our knowledge, this problem has not been addressed by other researchers. Our approach also differs from those that focus on improving the efficiency of scan statistics-based methods (e.g. [2, 25, 26]). We do not improve the efficiency of scan statistics-based methods; rather, we propose and examine a new methodology that follows a different aim. Hence, this is not an “apples to apples” comparison, as the two groups of approaches have inherent differences and, consequently, their own applications. STScan can be more helpful for retrospective and sensitive applications, where some prior knowledge exists about the nature of the hotspot and the data. Our approach, on the other hand, focuses more on real-time applications, where neither the nature of the data nor the hotspot characteristics are known in advance. In such circumstances, a computationally feasible approximation method that rapidly identifies the alarming areas without any prior knowledge can be very useful.

The rest of the paper is organized as follows. Section 2 describes the problem, the proposed solution and algorithm, as well as an illustrative example. Section 3 presents the experimental evaluation and the results of the simulation study and the real case study. The last section concludes with the final remarks.

2 The proposed approach

2.1 The problem

Given a spatiotemporal count matrix for the cases, we need to detect those spatiotemporal regions (hotspots) that seem unexpected given the baseline spatiotemporal matrix. Each cell in each matrix represents a count corresponding to a specific region and time. In particular, for disease outbreak detection, each cell in the baseline matrix represents the population of a region in a specific time period, and each cell in the cases matrix represents the count of reported diseases in a specific region within a given time period. The purpose is to determine those subgroups of the spatiotemporal space whose reported cases are unexpected. A baseline method for this problem is to compute the ratio of the cases to the population for all possible spatiotemporal regions (each cell in the spatiotemporal matrix) and then compute the z-scores of the ratios. The null hypothesis H0 (“there is no hotspot”) is rejected if some spatiotemporal regions with a z-score greater than a threshold are found. This approach, both theoretically and practically (as will be illustrated later), produces too many false alarms, since for an n × m matrix it requires n × m comparisons.
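For illustration, a minimal Python/NumPy sketch of this baseline is given below. This is an illustrative reimplementation rather than the code used in our experiments; the function name and the cell-wise two-tailed test are our assumptions.

import numpy as np
from scipy.stats import norm

def baseline_hotspots(B, C, alpha=0.05):
    # Flag every cell whose cases/population ratio has an extreme z-score.
    # B, C: n x m baseline (population) and cases count matrices (B nonzero).
    # Note that this performs n x m comparisons, one per cell.
    ratio = C / B                                   # cases per capita, cell by cell
    z = (ratio - ratio.mean()) / ratio.std(ddof=1)  # standardize all n*m ratios
    p = 2 * norm.sf(np.abs(z))                      # two-tailed p-value per cell
    return p < alpha                                # True marks a rejected (hotspot) cell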
In this paper, we propose an approach that performs only n + m comparisons. An unsupervised method, such as clustering on the ratios, could also be used; however, it suffers from the same problem as the baseline method (it requires n × m comparisons, not n + m). Besides, the need to determine an appropriate cut-point or number of clusters adds complexity and user involvement to the system. We are interested in developing a system with the following characteristics: 1) it does not require any input parameter; and 2) it weighs all the possible hotspots based on a standard metric such as statistical significance (p-value). The benefit is that the output can be compared with relevant systems or methods, and the alpha threshold is easy to set (usually alpha = 0.01 or 0.05).

2.2 The Method

In this section, the logic behind the method is described. Assume that we have two identical n × m matrices B (baseline) and C (cases), where n is the number of components in the spatial dimension and m is the number of components in the temporal dimension. The SVD of an n × m matrix is a factorization of the form M = UΣV*. The n columns of U and the m columns of V are called the left-singular vectors and right-singular vectors of the matrix, respectively. The left-singular vectors correspond to the spatial dimension, while the right-singular ones correspond to the temporal dimension. To clarify the exposition, we use the terms spatial singular vector and temporal singular vector to refer to the principal left singular vector and the principal right singular vector, respectively. Note that we take only the singular vector corresponding to the largest singular value for the comparison, because the first principal singular vector represents the largest possible variance; hence, it explains or extracts the largest part of the inertia of the data [1]. Now, let us denote the spatial singular vectors of the baseline (B) and the cases (C) by sb = (sb1, sb2, ..., sbn) and sc = (sc1, sc2, ..., scn), respectively, and the temporal singular vectors of B and C by tb = (tb1, tb2, ..., tbm) and tc = (tc1, tc2, ..., tcm). If we hypothetically assume that B = C, then sb = sc and tb = tc. In this condition, the angle between sb and sc, and that between tb and tc, would be almost zero. Now assume that some change occurs in C and that this change corresponds to a specific region and time period. The matrices are then no longer identical, and consequently the angles between their singular vectors increase. From this angle change alone, we can only infer that some change has occurred, but we do not know what subgroup of the data is affected. If we could identify the elements of sc and tc that caused this change, we would be able to identify the spatial and temporal components of the affected area. For instance, assume that through some hypothetical method we could identify that sc1 from sc and tc1 from tc correspond to the affected area. If we removed the region corresponding to sc1 and the time period corresponding to tc1 from both the baseline and cases data sets, the matrices would again become identical, and the angles between the pairs of singular vectors would again be almost zero.
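To make this concrete, the following Python/SciPy sketch computes the principal spatial and temporal singular vectors of two count matrices and the angle between each pair. It is an illustration on assumed random inputs, not code from our experiments; note that the SVD is sign-ambiguous, so the sketch fixes a canonical sign, a detail glossed over in the discussion above.

import numpy as np
from scipy.sparse.linalg import svds

def principal_vectors(M):
    # One-rank SVD: principal left (spatial) and right (temporal) singular vectors.
    u, s, vt = svds(M.astype(float), k=1)
    sv, tv = u[:, 0], vt[0, :]
    if sv.sum() < 0:        # canonical sign: the leading vectors of a nonnegative
        sv, tv = -sv, -tv   # count matrix can be taken nonnegative (Perron-Frobenius)
    return sv, tv

def angle(a, b):
    # Angle (radians) between two vectors; near zero for identical structure.
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

rng = np.random.default_rng(0)
B = rng.poisson(100.0, size=(32, 19)).astype(float)  # hypothetical baseline counts
C = rng.poisson(B).astype(float)                     # cases following the baseline
sb, tb = principal_vectors(B)
sc, tc = principal_vectors(C)
print(angle(sb, sc), angle(tb, tc))                  # both near zero: no hotspot

If a block of C is inflated to simulate a hotspot, both angles grow noticeably, which is exactly the change the method exploits.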
In the hypothetical example above, (sc1, tc1) is called a hotspot, and sc1 and tc1 are called the spatial and temporal components of the hotspot, respectively. The process of finding these components is called hotspot detection. Note that in this work, the angle between the singular vectors is not actually computed; the angle concept is only used to explain the rationale behind the proposed method.

Some assumptions were made above purely to simplify the explanation. In practice, we rarely find identical baseline and cases matrices. However, we can assume that in a normal condition, where no hotspot exists, both the baseline and the cases set should have the same space-time correlation structure. In this case, the pairs of singular vectors of the baseline and cases sets, regardless of the data distribution, should stay at a constant distance. If a hotspot starts to grow in the cases set, this change can be observed directly in the elements of the singular vectors: the distances between the singular vector elements become abnormal for the elements corresponding to the affected areas, in both the spatial and the temporal dimension. We exploit this idea to develop our algorithm for hotspot detection. According to the explanation above, two kinds of tools are required: a tool for obtaining the singular vectors of a non-square matrix, and a process control tool for monitoring the distances between singular vector elements. SVD and statistical process control (SPC) are two powerful techniques that have been successfully applied to many problems in different domains over several years, and they are the state-of-the-art methods for these requirements. In the next section, we explain how these techniques are exploited in the deployment of the solution.

2.3 The EigenSpot Algorithm

Algorithm 1 EigenSpot
//n: number of items in the spatial dimension
//m: number of items in the temporal dimension
//B: Baseline n × m spatiotemporal matrix
//C: Cases n × m spatiotemporal matrix
//α: Statistical significance level (e.g. 0.05)
Input: B, C, α
Output: Hotspots
1: [sb, tb] = 1-rank SVD(B)    (baseline: sb: spatial singular vector, tb: temporal singular vector)
2: [sc, tc] = 1-rank SVD(C)    (cases: sc: spatial singular vector, tc: temporal singular vector)
3: for i = 1:n do
4:     dsi = sci − sbi    (ds: subtract vector for the spatial dimension)
5: end for
6: for j = 1:m do
7:     dtj = tcj − tbj    (dt: subtract vector for the temporal dimension)
8: end for
9: Spatial out-of-control elements ← ControlChart(ds, α)
10: Temporal out-of-control elements ← ControlChart(dt, α)
11: Hotspots ← all joint combinations of out-of-control elements in the spatial and temporal dimensions

In this section, our proposed EigenSpot algorithm is explained in detail. The inputs of Algorithm 1 are the n × m baseline and cases matrices, where n is the number of regions and m is the number of temporal instants. We start by decomposing both matrices using a one-rank SVD, which gives us the principal singular vectors corresponding to the spatial and temporal dimensions (lines 1-2). The reason a low-rank SVD is applied rather than a full-rank SVD is that our approach requires only the principal singular vector of each matrix.
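A compact Python rendering of Algorithm 1 might look as follows. This is a sketch in NumPy/SciPy rather than the MATLAB implementation used in our experiments, and the sign normalization of the singular vectors is an implementation detail not spelled out in the pseudocode.

import numpy as np
from scipy.sparse.linalg import svds
from scipy.stats import norm

def _principal(M):
    # Lines 1-2: one-rank SVD, with a canonical sign for the sign-ambiguous vectors.
    u, s, vt = svds(M.astype(float), k=1)
    sv, tv = u[:, 0], vt[0, :]
    if sv.sum() < 0:
        sv, tv = -sv, -tv
    return sv, tv

def _control_chart(d, alpha):
    # Lines 9-10: z-score control chart; returns indices of out-of-control elements.
    z = (d - d.mean()) / d.std(ddof=1)
    return np.where(2 * norm.sf(np.abs(z)) < alpha)[0]

def eigenspot(B, C, alpha=0.05):
    sb, tb = _principal(B)       # baseline singular vectors
    sc, tc = _principal(C)       # cases singular vectors
    ds, dt = sc - sb, tc - tb    # lines 3-8: element-wise subtract vectors
    regions = _control_chart(ds, alpha)
    times = _control_chart(dt, alpha)
    # Line 11: joint combination of spatial and temporal components.
    return [(r, t) for r in regions for t in times]

Calling eigenspot(B, C) returns (region index, time index) pairs approximating the hotspot in the original spatiotemporal space.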
The full-rank SVD is more expensive: for an N × N matrix it requires O(N^3), while the low-rank SVD requires O(kN^2), where in our case k = 1; therefore, each one-rank SVD requires only O(N^2). The principal singular vector explains the majority of the variance in both the cases and the baseline, so it is appropriate for matching purposes. In the next step, we subtract the elements of each pair of singular vectors (lines 3-8). If we denote the spatial singular vector of the baseline by [sb1 sb2 ... sbn] and the spatial singular vector of the cases by [sc1 sc2 ... scn], the subtract vector is ds = [ds1 = sc1 − sb1, ds2 = sc2 − sb2, ..., dsn = scn − sbn]. Similarly, for the temporal dimension, we have dt = [dt1 = tc1 − tb1, dt2 = tc2 − tb2, ..., dtm = tcm − tbm]. Subsequently, in order to identify the spatial and temporal components of the hotspot, a z-score control chart is applied on the vectors ds and dt with significance level α. To do so, the standardized vector of z-scores is first computed for ds and dt. Thereafter, we obtain the equivalent two-tailed p-value for each z-score. Finally, those components of ds and dt whose p-value is lower than α are considered abnormal, and the joint combination of all such spatial and temporal components back in the original space gives us the approximation of the hotspots.

For instance, assume that sb = [0.25 0.10 0.75 0.20] is the spatial singular vector of the baseline and sc = [0.30 0.90 0.80 0.15] is the spatial singular vector of the cases. Each element of the spatial singular vector corresponds to a specific region: 0.30 and 0.25 in the first position correspond to region 1, and the second, third and fourth elements correspond to regions 2, 3 and 4, respectively. The angle between the two singular vectors sb and sc is about 0.69 radians in this example. This angle does not tell us which elements of the singular vectors contributed to the difference. However, if in the above example we remove region 2 from the system, we obtain the two vectors sb = [0.25 0.75 0.20] and sc = [0.30 0.80 0.15], and the angle between them drops to about 0.09 radians, which is almost zero. Region 2 in this example is thus the spatial component of the hotspot. In order to identify region 2, a z-score control chart is applied on the subtract vector ds = [0.25 − 0.30, 0.10 − 0.90, 0.75 − 0.80, 0.20 − 0.15] = [−0.05, −0.80, −0.05, 0.05]. We then compute the standardized z-scores of the subtract vector, which in this case are zds = [0.4119, −1.4893, 0.4119, 0.6654]. The z-score of −1.4893 is equivalent to a left-tailed p-value of about 0.07. If we set α = 0.10, region 2 is identified as the spatial component of the hotspot, because its p-value is lower than 0.10; however, if we set α = 0.05, region 2 is not detected.
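The arithmetic of this example can be checked with a few lines of NumPy. This is an illustrative verification, not code from the experiments; note that the example writes the subtract vector as sb − sc rather than the sc − sb of Algorithm 1, which only flips signs and does not affect a two-tailed chart.

import numpy as np
from scipy.stats import norm

sb = np.array([0.25, 0.10, 0.75, 0.20])  # baseline spatial singular vector
sc = np.array([0.30, 0.90, 0.80, 0.15])  # cases spatial singular vector
ds = sb - sc                             # [-0.05, -0.80, -0.05, 0.05]
z = (ds - ds.mean()) / ds.std(ddof=1)    # standardized z-scores
print(np.round(z, 4))                    # [ 0.4119 -1.4893  0.4119  0.6654]
print(round(norm.cdf(z[1]), 3))          # left-tailed p-value of region 2: ~0.068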
2.3.1 The Algorithm Complexity

If we assume that we have N regions and N time instants, EigenSpot requires two O(N^2) operations for the two one-rank SVDs of the cases and baseline matrices, and two O(N) operations for matching the elements of the spatial and temporal dimensions. This makes EigenSpot require only O(2N^2) + O(2N) = O(N^2), which is much more efficient than STScan: STScan requires O(N log N) and O(N^2 log N) for finding the relevant time and space cylinders, and O(N^4) for finding the space-time cylinders as intersections of the space and time cylinders [4]. Therefore, a single execution of the STScan procedure takes O(N log N) + O(N^2 log N) + O(N^4) = O(N^4).

2.4 Illustrative Example

Figure 1 demonstrates an illustrative example of how a hotspot can be identified by the EigenSpot algorithm. We are given baseline and cases sets that encompass three regions within four time windows. Each region can be a postal code or a city, and each temporal window can be a time period such as a year (e.g. T1 = 2010). If we represent these two sets as matrices, we have two 3 × 4 matrices in which each cell represents a count. For instance, b11 represents the population of region 1 at time window T1, and c32 represents the count of reported diseases in region 3 within temporal window T2. The shaded area in the cases matrix (the intersection of the third row with the first and second columns) is the hotspot the method is required to detect. As demonstrated, the principal singular vectors corresponding to the spatial and temporal dimensions are obtained via one-rank SVD, so we have two singular vectors (spatial and temporal) for each set. In the next step, we subtract the elements of each pair of singular vectors, which yields two vectors, dt and ds, representing the subtract vectors of the temporal and spatial dimensions, respectively. As demonstrated, dt has four elements and ds has three, each corresponding to one of the original regions or temporal windows (e.g. dt1 corresponds to T1 and ds1 corresponds to region 1). We then apply a z-score control chart with significance level α (e.g. α = 0.05) on both of these vectors to identify their abnormal elements. As hypothetically shown in the example, T1 and T2 are identified as temporal hotspot components, and region 3 is identified as the spatial hotspot component. We only need to combine the spatial components with the temporal components to approximate the hotspots in the spatiotemporal space. As shown, the identified hotspot (region 3, T1-T2) is equivalent to the shaded area in the cases matrix (the target).

Figure 1: An illustrative example of the EigenSpot algorithm. The goal of the approach is the identification of the shaded area in the cases matrix. The values c and b in the cases and baseline matrices are counts corresponding to spatiotemporal windows. The process is composed of the following four steps: 1) matrix decomposition; 2) subtraction of the elements of the pair singular vectors; 3) application of the z-score control chart on the subtract vectors; and 4) combination of the spatial and temporal hotspot components.

3 Experimental Evaluation

In this section, the effectiveness of our proposed approach is assessed through a comprehensive experimental study. The data sets used in the evaluation of hotspot detection techniques are usually threefold [8]: 1) wholly simulated: the baseline, the cases and the hotspots are all simulated; 2) semi-realistic: the baseline is taken from a real population, but the cases and hotspots are simulated; and 3) real data: both the baseline and the cases are real, and the hotspots are verified by a domain specialist. In this paper, we evaluate the proposal using the latter two strategies: a simulation study (section 3.1) and a real-world data set (section 3.2). All experiments are conducted on a PC with an Intel Core 2 Duo CPU and 3 GB of RAM. We use MATLAB 7 for the algorithm implementation and experiments, and SaTScan 9.2 [17] for experimenting with STScan.

Our method is compared with two other techniques: STScan and a baseline method. STScan [14, 15, 18, 19] exhaustively moves a cylinder of varying radius and height over the whole spatiotemporal space; the height of the cylinder represents the time dimension and its base corresponds to the space dimension. It scores each possible cylinder based on a likelihood ratio statistic, sorts the cylinders from the highest to the lowest score, and finally performs a randomization test to obtain the statistical significance of the cylinders. The cylinders whose p-value is lower than α (e.g. 0.05) are returned as hotspots. In the baseline method, we compute the ratio of the count of cases to the corresponding population for each matrix cell, compute the z-scores of the obtained ratios, and obtain the p-value of each z-score. We then signal an alarm when the p-value of a cell goes lower than α.

3.1 Simulation Study

Here, we describe how the simulated data is generated and subsequently present the obtained results.

3.1.1 Data Generation

We generate 1500 sets of semi-real data, based on the baseline data set extracted from [18]. The baseline set includes the spatiotemporal distribution of the population of New Mexico, USA during 1973-1991.
Our method is compared with two other techniques, including the STScan and a baseline method. STScan [14, 15, 18, 19] exhaustively moves a varying radius and height cylinder over the whole spatiotem- poral space. The height of the cylinder represents the time dimension and the surface corresponds to the space dimension. Furthermore, it scores each possible cylinder, based on likelihood ratio statistics. Next, it sorts cylinders based on an order of the highest to the lowest score. Finally, a randomization test is performed for obtaining the cylinders statistical significance. The cylinder whose p-value is lower than α (e.g. 0.05) is returned as hotspots. In the baseline method, we compute the ratio of the count of cases to the corresponding population for each matrix cell then we compute the z-score of the obtained ratios and obtain the p-value from z-score. Afterwards, we signal an alarm, when p-value for a cell goes lower than α. 3.1 Simulation Study Here, we describe how the simulated data is generated and subsequently present the obtained result. 3.1.1 Data Generation We generate 1500 sets of semi-real data, based on the extracted baseline data set from [18]. The baseline set includes the spatiotemporal distribution of population in New Mexico, USA during 1973-1991. In 6 Table 1: The mean accuracy for 173 α in the range of 0.20 to 0.01 averaged for 100 data sets Method EigenSpot Baseline STScan EigenSpot Baseline STScan EigenSpot Baseline STScan Impact 1.5 1.5 1.5 2.0 2.0 2.0 2.5 2.5 2.5 1 × 1 0.7011 0.7270 0.7966 0.8751 0.7259 0.8034 0.9393 0.7321 0.8069 2 × 2 0.7670 0.7417 0.7984 0.9588 0.7453 0.8130 0.9718 0.7511 0.8314 Size 3 × 3 0.8124 0.7523 0.8008 0.9588 0.7510 0.8171 0.9725 0.7588 0.8578 4 × 4 0.8574 0.7663 0.8030 0.9492 0.7662 0.8273 0.9675 0.7783 0.8629 5 × 5 0.8263 0.7669 0.8005 0.9498 0.7741 0.8267 0.9555 0.7879 0.8723 order to simulate the cases count, we initially obtain the maximum likelihood of the parameter of the Poisson distribution, λ from the first year of the baseline set. Let the vector of counts for the first year be (c1, c2, .., ci) where ≤< i ≤ n (n:number of spatial items). λ simply can be obtained by computing the means of the vector. Then, we multiply λ by a fixed constant of 1.2% for subsequent years (1.2% is the average population growth rate). Next, we generate random numbers from the Poisson distribution with corresponding estimated parameters for each year. In order to inject the hotspot into the cases, we select a matrix window with size H × H (hotspot size) and multiply the counts inside the window by a fixed value of I (hotspot impact). We then vary H from 1 to 5 and select I from (1.5, 2, 2.5). Since we generate data sets based on the random numbers, we generate 100 datasets for each setting to reduce the effect of randomness. Next section explores the evaluation results. 3.1.2 Performance Evaluation Hotspot detection can be considered a binary classification problem, because the detection approach marks each spatiotemporal window with hotspot or non-hotspot. However, in any approach, we determine a decision threshold to distinct hotspots from non-hotspots. Determination of this threshold becomes more important in sensitive applications, such as security and public health. For such applications, the evaluation of methods has to be evaluated within different ranges of decision thresholds. ROC curve [34] is a widely accepted method for such evaluation tasks. 
However, due to two reasons, ROC curve cannot be used as an appropriate strategy for the evaluation in this simulation study. On one hand, we want to evaluate the method performance on 100 random data sets for each 15 setting. Therefore, we have 1500 data sets, which require the analysis of 1500 ROC curves, which is infeasible. We also cannot reduce the number of data sets to one, because we are generating random sets and if we rely only on one data set then our results would be highly dependent on the chance and randomness. At the first glance, the Area Under ROC Curve (AUC) seems to be an appropriate choice, as the AUC does not have user-defined parameters. Besides, it is a summarized scalar and seems to be appropriate for mass comparison of the methods. However, the main criticism about use of AUC in applications, such as hotspot detection and outbreak detection is that AUC considers all thresholds equal, which is not true in many applications. In practice, in sensitive applications, such as epidemiology where we deal with the human lives, we are not interested in knowing how a method performs in high alpha values. The operational and practical p-value used is always low values. In other words, the alpha of interest is not between 0 and 1, rather is limited to lower values. Besides, AUC as a summarized scalar hides the real ROC curve behind the evaluation. In fact, AUC can give potentially misleading results if ROC curves cross [10]. Some detailed criticisms against AUC can be found in [10, 11, 22]. For this reason, instead of AUC we opt to use an averaging strategy for operation thresholds [33]. We compute the average accuracy for a range of operational significance levels, such as alpha from 0.20 to 0.01 for each data set and then the average obtained values for all 100 data sets for each setting. The range of alpha is obtained as follows: We vary z-score from 1.28 to 3 (equivalent to two-tailed p-value of 0.2005 to 0.0027) and then increase z-score 0.1 in each loop. We compare our method performance against both the STScan and the baseline approach Ů via control 7 Figure 2: Mean accuracy for 16000 data sets averaged for 173 α from 0.20 to 0.01. Table 2: The effect of hotspot size and impact on the performance (One-way ANOVA test). Factor Hotspot size Hotspot Impact STScan p = 1.6826 × 10−10 p=0.1834 (for impacts ≥ 2.5) EigenSpot p = 5.9713 × 10−13 p=0.9337 (for impacts ≥ 1.75) chart on ratios Ů described in section 2.1. The accuracy of methods in the identification of simulated hotspots is used as the criterion for the performance evaluation. The results are presented in Table 1. As seen, EigenSpot presents a better performance in almost all settings, except low-impact hotspots. The baseline method also as expected, due to the high rate of false positives, presents the lowest accuracy. The superiority of EigenSpot over STScan possibly relies on two reasons. One reason is related to the inherent methodological difference between the EigenSpot and STScan. STScan search the whole space to find some spatiotemporal windows that the data distribution inside them has some deviation to the standard distribution models (e.g. Poisson). This strict assumption makes this approach less effective, when the data in each of sets does not exactly follow the standard distribution model or some deviation occurs by the chance. EigenSpot, instead of putting this strict restriction search for changes in the correlation patterns and therefore is less sensitive to the deviations in data distribution. 
The second reason could be that EigenSpot is a shape-free method that does not search for hotspots of a particular shape, while STScan looks for hotspots of a specific shape. Some of the accuracy loss of STScan relates to the shape of the simulated hotspots: STScan looks for cylinder-shaped hotspots, while the simulated hotspots are in fact cubic. The results also show the performance of each method against noise. We intentionally designed some low-impact and low-size settings to evaluate the ability of the methods to handle noise and outliers. A low-size, low-impact region, such as one with impact 1.5 and size 1 × 1, seems more like an outlier or anomaly than a realistic hotspot. Therefore, we expect the methods not to detect such a region as a hotspot but to ignore it. In other words, the detection of such a hotspot shows how a method wrongly identifies outliers and noise as hotspots; hence, lower accuracy in this setting reveals better performance in dealing with noise and outliers. Since EigenSpot is a spectral method, it naturally ignores such outliers and does not report them as hotspots, while STScan is vulnerable in such circumstances; for this reason, STScan shows higher measured accuracy for low-size, low-impact regions. In the experiment, we presumed that a hotspot of small size (1 × 1 or 2 × 2) and low impact (1.5) is noise rather than a real hotspot. However, since the hotspots are simulated, this is only an assumption, and the result may be interpreted the other way: if we assume that impact 1.5 is not noise but a real hotspot, then we can infer that STScan outperforms EigenSpot for hotspots with low impacts and sizes.

3.1.3 Effect of hotspot size and impact on the performance

In the previous section, we evaluated the performance of the methods in a limited set of important settings. In this section, with the same data generation method and evaluation criterion, we study the stability of the two algorithms, STScan and EigenSpot, over a wider range of hotspot sizes and impacts.

Figure 3: Mean accuracy of STScan and EigenSpot for different settings, over the 173 values of α from 0.20 to 0.01, averaged over 100 data sets.
Figure 3 shows the performance of the methods for the different hotspot sizes and impacts.

Figure 3: Mean accuracy of STScan and EigenSpot for the different settings, for 173 α from 0.20 to 0.01, averaged over 100 data sets.

The lowest performance of EigenSpot is obtained for impacts of 1.25 and 1.5, which relate more to noise (as discussed above). However, we can observe that both methods are relatively robust once the hotspot impact exceeds a threshold: EigenSpot is robust for impacts over 1.75, and STScan for impacts over 2.5. Regarding the hotspot size, EigenSpot shows a descending trend as the hotspot size increases; that is, for larger hotspots we should expect lower performance from EigenSpot. This makes sense, because as the hotspot grows, the affected areas gradually start to look normal to a spectral method like EigenSpot and are left undetected. EigenSpot nevertheless exhibits more regular behavior than STScan in this respect: the variance of its performance across the different hotspot sizes is almost zero, while that of STScan can reach 0.20. STScan, in contrast to EigenSpot, also shows both ascending and descending trends: its performance ascends for hotspot sizes 1 × 1 to 6 × 6 and then tends to decrease for bigger sizes, so that for a 9 × 9 hotspot it performs roughly the same as for 1 × 1.

To understand whether the hotspot size and impact affect the performance of the methods, we perform an ANOVA test [23] on the performances obtained for the different hotspot sizes and impacts. The null hypothesis H0 is that the mean accuracy does not change across the different sizes and impacts. The test result (Table 2) confirms our initial guess that both STScan and EigenSpot become independent of the hotspot impact once the impact rises above a specific threshold. The very low p-values for hotspot size, however, indicate that the performance of both EigenSpot and STScan depends on the hotspot size. As observed, the two methods do not differ in their dependence on hotspot impact and size.

Table 2: The effect of hotspot size and impact on the performance (one-way ANOVA test).

| Method    | Hotspot size       | Hotspot impact                   |
| STScan    | p = 1.6826 × 10−10 | p = 0.1834 (for impacts ≥ 2.5)  |
| EigenSpot | p = 5.9713 × 10−13 | p = 0.9337 (for impacts ≥ 1.75) |

3.1.4 The effect of SVD implementation

The central technique used in EigenSpot is the SVD. Two kinds of SVD can be used for this purpose: a full-rank SVD and a low-rank SVD. Here, four SVD implementations are chosen, two from each category, and their effect on the EigenSpot performance is studied. Table 3 reports the average accuracy over the 1500 data sets for the range of p-values from 0.20 to 0.01.

Table 3: Average accuracy for 1500 data sets averaged over 173 α from 0.20 to 0.01.

| Method       | Computation cost | Implementation | Average accuracy |
| One-rank SVD | O(N²)            | ARPACK         | 0.8975           |
| One-rank SVD | O(N²)            | IncPACK        | 0.8387           |
| Full SVD     | O(N³)            | LAPACK         | 0.8429           |
| Full SVD     | O(N³)            | PROPACK        | 0.8177           |

As seen, the ARPACK implementation [29] (the default SVD implementation we use in the experiments) outperforms the other methods. However, since both ARPACK and IncPACK [6] have the same computational cost, we perform an ANOVA test [23] to see whether using IncPACK affects the performance. The ANOVA test shows that the two sets of accuracies obtained from these two implementations are not statistically different (p-value = 0.26). We can therefore conclude that the choice of low-rank SVD implementation does not affect the EigenSpot performance. Concerning full-rank SVD, we observe that LAPACK [3] outperforms PROPACK [20]; however, full-rank SVD is not of interest to us because of its computational cost.
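The one-rank decomposition itself is a single library call. The following is a minimal sketch of that step only, not of the full EigenSpot pipeline; the space-time count matrix X below is a synthetic placeholder, and scipy.sparse.linalg.svds is backed by the ARPACK implementation referenced above:

```python
# Rank-1 (one-rank) SVD of a space-time count matrix via ARPACK.
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
X = rng.poisson(lam=5.0, size=(32, 19)).astype(float)  # 32 regions x 19 years

u, s, vt = svds(X, k=1)                # leading singular triplet only
spatial_profile = np.abs(u[:, 0])      # one weight per region
temporal_profile = np.abs(vt[0])       # one weight per time period
```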
3.2 Experiment with real data

In this section, we study the performance of EigenSpot on a real data set. The data set, which is publicly available [16], is provided by the Surveillance, Epidemiology, and End Results (SEER) program of the National Cancer Institute and was collected by the New Mexico Tumor Registry between 1973 and 1991 for 32 sub-regions of the state of New Mexico, United States. There are 1175 reported cases of malignant neoplasm of the brain and the nervous system. The goal of the initial study was to respond to a serious concern, raised in 1991 in the New Mexico resident community, about a correlation between wartime nuclear activities in Los Alamos and recent brain tumor deaths in the neighborhood. The concern rapidly emerged at the local and national level and therefore became a center of attention for the local health departments. The data set was gathered via a comprehensive review of the reported brain cancer incidence rates for the years 1973 through 1991, in order to identify statistically significant spatiotemporally affected areas. STScan has already been applied to this data [18]; the conclusion of that study is that the excess of brain cancer in Los Alamos falls within the realm of chance, which confirms the final conclusion of the New Mexico Health Department. In order to compare EigenSpot with STScan, EigenSpot is applied to the same data set. In addition to the initial study [18], we use incidences adjusted for temporal trends, age, race, and sex. The results obtained via STScan and EigenSpot are shown in Table 4.

Figure 4: Detected hotspots via STScan (left) and EigenSpot (right).

Table 4: Comparison of STScan versus EigenSpot in the detection of hotspots. Incidence rates were adjusted for temporal trends, age, race, and sex.

| Method    | Affected regions                | Temporal period | P-value |
| STScan    | Santa Fe and Los Alamos         | 1986-1989       | 0.45    |
| EigenSpot | Santa Fe, Bernalillo, Valencia  | 1981, 1987      | 0.05    |

STScan reports only Santa Fe and Los Alamos in the years 1986-1989, with a relatively high p-value of 0.45, which indicates that there is no significant hotspot. Applying EigenSpot with α = 0.05, we find a significant hotspot comprising the areas of Santa Fe, Bernalillo, and Valencia as spatial components, and the years 1981 and 1987 as temporal components; for α = 0.01, however, EigenSpot does not find any hotspot. If we look at the spatial positions of the hotspots in Figure 4, we can find a meaningful relationship between the STScan and EigenSpot results. Both candidate areas lie close to Los Alamos, the region of the nuclear activities. The temporal component 1987 also appears in the EigenSpot result, and it falls within the period detected by STScan. Based on the EigenSpot result, it seems that, in addition to the area close to Los Alamos and Santa Fe, more areas were affected by the nuclear activities: Bernalillo and Valencia, which are neighbors of Santa Fe and Los Alamos. A very interesting point is that EigenSpot, unlike STScan, was not aware of the geographic relationships among the regions: STScan knows in advance whether, for instance, Santa Fe and Bernalillo are neighbors, while EigenSpot does not have this prior knowledge. Based on the EigenSpot result, we can infer that the effect of the nuclear activities on the neighborhood experienced two peaks, in 1981 and 1987. This makes sense, because the initial concerns about the effect of the nuclear activities started in 1991, four years after 1987 (the second detected temporal component). Indeed, EigenSpot has approximated the hotspot, both spatially and temporally, very close to the nuclear activity area. Most interestingly, no other meaningless hotspots are detected by EigenSpot. However, the lack of a strong p-value for the recognized area suggests that the neighborhood has been under a low effect of the nuclear activities in Los Alamos, but without enough support to be considered alarming. If we do not consider α = 0.05 significant, we can confirm the initial conclusion about the random incidence of brain cancers in Los Alamos.
4 Conclusion and future works

A new methodology for hotspot detection has been proposed, based on two robust techniques: matrix factorization and process control. We evaluated and compared the performance of the algorithm for the detection of a single hotspot against the state-of-the-art and baseline methods through a comprehensive simulation study. The obtained results indicate a statistically significant improvement over the state-of-the-art method STScan. This improvement comes from the inherent methodological differences between the two approaches: STScan uses the deviation from a probability model as the criterion for identifying hotspots, while our approach tracks the changes in the correlation patterns along the spatial and temporal dimensions to approximate the hotspot location. Besides, our approach is shape-free and, contrary to STScan, robust to noise and outliers.

Our approach is also much more efficient than the scan-statistics-based approaches. Its main benefit is that it has linear complexity in terms of both space and time. The comprehensive comparison of scan-statistics-based methods in [2] reveals that any algorithm that provides even an approximately optimal answer to the problem must use space linear in the input. EigenSpot provides an approximately optimal answer, is linear in the space and time dimensions, and therefore meets this requirement.

We also studied the effect of hotspot size and impact on the methods' performance. Based on this result, both STScan and EigenSpot are independent of the hotspot impact within certain ranges, while both methods depend on the hotspot size; nevertheless, EigenSpot exhibits a more regular trend against changes in hotspot size and impact. We also studied the effect of the SVD implementation on the EigenSpot performance, finding no statistical difference between the two low-rank SVD implementations ARPACK and IncPACK; the SVD implementation therefore does not affect the performance of EigenSpot. Finally, we applied EigenSpot to a real data set and compared its performance to STScan. Like STScan, EigenSpot recognizes the affected area close to the nuclear activity site, in both space and time; however, also like STScan, it cannot provide strong statistical evidence to identify this area as a hotspot.

EigenSpot can be used as an important component in surveillance systems, in particular bio-surveillance systems. Some estimations [12] show that the timely detection of hotspots can save the lives of thirty thousand people per day during a bioagent release, and prevent an economic cost of 250 million dollars per hour during a disease outbreak. Any early knowledge of hotspots therefore plays an important role in improving response effectiveness. It is estimated [24] that diagnosing and controlling abnormal situations has an economic impact of at least $10 billion annually in the United States.

Although EigenSpot is an ideal solution, in terms of both accuracy and computational cost, for single-hotspot detection, there is doubt that this result remains valid when multiple hotspots exist. In this work, we did not evaluate the performance of EigenSpot for multiple hotspot detection.
However, theoretically we expect STScan to perform better for that purpose, because combining the spatial and temporal components of different hotspots raises many false positives, which reduces the method performance; this may, however, not be a serious issue in disease surveillance, where in practice the most likely cluster is desired.

There are two directions for future work. In the first direction, we intend to find a solution for adapting EigenSpot to multiple hotspot detection; in the second, we plan to apply EigenSpot together with visualization tools for online and real-time monitoring purposes.

Acknowledgments. This research was supported by the Projects NORTE-07-0124-FEDER-000059/000056, financed by the North Portugal Regional Operational Program (ON.2 O Novo Norte) under the National Strategic Reference Framework (NSRF), through the European Regional Development Fund (ERDF), and by national funds through the Portuguese funding agency, Fundação para a Ciência e a Tecnologia (FCT). The authors also acknowledge the support of the European Commission through the project MAESTRA (Grant Number ICT-2013-612944).

References

[1] Abdi, H. and L. J. Williams (2010). Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics 2(4), 433–459.

[2] Agarwal, D., A. McGregor, J. M. Phillips, S. Venkatasubramanian, and Z. Zhu (2006). Spatial scan statistics: approximations and performance study. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 24–33. ACM.

[3] Anderson, E. (1999). LAPACK Users' Guide, Volume 9. SIAM.

[4] Assuncao, R., A. Tavares, and M. Kulldorff (2004). An early warning system for space-time cluster detection. Technical report, Laboratório de Estatística Espacial.

[5] Bell, R. M., Y. Koren, and C. Volinsky (2007). The BellKor solution to the Netflix Prize. KorBell Team's Report to Netflix.

[6] Brand, M. (2006). Fast low-rank modifications of the thin singular value decomposition. Linear Algebra and its Applications 415(1), 20–30.

[7] Bryan, K. and T. Leise (2006). The $25,000,000,000 eigenvector: The linear algebra behind Google. SIAM Review 48(3), 569–581.

[8] Buckeridge, D. L., H. Burkom, M. Campbell, W. R. Hogan, and A. W. Moore (2005). Algorithms for rapid outbreak detection: a research synthesis. Journal of Biomedical Informatics 38(2), 99–113.

[9] Duczmal, L. and R. Assuncao (2004). A simulated annealing strategy for the detection of arbitrarily shaped spatial clusters. Computational Statistics & Data Analysis 45(2), 269–286.

[10] Hand, D. J. (2009). Measuring classifier performance: a coherent alternative to the area under the ROC curve. Machine Learning 77(1), 103–123.

[11] Hand, D. J. and C. Anagnostopoulos (2013). When is the area under the receiver operating characteristic curve an appropriate measure of classifier performance? Pattern Recognition Letters 34(5), 492–495.

[12] Kaufmann, A. F., M. I. Meltzer, and G. P. Schmid (1997). The economic impact of a bioterrorist attack: are prevention and postattack intervention programs justifiable? Emerging Infectious Diseases 3(2), 83.

[13] Klema, V. and A. Laub (1980). The singular value decomposition: Its computation and some applications. IEEE Transactions on Automatic Control 25(2), 164–176.

[14] Kulldorff, M. (1997). A spatial scan statistic. Communications in Statistics - Theory and Methods 26(6), 1481–1496.

[15] Kulldorff, M. (1999). Spatial scan statistics: models, calculations, and applications. In Scan Statistics and Applications, pp. 303–322. Springer.
[16] Kulldorff, M. (2012a). Brain cancer incidence in New Mexico. http://www.satscan.org/datasets/nmbrain/index.html. Accessed: December 2012.

[17] Kulldorff, M. (2012b). SaTScan - software for the spatial, temporal, and space-time scan statistics. http://www.satscan.org. Accessed: December 2012.

[18] Kulldorff, M., W. Athas, E. Feurer, B. Miller, and C. Key (1998). Evaluating cluster alarms: a space-time scan statistic and brain cancer in Los Alamos, New Mexico. American Journal of Public Health 88(9), 1377–1380.

[19] Kulldorff, M. and N. Nagarwalla (1995). Spatial disease clusters: detection and inference. Statistics in Medicine 14(8), 799–810.

[20] Larsen, R. M. (1998). Lanczos bidiagonalization with partial reorthogonalization. DAIMI Report Series 27(537).

[21] Levine, N. (2006). Crime mapping and the CrimeStat program. Geographical Analysis 38(1), 41–56.

[22] Lobo, J. M., A. Jiménez-Valverde, and R. Real (2008). AUC: a misleading measure of the performance of predictive distribution models. Global Ecology and Biogeography 17(2), 145–151.

[23] Montgomery, D. C. (1984). Design and Analysis of Experiments, Volume 7. Wiley, New York.

[24] Morrison, D., W. Foslien, W. MacArthur, P. Jofriet, and P. Eng (2010). The early event detection toolkit. Honeywell Process Solutions 14.

[25] Neill, D. B. and G. F. Cooper (2010). A multivariate Bayesian scan statistic for early event detection and characterization. Machine Learning 79, 261–282.

[26] Neill, D. B. and A. W. Moore (2004). Rapid detection of significant spatial clusters. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04, New York, NY, USA, pp. 256–265. ACM.

[27] Neill, D. B. and M. R. Sabhnani (2007). A robust expectation-based spatial scan statistic. Advances in Disease Surveillance 2, 61.

[28] Neill, D. B. (2006). Detection of spatial and spatio-temporal clusters. Ph.D. thesis, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA.

[29] Sorensen, D. C. (1997). Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations. Springer.

[30] Tango, T. and K. Takahashi (2005). A flexibly shaped spatial scan statistic for detecting clusters. International Journal of Health Geographics 4(1), 11.

[31] Turk, M. and A. Pentland (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience 3(1), 71–86.

[32] Wikibooks (2012). Control Systems/Eigenvalues and Eigenvectors - Wikibooks, the free textbook project. [Online; accessed 28-October-2013].

[33] Wong, W.-K., A. Moore, G. Cooper, and M. Wagner (2005). What's strange about recent events (WSARE): an algorithm for the early detection of disease outbreaks. The Journal of Machine Learning Research 6, 1961–1998.

[34] Zweig, M. H. and G. Campbell (1993). Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine. Clinical Chemistry 39(4), 561–577.
IRSC: A Zero-shot Evaluation Benchmark for Information Retrieval through Semantic Comprehension in Retrieval-Augmented Generation Scenarios

Hai Lin1,2*, Shaoxiong Zhan2*, Junyou Su3*, Haitao Zheng1,2, Hui Wang1†
1PengCheng Laboratory
2Shenzhen International Graduate School, Tsinghua University
3Southern University of Science and Technology
*These authors contributed equally to this work. †Corresponding author.

Abstract

In Retrieval-Augmented Generation (RAG) tasks using Large Language Models (LLMs), the quality of retrieved information is critical to the final output. This paper introduces the IRSC benchmark for evaluating the performance of embedding models in multilingual RAG tasks. The benchmark encompasses five retrieval tasks: query retrieval, title retrieval, part-of-paragraph retrieval, keyword retrieval, and summary retrieval. Our research addresses the current lack of comprehensive testing and effective comparison methods for embedding models in RAG scenarios. We introduce new metrics, the Similarity of Semantic Comprehension Index (SSCI) and the Retrieval Capability Contest Index (RCCI), and evaluate models such as Snowflake-Arctic, BGE, GTE, and M3E. Our contributions include: 1) the IRSC benchmark, 2) the SSCI and RCCI metrics, and 3) insights into the cross-lingual limitations of embedding models. The IRSC benchmark aims to enhance the understanding and development of accurate retrieval systems in RAG tasks. All code and datasets are available at: https://github.com/Jasaxion/IRSC_Benchmark

1 Introduction

The rapid advancements in large language models (LLMs) have demonstrated significant potential in natural language understanding and generation. However, these models still face challenges such as factual hallucination, knowledge updating, and lack of domain-specific expertise (Chen et al., 2024b). To address these issues, incorporating external knowledge through Retrieval-Augmented Generation (RAG) has emerged as a promising approach (Chen et al., 2024b; Zhang et al., 2023).

RAG enhances LLMs by integrating retrieved information from external sources, which helps mitigate hallucinations and provide more accurate, up-to-date responses (Chen et al., 2024b). Despite these advantages, existing benchmarks for evaluating RAG models are limited in scope and do not fully address the diverse needs of various retrieval tasks (Chen et al., 2024b; Zhang et al., 2023). Most benchmarks focus primarily on tasks such as semantic textual similarity (STS), clustering, and re-ranking, but fail to provide comprehensive evaluations across the different retrieval scenarios that RAG requires.

The IRSC Benchmark introduced in this study aims to fill this gap by evaluating embedding models across five distinct retrieval tasks: query-based retrieval, title-based retrieval, part-of-paragraph retrieval, keyword-based retrieval, and summary-based retrieval. The benchmark is designed to reflect realistic application scenarios of RAG, considering different types of queries and languages (English, Chinese, and Mixed-Language datasets) (Zhang et al., 2023). We evaluate models such as Snowflake-Arctic-Embed-S (Merrick et al., 2024), BGE-M3 (Chen et al., 2024a), and M3E (Wang Yuxin, 2023) across the different tasks and languages, providing insights into their strengths and weaknesses in real-world RAG applications.
Additionally, the benchmark includes innovative evaluation metrics to capture model performance differences across tasks and languages. Because vector dimensions and value ranges differ across models, directly computing the cosine similarity between vectors (Steck et al., 2024) is not a feasible way to compare the semantic similarity of different models (Zhou et al., 2022). To address this, we propose the Similarity of Semantic Comprehension Index (SSCI), which measures the similarity of semantic understanding between a model's output and the ground truth.

Our contributions are as follows: 1. We propose the comprehensive IRSC Benchmark to evaluate the performance of embedding models across RAG retrieval tasks and languages. 2. We introduce the SSCI and the Retrieval Capability Contest Index (RCCI) as innovative metrics to evaluate and compare models' semantic understanding and retrieval capabilities, respectively. 3. We conduct cross-lingual retrieval experiments and identify differences in how models align semantic understanding across languages.

2 Related Work

The field of Retrieval-Augmented Generation (RAG) has gained significant attention, especially in addressing the limitations of Large Language Models (LLMs) in providing accurate and contextually relevant information. This section reviews notable works in this domain and situates our contribution within the existing research.

Benchmarking in RAG
Chen et al. developed the Retrieval-Augmented Generation Benchmark (RGB) to evaluate LLMs on four abilities: noise robustness, negative rejection, information integration, and counterfactual robustness. Their findings highlight the need for nuanced evaluation metrics to improve RAG capabilities, as LLMs showed weaknesses in negative rejection, information integration, and handling false information (Chen et al., 2024b). However, RGB primarily focuses on robustness aspects and does not provide comprehensive coverage of different retrieval tasks, which is crucial for real-world RAG applications.

Multilingual Retrieval Datasets
The MIRACL dataset, introduced by Zhang et al., supports multilingual information retrieval with 700,000 human-annotated query-passage pairs across 18 languages. It aims to advance retrieval models that handle linguistic diversity and resource variability (Zhang et al., 2023). While MIRACL provides valuable multilingual data, it is mainly focused on query-passage retrieval and does not address other important retrieval tasks such as keyword or title retrieval.

Evaluations of RAG Systems
Yu et al. provided a comprehensive survey of current evaluation methods for RAG systems, emphasizing the importance of various retrieval tasks and of metrics such as nDCG, MRR, and MAP (Yu et al., 2024). Their work discusses challenges and future directions for robust RAG benchmarks. Despite this comprehensive survey, practical benchmarks that integrate these varied metrics across different retrieval tasks are still lacking.

Benchmark for Evaluation of Information Retrieval Models (BEIR)
Thakur et al. introduced BEIR, an evaluation benchmark for retrieval models that includes a diverse set of information retrieval tasks across different domains and data types (Thakur et al., 2021). BEIR offers a collection of heterogeneous tasks and provides a unified and convenient framework for evaluating natural-language-based retrieval models.
However, BEIR focuses only on retrieval between queries and paragraphs, and does not address the more complex retrieval tasks that arise in large-scale RAG (Retrieval-Augmented Generation) scenarios.

Massive Text Embedding Benchmark (MTEB)
Muennighoff et al. introduced MTEB, a benchmark evaluating text embedding models across tasks such as bitext mining, classification, clustering, reranking, retrieval, and semantic textual similarity (Muennighoff et al., 2023). Their findings highlight the need for specialized models tailored to specific retrieval scenarios. However, MTEB does not focus specifically on the integration of retrieved information for generation tasks, which is a critical component of RAG systems.

Multilingual Question Answering
The MKQA dataset, presented by Longpre et al., evaluates multilingual open-domain question answering systems with parallel questions in multiple languages (Longpre et al., 2021). It facilitates comparative analysis of retrieval and QA performance across different linguistic contexts. While useful for question answering, MKQA does not encompass the broader spectrum of retrieval tasks that are essential for RAG evaluations.

Current Work on RAG Model Evaluation
Our work extends these studies by proposing a novel benchmark that evaluates retrieval performance across five tasks: query, keyword, title, summary, and part-of-paragraph retrieval. Unlike previous benchmarks, our dataset includes multilingual and cross-lingual components, addressing the need for robust evaluation in diverse linguistic environments. Our benchmark aims to fill gaps in existing methods by providing a comprehensive assessment of model performance in RAG tasks, focusing on cross-lingual retrieval and integrating the various retrieval tasks into a unified framework.

3 The IRSC Benchmark

3.1 Desiderata

The IRSC benchmark is designed to evaluate the effectiveness of embedding models specifically within the context of Retrieval-Augmented Generation (RAG) tasks. Unlike traditional benchmarks that focus broadly on sentence or paragraph length, IRSC hones in on the unique needs of RAG applications, which require supplementing queries with knowledge. The benchmark covers five key data types, chosen to cover most RAG tasks, and is guided by the following considerations:

1. Focus on RAG-Specific Retrieval Tasks: Unlike traditional benchmarks, IRSC focuses on expanding a query or a brief piece of information into a detailed response.

2. Emphasis on Cross-lingual Capabilities: IRSC evaluates models in multiple languages, particularly English and Chinese, to handle Mixed-Language queries and adapt to cross-lingual environments.

3. Comprehensive Evaluation Metrics: Standard retrieval metrics (nDCG@10, MRR@10, MAP@10, precision@3, and recall@10) are used alongside new metrics like SSCI and RCCI for deeper insights into semantic comprehension and retrieval capabilities.

4. Real-World Applicability: IRSC focuses on real-world RAG tasks such as retrieving detailed knowledge based on a query or summary, ensuring practical relevance and cross-lingual applicability.

Through these considerations, IRSC aims to set a new standard for evaluating embedding models in the context of RAG tasks, providing a more nuanced and applicable assessment framework.

Figure 1: The IRSC Benchmark is structured around five primary task types, each designed to evaluate different aspects of a model's retrieval capabilities. The red labels indicate the languages and quantities of each dataset.

3.2 Tasks and Evaluation

Figure 1 provides an overview of the tasks and datasets available in IRSC. The benchmark consists of the following five task types:

1. Query -> Paragraph: Evaluates the model's ability to retrieve relevant paragraphs based on a given query. Datasets: MsMARCO (Bajaj et al., 2018), XQuAD (Artetxe et al., 2020), Xtreme (Hu et al., 2020), and MLQA (Lewis et al., 2020).

2. Title -> Paragraph: Tests the model's capability to find relevant paragraphs given a title. Datasets: zhihu (https://huggingface.co/datasets/suolyer/zhihu), New-Title-Chinese (https://huggingface.co/datasets/madao33/new-title-chinese), Arxiv-Abstract (Clement et al., 2019), SciDocs (Muennighoff et al., 2023), and SciFact (Muennighoff et al., 2023).

3. Part of Paragraph -> Paragraph: Evaluates sensitivity to text fragments, testing whether the model can retrieve the full paragraph from a fragment. Datasets: nfcorpus (Muennighoff et al., 2023) (English) and xlsum (Joulin et al., 2017) (Chinese).

4. Keyword -> Paragraph: Measures the model's ability to retrieve paragraphs based on keywords. Datasets: AG News (Zhang et al., 2015) and CorpusQTPKS.

5. Summary -> Paragraph: Evaluates performance in retrieving relevant paragraphs based on a summary. Datasets: XSum (Narayan et al., 2018) (English), xlsum (Joulin et al., 2017) (Chinese), and CorpusQTPKS.

Each task uses 5000 query-content pairs as evaluation data. The remaining samples form a unified database for retrieval across all tasks. During scoring, queries are used to search the unified database, and retrieval performance is evaluated based on the precision of the retrieved indices against the ground truth. This standardized approach ensures a robust and fair evaluation of each model's retrieval capabilities across the various tasks.
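As a concrete illustration, the following is a minimal sketch of such a scoring loop; embed() is a hypothetical stand-in for whichever embedding model is under evaluation, mapping a list of strings to an (n, d) array of vectors:

```python
# Sketch of scoring queries against the unified retrieval database.
import numpy as np

def top_k_indices(query_vec, corpus_vecs, k=10):
    # cosine similarity against the unified database, highest first
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    return np.argsort(-(c @ q))[:k]

def recall_at_10(queries, gold_ids, corpus_vecs, embed):
    hits = 0
    for query, gold in zip(queries, gold_ids):
        retrieved = top_k_indices(embed([query])[0], corpus_vecs, k=10)
        hits += int(gold in retrieved)   # gold index of the true paragraph
    return hits / len(queries)
```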
• R1, R2: These are matrices representing the results retrieved by the two different models over Q queries. Each matrix has dimensions Q × (n + 1). Each element is a binary value (0 or 1), where 1 indicates the position of the correct answer for each query. For query q, the vectors R1q and R2q are de- fined as follows: R1q = [r11q, r12q, r13q, . . . , r1nq, r1(n+1)q] R2q = [r21q, r22q, r23q, . . . , r2nq, r2(n+1)q] • m1q, m2q: These represent the positions of the correct answer in R1q and R2q respec- tively for query q. If there is no 1 in the vector, it is assigned a value of -1. For query q: m1q = (cid:40) n − index(R1q, 1) −1 if 1 ∈ R1q otherwise m2q = (cid:40) n − index(R2q, 1) −1 if 1 ∈ R2q otherwise where index(Rq, 1) denotes the index position of the element equal to 1 in the vector Rq. By using these metrics, IRSC aims to provide a comprehensive evaluation framework for assess- ing the performance of embedding models across diverse retrieval tasks. 3.4 Model Descriptions We evaluated 13 models using the IRSC Bench- mark. Notably, MiniLM-L6-v2 models do not sup- port Chinese. • S-Arctic Series(Merrick et al., 2024): Includes S-Arctic-S, S-Arctic-M, and S-Arctic-L, designed for semantic embed- dings in text retrieval tasks. • BGE Series(Chen et al., 2024a): Includes BGE-M3 (multilingual) and BGE-Large (opti- mized for Chinese). 4 • GTE Series(Li et al., 2023): Comprises GTE-Small, GTE-Base, and GTE-Large, fo- cusing on general text embeddings. • M3E Series(Wang Yuxin, 2023): Includes M3E-Small, M3E-Base, and M3E-Large, de- signed for efficient multilingual text embed- dings. • MiniLM Series: MiniLM-L12(Reimers and Gurevych, 2019): A multilingual version of the MiniLM series, tailored for para- phrase identification and multilingual retrieval. MiniLM-L6*: A compact, efficient model fo- cused on English for various NLP tasks. 4 Results 4.1 Experimental Setup The experiments are conducted across three differ- ent language requirements: English, Chinese, and Mixed-Language (English + Chinese). For each language requirement, corresponding benchmark datasets are utilized to perform IRSC scoring ex- periments. English: The evaluation involves English- specific benchmark datasets to test the retrieval performance of each model. Chinese: The evaluation uses Chinese-specific benchmark datasets, ensuring the models’ capabili- ties are tested in the Chinese language context. Mixed-Language (English + Chinese): This mixed evaluation assesses the models’ performance across both English and Chinese datasets, provid- ing a comprehensive understanding of their cross- lingual retrieval capabilities. By employing this diversified language setup, we aim to provide a thorough and robust evaluation of each model’s performance in retrieving relevant paragraphs based on various query types within the IRSC benchmark. 4.2 Experimental Results and Analysis 4.2.1 Benchmark Analysis Based on the results in Table 1, we observe that the BGE-M3 model consistently outperforms other models across all metrics and categories, indicat- ing its robustness and effectiveness in both Chinese and English retrieval tasks. Specifically, BGE-M3 *https://huggingface.co/ sentence-transformers/MiniLM-L6-v2 achieves the highest recall at 10 (r@10), mean av- erage precision at 10 (m@10), and normalized dis- counted cumulative gain at 10 (n@10) in the Key- words, Title, Query, Part, and Summary categories. For instance, in the Keywords category, BGE- M3 has an impressive r@10 of 0.8668, m@10 of 0.8205, and n@10 of 0.8320. 
Conversely, the S-Arctic series (S-Arctic-S, S- Arctic-M, S-Arctic-L) shows relatively lower per- formance compared to other models. Notably, S- Arctic-S performs better than S-Arctic-M and S- Arctic-L across most categories, but it still lags significantly behind models like BGE-M3, GTE- Small, and M3E-Base. For example, in the Sum- mary category, S-Arctic-S achieves an r@10 of 0.5334, whereas BGE-M3 achieves an r@10 of 0.9812. Models like GTE-Base and M3E-Base also demonstrate strong performance, particularly in the Keywords and Summary categories. GTE-Base achieves an r@10 of 0.7940 in the Keywords cate- gory and M3E-Base achieves an r@10 of 0.9644 in the Summary category, showing their potential effectiveness in specific retrieval contexts. We observe an interesting performance pattern between the S-Arctic-S and M3E-Small models. Notably, S-Arctic-S performs significantly better than M3E-Small in the Keywords category. S- Arctic-S achieves an r@10 of 0.6302, while M3E- Small scores 0.3374. However, in Summary cat- egories, M3E-Small significantly outperforms S- Arctic-S. M3E-Small achieves an r@10 of 0.8052 compared to S-Arctic-S’s 0.5334. This pattern in- dicates that while S-Arctic-S excels in Keywords retrieval, it falls behind in other tasks such as Sum- mary, where M3E-Small demonstrates superior per- formance. The diverse performance across different models highlights the importance of selecting the appropri- ate model based on the specific retrieval task and the language requirements. Table 1 presents the results for Mixed-Language task. The results for the Chinese and English tasks will publish in our Github repository. 4.2.2 Radar Chart Analysis To more intuitively showcase the performance of different models, we created radar charts2 where the values for each capability are derived from the average of r@10, m@10, and n@10. These charts provide a clearer view of the comprehensive per- formance of each model across various tasks. 
5 Model S-Arctic-S S-Arctic-M S-Arctic-L BGE-M3 GTE-Small GTE-Base GTE-Large M3E-Small M3E-Base M3E-Large MiniLM-L6 MiniLM-L12 Query Title Part Keywords Summary r@10 m@10 n@10 r@10 m@10 n@10 r@10 m@10 n@10 r@10 m@10 n@10 r@10 m@10 n@10 0.3067 0.1379 0.2238 0.6972 0.5099 0.5163 0.5205 0.2292 0.5912 0.3415 0.4589 0.4934 0.2714 0.1125 0.1909 0.6321 0.4643 0.4684 0.4746 0.1874 0.5239 0.2798 0.4180 0.4161 0.2815 0.1196 0.2002 0.6495 0.4771 0.4817 0.4874 0.1981 0.5415 0.2957 0.4297 0.4363 0.3566 0.0198 0.0248 0.8640 0.7360 0.7366 0.7372 0.2850 0.7840 0.5052 0.6168 0.6174 0.3125 0.0145 0.0186 0.8149 0.7005 0.7027 0.7022 0.2267 0.7167 0.4113 0.5842 0.5294 0.3232 0.0158 0.0200 0.8270 0.7093 0.7110 0.7107 0.2407 0.7332 0.4340 0.5923 0.5508 0.4588 0.2746 0.3126 0.7964 0.6916 0.6980 0.6984 0.4942 0.7562 0.5788 0.6066 0.5708 0.4369 0.2514 0.2903 0.7625 0.6527 0.6617 0.6571 0.4522 0.7146 0.5302 0.5720 0.5259 0.4423 0.2570 0.2957 0.7708 0.6622 0.6706 0.6672 0.4624 0.7249 0.5421 0.5805 0.5368 0.6302 0.2856 0.3804 0.8668 0.7876 0.7940 0.7936 0.3374 0.8368 0.5606 0.7042 0.5784 0.5759 0.2339 0.3342 0.8205 0.7258 0.7333 0.7338 0.2874 0.7777 0.4701 0.6326 0.4846 0.5892 0.2464 0.3455 0.8320 0.7409 0.7482 0.7485 0.2993 0.7923 0.4919 0.6500 0.5073 0.5334 0.4554 0.4394 0.9812 0.8284 0.8282 0.8180 0.8052 0.9644 0.8964 0.5484 0.8728 0.5231 0.4151 0.4104 0.9709 0.7890 0.7893 0.7780 0.7572 0.9441 0.8527 0.5190 0.8245 0.5256 0.4247 0.4175 0.9735 0.7986 0.7987 0.7877 0.7689 0.9491 0.8634 0.5260 0.8365 Table 1: IRSC Benchmark Results of S-Arctic Series, BGE Series, GTE Series, M3E Series, and MiniLM Series in All Languages for All Tasks. Metrics: r@10 - Recall at 10, m@10 - MRR(Mean Reciprocal Rank) at 10, n@10 - nDCG(Normalized Discounted Cumulative Gain) at 10 Figure 2: Comparative Performance Radar Charts of S-Arctic Series, BGE Series, GTE Series, M3E Series, and MiniLM Series Models Across IRSC Benchmark’s Query, Title, Part, Keyword and Summary Tasks in Mixed- Language. Metrics: Average of Recall@10, MRR@10 and nDCG@10 From the charts, it is evident that BGE-M3 per- forms exceptionally well in all tasks (Query, Title, Part, Keywords, and Summary), demonstrating its comprehensive advantages across these five areas. The radar chart for BGE-M3 shows a balanced and extensive coverage, with particularly outstanding performance in the Summary tasks. In contrast, the GTE series and M3E series mod- els also show good performance. However, the S-Arctic series underperforms compared to the aforementioned models in all tasks, especially S- Arctic-M, which shows the lowest comprehensive performance across all tasks, indicating its lesser effectiveness in these tasks. The radar charts clearly illustrate the compre- hensive capabilities of each model across different tasks, with BGE-M3 standing out as the most opti- mal model in terms of performance. 4.2.3 Cross Language Analysis In Table 2 , we also conducted experiments on the cross-lingual retrieval capabilities of different models using five IRSC tasks, with 1,000 randomly selected queries for each task. We obtained 5,000 data entries in both English and Chinese languages. Queries originally in English were translated into the target language (Chinese) and then searched within an entirely English database to obtain the Chinese to English (C2E) results in Table 2. The scores are the averages of r@10, m@10, and n@10. From the results, several key observations can be made: 1. 
Performance Decline in Cross-Lingual Re- trieval: Most models exhibit a decline in performance metrics when transitioning from monolingual (C2C or E2E) to cross-lingual (C2E or E2C) retrieval. This indicates a gen- eral challenge in maintaining semantic align- 6 Model S-Arctic-S S-Arctic-M S-Arctic-L BGE-M3 GTE-Small GTE-Base GTE-Large M3E-Small M3E-Base M3E-Large MiniLM-L6 MiniLM-L12 C2C | C2E 0.0782 | 0.0068 0.1272 | 0.0014 0.0882 | 0.0008 0.8630 | 0.6260 0.4088 | 0.0569 0.4048 | 0.0581 0.4036 | 0.0651 0.7486 | 0.0660 0.8026 | 0.3323 0.7648 | 0.2420 0.1048 | 0.0209 0.5586 | 0.3841 E2E | E2C 0.5848 | 0.0462 0.1441 | 0.0208 0.2008 | 0.0334 0.8427 | 0.5964 0.8499 | 0.0620 0.8613 | 0.0866 0.8693 | 0.0888 0.1327 | 0.0190 0.7423 | 0.1578 0.3659 | 0.0688 0.7942 | 0.0150 0.5872 | 0.4558 relatively well compared to MiniLM-L6 in cross-lingual tasks. These findings underscore the need for further improvement in training vector models for cross- lingual query semantic alignment. Enhancing the models’ ability to maintain semantic coherence across languages could lead to more effective and accurate cross-lingual retrieval systems. Future re- search should focus on developing techniques to bridge the semantic gap between languages, ensur- ing that models can perform consistently well in both monolingual and cross-lingual contexts. Table 2: IRSC Benchmark Results of the S-Arctic Se- ries, BGE Series, GTE Series, M3E Series, and MiniLM Series in Cross Languages. Metrics: recall@10 4.3 SSCI & RCCI Analysis 4.3.1 SSCI ment across different languages. 2. Superior Performance of BGE-M3: The BGE-M3 model consistently demonstrates su- perior performance in both monolingual and cross-lingual retrieval tasks. Notably, its per- formance degradation from monolingual to cross-lingual retrieval is minimal. For in- stance, in C2C, it scores 0.8630, while in C2E, it scores 0.6260. Similarly, in E2E, it scores 0.8427, compared to 0.5964 in E2C. 3. Significant Decline in M3E Series: The M3E series models show a significant decrease in performance when moving to cross-lingual tasks. The most notable drop is observed in the M3E-Base model, which falls from 0.8026 in C2C to 0.3323 in C2E. This highlights a substantial challenge in the model’s ability to align queries semantically across languages. 4. Drastic Decline in GTE Series: The GTE series models exhibit the most drastic decline in performance, especially in the E2C task. Scores around 0.85 in E2E drop below 0.1 in E2C, indicating a significant deficiency in the models’ ability to handle cross-lingual seman- tic alignment from English to Chinese. 5. Mixed Performance in S-Arctic and MiniLM Series: The S-Arctic series models display varying levels of performance, with S-Arctic-S and S-Arctic-M performing poorly in C2E tasks. The MiniLM series also shows mixed results, with MiniLM-L12 performing 7 Figure 3: Comparative SSCI Heatmaps of the S-Arctic Series, BGE Series, GTE Series, M3E Series, and MiniLM Series in the IRSC Benchmark’s Summary Subtask Across Chinese and English. Smaller values indicate more consistent model performance. In Figure 3, we present detailed SSCI results for the Summary task across different languages and models. We observe that in the English language, most models display blue regions, indicating high consistency in semantic understanding among these models. In contrast, for the Chinese language, the SSCI values exhibit more red regions, suggesting lower consistency and greater divergence in seman- tic understanding among the models. 
Furthermore, by examining the color distribu- tion in Figure 3, we find that models within the same series generally exhibit better semantic un- derstanding consistency, whereas models from dif- ferent series are more likely to show divergence in understanding. From this analysis, we can draw several conclusions: there is a significant difference in semantic understanding consistency across lan- guages, with models showing higher consistency in English compared to Chinese; models within the same series tend to have higher semantic un- derstanding consistency, while different series of models are more prone to divergences in under- standing. mance, exposing limitations in these areas. Future model training should focus on enhancing models’ capabilities in complex context understanding and concise expression to address the challenges posed by complex tasks. 4.3.2 RCCI Figure 4: Comparative SSCI Heatmaps of the S-Arctic Series, BGE Series, GTE Series, M3E Series, and MiniLM Series in the IRSC Benchmark’s Query, Ti- tle, Part and Keyword Subtasks in English. Smaller values indicate more consistent model performance. Figure 4 illustrates the SSCI heatmaps for mod- els on four tasks (excluding Summary) in the En- glish language. From the figure 4, it is evident that different tasks exhibit varying degrees of di- vergence in model understanding. Unlike the Sum- mary task, where most models show blue regions indicating high SSCI values, the IRSC tasks reveal different levels of red regions. This is particularly evident in the Title task, which shows extensive deep red regions, indicating significant divergence in semantic understanding among the models. The Title task imposes higher demands on the mod- els’ semantic understanding, highlighting the dif- ferences in model performance across different task types. While models show high consistency in the Summary task, possibly due to its clear objective and relatively smaller information processing re- quirements, the Title task requires complex context understanding and concise expression, leading to more pronounced divergences among the models. From the analysis of Figure 4, we can derive that the Summary task shows high model consis- tency, whereas the IRSC tasks, especially the Title task, exhibit significant divergences, indicating that task complexity has a substantial impact on model consistency. Tasks requiring complex context un- derstanding and concise expression (such as Ti- tle) reveal significant divergences in model perfor- Figure 5: Comparison of RCCI Results Between S- Arctic-S and M3E-Small Across Mixed-Languages, Chi- nese, and English The figure 5 presents a comparative analysis of the capabilities of two models, S-Arctic-S and M3E-Small, across multiple languages and evalua- tion metrics. The RCCI methodology has been em- ployed to offer a detailed comparison between the models, which overcomes the limitations of simple average-based metrics such as r@10, m@10, and n@10 by highlighting finer differences in model performance. Figure 5 illustrates the RCCI-based analysis clearly delineates the strengths and weaknesses of the two models. M3E-Small excels in the Chinese language, demonstrating robust performance across all metrics, which underscores its potential for ap- plications requiring Chinese language proficiency. Conversely, S-Arctic-S exhibits a competitive edge in the English language metrics, particularly in the Q (Query), T (Title), and K (Keyword) metrics. 
5 Conclusion The IRSC Benchmark offers a comprehensive evaluation framework for embedding models in Retrieval-Augmented Generation (RAG) tasks. It includes five retrieval tasks: query-based, title- based, part-of-paragraph-based, keyword-based, and summary-based retrieval, in English and Chi- nese. Key contributions are the IRSC Benchmark, new metrics like the Similarity of Semantic Com- prehension Index (SSCI) and Retrieval Capability Contest Index (RCCI), and a cross-lingual perfor- mance evaluation. 8 Experimental results show BGE-M3’s superior performance across various metrics and tasks, high- lighting its robust retrieval capabilities in monolin- gual and cross-lingual contexts. Diverse model per- formance emphasizes the importance of selecting models based on specific tasks and language needs. Cross-lingual retrieval challenges indicate a need for improved training in vector models for better semantic alignment. Visual tools like radar charts and heatmaps illustrate model strengths and weak- nesses in different tasks and languages, showcasing the IRSC Benchmark’s comprehensive evaluation. Future research should focus on optimizing em- bedding models for complex tasks and improving cross-lingual semantic alignment. 6 Limitations While the IRSC Benchmark provides a comprehen- sive evaluation framework for embedding models in Retrieval-Augmented Generation (RAG) tasks, there are several limitations that need to be ad- dressed: 1. Language Scope: The benchmark primarily focuses on English and Chinese, which lim- its its applicability to other languages. While it provides insights into multilingual capabil- ities, extending the evaluation to a broader range of languages would offer a more holistic view of model performance in truly multilin- gual settings. 2. Task Scope:: Although the benchmark covers five distinct retrieval tasks, real-world RAG applications might involve more complex and diverse scenarios. Expanding the range of tasks to include more specialized or domain- specific queries could provide a more compre- hensive assessment. 3. Model Variability: The benchmark evalu- ates a selection of popular embedding models, but it does not encompass all existing models. New models and variations are continuously being developed, and the benchmark needs to be updated regularly to include these advance- ments. 4. Interoperability with Other Systems: The benchmark does not assess how well these embedding models integrate with other sys- tems and technologies used in RAG pipelines. Evaluating interoperability and integration ef- ficiency could provide a more practical mea- sure of model utility. Addressing these limitations in future research will help improve the robustness and applicability of the IRSC Benchmark, making it a more power- ful tool for evaluating and developing embedding models in RAG tasks. References Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of mono- lingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguis- tics. Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. Ms marco: A human generated machine reading comprehension dataset. Preprint, arXiv:1611.09268. Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. 2024a. 
Bge m3-embedding: Multi-lingual, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. arXiv preprint arXiv:2402.03216. Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. 2024b. Benchmarking large language models in retrieval-augmented generation. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 38, pages 17754–17762. Colin B Clement, Matthew Bierbaum, Kevin O’Keeffe, and Alexander A Alemi. 2019. On the use of arxiv as a dataset. On the use of the arXiv as a dataset. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisa- tion. In International Conference on Machine Learn- ing, pages 4411–4421. PMLR. Armand Joulin, Edouard Grave, and Piotr Bo- janowski Tomas Mikolov. 2017. Bag of tricks for efficient text classification. EACL 2017, page 427. Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. Mlqa: Eval- uating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, pages 7315– 7330. Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. 2023. Towards 9 Kaitlyn Zhou, Kawin Ethayarajh, Dallas Card, and Dan Jurafsky. 2022. Problems with cosine as a measure of embedding similarity for high frequency words. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 401–423. general text embeddings with multi-stage contrastive learning. arXiv preprint arXiv:2308.03281. Shayne Longpre, Yi Lu, and Joachim Daiber. 2021. Mkqa: A linguistically diverse benchmark for mul- tilingual open domain question answering. Transac- tions of the Association for Computational Linguis- tics, 9:1389–1406. Luke Merrick, Danmei Xu, Gaurav Nuti, and Daniel Campos. 2024. Arctic-embed: Scalable, efficient, and accurate text embedding models. Preprint, arXiv:2405.05374. Niklas Muennighoff, Nouamane Tazi, Loic Magne, and Nils Reimers. 2023. Mteb: Massive text embedding benchmark. In Proceedings of the 17th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 2014–2037. Shashi Narayan, Shay Cohen, and Maria Lapata. 2018. Don’t give me the details, just the summary! topic- aware convolutional neural networks for extreme summarization. In 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Association for Computational Linguis- tics. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Associa- tion for Computational Linguistics. Harald Steck, Chaitanya Ekanadham, and Nathan Kallus. 2024. Is cosine-similarity of embeddings really about similarity? In Companion Proceedings of the ACM on Web Conference 2024, pages 887–890. Nandan Thakur, Nils Reimers, Andreas Rücklé, Ab- hishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Con- ference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). He sicheng Wang Yuxin, Sun Qingxuan. 2023. M3e: Moka massive mixed embedding model. Hao Yu, Aoran Gan, Kai Zhang, Shiwei Tong, Qi Liu, and Zhaofeng Liu. 2024. 
Evaluation of retrieval- augmented generation: A survey. arXiv e-prints, pages arXiv–2405. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classi- fication. Advances in neural information processing systems, 28. Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xi- aoguang Li, Qun Liu, Mehdi Rezagholizadeh, and Jimmy Lin. 2023. Making a miracl: Multilingual in- formation retrieval across a continuum of languages. arXiv preprint arXiv:2210.09984. 10
BENGALI TEXT SUMMARIZATION BY SENTENCE EXTRACTION Kamal Sarkar Computer Science & Engineering Department, Jadavpur University, Kolkata – 700 032, India, [email protected] ABSTRACT Text summarization is a process to produce an abstract or a summary by selecting significant portion of the information from one or more texts. In an automatic text summarization process, a text is given to the computer and the computer returns a shorter less redundant extract or abstract of the original text(s). Many techniques have been developed for summarizing English text(s). But, a very few attempts have been made for Bengali text summarization. This paper presents a method for Bengali text summarization which extracts important sentences from a Bengali document to produce a summary. Keyword: Bengali Text Summarization, Sentence Extraction, Indian Languages 1 INTRODUCTION Now-a-days, information overload on the World Wide Web (WWW) is becoming a problem for an increasingly large number of web users. To reduce this information overload problem, automatic text summarization can be an indispensable tool. The abstracts or summaries can be used as the document surrogates in place of the original documents. In another way, the summaries can help the reader to get a quick overview of an entire document. Another important issue related to the information explosion on the internet is the problem that many documents with the same or similar topics are duplicated. This kind of data duplication problem increases the necessity for effective document summarization. In summary, the following are the important reasons in support of automatic text summarization: • A summary or abstract saves reading time • A summary or an abstract facilitate document selection and literature searches • It improves document indexing efficiency • Machine generated summary is free from bias • Customized summaries can be useful in question-answering systems where they provide personalized information. • The use of automatic or semi-automatic summarization by commercial abstract services may allow them to scale the number of published texts they can evaluate. Input to a summarization process can be one or more text documents. When only one document is the input, it is called single document text summarization and when the input is a group of related text documents, it is called multi-document summarization. We can also categorize the text summarization based on the type of users the summary is intended for: User focused (query focused) summaries are tailored to the requirements of a particular user or group of users and generic summaries are aimed at a broad readership community (Mani, 2001). Depending on the nature of summary, a summary can be categorized as an abstract and an extract. An extract is a summary consisting of a number of salient text units selected from the input. An abstract is a summary, which represents the subject matter of an article with the text units, which are generated by reformulating the salient units selected from an input. An abstract may contain some text units, which are not present into the input text. Based on information content of the summary, it can be categorized as informative and indicative summary. 
An indicative summary presents an indication of an article's purpose and approach so that the user can decide whether to select the article for in-depth reading; an informative summary covers all salient information in the document at some level of detail, that is, it contains information about all the different aspects such as the article's purpose, scope, approach, results, and conclusions. For example, an abstract of a research article is more informative than its headline.

The main objective of the work presented in this paper is to generate an extract from a Bengali document. We have followed a simple and easy-to-implement approach to Bengali single-document text summarization because sophisticated summarization systems require resources for deeper semantic analysis. Bengali is a resource-constrained language, and NLP (natural language processing) research activities on Bengali have started only recently. In the work presented in this paper, we have investigated the impact of the thematic term feature and the position feature on Bengali text summarization. To our knowledge, no generic text summarization system for Bengali is available for comparison to our system. So, we have compared the proposed method to the LEAD baseline, which was defined for the single-document text summarization task in the past two DUC conferences, DUC 2001 and DUC 2002. The LEAD baseline considers the first n words of an input article as a summary, where n is a predefined summary length. Our work is different from the work on Bengali opinion summarization presented in (Das and Bandyopadhyay, 2010) because we mainly focus on generic text summarization for Bengali. In section 2, we present a brief survey on single-document text summarization in the English domain. The proposed summarization method is presented in section 3. In section 4, we describe the summary evaluation method and experimental results.

2 A SURVEY ON SINGLE DOCUMENT TEXT SUMMARIZATION IN THE ENGLISH DOMAIN
In this section, we present a brief survey on single-document text summarization for English. Although research on text summarization in the English domain started many years ago, most works on text summarization today still rely on sentence extraction to form a summary. Many previous works on extractive summarization use two major steps: (1) ranking the sentences based on their scores, which are computed by combining a few or all of the features such as term frequency (TF), positional information, and cue phrases (Baxendale, 1958; Edmundson, 1969; Luhn, 1958; Lin and Hovy, 1997), and (2) selecting a few top-ranked sentences to form an extract. The very first work on automatic text summarization by Luhn (1958) identifies salient sentences based on word frequency (the number of times a word occurs in a document) and phrase frequency. Although subsequent research has developed sophisticated summarization methods based on various new features, the work presented by Edmundson (1969) is still followed today as the foundation for extraction-based summarization. Baxendale (1958) presented a straightforward method of sentence extraction using the document title and the first and last sentences of the document and/or each paragraph. Lin and Hovy (1997) claimed that as discourse structures change over domains and genres, the position method cannot be as simple as in (Baxendale, 1958); they defined an optimal policy for locating the likely positions of topic-bearing sentences in the text. MEAD (Radev et
al., 2004) computes the score of a sentence based on many features such as similarity to the centroid, similarity to the first sentence of the document, position of the sentence in the document, sentence length, etc. Kupiec et al. (1995) applied a machine learning approach to text summarization. They developed a summarizer using a Bayesian classifier to combine features from a corpus of scientific articles and their abstracts. Salton et al. (1997) presented a sentence extraction method that exploits the semantic links between sentences in the text; the feature they used in this work may be considered a cohesion feature. Text cohesion (Halliday and Hasan, 1996) refers to the relations (semantic links) between words, word senses, or referring expressions, which determine how tightly connected the text is. In this approach, the text is represented by a graph in which each node represents a paragraph in a document and the edges are labeled with the similarity score between two paragraphs. A paragraph that is connected to many other paragraphs with a similarity above a predefined threshold is considered a "bushy" node, and the paragraph representing a bushy node is considered a salient one. Barzilay and Elhadad (1997) described a summarization approach that used the lexical chaining method to compute the salience of a sentence. Cohesion (Halliday and Hasan, 1976) is a means of sticking together different parts of the text. Lexical cohesion is the simplest form of cohesion: it links the different parts of the text through semantically related terms, co-reference, ellipsis, and conjunctions. Lexical cohesion also involves relations such as reiteration, synonymy, hypernymy (IS-A relations such as "dog-is-a-kind-of-animal"), and part-whole relations (such as "wrist-is-a-part-of-hand"). The concept of the lexical chain was introduced in (Morris and Hirst, 1991). They characterized a lexical chain as a sequence of related words that spans a topical unit of text; in other words, a lexical chain is lexical cohesion that occurs between two terms and among sequences of related words. Barzilay and Elhadad (1997) used WordNet (Miller, 1995) to construct the lexical chains. The work in (Conroy and O'Leary, 2001) considered the fact that the probability of inclusion of a sentence in an extract depends on whether the previous sentence has been included as well, and applied hidden Markov models (HMMs) to the sentence extraction task. Osborne (2002) applied a maximum entropy (log-linear) model to decide whether a sentence will be included in a summary or not; he assumed no feature independence. The features he considered are: word pairs, sentence length, sentence position, and discourse features (e.g., whether the sentence follows the "Introduction", etc.). Compared to creating an extract, automatic generation of an abstract is harder, and the latter requires deeper approaches which exploit semantic properties of the text. Generating an abstract from a document is relatively harder since it requires semantic representation of the text units (sentences or paragraphs) in a text, reformulation of two or more text units, and rendering the new representation in natural language. Abstractive approaches have used template-based information extraction, information fusion, and compression. In the information-extraction-based approach, predefined template slots are filled with the desired pieces of information extracted by the summarization engine (Paice and Jones, 1993).
An automated technique has been presented in (Jing and McKeown, 1999; Jing, 2002) to build a corpus representing the cut-and-paste process used by humans, so that such a corpus can then be used to train an automated summarizer. True abstraction needs a more sophisticated process that requires large-scale resources. Headline generation can be viewed as the generation of a very short summary (usually fewer than 10 words) that represents the relevant points contained in a document; a headline summary is a kind of indicative summary. Banko et al. (2000) presented an approach that uses statistical methods to generate headline-like abstracts. HMM (Hidden Markov Model) based headline generation has been presented in (Zajic, Dorr and Schwartz, 2002). Dorr et al. (2003) developed the Hedge Trimmer, which uses a parse-and-trim approach to generate headlines: the first sentence of a document is parsed, and the parsed sentence is then compressed to form a headline by eliminating the unimportant constituents of the sentence using a set of linguistically motivated rules. TOPIARY (Zajic et al., 2004), a headline generation system, combines the compressed version of the lead sentence and a set of topic descriptors generated from the corpus to form a headline; the sentence is compressed using an approach similar to that of (Dorr et al., 2003). A number of approaches for creating abstracts have been conceptualized without much emphasis on the issue that a true abstract may contain some information not contained in the document. Creating such an abstract requires external information of some kind, such as an ontology or a knowledge base. Since large-scale resources of this kind are difficult to develop, abstractive summarization has not progressed beyond the proof-of-concept stage.

3 PROPOSED SUMMARIZATION METHOD
The proposed summarization method is extraction-based. It has three major steps: (1) preprocessing, (2) sentence ranking, and (3) summary generation.

3.1 Preprocessing
The preprocessing step includes stop-word removal, stemming, and breaking the input document into a collection of sentences. For stop-word removal, we have used the Bengali stop-word list downloadable from the website of the Forum for Information Retrieval Evaluation (FIRE) (http://www.isical.ac.in/~fire/stopwords_list_ben.txt).

3.2 Stemming
Using stemming, a word is split into its stem and affix. The design of a stemmer is language-specific and requires some significant linguistic expertise in the language. A typical simple stemmer algorithm involves removing suffixes using a list of frequent suffixes, while a more complex one would use morphological knowledge to derive a stem from the words. Since Bengali is a highly inflectional language, stemming is necessary while computing the frequency of a term. In our work, we use a lightweight stemmer for Bengali that strips suffixes using a predefined suffix list, on a "longest match" basis, using an algorithm similar to that for Hindi (Ramanathan and Rao, 2003); a short code sketch of this idea is given below.

3.3 Sentence Ranking
After an input document is formatted and stemmed, the document is broken into a collection of sentences and the sentences are ranked based on two important features: thematic term and position.

Thematic term: The thematic terms are the terms which are related to the main theme of a document. We define the thematic terms as the terms whose TFIDF values are greater than a predefined threshold.
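As a concrete illustration of the longest-match stemming just described, consider the following minimal sketch. The suffix list here is a short romanized placeholder (the original Bengali-script suffixes did not survive extraction), so it is illustrative only, not the inventory actually used.

# Minimal sketch of a lightweight longest-match suffix stemmer (Section 3.2).
# The suffix list below is a romanized, illustrative placeholder.
SUFFIXES = sorted(["guli", "der", "era", "ke", "te", "ra", "e"],
                  key=len, reverse=True)   # try longest suffixes first

def stem(word):
    """Strip the longest matching suffix, if any, leaving a non-empty stem."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[:-len(suffix)]
    return word

Because the list is sorted longest-first, the first match found is automatically the longest match, which is the behavior described above.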
The TFIDF value of a term is measured by the product of TF and IDF, where TF (term frequency) is the number of times a word occurs in a document and IDF is the Inverse Document Frequency. The IDF of a word is computed on a corpus using the formula IDF = log(N/df), where N is the number of documents in the corpus and df (document frequency) indicates the number of documents in which the word occurs. The score of a sentence k is computed based on the similarity of the sentence to the set of thematic terms in the document, measured as the sum of the TFIDF values of the thematic terms contained in the sentence:

S_k = Σ_w TFIDF_{w,k},   (1)

where TFIDF_{w,k} is the TFIDF value of a thematic term w occurring in sentence k and S_k is the thematic score of sentence k. One crucial issue is to determine the TFIDF threshold value based on which we can decide whether a term is a thematic term or not. In the experimental section, we discuss how this threshold value has been adjusted for the best results.

Positional value: The positional score of a sentence is computed in such a way that the first sentence of a document gets the highest score and the last sentence gets the lowest score. The positional value for sentence k is computed using the formula

P_k = 1/k.   (2)

Sentence length: We consider the length of a sentence as a feature because we observe that if a sentence is too short but occurs in the beginning paragraph of a document, it is sometimes selected due to its positional advantage. On the other hand, if a sentence is too long, it is sometimes selected due to the fact that it contains many words. So, we eliminate the sentences which are too short or too long.

Combining parameters for sentence ranking: We compute the score of a sentence as a linear combination of the normalized values of the positional score P_k and the thematic-term-based score S_k if the sentence is neither too long nor too short; if a sentence is too short or too long, it is assigned a score of 0. The final score of sentence k is

Score_k = α·P_k + β·S_k, if L_L < L_k < L_U,
Score_k = 0, if L_k ≤ L_L or L_k ≥ L_U,   (3)

with 0 ≤ α, β ≤ 1, where L_k is the length of sentence k. The values of α, β, L_L (the lower cutoff on the sentence length) and L_U (the upper cutoff on the sentence length) are obtained by tuning them for the best results on a subset of documents randomly selected from our corpus. In the experimental section, we discuss in detail how the values of these parameters are tuned.

3.4 Summary Generation
A summary is produced by ranking the sentences based on their scores and selecting the K top-ranked sentences, where the value of K is set by the user. To increase the readability of the summary, the sentences in the summary are reordered based on their appearance in the original text; for example, the sentence which occurs first in the original text will appear first in the summary.

4 EVALUATION, EXPERIMENTS AND RESULTS
To test our summarization system, we collected 38 Bengali documents from the Bengali daily newspaper Ananda Bazar Patrika. The documents are typed and saved in text files in UTF-8 format. For each document in our corpus, we consider only one reference summary for evaluation. Evaluation of a system-generated summary is done by comparing it to the reference summary.

4.1 Evaluation
It is very difficult to determine whether a summary is good or bad. Summary evaluation methods can be broadly categorized as human evaluation methods and automatic (machine-based) evaluation methods.
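Before turning to how summaries are judged, the ranking and selection steps of Sections 3.3 and 3.4 are compact enough to sketch in code. This is a minimal illustration, assuming tokenized and stemmed sentences plus a precomputed TFIDF table; following Section 4.2, alpha weighs the positional score and beta the thematic score, and the normalization of S_k and P_k is omitted for brevity.

# Sketch of sentence ranking and summary generation (Sections 3.3-3.4).
# `tfidf` maps a stemmed term to its TFIDF value in the current document.
def sentence_score(tokens, position, tfidf, threshold=3.8,
                   alpha=1.0, beta=0.10, L_low=3, L_up=23):
    """Combined score of equation (3); returns 0 outside the length cutoffs."""
    if len(tokens) <= L_low or len(tokens) >= L_up:
        return 0.0
    # Equation (1): sum the TFIDF values of the thematic terms in the sentence.
    s_k = sum(v for v in (tfidf.get(t, 0.0) for t in tokens) if v > threshold)
    p_k = 1.0 / position              # equation (2); positions start at 1
    return alpha * p_k + beta * s_k

def summarize(sentences, tfidf, k):
    """Keep the K top-ranked sentences and restore document order (Section 3.4)."""
    scored = [(sentence_score(toks, i + 1, tfidf), i)
              for i, toks in enumerate(sentences)]
    top_k = sorted(sorted(scored, reverse=True)[:k], key=lambda pair: pair[1])
    return [sentences[i] for _, i in top_k]

Sorting twice keeps the sketch short: the first sort ranks by score, and the second restores the original document order required by Section 3.4.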
Human evaluation is done by having human judges compare system-generated summaries with reference (model) summaries. According to some predefined guidelines, the judges assign a score on a predefined scale to each summary under evaluation. Quantitative scores are given to the summaries based on different qualitative features such as information content, fluency, etc. The main problems with human evaluation are that (1) the evaluation process is tedious and (2) it suffers from a lack of consistency: two human judges may not agree with each other's judgments. On the other hand, automatic (machine-based) evaluation is always consistent in its judgment. Automatic evaluation may lack the linguistic skills and emotional perspective that a human has; hence, although automatic evaluation is not perfect compared to human evaluation, it is popular primarily because the evaluation process is quick even when the summaries to be evaluated are large in number. Since automatic evaluation is performed by a machine, it follows a fixed logic and always produces the same result on a given summary. Since automatic evaluation processes are free from human bias, they provide a consistent way of comparing various summarization systems.

In several past Document Understanding Conferences (DUC) organized by NIST (the National Institute of Standards and Technology), single-document text summarization systems for English have been evaluated. In DUC 2001 and DUC 2002, the single-document summarization task was to generate a summary of fixed length, such as 50 words or 100 words. A baseline called the LEAD baseline was defined in these conferences; it considers the first n words of an input article as a summary, where n is a predefined summary length. Unlike the DUC single-document text summarization task, where there was a fixed summary length for each document, we believe that a generic summary of one document may be longer or shorter than a summary of another document. So, we assume that the size of a system-generated summary should be equal to that of the corresponding model summary, though different model summaries may not be equal in size.

We adopted an automatic summary evaluation metric for comparing system-generated summaries to reference summaries; when we compare a system-generated summary to a reference summary, we ensure that they are of the same length. We have used the unigram overlap method stated in (Radev et al., 2004) for evaluating the system-generated summaries. Unigram overlap between a system-generated summary and a reference summary is computed as follows:

Unigram-based Recall Score = |S ∩ R| / |R|,   (4)

where |R| is the length of the reference summary and |S ∩ R| indicates the maximum number of unigrams co-occurring in the system-generated summary S and the reference summary R. Creation of reference summaries is a laborious task; in our experiment, we have used only one reference summary for evaluating each system-generated summary.

4.2 Experiments and Results
Tuning α, β and choosing an appropriate threshold value: For the best results, α and β used in equation (3) must be set appropriately. At the same time, an appropriate TFIDF threshold value for selecting the thematic terms (discussed in subsection 3.3) should be chosen. For tuning these parameters, we build a training data set by randomly selecting 10 document-summary pairs from the collection of 38 document-summary pairs in our corpus.
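The metric of equation (4) is small enough to sketch directly. A minimal version follows, assuming both summaries arrive as token lists already truncated to the reference length, and interpreting the "maximum number of co-occurring unigrams" as clipped counts.

from collections import Counter

def unigram_recall(system_tokens, reference_tokens):
    """Equation (4): |S ∩ R| / |R|, counting overlaps with multiplicity."""
    sys_counts = Counter(system_tokens)
    ref_counts = Counter(reference_tokens)
    overlap = sum(min(count, sys_counts[word])
                  for word, count in ref_counts.items())
    return overlap / len(reference_tokens)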
Initially, we set the value of α to 1, since α is the weight of the positional feature, which we observed to produce better results than the thematic term feature. We set the value of α to 1 for all the experimental cases presented in this paper. For tuning the value of β, we set the TFIDF threshold value to 0 and conduct experiments with different values of β, stepping from 0 to 1 in increments of 0.1. Figure 1 shows the summarization performance curve with respect to the different values of β on the training data.

Figure 1. Average Recall score vs. β, when the TFIDF threshold value is set to 0 and α is set to 1.

Figure 1 shows that the best result is obtained when the value of β is set to 0.1, a relatively small value. Since we decide whether a term is a thematic term based on the TFIDF threshold value, an appropriate threshold value should be determined to improve the summarization performance. For this purpose, after fixing the value of β to 0.10, we adjust the TFIDF threshold value. Figure 2 shows the summarization performance curve with different TFIDF threshold values.

Figure 2. Average Recall score vs. TFIDF threshold value, when β is set to 0.10 and α is set to 1.

Figure 2 shows that the best result is achieved when the TFIDF threshold value is set to any value between 3.8 and 4.6. We set the TFIDF threshold value to 3.8 because, at this value, the average recall score transitions from a lower value to the best value. After fixing the value of β to 0.10 and the TFIDF threshold value to 3.8, we adjust the lower cutoff and the upper cutoff on the sentence length. Table 1 shows the results on the training set with different values of the upper cutoff on sentence length.

Table 1. Results on the training set with different values of the upper cutoff (L_U) on sentence length.
L_U:                   25      24      23      22
Average Recall Score:  0.3752  0.3752  0.3752  0.3627

The results on the training set with different values of the lower cutoff on sentence length are shown in Table 2.

Table 2. Results on the training set with different values of the lower cutoff (L_L) on sentence length.
L_L:                   2       3       4       5
Average Recall Score:  0.3752  0.3770  0.3749  0.3660

Tables 1 and 2 show that the best results are obtained when L_U is set to any value between 23 and 25 and L_L is set to 3. We set the value of L_U to 23 and the value of L_L to 3 when we run the system on the test data.

Results: We randomly chose 10 document-summary pairs from the 38 document-summary pairs in our corpus and considered this subset as a training set for tuning the values of the several parameters discussed above. After setting the parameters to the values learnt from the training set, we test our system on the entire collection of 38 documents. From each of the 38 documents, a summary of n words is generated, where n is the length of the reference summary of the corresponding document. A system-generated summary is compared to its reference summary and the unigram-based recall score is computed using equation (4). The average recall score is obtained by averaging the recall scores obtained on all the documents in the collection. Row 1 of Table 3 shows the performance of the proposed system on the test data set. To our knowledge, no generic text summarization system for Bengali is available for comparison to our system, so we have compared the proposed method to the LEAD baseline, which considers the first n words of an input article as a summary, where n is a predefined summary length.
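The LEAD baseline itself takes only a couple of lines; a minimal sketch, assuming the article is supplied as a word list:

def lead_baseline(document_words, n):
    """LEAD baseline: the first n words of the article as the summary."""
    return document_words[:n]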
Table 3 shows the comparison of our system to the LEAD baseline.

Table 3. Comparison of the proposed system to the LEAD baseline.
Method:            Average Unigram-based Recall Score
Proposed System:   0.4122
LEAD baseline:     0.3991

Table 3 shows that the proposed method outperforms the LEAD baseline. The evaluation of generic summarization systems in the past DUC conferences (DUC 2001 and DUC 2002) showed that it is very hard to beat the LEAD baseline on news documents.

An example: The following is an article taken from the Bengali daily newspaper Ananda Bazar Patrika.

[Bengali-script news article omitted: the original glyphs were lost to (cid:...) encoding artifacts during PDF extraction and cannot be reliably reconstructed.]
Here is the reference summary for the article mentioned above.

[Bengali-script reference summary omitted: lost to encoding artifacts during extraction.]

The following is the summary generated by the proposed system for the news article.

[Bengali-script system-generated summary omitted: lost to encoding artifacts during extraction.]

5 CONCLUSION
This paper discusses a single-document text summarization method for Bengali. Many techniques have been developed for summarizing English texts, but very few attempts have been made for Bengali text summarization.
The performance of the proposed system may be further improved by improving the stemming process, exploring a larger number of features, and applying a learning algorithm for effective feature combination. Traditionally, more than one reference summary is used for evaluating each system-generated summary, but in our work we have used only one reference summary for summary evaluation. In future work, we will consider more than one reference summary for summary evaluation.

REFERENCES
Das, A. & Bandyopadhyay, S. 2010. Topic-Based Bengali Opinion Summarization. COLING (Posters) 2010: 232–240.
Ramanathan, A. & Rao, D. D. 2003. A Lightweight Stemmer for Hindi. In Proceedings of EACL 2003.
Dorr, B. J., Zajic, D. & Schwartz, R. 2003. Hedge Trimmer: A parse-and-trim approach to headline generation. In Proceedings of the HLT/NAACL 2003 Text Summarization Workshop and Document Understanding Conference (DUC 2003), pp. 1–8, Edmonton, Alberta.
Paice, C. D. & Jones, P. A. 1993. The identification of important concepts in highly structured technical papers. In Proceedings of the 16th International Conference on Research and Development in Information Retrieval (SIGIR '93), 69–78.
Lin, C.-Y. & Hovy, E. 1997. Identifying topics by position. In Proceedings of the 5th Applied Natural Language Processing Conference, 283–290. New Brunswick, New Jersey: Association for Computational Linguistics.
Radev, D. R., Jing, H., Styś, M. & Tam, D. 2004. Centroid-based summarization of multiple documents. Journal of Information Processing and Management, Elsevier, Volume 40, Issue 6, pp. 919–938.
Radev, D., Allison, T., Blair-Goldensohn, S., Blitzer, J., Celebi, A., Drabek, E., Lam, W., Liu, D., Otterbacher, J., Qi, H., Saggion, H., Teufel, S., Topper, M., Winkel, A. & Zhang, Z. 2004. MEAD: A platform for multidocument multilingual text summarization. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004), Lisbon, Portugal.
Zajic, D., Dorr, B. J. & Schwartz, R. 2004. BBN/UMD at DUC-2004: Topiary. In Proceedings of the North American Chapter of the Association for Computational Linguistics Workshop on Document Understanding, Boston, MA, pp. 112–119.
Zajic, D., Dorr, B. & Schwartz, R. 2002. Automatic headline generation for newspaper stories. In Workshop on Automatic Summarization, Philadelphia, PA, pp. 78–85.
Miller, G. 1995. WordNet: A lexical database for English. Communications of the Association for Computing Machinery (CACM), 38(11):39–41.
Salton, G., Singhal, A., Mitra, M. & Buckley, C. 1997. Automatic text structuring and summarization. Journal of Information Processing and Management, 33(2):193–207.
Edmundson, H. P. 1969. New methods in automatic extracting. Journal of the Association for Computing Machinery, 16(2):264–285.
Luhn, H. P. 1958. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159–165.
Mani, I. 2001. Automatic Summarization. Volume 3 of Natural Language Processing. Amsterdam/Philadelphia: John Benjamins Publishing Company.
Jing, H. 2002. Using hidden Markov modeling to decompose human-written summaries. Computational Linguistics, 28(4), 527–543.
Jing, H. & McKeown, K. 1999. The decomposition of human-written summary sentences. In Proceedings of SIGIR '99: 22nd International Conference on Research and Development in Information Retrieval, University of California, Berkeley, August, pages 129–136.
Kupiec, J., Pedersen, J. O. & Chen, F. 1995. A trainable document summarizer.
In Proceedings of Research and Development in Information Retrieval, pp. 68–73.
Conroy, J. M. & O'Leary, D. P. 2001. Text summarization via hidden Markov models and pivoted QR matrix decomposition. Tech. Rep., University of Maryland, College Park, 2001.
Morris, J. & Hirst, G. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1):21–43.
Halliday, M. A. K. & Hasan, R. 1996. Cohesion in text. Longman, London.
Halliday, M. A. K. & Hasan, R. 1976. Cohesion in English. English Language Series, Longman, London.
Banko, M., Mittal, V. & Witbrock, M. 2000. Headline generation based on statistical translation. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL-2000), Hong Kong, pp. 318–325.
Osborne, M. 2002. Using maximum entropy for sentence extraction. In Proceedings of the ACL-02 Workshop on Automatic Summarization, Volume 4, Philadelphia, Pennsylvania. Association for Computational Linguistics, Morristown.
Baxendale, P. 1958. Machine-made index for technical literature: an experiment. IBM Journal of Research and Development, 2(4), pages 354–361.
Barzilay, R. & Elhadad, M. 1997. Using lexical chains for text summarization. In Proceedings of the Workshop on Intelligent Scalable Text Summarization, pp. 10–17, Madrid, Spain.
ai_researcher
1
“A_DANGER_FORESEEN_IS_A_DANGER_AVOIDED”_HOW_THE_SOURCE_OF_AN_IDEA_INFLUENCES_MANAGERS’_EVALUATION_BEHAVIOR_IN_OPEN_INNOVATION.pdf
arXiv:2101.10138v4 [physics.plasm-ph] 26 Mar 2021

Plasma steering to avoid disruptions in ITER and tokamak power plants
Allen H. Boozer, Columbia University, New York, NY 10027, U.S.A. [email protected]
(Dated: March 29, 2021)

Steering tokamak plasmas is commonly viewed as a way to avoid disruptions and runaway electrons. Plasma steering sounds as safe as driving to work but will be shown to more closely resemble driving at high speed through a dense fog on an icy road. The long time required to terminate an ITER discharge compared to the time over which dangers can be foreseen is analogous to driving in a dense fog. The difficulty of regaining plasma control if it is lost resembles driving on an icy road. Disruptions and runaways are associated with three issues (a solution to one tends to complicate the solution to the other two): loss of plasma position control, excessive heat deposition, and wall melting due to runaway electrons. All three risks must be addressed for ITER to achieve its mission and essentially eliminated before tokamak power plants can be deployed.

The tokamak literature asserts that disruptions and runaways are a problem of plasma steering. This can be found in a Physics Today article [1], which says the production of fusion energy will be enabled by the questions that ITER will answer, and in a Nuclear Fusion article [2] reviewing progress on disruption prevention for ITER. The purpose of this paper is to clarify issues of plasma steering that need to be addressed for ITER to achieve its mission and for tokamak fusion energy to be practical. Even problems that were considered solved, such as plasma-position control when surrounded by a perfectly conducting chamber [3–5], can be more subtle than was thought.

The steering of tokamaks to avoid disruptions is analogous to steering a car to avoid accidents. Steering, whether of a car or of a tokamak, has two fundamental problems.

The first problem for steering is foreseeing dangers. To safely steer a car in foggy conditions, the speed of the car must be limited so it can be safely stopped within the distance at which dangerous conditions can be foreseen. The tokamak analogue would be to limit the plasma current to a level at which it can be terminated without a disruption within the time danger can be foreseen.

A review of ITER shutdown strategies [6] found that even under ideal conditions at least 60 s is required to terminate a 15 MA ITER current without a disruption. Predictions of disruptions during the flattop period of DIII-D plasmas [7] show a precipitous drop in reliability after milliseconds, Figure 1, but even the short-time predictions had only modest reliability, approximately 95%. Many methods have been developed for predicting disruptions; a sample is [8–14]. Although results vary among the methods, the long-term predictions by all known methods have a low reliability relative to what is needed. Even one in ten thousand pulses ending in an unmitigated disruption could have a large impact on the achievement of the ITER mission [2]. Steering a tokamak to avoid disruptions resembles driving a car at high speed through a dense fog.

FIG. 1: Time-sliced DIII-D data was used to determine the fraction of the disruptions that were successfully predicted versus the prediction time. F1 and F2 are two different weightings of the data. This figure is reproduced with permission from Nucl. Fusion 59, 096016 (2019). Copyright 2019, Institute of Physics Publishing.
The tokamak literature recognizes and discusses emergency shutdowns [2, 15] that must be initiated orders of magnitude faster, ∼30 ms. An example, reviewed by Sertoli et al. [16], is a wall fragment striking the plasma. They found that this is not a cause of disruptions on JET with an ITER-like wall but had been in other tokamak experiments. As has long been known, wall fragments striking the plasma could become more severe in power plants because of blistering caused by alpha particles [17]. A fast emergency shutdown, which means fast compared to 60 s in ITER, requires a highly reliable strategy for instigating a benign disruption, called disruption mitigation. But, as noted in 2019, "With ITER construction in progress, reliable means of RE (runaway electron) mitigation are yet to be developed" [18]. Fast shutdowns can also produce unacceptable forces on the blanket modules in ITER; subtleties in estimating these forces are discussed in [19].

The second problem for steering is the availability and timescale of actuators, the analogs of the steering wheel and the brake pedal of a car and the 1.5 s response time of a typical driver. For ITER the actuators are (1) the external loop voltage, (2) the externally produced axisymmetric poloidal magnetic field, (3) particle injection systems, (4) particle pumps, (5) heating and current drive systems, and (6) non-axisymmetric external magnetic fields. The major papers on plasma steering do not discuss the precise use of these six actuators, even in an unrushed shutdown [6]. Indeed, it is unclear how to use the actuators to control what is most important for avoiding disruptions: the profile of the plasma current, the avoidance of loss of plasma position control, and the maintenance of a sufficient plasma temperature to avoid runaways. All of the ITER actuators except particle injection require a timescale of order seconds to be fully effective, which is too long to react to a number of envisioned situations that require a shutdown to be initiated within tens of milliseconds [2, 15]. Even when dangers can be adequately foreseen, integration is required between the predictors and the actuators for successful steering. Once plasma control is lost on ITER, it is difficult to regain, much like driving on an icy road.

Why does it take so long to shut down an ITER plasma? Magnetic fields produced outside the vacuum vessel require 0.6 s to penetrate to the plasma [20], and voltage limits on the poloidal field coils typically limit large changes to times longer than several seconds [6]. The toroidal loop voltage on the vessel [20] must be less than 12 V. At 15 MA, the poloidal magnetic flux enclosed by the ITER vacuum vessel can reach 75 V·s, so more than 6 s would be required to remove it using the loop voltage on the vessel (the arithmetic is spelled out below). The poloidal flux removal by the resistivity of a 10 keV plasma at the magnetic axis in ITER requires ∼1000 s. Although the loop voltage on the vessel can remove the flux faster, the tendency is to produce a highly peaked current profile. The internal inductance ℓi is a measure: the larger ℓi, the more centrally peaked the current, the greater the tendency of the plasma to disrupt, Figure 2, and the more difficult it is to keep the plasma adequately centered in the chamber [6].
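To spell out the 6 s figure quoted above (a rough lower bound that ignores profile effects), the flux-removal time follows directly from the quoted numbers:

\tau_{\mathrm{removal}} \gtrsim \frac{\Delta\Psi_p}{V_{\mathrm{loop}}} = \frac{75\ \mathrm{V\,s}}{12\ \mathrm{V}} \approx 6.3\ \mathrm{s}.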
As the plasma current drops, the plasma density must be proportionately reduced to stay below the empirical Greenwald density limit [21], and this requires not only particle transport out of the plasma but also particle removal from the plasma chamber.

FIG. 2: Time-sliced DIII-D data was used to determine the probability that a disruption occurred within 350 ms as a function of the internal inductance ℓi during flattop periods. Each of the three histograms is normalized so that the integral under it is unity. This figure is reproduced with permission from Nucl. Fusion 59, 096016 (2019). Copyright 2019, Institute of Physics Publishing.

The difficulty of benignly shutting down ITER becomes far greater during its nuclear phase than before. Control over the power input is lost, and far more dangerous seeds for the transfer of the plasma current from thermal into relativistic electrons are present. Even before the shutdown, steering becomes more difficult in a nuclear-powered plasma. The current-density profile was identified in [2] as the main drive for disruptive instabilities, but which actuators ensure careful control of that profile over timescales long compared to internal flux relaxation times in a burning plasma? The issue may be avoided in ITER by limiting the time a plasma may be allowed to burn, but what is the solution in a power plant?

For success in a disruption-free shutdown of a burning plasma, the reduction in the plasma pressure must be consistent with adjustments to the external vertical field for the plasma to remain sufficiently centered in the machine to avoid wall contact. Loss of centering resembles going into a skid on an icy road; regaining centering can easily become impossible. The speed of these adjustments is strictly limited by the allowed voltages on the superconducting poloidal field coils [6]. This is more difficult when deuterium-tritium fusion contributes 500 MW, of which 100 MW heats the plasma, with 50 MW of available external power. The fusion power P_dt is proportional to the plasma pressure squared to within 10% accuracy between 10 and 20 keV [22]. Without a large increase in the poloidal beta as the plasma current I_p is reduced, P_dt drops as I_p^4, since at fixed poloidal beta the pressure scales as I_p^2 (a short derivation is sketched below). The effect on the plasma pressure of the precipitous drop in nuclear power as I_p is decreased is magnified if the plasma switches from the high-confinement H-mode to the low-confinement L-mode [2].

A reduction in the plasma current by a megaampere amplifies the number of energetic electrons by a factor of ten in a hydrogenic plasma [23], and even more when impurities are present [24, 25]. In the pre-nuclear phase of ITER, the only electrons that are energetic enough to run away are those that were in a high-Te Maxwellian tail before the electron temperature Te was reduced sufficiently for the resistive electric field ηj∥ to exceed the Connor-Hastie electric field [26]. This is when runaway becomes possible, and at the standard ITER density it requires Te ≲ 550 eV. The change from a high electron temperature Te ∼ 10 keV to a low temperature must occur quickly, in less than the maximum collisional relaxation time of an energetic electron, the Connor-Hastie [26] collision time τch ≈ 20 ms. In the nuclear phase of ITER operations, two important steady sources of energetic electrons are available: tritium decay and Compton scattering by gamma-rays from the irradiated wall, which can be amplified into dangerous relativistic-electron currents [27].
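The I_p^4 scaling quoted above follows in two steps, sketched here under the assumption that the poloidal beta and the geometry are held fixed:

\beta_p \propto \frac{p}{I_p^{2}} = \mathrm{const} \;\Rightarrow\; p \propto I_p^{2} \;\Rightarrow\; P_{dt} \propto p^{2} \propto I_p^{4}.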
The seriousness of disruptions and runaways is determined not only by the damage but also by the length of the shutdown required for repairs, which is much longer after D-T operations in ITER begin. Issues associated with ITER maintenance and repair were reviewed in 2019 by van Houtte [28].

Disruptions and runaways are associated with three issues (a solution to one tends to complicate the solution to the other two): loss of plasma position control, excessive heat deposition, and wall melting due to runaway electrons. All three risks must be retired before tokamak power plants can be deployed. Even the successful achievement of the ITER mission will require not only the avoidance of disruptions in the narrow sense of a sudden loss of magnetic surfaces but also the avoidance of the production of multi-megaamperes of relativistic electrons. Unacceptable melting [18] can be produced by 1.9 MA of relativistic electrons striking the walls over a broad area, or 300 kA if concentrated. The risks of disruptions and runaway electrons are related but should not be conflated [19]; in particular, the avoidance of magnetic-surface breakup can exacerbate the risk of runaway electrons.

Fusion has the potential to make a major contribution to stopping the increase in atmospheric carbon dioxide [29, 32]. For this, minimization of the time and risk for a demonstration of fusion power is of great importance. The United States National Academy of Sciences, Engineering, and Medicine stated [32]: "the Department of Energy and the private sector should produce net electricity in a fusion pilot plant in the United States in the 2035-2040 timeframe." The cost of each year's delay in developing a solution, approximately a trillion dollars [29], far exceeds the credible cost of a minimal-time-and-risk fusion program.

The cost of deploying a sufficient number of fusion reactors to have a significant effect on carbon dioxide production is of order a thousand times greater than constructing a demonstration fusion power plant. Nevertheless, having one working fusion power plant is important in itself to world security. The precise cost of fusion energy is only relevant during the deployment phase in comparison with other solutions, and each of the alternatives for a complete energy system has major disadvantages in comparison to fusion [29]. The cost of electricity and the minimum unit size are only two considerations; others can be more important: intermittency, site specificity, waste handling, and the potential for nuclear proliferation.

Making the risks of disruptions and runaways acceptable in ITER is difficult but far easier than in DEMO, a machine that can demonstrate fusion power [30]. The basic problem is that the structures surrounding the plasma are more delicate in a power plant than they are in ITER. In addition, the diagnostics, which are needed for steering, become much more limited [31]. Magnetic fusion systems can be designed to be robust against disruptions and runaways by making them non-axisymmetric [29]. As stated by the U.S. National Academy [32], an assessment of fast paths to fusion energy requires a multi-year design study of potential fusion power plants.

Disruption and runaway issues are far more challenging in a tokamak power plant than during D-T operations of ITER, and far more challenging in D-T operations of ITER than in non-D-T operations.
A demonstration that disruption and runaway issues can be adequately addressed for practical tokamak fusion power will have to wait approximately thirty years, until power plants have operated for an adequate period of time. A negative conclusion on practicality could come sooner: after fifteen years, when D-T operations start on ITER, or after five years, when ITER starts plasma operations. In a 2021 paper, Nicholas Eidietis recognizes the challenges that disruptions pose to tokamak power plants but remains optimistic that these challenges can be met [33].

Careful thought is required to determine how timescales should be integrated within an overall fusion program designed to minimize risk and time in demonstrating fusion power at the level required for informed decisions on its deployment. Each year's delay in deploying carbon-free energy systems not only costs of order a trillion dollars [29] but also affects security worldwide.

Acknowledgements: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences under Award Numbers DE-FG02-03ER54696, DE-SC0018424, and DE-SC0019479.

[1] R. Hawryluk and H. Zohm, The challenge and promise of studying burning plasmas: Answers to open questions that will be addressed by the ITER experiment should enable the production of fusion energy, Physics Today 72, issue 12, page 34 (December 2019).
[2] E. J. Strait, J. L. Barr, M. Baruzzo, J. W. Berkery, R. J. Buttery, P. C. de Vries, N. W. Eidietis, R. S. Granetz, J. M. Hanson, C. T. Holcomb, D. A. Humphreys, J. H. Kim, E. Kolemen, M. Kong, M. J. Lanctot, M. Lehnen, E. Lerche, N. C. Logan, M. Maraschek, M. Okabayashi, J. K. Park, A. Pau, G. Pautasso, F. M. Poli, C. Rea, S. A. Sabbagh, O. Sauter, E. Schuster, U. A. Sheikh, C. Sozzi, F. Turco, A. D. Turnbull, Z. R. Wang, W. P. Wehner, and L. Zeng, Progress in disruption prevention for ITER, Nucl. Fusion 59, 112012 (2019).
[3] D. I. Kiramov and B. N. Breizman, Force-free motion of a cold plasma during the current quench, Phys. Plasmas 25, 092501 (2018).
[4] A. H. Boozer, Halo currents and vertical displacements after ITER disruptions, Phys. Plasmas 26, 114501 (2019).
[5] C. F. Clauser and S. C. Jardin, ITER cold VDEs in the limit of perfectly conducting walls, Phys. Plasmas 28, 012511 (2021).
[6] P. C. de Vries, T. C. Luce, Y. S. Bae, S. Gerhardt, X. Gong, Y. Gribov, D. Humphreys, A. Kavin, R. R. Khayrutdinov, C. Kessel, S. H. Kim, A. Loarte, V. E. Lukash, E. de la Luna, I. Nunes, F. Poli, J. Qian, M. Reinke, O. Sauter, A. C. C. Sips, J. A. Snipes, J. Stober, W. Treutterer, A. A. Teplukhina, I. Voitsekhovitch, M. H. Woo, S. Wolfe, L. Zabeo, the Alcator C-MOD team, the ASDEX Upgrade team, the DIII-D team, the EAST team, JET contributors, the KSTAR team, the NSTX-U team and the TCV team and ITPA IOS members and experts, Multi-machine analysis of termination scenarios with comparison to simulations of controlled shutdown of ITER discharges, Nucl. Fusion 58, 026019 (2018).
[7] C. Rea, K. J. Montes, K. G. Erickson, R. S. Granetz, and R. A. Tinguely, A real-time machine learning-based disruption predictor in DIII-D, Nucl. Fusion 59, 096016 (2019).
[8] T. Yokoyama, T. Sueyoshi, Y. Miyoshi, R. Hiwatari, Y. Igarashi, M. Okada, and Y. Ogawa, Disruption Prediction by Support Vector Machine and Neural Network with Exhaustive Search, Plasma and Fusion Research 13, 3405021 (2018).
[9] J. Kates-Harbeck, A. Svyatkovskiy, and W.
Tang, Predicting disruptive instabilities in controlled fusion plasmas through deep learning, Nature 568, 526 (2019).
[10] A. Murari, M. Lungaroni, M. Gelfusa, E. Peluso, J. Vega, and JET Contributors, Adaptive learning for disruption prediction in non-stationary conditions, Nucl. Fusion 59, 086037 (2019).
[11] Y. C. Fu, D. Eldon, K. Erickson, K. Kleijwegt, L. Lupin-Jimenez, M. D. Boyer, N. Eidietis, N. Barbour, O. Izacard, and E. Kolemen, Machine learning control for disruption and tearing mode avoidance, Phys. Plasmas 27, 022501 (2020).
[12] A. Piccione, J. W. Berkery, S. A. Sabbagh, and Y. Andreopoulos, Physics-guided machine learning approaches to predict the ideal stability properties of fusion plasmas, Nucl. Fusion 60, 046033 (2020).
[13] R. M. Churchill, B. Tobias, Y. Zhu, and the DIII-D Team, Deep convolutional neural networks for multi-scale time-series classification and application to tokamak disruption prediction using raw, high temporal resolution diagnostic data, Phys. Plasmas 27, 062510 (2020).
[14] K. Zhang, D. L. Chen, B. H. Guo, J. J. Chen, and B. J. Xiao, Density limit disruption prediction using a long short-term memory network on EAST, Plasma Science and Technology 22, 115602 (2020).
[15] P. C. de Vries, G. Pautasso, D. Humphreys, M. Lehnen, S. Maruyama, J. A. Snipes, A. Vergara, and L. Zabeo, Requirements for Triggering the ITER Disruption Mitigation System, Fusion Science and Technology 69, 471 (2016).
[16] M. Sertoli, J. C. Flannegan, A. Cackett, E. Hodille, P. de Vries, I. H. Coffey, B. Sieglin, S. Marsen, S. Brezinsek, G. F. Matthews, J. W. Coenen, and JET-EFDA Contributors, Transient impurity events in JET with the new ITER-like wall, Physica Scripta T159, 014014 (2014).
[17] Y. Ueda, K. Tobita, and Y. Katoh, PSI issues at plasma facing surfaces of blankets in fusion reactors, J. Nucl. Mat. 313-316, 32 (2003).
[18] B. N. Breizman, P. Aleynikov, E. M. Hollmann, and M. Lehnen, Review: Physics of runaway electrons in tokamaks, Nucl. Fusion 59, 083001 (2019).
[19] A. H. Boozer, The interaction of the ITER first wall with magnetic perturbations, Nucl. Fusion (2021), in press, https://doi.org/10.1088/1741-4326/abe226.
[20] P. C. de Vries and Y. Gribov, ITER breakdown and plasma initiation revisited, Nucl. Fusion 59, 096043 (2019).
[21] M. Greenwald, Density limits in toroidal plasmas, Plasma Phys. Control. Fusion 44, R27 (2002).
[22] John Wesson, Tokamaks, International Series of Monographs on Physics 118, Oxford University Press, Oxford, 3rd edition, 2004.
[23] A. H. Boozer, Pivotal issues on relativistic electrons in ITER, Nucl. Fusion 58, 036006 (2018).
[24] C. J. McDevitt, Z. Guo and X. Tang, Avalanche mechanism for runaway electron amplification in a tokamak plasma, Plasma Phys. Control. Fusion 61, 054008 (2019).
[25] L. Hesslow, O. Embréus, O. Vallhagen, and T. Fülöp, Influence of massive material injection on avalanche runaway generation during tokamak disruptions, Nucl. Fusion 59, 084004 (2019).
[26] J. W. Connor and R. J. Hastie, Relativistic limitations on runaway electrons, Nucl. Fusion 15, 415 (1975).
[27] O. Vallhagen, O. Embréus, I. Pusztai, L. Hesslow, and T. Fülöp, Runaway dynamics in the DT phase of ITER operations in the presence of massive material injection, J. Plasma Physics 86, 475860401 (2020).
[28] D. van Houtte, ITER framework for RAMI engineering, Fusion Science and Technology 75, 1064 (2019).
[29] A. H. Boozer, Why carbon dioxide makes stellarators so important, Nucl. Fusion 60, 065001 (2020).
[30] M. Siccinio, W. Biel, M.
Cavedon, E. Fable, G. Federici, F. Janky, H. Lux, F. Maviglia, J. Morris, F. Palermo, O. Sauter, F. Subba, and H. Zohm, DEMO physics challenges beyond ITER, Fusion Engineering and Design 156, 111603 (2020).
[31] W. Biel, R. Albanese, R. Ambrosino, M. Ariola, M. V. Berkel, I. Bolshakova, K. J. Brunner, R. Cavazzana, M. Cecconello, S. Conroy, A. Dinklage, I. Duran, R. Dux, T. Eade, S. Entler, G. Ericsson, E. Fable, D. Farina, L. Figini, C. Finotti, Th. Franke, L. Giacomelli, L. Giannone, W. Gonzalez, A. Hjalmarsson, M. Hron, F. Janky, A. Kallenbach, J. Kogoj, R. König, O. Kudlacek, R. Luis, A. Malaquias, O. Marchuk, G. Marchiori, M. Mattei, F. Maviglia, G. De Masi, D. Mazon, H. Meister, K. Meyer, D. Micheletti, S. Nowak, Ch. Piron, A. Pironti, N. Rispoli, V. Rohde, G. Sergienko, S. El Shawish, M. Siccinio, A. Silva, F. da Silva, C. Sozzi, M. Tardocchi, M. Tokar, W. Treutterer, and H. Zohm, Diagnostics for plasma control: From ITER to DEMO, Fusion Engineering and Design 146, 465 (2019).
[32] National Academies of Sciences, Engineering, and Medicine, Bringing Fusion to the U.S. Grid, Washington, DC: The National Academies Press (2021), https://www.nap.edu/download/25991#
[33] N. Eidietis, Prospects for Disruption Handling in a Tokamak-Based Fusion Reactor, accepted for publication in Fusion Science and Technology, https://doi.org/10.1080/15361055.2021.1889919.
ai_researcher
3
Limitations_of_the_LLM-as-a-Judge_Approach_for_Evaluating_LLM_Outputs_in_Expert_Knowledge_Tasks.pdf
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 1 LLM Online Spatial-temporal Signal Reconstruction Under Noise Yi Yan, Student Member, IEEE, Dayu Qin, Student Member, IEEE, and Ercan E. Kuruoglu, Senior Member, IEEE 4 2 0 2 v o N 4 2 ] G L . s c [ 1 v 4 6 7 5 1 . 1 1 4 2 : v i X r a Abstract—This work introduces the LLM Online Spatial- temporal Reconstruction (LLM-OSR) framework, which inte- grates Graph Signal Processing (GSP) and Large Language Models (LLMs) for online spatial-temporal signal reconstruction. The LLM-OSR utilizes a GSP-based spatial-temporal signal handler to enhance graph signals and employs LLMs to predict missing values based on spatiotemporal patterns. The perfor- mance of LLM-OSR is evaluated on traffic and meteorological datasets under varying Gaussian noise levels. Experimental results demonstrate that utilizing GPT-4-o mini within the LLM- OSR is accurate and robust under Gaussian noise conditions. The limitations are discussed along with future research insights, emphasizing the potential of combining GSP techniques with LLMs for solving spatiotemporal prediction tasks. Index Terms—Large Language Model, Graph Signal Process- ing, spatial-temporal graph, online prediction. I. INTRODUCTION R ECENT advancements in artificial intelligence have led to breakthroughs in many fields such as healthcare diagnostics [1] and investment portfolio construction [2], culminating in the development of Large Language Models. Large Language Models (LLMs) are a type of artificial intelli- gence model that is designed for natural language processing (NLP) tasks for its ability to understand and generate large- scale texts [3]. BERT is a groundbreaking predecessor to modern LLMs by demonstrating the power of bidirectional transformers for natural language understanding [4]. Modern LLMs, such as GPT-3 [5], GPT-4 [6], ERNIE [7] and Kimi AI [8], are trained on datasets with billions of words and typically transformers architecture, to manipulate natural lan- guage [9]. Several applications of LLMs have been explored: LLMs enhance interactive machine translation by delivering high-quality initial translations, adapting efficiently to user feedback, and minimizing training costs [10]; LLMs enhance sentiment analysis by generating domain-specific weak labels and enabling efficient model distillation for practical applica- tions. [11]. However, LLMs still exhibit certain limitations. For example, if the information provided to LLMs is insuffi- cient or the prompt is misleading, inaccurate, or incomplete healthcare decisions can be made by LLMs which can lead to physical or psychological harm [12]. Despite their strength in processing text-based information, LLMs remain limited in Yi Yan and Dayu Qin contributed equally. Yi Yan was affiliated with the Tsinghua-Berkeley Shenzhen Institute, Shenzhen International Graduate School, Tsinghua University, during the completion of this work. Dayu Qin and Ercan E. Kuruoglu are currently affiliated with the Tsinghua- Berkeley Shenzhen Institute, Shenzhen International Graduate School, Ts- inghua University. Corresponding author: Ercan E. Kuruoglu; e-mail: ku- [email protected]. handling multivariate data structures, necessitating exploration into methods like Graph Signal Processing (GSP) and Graph Neural Networks (GNNs) to analyze and model such complex datasets effectively. 
Graph-based methods provide a powerful framework for modeling and analyzing correlations in complex multivariate data and have been applied in many fields, such as neurological disorders screening [13] and financial crisis prediction [14]. There are some application scenarios that are better suited for graph methods compared with computer vision (CV) and natural language processing (NLP) approaches, for instance, social network analysis [15], [16], traffic prediction [17], and quantum computing [18]. By exploiting the graph topology, interactions among multivariate data are captured along with the multivariate data itself, offering task performance that outperforms non-graph algorithms. In addition, GSP with spectral approaches enables efficient representation and extraction of spectral patterns in addition to the spatial patterns seen in graphs, and has a wide range of applications, including flaw detection in wire-based directed energy deposition [19] and electroencephalography signal processing [20]. Graph-based methods are conventionally applied to static machine learning tasks such as classification, regression (on time-invariant data), and clustering, which involve data analysis without considering temporal changes [21], [22]. However, some particular tasks, such as traffic prediction, climate modeling, and financial forecasting, rely heavily on capturing spatial-temporal dependencies. Spatial-temporal graph algorithms, GSP and GNNs, were designed to solve these kinds of time-varying tasks and have succeeded in dealing with such challenges by effectively modeling relationships and temporal dynamics in graph-structured data [23], [24], [25], [17].

Some recent studies have further explored the integration of GSP with Large Language Models (LLMs), revealing substantial advantages. Firstly, this integration potentially expands the application of LLMs by enabling them to process and analyze time-varying graph-structured data, which allows LLMs to engage in graph-based reasoning across varied scenarios such as academic and e-commerce networks [26]. Secondly, GSP can leverage LLMs to process time-varying, multivariate data from a text-based perspective, providing a novel angle for analyzing dynamic complex networks [26]. Notably, LLMs on graphs perform well not by relying on data leakage but because they interpret graphs as languages, where the node label (signal) is deemed more crucial than the structure itself [27]. Additionally, the InstructGLM framework demonstrates how LLMs can effectively represent graph structures through natural language for node classification in citation networks. This approach eliminates the need for complex GNN pipelines and unifies graph learning with natural language processing, showcasing the potential of LLMs in graph tasks [28].

In our work, we introduce a novel method, the Large Language Model for Online Spatial-temporal Reconstruction (LLM-OSR) algorithm, which combines the strengths of the time-varying GSP method and LLMs for efficient and accurate reconstruction of missing signals in dynamic spatiotemporal complex networks. This integrated approach uses GSP to denoise or enhance signal features, ensuring high-quality input for LLM-based prediction tasks. The combination of GSP with LLMs presents noteworthy advantages. Firstly, this integration has the potential to expand the application of LLMs by enabling them to process and analyze time-varying graph-structured data.
Secondly, GSP can leverage LLMs to process time-varying, multivariate data from a text-based perspective, providing a novel perspective for analyzing dynamic complex networks. There are 2 main contributions in our paper:

• We introduce the LLM-OSR algorithm, a novel approach to reconstructing spatial-temporal signals in an online manner by seamlessly combining GSP-based techniques with LLM-driven predictors. This innovative approach effectively reconstructs time-varying graph signals in the presence of noise and missing values within spatiotemporal data.

• The LLM-OSR employs a sophisticated reverse embedding approach to transform spatial-temporal signals on graphs into coherent and contextually meaningful natural language expressions, making the information readily interpretable and actionable by LLMs.

Here is the organization of this paper. The preliminary knowledge is presented in Section II. Section III provides a detailed discussion of the LLM-OSR. The experimental results and corresponding discussions are covered in Section IV. Section V provides an in-depth discussion of the limitations of LLM-OSR and outlines potential directions for future research. Finally, Section VI concludes the paper.

II. PRELIMINARY KNOWLEDGE

A. GSP Preliminaries

In this paper, we consider an undirected and unweighted graph G = (V, E), where V = {v1, . . . , vN} is the set of nodes or vertices, and E is the set of edges. We can represent the topology of the graph using the adjacency matrix A ∈ R^{N×N}, whose elements are defined as follows:

    Aij = 1 if there is an edge between vi and vj, and Aij = 0 otherwise.    (1)

The time-varying graph signal x[t] ∈ R^N is the multivariate numerical data recorded on the graph nodes that changes over time. In GSP, spectral operations are defined using the graph Laplacian matrix L ∈ R^{N×N}:

    L = D − A,    (2)

where D is the degree matrix, defined as D = diag(1^T A), and 1 is an all-ones vector. Spectral operations are conducted in GSP by using the GFT as the analogy of the classical Fourier transform, which can be realized by the eigendecomposition of L:

    L = U diag(λ) U^T,    (3)

where U is the orthonormal eigenvector matrix and λ is the vector of eigenvalues. In GSP, the Laplacian matrix L of an undirected and unweighted graph is a symmetric positive semi-definite matrix. The eigenvectors serve as graph Fourier bases, and the eigenvalues represent graph frequencies. The eigenvalue-eigenvector pairs are sorted in increasing order: smaller eigenvalues correspond to smoother variations (low frequencies) in the graph signal, and larger eigenvalues correspond to rapid variations (high frequencies) [29]. Spectral operations can be conducted by applying filters Σ_{f=1}^{F} h(λ)^f to the signal x in the spectral domain through the GFT:

    x̃ = U (Σ_{f=1}^{F} h(λ)^f) U^T x,    (4)

where x̃ is the processed signal. The formulation in (4) is also known as the graph convolution. By implementing various filters, such as high-pass and low-pass filters, we can perform tasks like denoising and feature enhancement [30].
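Since (1)–(4) form the computational core of everything that follows, a short sketch may help make them concrete. The following Python/NumPy snippet is a minimal illustration of the Laplacian construction, the GFT, and the spectral filtering in (4); it is not the authors' implementation, and the function names and the two-tap filter are our own illustrative choices.

```python
import numpy as np

def graph_fourier_basis(A):
    """Laplacian L = D - A as in (2) and its eigendecomposition as in (3)."""
    D = np.diag(A.sum(axis=1))           # degree matrix D = diag(1^T A)
    L = D - A                            # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)           # L is symmetric PSD, so eigh applies
    return L, lam, U                     # lam ascending: low -> high graph freq.

def spectral_filter(x, U, lam, h):
    """Graph convolution (4): x_tilde = U (sum_f h_f lam^f) U^T x."""
    H = sum(h_f * lam**f for f, h_f in enumerate(h, start=1))
    return U @ (H * (U.T @ x))           # apply diag(H) in the spectral domain

# Toy usage on a 4-node path graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
_, lam, U = graph_fourier_basis(A)
x = np.array([1.0, 2.0, 2.5, 3.0])
x_tilde = spectral_filter(x, U, lam, h=[0.5, -0.1])   # F = 2 filter taps
```

Because the eigenvalues are sorted in increasing order, choosing taps that shrink the contribution of large λ yields a low-pass (smoothing) filter, which is the behaviour the denoising handler in Section III relies on.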
B. LLM Preliminaries

The Generative Pre-trained Transformer 3 (GPT-3) is a transformer-based, large-scale autoregressive language model developed by OpenAI. With 175 billion parameters, it is significantly larger than its predecessor, GPT-2, and was designed to excel in task-agnostic few-shot learning. GPT-3 can perform a wide array of NLP tasks, such as text generation, translation, summarization, and question answering, without requiring task-specific fine-tuning. Instead, it leverages in-context learning, where the model can adapt to new tasks by interpreting prompts and few-shot demonstrations directly within its input context [31]. GPT-3 is capable of performing various tasks, such as creative writing and generating code, highlighting its broad applicability [5]. Despite these strengths, GPT-3 has notable limitations, such as struggles with long-term coherence, logical consistency, and susceptibility to generating factually incorrect or biased content. Building on the GPT-3 foundation, GPT-4 introduces several key advancements that address some of these limitations of GPT-3. GPT-4 offers improved performance across a wider range of tasks and demonstrates enhanced capabilities in reasoning, inference, and contextual understanding by incorporating improvements such as multimodal processing, a larger context window, and optimized scaling laws to deliver more accurate and reliable outputs [32]. These technical improvements enable GPT-4 to tackle more complex tasks, such as multimodal reasoning and handling intricate logical chains, with greater precision. For example, GPT-4 is capable of scoring in the top 10% on simulated bar exams, compared to GPT-3.5, which could score only at the bottom 10% [33].

Fig. 1. An overview of the LLM-OSR workflow.

III. METHODOLOGY

A. Methodology Overview

The LLM-OSR algorithm reconstructs missing graph signals by combining GSP-based processing and LLM prediction. To provide an intuition of the LLM-OSR, it first enhances signal features with GSP and then predicts missing values via the LLM for time-varying spatiotemporal data. The entire process operates online, meaning that the spatiotemporal reconstruction is performed in real time as new signal observations are continuously received and processed by the LLM-OSR. The signal observation model used in LLM-OSR can be expressed as

    o[t] = M(xg[t] + ϵ[t]),    (5)

where M is the observation mask, xg[t] is the ground truth graph signal, and ϵ[t] is the i.i.d. zero-mean additive Gaussian noise on the graph signal. Consider a graph G with a total of N nodes and a subset of these nodes O ⊆ V, where only O out of the N nodes are observed. Using the observation set O, we can construct a masking matrix M to model the signal observation:

    M = diag([O(v1), . . . , O(vN)]^T),    (6)

where the membership of a node vi in the observation set is defined as:

    O(vi) = 1 if vi ∈ O, and O(vi) = 0 if vi ∉ O.    (7)

Here, O(vi) acts as an indicator function that determines whether vi belongs to the observed set, and M is a diagonal matrix with entries corresponding to these indicator values. This partial-observation assumption is fundamental in various applications, including climate change analysis [34], skeleton-based gait recognition [35], and brain studies [36]. The workflow of one iteration of LLM-OSR is shown in Figure 1. An overview of the LLM-OSR framework is shown in Algorithm 1.

Algorithm 1 LLM-OSR Overview
1: Train Phase:
2: Initialize and train the GSP-based spatial-temporal signal handler as seen in Algorithm 2
3: Testing Phase:
4: Deploy the trained GSP-based spatial-temporal signal handler
5: Prepare the LLM-based spatial-temporal signal predictor
6: while new observations o[t] are available do
7:   Gather x̂[t − 1], the previous signal estimate
8:   Process the observations o[t] with the GSP-based spatial-temporal signal handler and collect the processed observations õ[t]
9:   Pass õ[t] and x̂[t − 1] into the LLM-based spatial-temporal signal predictor
10:  Operate the LLM-based spatial-temporal signal predictor as seen in Algorithm 3
11:  Collect the reconstructed x̂[t]
12: end while
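To make the observation model concrete, here is a minimal sketch of sampling o[t] = M(xg[t] + ϵ[t]) from (5)–(7). The i.i.d. Gaussian noise and the fixed observation set follow the text; the 70% observation ratio anticipates the experimental setting in Section IV, and all variable names are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(x_g, observed, noise_var):
    """o[t] = M (x_g[t] + eps[t]) as in (5); M built from the indicator (6)-(7)."""
    eps = rng.normal(0.0, np.sqrt(noise_var), size=x_g.shape)  # zero-mean Gaussian
    M = np.diag(observed.astype(float))   # M = diag([O(v_1), ..., O(v_N)])
    return M @ (x_g + eps)                # unobserved entries are zeroed out

N = 6
x_g = rng.normal(size=N)                  # ground-truth signal x_g[t]
observed = rng.random(N) < 0.7            # fixed observation set, ~70% of nodes
o_t = observe(x_g, observed, noise_var=1.0)
```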
B. GSP-based Spatial-temporal Signal Handler

The proposed GSP-based spatial-temporal signal handler aims to leverage GSP techniques to denoise and enhance spatial-temporal signals in order to prepare the data for the LLM-based spatial-temporal signal predictor.

During the training phase of LLM-OSR, the goal is to learn the optimal filter parameters within the GSP-based spatial-temporal signal handler from the training data. In our case, since we assume that the signal observations received by the LLM-OSR contain noise and missing nodes, we would like to let the GSP-based spatial-temporal handler perform a denoising task and feed the denoised observations to the LLM-based spatial-temporal signal predictor. To give an overview, the LLM-OSR takes a universal approach to learning the filter parameters Σ_{f=1}^{F} h(λ)^f in (4) by iteratively applying the graph convolution (4) to the training data, calculating the loss of the estimation, and updating the filter parameters. We augment the training set by concatenating multiple copies to increase the number of time samples. During each iteration, the GSP-based spatial-temporal handler is trained using data from a single time instance. First, we apply the graph convolution operation in (4) to obtain the signal representation. Next, we compute the Mean Absolute Error (MAE) between the predicted output and the ground truth at each time instance t, serving as the performance metric for the current filter:

    MAE[t] = (1/N) Σ_{n=1}^{N} |xg,n[t] − x̃n[t]|,    (8)

where x̃ is the processed graph signal, x̃n[t] is the processed signal on the nth node within x̃, and xg,n[t] is the signal on the nth node within the ground truth signal xg[t].
We assume that the signals in the training set and the testing set have similar spectrums. During the test phase, the pre-trained GSP-based spatial-temporal signal handler is applied to unseen test sam- ples o[t] as they are received in real-time to enhance their quality by denoising. In other words, since each test sample o[t] is an observation with missing values and noise, the pre- trained GSP-based spatial-temporal signal handler is applying a series of graph filters (cid:80)F f =1 h(λ)f to o[t] to denoise using the graph convolution (4). Since the process is completed through the graph convolution (4), the signal observations are processed with the knowledge of the underlying spatial structure of the graph G. The logic of training and deploying the GSP-based spatial-temporal signal handler data can be found in Figure 2. C. LLM-based spatial-temporal signal predictor The first step of our LLM-based spatial-temporal signal predictor is a reverse embedding function to process the denoised signals. In NLP tasks, the embedding process typi- cally involves transforming words, phrases, or sentences into numerical vectors by mapping them into a designated feature space [37]. In conventional graph embedding tasks, data on the graph is typically transformed into a feature space using methods such as GCN or Node2Vec, similar to traditional NLP embeddings [38]. Here in the LLM-OSR, we take a reverse approach to the conventional embedding approaches such as the Node2Vec. Since the time-varying graph signal observations o[t] are already numerical, instead of embedding data into designated space, we directly represent the local Algorithm 2 GSP-based spatial-temporal signal handler 1: Train Phase: 2: Given the training data x[1] . . . x[T ] 3: Initialize graph Laplacian L = D − A (2) and GFT L = UΛUT (3) 4: Initialize filter parameters h(Λ) 5: while halting condition not met do 6: Apply ˜x[t] = Udiag(h(Λ))UT x[t], the graph convolution, as seen in (4) Compute MAE at each time instance t: MAE[t] = 1 N Calculate gradient ∇hMAE with respect to h(Λ) Update filter parameters using gradient descent: h(Λ) = h(Λ) − η · ∇hMAE as seen in (9) n=1 |xg,n[t] − ˜xn[t]| as seen in (8) (cid:80)N 10: end while 11: Export the parameters after training is complete 12: Test Phase: 13: for each received test observation sample o[t] containing noise and missing node observation do Process o[t] using graph convolution and the trained filter diag(h(Λ)): ˜o[t] = Udiag(h(Λ))UT o[t] Apply ˜o[t] to be fed into LLM for reconstruction 15: 16: end for 7: 8: 9: 14: topological connections of nodes on the graph in the spatial domain through LLM natural language expressions (English passages), which consists of text and numbers. Mathemati- cally, extracting the 1-hop localized neighbors of the node vi can be achieved by identifying the nonzero entries in the ith row of the adjacency matrix A. This can be expressed as: Neighbors(vi) = Avi = {vj | A[i, j] ̸= 0, j ̸= i}, (10) where Neighbors(vi) represents the set of neighboring nodes of vi . Notice that in our problem setting, there is only a subset of the nodes being observed. So in the implementation of LLM-OSR, we only consider the processed node signals from the observed node neighbors. These expressions are then further put into the prompt as the tasks T (vi) on each node to be solved by the LLMs, specifying the LLM to conduct a prediction based on the spatial-temporal information. 
In the spatial domain, the task T (vi) will consist of an expression of which nodes are neighbors to node vi and the processed observed values of its neighbors. In the temporal domain, the task T (vi) will consist of an expression of the past estimated signal values of node vi. Conceptually, we Iterate until denoise objective metGFTIGFTGSP filter learningTraining set: noise signal 𝒙1… 𝒙TGSP-based Spatiotemporal Signal HandlerObtain GSP model JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 5 Fig. 3. The LLM-based Spatial-temporal Signal Predictor. can understand the task of the LLM as to aggregate neighbor signals of vi with self-aggregation; this task can be denoted as: T (vi) = agg ({(ˆxi[t − 1], ˜xj[t]) | j ∈ (Avi ∪ O)}) . (11) This approach shifts the role of the LLM from performing logical reasoning or mathematical calculations, which is a known weakness of the LLM, to generating data based on semantic understanding and context, leveraging the strengths of LLM in processing natural language descriptions. The aim of the LLM-based Spatial-temporal Signal predictor is to let the LLM leverage the smoothness assumption to infer the target values of each node based on the values of its neighboring nodes (spatial) and the past estimation (temporal). The LLMs used within our LLM-OSR framework are GPT- 3.5-turbo and GPT-4-o mini, which belong to the GPT-3 and GPT-4 families, respectively. The GPT-3 leverages in- context learning to perform various tasks—such as translation, question answering, and text completion—without requiring task-specific fine-tuning [31]. The architecture of the GPT-3 is based on transformers and was trained on a diverse dataset sourced primarily from the internet. GPT-4 further builds upon the GPT-3 foundation, incorporating key advancements such as multimodal capabilities, a larger context window, optimized scaling laws, and improved safety mechanisms [32]. These en- hancements result in significant improvements in inference and logical reasoning of GPT-4, demonstrating better performance compared to GPT-3 at understanding prompts, generating more coherent and contextually appropriate responses, and handling complex tasks requiring step-by-step reasoning [33]. These improvements make GPT-4 more suitable for our tasks of online spatial-temporal signal reconstruction, which involves complex reasoning and spatial-temporal pattern recognition. The integration of LLMs with the rest of the components within the LLM-based Spatial-temporal Signal Predictor is facilitated through the OpenAI API. To be specific, LLM-OSR directly utilizes the pre-trained model provided and deployed on the OpenAI server. That is, the LLM-OSR does not perform any additional training or fine-tuning. The API provides access to the vanilla model as-is, leveraging its existing knowledge and capabilities to generate predictions. This allows us to take full advantage of the robust, generalized understanding embedded in the model, ensuring a streamlined and efficient process for spatial-temporal signal forecasting. We followed the approach of structuring the interaction of the reversely embedded graph signal expression with the LLM using a dual-role setup where the LLM contains a system role and a user role. The system role serves as the super- vision guide, providing the global task context and specific constraints to shape the behavior of the LLM. 
For our spa- tiotemporal task, this role defines the objective as predicting the current value of a graph node based on its previous value and the values of its neighbors, while also enforcing strict output requirements. In our LLM-OSR, the system instructs the LLM to produce only a single numeric value as output, rounded to a certain decimal place, and to avoid any extraneous text or reasoning. This ensures consistency and simplicity in the generated responses. Below is an example of system role content we used in LLM-OSR: The spatiotemporal task is to predict the current number on a graph based on its previous value and the value of its neighbors. The user role, on the other hand, dynamically generates task-specific prompts that supply the LLM with the required details for each individual prediction. Here is an example of directly expressing the neighborhood relationship along with the signals for the user role using natural language expression: Each indexed content is independent. Make 1 nu- meric prediction per indexed context. Precision round to 1 decimal point. Do not output text. Do not recall memories. Time 1439, Entity index: 322. Previous: 61.5, Neighbors: [63.9, 57.4]. These prompts include precise temporal and spatial context, such as the time index, the node index, the previous value of the node, and a list of observed values from its neighbors. By structuring the user prompts this way, we ensure that each query to the LLM is both clear and contextually complete, reducing ambiguity in the response. Furthermore, the interaction process is optimized by batch- ing multiple prompts for efficiency. When a new observation is received in the LLM-OSR, the user role creates a batch of prompts corresponding in real-time. These batched prompts are sent to the LLM in a single API call, with each prompt corresponding to the task T (vi) on a single node, leveraging the prompt-response structure to efficiently handle multiple predictions. Notice that we further included an error-checking function for invalid LLM responses. If an LLM call fails, such as when the LLM response does not include a valid numeric value, our implementation automatically retries the failed predictions. This retry mechanism regenerates the prompts for the unresolved cases and resubmits them to the LLM, with a LLM as predictorGSP denoised signal 𝒙(cid:3557) [𝑡]Predicted signal 𝒙(cid:3549) [𝑡+1]LLM:𝒯(𝑣(cid:2869))…𝒯(𝑣(cid:3041))Node-level task 𝒯𝑣(cid:3041)of each missing node 𝑣(cid:3041)𝒯𝑣(cid:2869) ... 𝒯𝑣(cid:3041)Localized representationsPredict𝑛𝑒𝑥𝑡signalNextobservation of 𝑣(cid:3041)?Previous estimation of 𝑣(cid:3041): 59.2Neighbors of 𝑣(cid:3041): 67.9, … , 68.8Prompting logic: JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 6 predefined maximum number of retries. This additional error- checking function ensures that exceptions and invalid LLM outputs are handled robustly, even in cases where the LLM struggles to produce a valid prediction. By combining the system and user roles with these efficient handling mechanisms, this structured approach allows the LLM to focus on its core strength of generating numeric predictions while providing the necessary temporal and spatial context for accurate, context-aware results. An illustration of the prompts that provide the content and context to the LLM for solving the node-level tasks T (vi)(cid:12) (cid:12)v=1...N can be found in Figure 4. 
Algorithm 3 LLM-based spatial-temporal signal predictor 1: Initialize the LLM model and form the system and user role prompt templates 2: while new processed observations ˜o[t] are available from 3: 4: 5: 6: 7: 8: the GSP-based spatial-temporal signal handler do Collect the output from the GSP-based filter spatial-temporal signal handler for each missing node vi in set V − O do Collect ˆxi[t − 1], the previous estimation of vi Collect node neighborhood for vi using (10) and the corresponding processed observed neighbor signals of node vi in ˜o[t] Form LLM task T (vi) from aggregation (11) and their corresponding prompts Feed T (vi) and their corresponding prompts into the LLM Let LLM predict ˆxi[t] end for (Retry if output invalid) Collect all node reconstructions and map them to ˆx[t] 9: 10: 11: end while IV. EXPERIMENTS AND DISCUSSION A. Experiment Setting Here we will provide a brief discussion about the datasets, tested algorithms, and experiment settings. 1) Dataset Description: • Traffic Data: We utilize the publicly available Seattle Loop Detector Dataset [39], which contains traffic flow data collected from loop detectors on the highways in the Seattle area. This dataset provides hourly traffic readings that are essential for analyzing spatiotemporal patterns. The experimental setup includes the addition of Gaussian noise with variances of 1.0 and 1.5 to evaluate the robustness of the models under varying levels of data corruption. The graph topology is constructed by mapping the physical locations of N = 323 loop detectors to their corresponding positions along the actual highway path. Each loop detector is a node on the graph G. The traffic speed is recorded in 5-minute intervals. We selected a sub-portion of the signal consisting of 7 days of reading, making the size of the data R323×2016. The recordings from the first 576 will be in the training set to tune or learn the model parameters and the rest are in the testing set. An illustration of 4 different time instances of this time-varying dataset can be found in Figure 5, • Meteorological Data: Hourly wind speed and temper- ature data are obtained from NOAA [40]. Because of the behavior differences between wind speed and tem- perature, we analyze them as separate datasets in the experiments. Each node in the dataset corresponds to a geographic location defined by its latitude and longitude. We selected N = 197 stations that contain no missing recordings in 3 consecutive days, giving us R197×96. We split the first 24 time steps into the training set and the rest into the testing set. To capture the spatial dependen- cies among nodes, a k-nearest-neighbor (kNN) graph is constructed, where the edges’ weights are computed by using a Gaussian kernel method which is described in the GNLMS framework [41]. the node observation ratio to be 70% for all We set the datasets. The missing nodes are missing throughout the entire experiment, making it challenging to infer the missing signals without the utilization of the graph topology. The goal of the experiments is that given an observation o[t] that is only partially observed, reconstruct the ground truth signal xg[t] from the observation and past p estimations. 2) Considered Algorithms: We consider 2 distinct settings of the LLM-OSR. The first setting is to use GPT-3.5 turbo as the LLM within the LLM-OSR; we denote this setting as the LLM-OSR-3.5. The second setting is to use GPT-4-o mini as the LLM within the LLM-OSR, denoted as LLM-OSR-4. 
2) Considered Algorithms: We consider 2 distinct settings of the LLM-OSR. The first setting uses GPT-3.5-turbo as the LLM within the LLM-OSR; we denote this setting as LLM-OSR-3.5. The second setting uses GPT-4-o mini as the LLM within the LLM-OSR, denoted as LLM-OSR-4. The 2 LLM-OSR variants will be evaluated against a variety of baseline algorithms, including graph adaptive filters, graph time-series analysis algorithms, and GNNs:

• GLMS [42]: An adaptive filter designed for online graph signal estimation under Gaussian noise, derived from an LMS optimization problem.
• GNLMS [43]: A variant of the GLMS that incorporates spectral normalization to enhance performance.
• GNS [44]: An adaptive filter developed for online graph signal estimation under impulsive noise, derived from an L1 optimization problem.
• GCN [22]: A widely recognized graph neural network (GNN) where each layer applies the graph convolution (4), incorporating spatial normalization and a non-linear activation function.
• GVARMA [45]: A time-series analysis method that extends the classical VARMA model into the graph signal processing (GSP) domain by defining ARMA parameters using the graph Fourier transform (GFT) in the graph spectral domain.
• GGARCH [46]: A time-series analysis method analogous to the classical GARCH model, adapted to the GSP domain by defining GARCH parameters using the GFT in the graph spectral domain.
• RGDAN [17]: A GNN architecture that combines graph diffusion-based modeling with spatial and temporal attention mechanisms to capture complex relationships in graph-structured data.

Notice that, other than the adaptive filters and the LLM-OSRs, the other algorithms are offline algorithms during training.
This perfor- mance disparity highlights the enhanced modeling capabilities of GPT-4, which can better understand the prompt and node- level signals and adapt to noisy inputs compared to GPT-3.5. It also shows that earlier LLM versions like GPT-3.5 may lack the robustness required for spatiotemporal graph tasks. TABLE I EXPERIMENT RMSE FOR SEATTLE LOOP DATASET Model 1.0 1.5 LLM-OSR-3.5 LLM-OSR-4 GLMS GNLMS GNS GCN GVARMA GGARCH RGDAN 12.23 ± 3.9e+00 4.05 ± 1.6e-01 8.04 ± 2.1e-03 7.91 ± 1.2e-03 8.55 ± 1.5e-03 26.93 ± 7e-02 21.75 ± 1.3e-03 21.73 ± 4.0e-04 5.32 ± 3.2e-01 13.76 ± 4.0e-00 4.69 ± 6.8e-03 8.07 ± 2.1e-03 7.91 ± 9.0e-04 8.55 ± 2.3e-03 26.95 ± 4e-02 21.75 ± 1.3e-03 21.73 ± 1.3e-03 6.61 ± 2.1e+00 TABLE II EXPERIMENT MAE FOR SEATTLE LOOP DATASET Model 1.0 1.5 LLM-OSR-3.5 LLM-OSR-4 GLMS GNLMS GNS GCN GVARMA GGARCH RGDAN 3.52 ± 3.8e-01 2.88 ± 2.3e-02 5.09 ± 1.8e-03 4.88 ± 9.3e-04 4.69 ± 1.6e-03 19.22 ± e+00 18.53 ± 3.7e-04 18.54 ± 3.5e-04 3.23 ± 1.3e-01 4.23 ± 3.3e-01 3.62 ± 7.4e-03 5.12 ± 2.4e-03 4.89 ± 7.6e-04 4.70 ± 1.2e-03 19.22 ± e+00 18.53 ± 1.2e-03 18.55 ± 1.2e-03 3.96 ± 1.2e+00 C. LLM-OSR on Weather Prediction The performance of LLM-OSR models on weather predic- tion tasks, as presented in Tables III, IV, V, and VI, highlights their capability in handling spatiotemporal graph data under varying noise conditions. Gaussian noise with variances of 0.2, 0.6, and 1.0 was added to simulate real-world data. The results show that while LLM-OSR-4 excels under lower noise levels, its performance degrades more significantly compared to other models as noise variance increases, which reveals a limitation of current LLM-based approaches. For hourly wind speed prediction, LLM-OSR-4 achieves exceptional results. When the noise variance is 0.2 and 0.6 it outperforms all baselines. For noise variances of 0.6 and 1.0, GPT-4o miniAAAAAAAAAAAA{"id":"chatcmpl-AWML5QOAk67fsedip6q31IqOLB5AU","object":"chat.completion","created":1732275731,"model":"gpt-4o-min","choices":[{"index":0,"message":{"role":"assistant","content":"60.4","refusal":null},"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":106,"completion_tokens":3,"total_tokens":109,"prompt_tokens_details":{"cached_tokens":0,"audio_tokens":0},"completion_tokens_details":{"reasoning_tokens":0,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0}},"system_fingerprint":null,"code":0,"msg":"ok"}AAAAAA{"model":"gpt-4o-mini","messages":[{"role":"system","content":"Thespatiotemporal task is to predict the current number on a graph based on its previous value and the value of its neighbors."},{"role":"user","content":"Eachindexed content is independent. Make 1 numeric prediction per indexed context. Precision round to 1 decimal point. Do not output text. Do not recall memories. Time index: 1439, Entity index: 322. Previous: 61.5, Neighbors: [63.9, 57.4]"}]} JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 8 Fig. 5. The Seattle loop dataset at 4 different time instances. LLM-OSR-4 ranks as the best-performing in terms of RMSE and second-best-performing in terms of MAE. Its second- best-performing cases are surpassed only by RGDAN, a re- cently proposed, sophisticated model that combines spatial and temporal embeddings. RGDAN achieves this by leveraging GNN diffusion attention mechanisms for spatial embeddings and with temporal attention for temporal embeddings. These embeddings are then integrated using transformer attention, making the RDGAN a powerful algorithm [17]. 
it is worth noting that both RDGAN and GPT-4-o mini are attention- based algorithms, which highlights the potential for LLM- OSR to further enhance its performance. We will discuss several potential approaches that could potentially boost the performance of LLM-OSR in the next section. In hourly tem- perature prediction tasks, LLM-OSR-4 maintains its leading performance under low noise conditions for a noise variance of 0.2. However, its performance declines as noise increases com- pared with RGDAN, but is still the second-best-performing algorithm among all the tested algorithms. Looking at the results within the LLMs, the LLM-OSR-4 outperforms LLM- OSR-3.5 again, suggesting that the GPT-4-o mini is a more appropriate choice than the GPT-3.5 turbo for LLM-OSR. The results suggest that the performance of LLM-OSR-4 will degrade as the noise variance increases in comparison to non-LLM methods such as RGDAN. Theoretically, algorithms are expected to perform worse as noise increases, but LLM- OSR-4 appears more sensitive to this degradation. This is likely due to the inherent limitations of LLMs. While the GSP- based spatial-temporal handler is capable of noise reduction, it does not completely eliminate noise. The residual noise within the processed signals ˜o[t] leads to degraded predictions of LLM-OSR. Another potential factor that leads to reduced robustness under high noise conditions is the lack of fine- tuning or retraining of the LLM in the LLM-OSR. We ac- knowledge the limitations and have considered them in our experimental design. A more detailed discussion of this issue will be presented in the next section. Despite these challenges, the strong performance of LLM-OSR-4 under Gaussian noise conditions demonstrates its potential for spatial-temporal sig- nal prediction tasks with noisy real-world datasets. TABLE III EXPERIMENT RMSE FOR HOURLY WIND SPEED PREDICTION Model 0.2 0.6 1.0 LLM-OSR-3.5 LLM-OSR-4 GLMS GNLMS GNS GCN RGDAN GVARMA GGARCH 4.58 ± 9.1e-01 1.39 ± 6.1e-03 2.19 ± 1.8e-03 2.18 ± 1.5e-03 2.38 ± 5.8e-03 2.65 ± 1e-02 1.83 ± 9.0e-02 4.07 ± 6.7e-03 3.71± 1.9e-03 5.11 ± 9.8e-01 1.70 ± 7.5e-03 2.21 ± 2.9e-03 2.20 ± 3.3e-03 2.39 ± 6.7e-03 2.82 ± 4.2e-01 1.89 ± 5.2e-02 4.09 ± 1.0e-02 3.73 ± 3.2e-03 5.06 ± 4.9e-01 1.91 ± 1.1e-02 2.22 ± 5.4e-03 2.21 ± 5.8e-03 2.39 ± 8.2e-03 2.68 ± 5e-02 1.97 ± 5.4e-02 4.11 ± 1.5e-02 3.76 ± 6.7e-03 TABLE IV EXPERIMENT MAE FOR HOURLY WIND SPEED PREDICTION Model 0.2 0.6 1.0 LLM-OSR-3.5 LLM-OSR-4 GLMS GNLMS GNS GCN RGDAN GVARMA GGARCH 2.77 ± 8.8e-01 1.01 ± 6.3e-03 1.75 ± 1.5e-03 1.74 ± 1.3e-03 1.87 ± 3.8e-03 2.10 ± 4.1e-02 1.35 ± 6.2e-02 3.15 ± 4.2e-03 3.10 ± 1.9e-03 3.09 ± 8.4e-01 1.35 ± 6.3e-03 1.76 ± 1.9e-03 1.75 ± 2.1e-03 1.88 ± 4.5e-03 2.22 ± 1.5e-02 1.43 ± 3.5e-02 3.17 ± 6.5e-03 3.12 ± 3.9e-03 5.06 ± 4.9e-01 1.54 ± 1.0e-02 1.77 ± 3.7e-03 1.76 ± 3.8e-03 3.53 ± 5.9e-03 2.12 ± 3.4e-02 1.51 ± 3.6e-02 3.19 ± 9.0e-03 3.14 ± 7.7e-03 V. LIMITATIONS AND FUTURE WORK The LLM-OSR demonstrated an impressive ability to cap- in graph ture complex relationships and patterns inherent structures while conducting 1-step online prediction on time- varying graph signals. However, as we developed the LLM- OSR, several limitations and challenges emerged that must be addressed when leveraging LLMs on time-varying graph signals. Let us discuss the LLM-related limitations in the LLM- OSR. First, LLMs are known to have difficulties in under- standing numerical data [47]. 
In our experiments, there are rare occasions where the LLM will output a NaN instead of giving us a numeric output. This limitation can be addressed in the future when more powerful LLMs are proposed. Diving deeper into this intrinsic limitation of LLMs, we noticed that poorly designed prompts often fail to generate accurate numerical outputs or even any prediction at all. Prompts that are inaccurate or of low quality can significantly impair the capabilities of the LLM [48]. To improve numerical understanding, prompts should be carefully designed to provide clear instructions and context [49].

In LLM-OSR, LLMs are used as predictors, which means that we do not train or tune the LLMs and instead let them make zero-shot predictions. The decision not to fine-tune the LLM-OSR models stems from several considerations. Fine-tuning large language models, such as GPT-4, requires significant computational resources and time. In addition, fine-tuning requires an appropriate balance between the size of the training dataset and the number of parameters in the model. In our case, the available datasets are relatively small compared to the parameter size of GPT-4, making fine-tuning or retraining the LLMs only marginally effective. One potential workaround to retraining and fine-tuning the LLMs would be including examples within the prompt that demonstrate how numerical outputs are expected; this can transform the LLM predictors from zero-shot learners to few-shot learners, which is expected to help guide the model towards more accurate predictions. We expect a performance increase if the LLMs are fine-tuned or deployed as few-shot learning predictors instead of the current zero-shot predictors [50].

There are also spatio-temporal graph-related limitations that could be addressed in future research. In the spatial domain, our current LLM-OSR is limited to processing one node neighborhood per LLM prompt. When a single prompt is used to process multiple nodes, the outputs are frequently incomplete or contain extraneous elements, as LLMs exhibit limited capability in handling multiple sequences of numbers in a single call.
In other words, whenever we attempt to process N nodes together, the LLM often returns a number of prediction outputs that is not N, and it becomes nearly impossible to align the outputs when the numbers of inputs and outputs are mismatched. In the temporal domain, LLMs struggle to comprehend long temporal behaviors and to predict longer temporal sequences [51]. These limitations could potentially be solved by more advanced graph representation approaches and advanced programming techniques.

During the development of LLM-OSR, we encountered challenges in terms of scalability and computational complexity. Currently, the speed of completing each LLM call is constrained by the speed at which LLMs process the input tokens and generate output tokens. Combined with the fact that we are processing one node neighborhood per prompt, the run speed of LLM-OSR is significantly dragged down. We also notice that in our current setting, each LLM call has to provide content in the prompt describing the task along with the reversely embedded graph signals; otherwise, the performance will significantly decrease as we progress. This is likely due to the fact that LLMs have limited long-term memory capabilities [52]. Similarly, when the LLM generates outputs, the numerical outputs are often accompanied by contextual text even when the prompt asks for only numerical outputs without text. These limitations lead to a significant amount of unintended token usage that bottlenecks the I/O and bandwidth of LLM-OSR, making it challenging to scale to larger graphs and longer temporal sequences. Other than improving the LLMs themselves, an approach that we plan to take in the future is to implement LLM-OSR using distributive techniques.

Lastly, we would like to expand the fields of application of LLM-OSR. For instance, we aim to investigate how LLM-OSR performs in applications involving impulsive and heavy-tailed noise, such as communication systems [53] and medical imaging [54]. A potential approach involves modeling the noise not with a Gaussian distribution but with α-stable distributions. These distributions are well-suited for such scenarios due to their heavy tails and impulsive characteristics. Incorporating α-stable distributions into LLM-OSR may enhance its robustness by enabling it to handle extreme values more effectively, thereby improving model stability in datasets with long-tailed distributions [55], [56]. Furthermore, we aim to enhance the contextual handling capabilities of LLM-OSR to broaden its applications by leveraging the contextual inference power of LLMs beyond numerical multivariate data. This expansion could pave the way for developing LLM-OSR variants tailored to CV and artificial intelligence applications in scientific domains, such as document image understanding [57] and material science [58].

VI. CONCLUSION

The LLM-OSR algorithm shows significant potential in reconstructing spatial-temporal graph signals by combining a GSP-based denoising handler with LLM-based prediction. Experimental results highlight the superior performance of LLM-OSR-4 in capturing spatial-temporal dependencies, and it achieves high accuracy in signal reconstruction for traffic and weather datasets. While the current performance of LLM-OSR is promising, significant work remains to fully unleash the capabilities of LLMs in spatial-temporal prediction and to address the current limitations.
The LLM-OSR could serve as a foundation to spark future studies, driving innovation and exploration at the intersection of large language models and dynamic graph signal prediction.

ACKNOWLEDGMENT

This work is supported by the Tsinghua Shenzhen International Graduate School Start-up fund under Grant QD2022024C, Shenzhen Science and Technology Innovation Commission under Grant JCYJ20220530143002005, and Shenzhen Ubiquitous Data Enabling Key Lab under Grant ZDSYS20220527171406015.

REFERENCES

[1] J. N. Acosta, G. J. Falcone, P. Rajpurkar, and E. J. Topol, “Multimodal biomedical ai,” Nature Medicine, vol. 28, no. 9, pp. 1773–1784, 2022.
[2] G. Sonkavde, D. S. Dharrao, A. M. Bongale, S. T. Deokate, D. Doreswamy, and S. K. Bhat, “Forecasting stock market prices using machine learning and deep learning models: A systematic review, performance analysis and discussion of implications,” International Journal of Financial Studies, vol. 11, no. 3, p. 94, 2023.
[3] A. Radford, J. Wu, D. Amodei, D. Amodei, J. Clark, M. Brundage, and I. Sutskever, “Better language models and their implications,” OpenAI Blog, vol. 1, no. 2, 2019.
[4] J. Devlin, “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
[5] L. Floridi and M. Chiriatti, “Gpt-3: Its nature, scope, limits, and consequences,” Minds and Machines, vol. 30, pp. 681–694, 2020.
[6] E. Waisberg, J. Ong, M. Masalkhi, S. A. Kamran, N. Zaman, P. Sarker, A. G. Lee, and A. Tavakkoli, “Gpt-4: a new era of artificial intelligence in medicine,” Irish Journal of Medical Science (1971-), vol. 192, no. 6, pp. 3197–3200, 2023.
[7] Y. Sun, S. Wang, S. Feng, S. Ding, C. Pang, J. Shang, J. Liu, X. Chen, Y. Zhao, Y. Lu, et al., “Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation,” arXiv preprint arXiv:2107.02137, 2021.
[8] J. Chen, S. Li, Q. Huang, S. Yan, Z. Xie, and Y. Lu, “Application of kimi intelligent assistant in the teaching of water pollution control engineering course,” International Journal of Education and Humanities, vol. 13, no. 3, pp. 39–43, 2024.
[9] S. S. Sohail, F. Farhat, Y. Himeur, M. Nadeem, D. Ø. Madsen, Y. Singh, S. Atalla, and W. Mansoor, “Decoding chatgpt: a taxonomy of existing research, current challenges, and possible future directions,” Journal of King Saud University-Computer and Information Sciences, p. 101675, 2023.
[10] A. Navarro and F. Casacuberta, “Exploring multilingual pretrained machine translation models for interactive translation,” in Proceedings of Machine Translation Summit XIX, Vol. 2: Users Track, 2023, pp. 132–142.
[11] X. Deng, V. Bashlovkina, F. Han, S. Baumgartner, and M. Bendersky, “Llms to the moon? reddit market sentiment analysis with large language models,” in Companion Proceedings of the ACM Web Conference 2023, 2023, pp. 1014–1019.
[12] L. Tang, Z. Sun, B. Idnay, J. G. Nestor, A. Soroush, P. A. Elias, Z. Xu, Y. Ding, G. Durrett, J. F. Rousseau, et al., “Evaluating large language models on medical evidence summarization,” NPJ Digital Medicine, vol. 6, no. 1, p. 158, 2023.
[13] F. Miraglia, F. Vecchio, C. Pappalettera, L. Nucci, M. Cotelli, E. Judica, F. Ferreri, and P. M. Rossini, “Brain connectivity and graph theory analysis in Alzheimer’s and Parkinson’s disease: the contribution of electrophysiological techniques,” Brain Sciences, vol. 12, no. 3, p. 402, 2022.
[14] D. Qin and E. E. Kuruoglu, “Graph learning based financial market crash identification and prediction,” in IEEE CAI, 2024.
[15] M. Rostami, M. Oussalah, K. Berahmand, and V. Farrahi, “Community detection algorithms in healthcare applications: a systematic review,” IEEE Access, vol. 11, pp. 30247–30272, 2023.
[16] Y. Yan and E. E. Kuruoglu, “Binarized simplicial convolutional neural networks,” Neural Networks, 2024.
[17] J. Fan, W. Weng, H. Tian, H. Wu, F. Zhu, and J. Wu, “RGDAN: A random graph diffusion attention network for traffic prediction,” Neural Networks, vol. 172, p. 106093, 2024.
[18] S. Xu, F. Wilhelm-Mauch, and W. Maass, “Quantum feature embeddings for graph neural networks,” in HICSS, 2024, pp. 7633–7642.
[19] B. Bevans, A. Ramalho, Z. Smoqi, A. Gaikwad, T. G. Santos, P. Rao, and J. Oliveira, “Monitoring and flaw detection during wire-based directed energy deposition using in-situ acoustic sensing and wavelet graph signal analysis,” Materials & Design, vol. 225, p. 111480, 2023.
[20] R. Sharma and H. K. Meena, “Emerging trends in EEG signal processing: A systematic review,” SN Computer Science, vol. 5, no. 4, pp. 1–14, 2024.
[21] X. Dong, D. Thanou, L. Toni, M. Bronstein, and P. Frossard, “Graph signal processing for machine learning: A review and new perspectives,” IEEE Signal Processing Magazine, vol. 37, no. 6, pp. 117–127, 2020.
[22] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” ICLR, 2017.
[23] B. Yu, H. Yin, and Z. Zhu, “Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting,” IJCAI, 2018.
[24] W. Wang and Q. Sun, “Robust adaptive estimation of graph signals based on Welsch loss,” Symmetry, vol. 14, no. 2, p. 426, 2022.
[25] Y. Yan, E. E. Kuruoglu, and M. A. Altinkaya, “Adaptive sign algorithm for graph signal processing,” Signal Processing, vol. 200, p. 108662, 2022.
[26] B. Jin, G. Liu, C. Han, M. Jiang, H. Ji, and J. Han, “Large language models on graphs: A comprehensive survey,” IEEE Transactions on Knowledge and Data Engineering, 2024.
[27] J. Huang, X. Zhang, Q. Mei, and J. Ma, “Can LLMs effectively leverage graph structural information: when and why,” arXiv preprint arXiv:2309.16595, 2023.
[28] R. Ye, C. Zhang, R. Wang, S. Xu, Y. Zhang, et al., “Natural language is all a graph needs,” arXiv preprint arXiv:2308.07134, vol. 4, no. 5, p. 7, 2023.
[29] A. Ortega, P. Frossard, J. Kovačević, J. M. F. Moura, and P. Vandergheynst, “Graph signal processing: Overview, challenges, and applications,” Proceedings of the IEEE, vol. 106, no. 5, pp. 808–828, 2018.
[30] N. Tremblay, P. Gonçalves, and P. Borgnat, “Design of graph filters and filterbanks,” in Cooperative and Graph Signal Processing, pp. 299–324. Elsevier, 2018.
[31] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, “Language models are few-shot learners,” in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., 2020, vol. 33, pp. 1877–1901, Curran Associates, Inc.
[32] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., “Gpt-4 technical report,” arXiv preprint arXiv:2303.08774, 2023.
[33] S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y. Li, S. Lundberg, et al., “Sparks of artificial general intelligence: Early experiments with gpt-4,” arXiv preprint arXiv:2303.12712, 2023.
[34] J. H. Giraldo, A. Mahmood, B. Garcia-Garcia, D. Thanou, and T. Bouwmans, “Reconstruction of time-varying graph signals via Sobolev smoothness,” IEEE Transactions on Signal and Information Processing over Networks, vol. 8, pp. 201–214, 2022.
[35] G. Chen, X. Chen, C. Zheng, J. Wang, X. Liu, and Y. Han, “Spatiotemporal smoothing aggregation enhanced multi-scale residual deep graph convolutional networks for skeleton-based gait recognition,” Applied Intelligence, pp. 1–21, 2024.
[36] W. Bai, “Smoothness harmonic: A graph-based approach to reveal spatiotemporal patterns of cortical dynamics in fMRI data,” Applied Sciences, vol. 13, no. 12, 2023.
[37] M. A. Qureshi and D. Greene, “EVE: explainable vector based embedding technique using Wikipedia,” Journal of Intelligent Information Systems, vol. 53, pp. 137–165, 2019.
[38] A. Grover and J. Leskovec, “node2vec: Scalable feature learning for networks,” in SIGKDD, 2016, pp. 855–864.
[39] City of Seattle, “Seattle loop detector data,” https://github.com/zhiyongc/Seattle-Loop-Data, 2020.
[40] National Oceanic and Atmospheric Administration, “National Oceanic and Atmospheric Administration (NOAA) weather data,” https://www.noaa.gov/, 2024.
[41] Y. Yan, R. Adel, and E. E. Kuruoglu, “Graph normalized-LMP algorithm for signal estimation under impulsive noise,” Journal of Signal Processing Systems, vol. 95, no. 1, pp. 25–36, 2023.
[42] P. D. Lorenzo, S. Barbarossa, P. Banelli, and S. Sardellitti, “Adaptive least mean squares estimation of graph signals,” IEEE Transactions on Signal and Information Processing over Networks, vol. 2, no. 4, pp. 555–568, 2016.
[43] M. J. M. Spelta and W. A. Martins, “Normalized LMS algorithm and data-selective strategies for adaptive graph signal estimation,” Signal Processing, vol. 167, p. 107326, 2020.
[44] C. Peng, Y. Yan, and E. E. Kuruoglu, “Adaptive message passing sign algorithm,” in Temporal Graph Learning Workshop @ NeurIPS 2023, 2023.
[45] E. Isufi, A. Loukas, N. Perraudin, and G. Leus, “Forecasting time series with VARMA recursions on graphs,” IEEE Transactions on Signal Processing, vol. 67, no. 18, pp. 4870–4885, 2019.
[46] J. Hong, Y. Yan, E. E. Kuruoglu, and W. K. Chan, “Multivariate time series forecasting with GARCH models on graphs,” IEEE Transactions on Signal and Information Processing over Networks, vol. 9, pp. 557–568, 2023.
[47] Y. Li, J. Keung, Z. Yang, X. Ma, J. Zhang, and S. Liu, “Simac: simulating agile collaboration to generate acceptance criteria in user story elaboration,” Automated Software Engineering, vol. 31, no. 2, p. 55, 2024.
[48] J. Jang, S. Ye, and M. Seo, “Can large language models truly understand prompts? A case study with negated prompts,” in Transfer Learning for Natural Language Processing Workshop. PMLR, 2023, pp. 52–62.
[49] F. Jia, K. Wang, Y. Zheng, D. Cao, and Y. Liu, “Gpt4mts: Prompt-based large language model for multimodal time-series forecasting,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2024, vol. 38, pp. 23343–23351.
[50] J. Chen, Y. Geng, Z. Chen, J. Z. Pan, Y. He, W. Zhang, I. Horrocks, and H. Chen, “Zero-shot and few-shot learning with knowledge graphs: A comprehensive survey,” Proceedings of the IEEE, vol. 111, no. 6, pp. 653–685, 2023.
[51] A. Maharana, D.-H. Lee, S. Tulyakov, M. Bansal, F. Barbieri, and Y. Fang, “Evaluating very long-term conversational memory of LLM agents,” in Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2024, vol. 1, pp. 13851–13870.
[52] S. Shahriar, B. D. Lund, N. R. Mannuru, M. A. Arshad, K. Hayawi, R. V. K. Bevara, A. Mannuru, and L. Batool, “Putting gpt-4o to the sword: A comprehensive evaluation of language, vision, speech, and multimodal proficiency,” Applied Sciences, vol. 14, no. 17, p. 7782, 2024.
[53] O. Karakus, E. E. Kuruoglu, and M. A. Altinkaya, “Modelling impulsive noise in indoor powerline communication systems,” Signal, Image and Video Processing, vol. 14, no. 8, pp. 1655–1661, 2020.
[54] W. Lee, H. S. Nam, J. Y. Seok, W.-Y. Oh, J. W. Kim, and H. Yoo, “Deep learning-based image enhancement in optical coherence tomography by exploiting interference fringe,” Communications Biology, vol. 6, no. 1, p. 464, 2023.
[55] D. Herranz, E. Kuruoğlu, and L. Toffolatti, “An alpha-stable approach to the study of the p(d) distribution of unresolved point sources in CMB sky maps,” Astronomy & Astrophysics, vol. 424, no. 3, pp. 1081–1096, 2004.
[56] E. Kuruoglu, C. Molina, S. Godsill, and W. Fitzgerald, “A new analytic representation for the symmetric alpha-stable probability density function,” in Proceedings of the 5th World Meeting of the International Society for Bayesian Analysis (ISBA). ASA: American Statistical Association, 1997, pp. 229–233.
[57] E. E. Kuruoglu and A. S. Taylor, “Using annotations for summarizing a document image and itemizing the summary based on similar annotations,” May 4, 2010, US Patent 7,712,028.
[58] F. Saffarimiandoab, R. Mattesini, W. Fu, E. E. Kuruoglu, and X. Zhang, “Insights on features’ contribution to desalination dynamics and capacity of capacitive deionization through machine learning study,” Desalination, vol. 515, p. 115197, 2021.
ai_researcher
2
Unveiling_the_Emotional_Landscape_of_Ukrainian_War_Narratives_using_Large_Language_Models.pdf
arXiv:2305.02770v3 [cs.CY] 6 Jun 2023

The Politics of Language Choice: How the Russian-Ukrainian War Influences Ukrainians' Language Use on Twitter

Daniel Racek∗1, Brittany I. Davidson2, Paul W. Thurner3, Xiao Xiang Zhu4, and Göran Kauermann1

1Institute of Statistics, Ludwig-Maximilians-University Munich, Germany
2School of Management, University of Bath, United Kingdom
3Institute of Political Science, Ludwig-Maximilians-University Munich, Germany
4School of Engineering and Design, Technical University of Munich, Germany
∗Corresponding author: Daniel Racek, [email protected]

June 7, 2023

Abstract

The use of language is innately political and often a vehicle of cultural identity as well as the basis for nation building. Here, we examine language choice and tweeting activity of Ukrainian citizens based on more than 4 million geo-tagged tweets from over 62,000 users before and during the Russian-Ukrainian War, from January 2020 to October 2022. Using statistical models, we disentangle sample effects, arising from the in- and outflux of users on Twitter, from behavioural effects, arising from behavioural changes of the users. We observe a steady shift from the Russian language towards the Ukrainian language already before the war, which drastically speeds up with its outbreak. We attribute these shifts in large part to users' behavioural changes. Notably, we find that more than half of the Russian-tweeting users shift towards Ukrainian as a result of the war.

1 Introduction

Social media is critically important in today's society (Saroj and Pal, 2020; Dwivedi et al., 2021; Wong et al., 2021). In recent years, it has played a key role in a number of political shifts and crises (Mäkinen and Wangu Kuira, 2008; Sadri et al., 2018). While social media has been found to amplify all manner of misinformation, propaganda, populism, and xenophobia (Morozov, 2012; Zhuravskaya et al., 2020; Flamino et al., 2023), it can also serve as a mechanism to call for aid and as a source of live updates as major events unfold (Sacco and Bossio, 2015; Rogstadius et al., 2013; Allcott and Gentzkow, 2017; Kaufhold et al., 2020). In this article, we analyse the language use of Ukrainian citizens on social media before and during the Russian invasion of Ukraine (subsequently referred to as the war): after years of tensions and open aggression between Russia and Ukraine (Marples, 2021), on 24th February 2022 Russian forces began to invade and occupy parts of Ukraine (Bigg, 2022). At the time of writing, it has been estimated that the war has led to over 23,000 civilian casualties (OHCHR, 2023) and hundreds of billions of dollars worth of damage (Lamb, 2022; World Bank, 2023). This has caused worldwide unrest, alongside 8.2 million Ukrainian refugees recorded across Europe and 5 million registered for temporary protection (UNHCR, 2023; Ratten, 2022).

The war in Ukraine is also taking place in the digital era, with social media coverage documenting the horrific events in close to real-time. This provides a unique digital trace of many first-hand accounts of the war, as citizens communicate with one another and with the public. This is generally known as crisis informatics, whereby social media data are utilized before, during, or after emergency events for use cases such as disaster monitoring, management, and prevention (Sacco and Bossio, 2015; Reuter et al., 2018; Jurgens and Helsloot, 2018; Kaufhold et al., 2020; Dwarakanath et al., 2021).
Recent studies have demonstrated that tweets can capture events of political violence (Dowd et al., 2020) and can help in monitoring and understanding intra-country conflicts (Steinert-Threlkeld et al., 2022). In our work, the language of a tweet is of particular interest. Notably, the use of language is inherently political. Languages can be the cause of conflict (Laitin, 2000); they are often incorporated in cultural and ethnic identity definition and are the basis for nation building and political change (Smagulova, 2006; Wright, 2012). After the dissolution of the USSR, most post-Soviet countries implemented new language laws in order to assert their original native language and build a new nation (Smagulova, 2006; Pavlenko, 2008). In Ukraine, after independence, many people considered themselves Russian by nationality or Ukrainian with Russian as their main language. With the Law on Languages (1989) and a 10-year plan for a gradual transition back to Ukrainian (1991), the government aimed to reverse those effects, but was only moderately successful in achieving this goal, as census results show (Marshall, 2002; Stebelsky, 2009; Kulyk, 2018). Only more recently, with the Euromaidan protests and the Russian military intervention in Crimea and the Donbas, surveys between 2012 and 2017 show a consistent and substantial shift away from Russian ethnic and linguistic identification towards Ukrainian practice (Kulyk, 2018). Respondents note an increasing engagement with the Ukrainian language and are more supportive of Ukraine as a direct result of the military intervention.

We investigate language choice and tweeting activity on Ukrainian social media from January 2020 to November 2022 using over 4 million geo-tagged tweets from more than 62,000 different users. In doing this, we study how Ukrainian citizens (and non-citizens living there) respond to their country being aggressively attacked and invaded by a direct neighbour with which they share a long history and language, and how the use of language evolved before and during this war. Our study allows us to follow the same set of users and observe their (change in) behaviour on an individual level over both the short and the longer term as the war breaks out and continues to unfold. Hence, we are able to comment on recent news articles outlining shifts in language use from Russian to Ukrainian as a direct result of the war (Harding, 2023; Warner, 2022). Moreover, we are able to monitor long-term language trends even before the war without having to rely on small-scale surveys or the infrequent censuses, of which the last one was conducted in 2001.

More specifically, we first study overall trends in the number of tweets in the three main languages (Ukrainian, Russian, English) over time. Second, we investigate how these trends translate to users' individual tweeting activity and whether changes result from the in- and outflux of users, common in online communities (Dabbish et al., 2012; Panek et al., 2018; Ransbotham and Kane, 2011), or from users changing their behaviour over time (Davidson et al., 2019; Eichstaedt and Weidman, 2020; Dzogang et al., 2016). We quantify the magnitude of each effect. Third, we study whether changes in users' tweeting activity originate from shifts between languages and quantify the magnitude of these shifts.
Fourth and finally, we take a closer look at those users that switch from predominately tweeting in Russian to predominately tweeting in Ukrainian with the outbreak of the war.

2 Results

2.1 Data Collection, Cleaning & Processing

We collected tweets from 9th January 2020 to 12th October 2022 using the 1% real-time stream of the Twitter Sample API (Pfeffer et al., 2022). During collection, we filtered the data such that we only gathered tweets containing geo-information using the Filter API. We then manually filtered the dataset to only retain tweets from Ukraine (denoted by the "UA" country tag), as common in the literature (Hu and Wang, 2020), and excluded any retweets. Our subsequently conducted sensitivity analysis shows that through this two-stage filtering process, we were able to recover almost all geo-tagged tweets from Ukraine during this time period (see section 4.2).

We conducted an extensive spam filtering scheme, in which we 1) removed any duplicate tweets, 2) identified and removed potential spam bots by training a bot detection model following Yang et al. (2020), 3) removed users with >100 tweets per day, 4) only kept tweets coming from official Twitter clients or Instagram, and 5) applied additional filtering rules specific to our dataset. This reduced our dataset from originally 4,453,341 tweets (62,712 users) down to 2,845,670 tweets (41,696 users). For an extensive description and rationale see section 4.3.

Unsurprisingly, social media is popular in Ukraine, particularly among the younger generation, with almost all citizens aged 18-39 in 2021 reporting that they use social media. For Twitter, user statistics are as follows: 18-29 (13% usage), 30-39 (8%), 40-49 (7%), 50+ (1%) (Statista, 2022b). Hence, our subsequent findings are not necessarily applicable to the entire population. However, they still provide valuable insights into the language use of Ukrainians aged 18-49.

2.2 Descriptive Findings

To determine the language of a tweet, in accordance with the literature (Mosleh et al., 2021; Barbieri et al., 2022), we utilize the language field provided by the Twitter API. Ukrainian (35.8%) and Russian (35.4%) tweets are most prevalent in our dataset, followed by English (11.5%). A large proportion of tweets (11.1%) is labeled as "undefined", which among others consists of tweets that are too short, contain only hashtags, or only have media links. All other languages have shares of 1.2% or less. For the subsequent analysis we focus on tweets from the three main languages (English, Russian, Ukrainian) and discard all remaining tweets. A full breakdown of the language distribution is reported in section 5.1.

In our dataset, there are clear trends in the aggregate over time (Figure 1). At the beginning of 2020, Russian is the predominant language being used on Twitter in Ukraine; however, over time, this number gradually declines. The number of Ukrainian and English tweets, on the other hand, remains more or less constant over this initial time period. In the figure, we mark two key dates. On 11th November 2021, the United States officially reported a mobilization of Russian troops along the Ukrainian border for the first time (Stewart and Ali, 2021; Euronews, 2021; NDTV, 2022). We will subsequently call this the first signs of aggression. 24th February 2022 marks the beginning of the Russian invasion of Ukraine (subsequently referred to as the outbreak of the war).
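For concreteness, the daily series behind Figure 1 can be tallied directly from the API's language field. The following is a minimal R sketch; the data frame tweets and its columns created_at (POSIXct) and lang are assumed names, not taken from the authors' code ("uk" is Twitter's language tag for Ukrainian):

library(dplyr)

# Daily tweet counts per language, restricted to the three main languages.
daily <- tweets %>%
  filter(lang %in% c("uk", "ru", "en")) %>%
  mutate(day = as.Date(created_at)) %>%
  count(day, lang, name = "n_tweets")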
As we approach this outbreak, there is a clear spike in tweets across all three languages, with a larger spike in both English and Ukrainian. Afterwards, English and Russian remain mostly constant, although the former on a much higher level than before. For Ukrainian, there is a clear upward trend in the daily number of tweets after the outbreak of the war.

Figure 1: Daily number of tweets in the three most common languages (Russian, Ukrainian, English) from 9th January 2020 to 12th October 2022 (1,008 days). The first vertical line denotes the mobilization of the Russian troops along the Ukrainian border (11th November 2021). The second line denotes the outbreak of the war (24th February 2022).

Given these remarkable shifts in the number of tweets in the three considered languages, we want to investigate the underlying factors contributing to these changes. Note that from the aggregate trends, we cannot distinguish whether the observed patterns are due to large in- and outfluxes of users, which are common in online communities (Dabbish et al., 2012; Panek et al., 2018; Ransbotham and Kane, 2011), or whether the actively tweeting users change their behaviour over time (Davidson et al., 2019; Eichstaedt and Weidman, 2020; Dzogang et al., 2016). Disentangling these two effects is the aim of the rest of this article.

2.3 User Activity

In order to address this question, we restructure our dataset by aggregating the number of tweets made by each user in English (EN), Ukrainian (UA), and Russian (RU) in each week. (Note that we employ the Ukrainian country code "UA" instead of the official Ukrainian language tag "UK" in order to avoid confusion.) This allows us to study users' individual behaviour over time. To obtain reliable results, we restrict the further analysis to users who have tweeted at least ten times in total in any of the three languages. Furthermore, we choose weeks instead of days, as we are interested in general shifts and overall changes in behaviour over time, which are captured sufficiently well on a weekly basis. This weekly definition also dramatically reduces the size of our dataset, which makes more complex modelling approaches computationally feasible. We drop the first and last week in our dataset, as these are incomplete (less than 7 days), and aggregate the remaining tweets on a weekly basis for each user and language. Finally, within this, we only consider weeks in which users are "active" (defined as any week in which a user tweets at least once, as well as up to two weeks after), in order to account for the times in which users may be inactive for several weeks at a time or abandon their accounts. Thus, our new sample ranges from 13th January 2020 to 10th October 2022 and consists of 143 analysis weeks, 13,643 users and 1,045,245 observations.

Using this definition of user activity, we can visualize the total number of active users as well as turnover rates (switches from active to inactive and vice versa) over time (Figure 2). In the beginning of 2020, we have around 2,800 active users per week. This number gradually decreases to roughly 1,800 until we approach the outbreak of the war. Afterwards, the number of active users starts increasing again. Note the drop and subsequent spike in activity shortly before and with the outbreak of the war. Looking at the turnover rates, we find that there is a constant stream of ∼250 (potentially different) users per week that switch from active to inactive and vice versa.
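The weekly restructuring and activity definition just described can be sketched as follows; the frame tweets with columns user_id, lang and a precomputed analysis-week index week is our assumption, not the authors' code:

library(dplyr)
library(tidyr)

# Weekly tweet counts per user and language, one column per language.
weekly <- tweets %>%
  filter(lang %in% c("uk", "ru", "en")) %>%
  count(user_id, week, lang) %>%
  pivot_wider(names_from = lang, values_from = n, values_fill = 0)

# A user-week is "active" if the user tweeted in it or in one of the two
# preceding weeks (i.e. every tweeting week plus up to two weeks after).
active <- weekly %>%
  group_by(user_id) %>%
  complete(week = full_seq(week, 1), fill = list(uk = 0, ru = 0, en = 0)) %>%
  arrange(week) %>%
  mutate(tweeted = (uk + ru + en) > 0,
         active  = tweeted | lag(tweeted, 1, default = FALSE) |
                             lag(tweeted, 2, default = FALSE)) %>%
  ungroup() %>%
  filter(active)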
The aforementioned spikes are also evident in these turnover rates. Finally, we find that there are roughly 50 users per week that join our sample for the first time and about the same amount that leave it altogether. Both of these numbers almost double after the outbreak of the war.

Figure 2: Weekly user activity graphs. The brown graph reports the number of active users in each week. The blue (red) graph reports the number of users who switch to active (inactive), the green the number of users who switch to active for the first time, the purple the number of users who were active for the last time, i.e. drop out of the sample altogether. All graphs, but particularly the latter two, are skewed upwards and downwards, respectively, towards the beginning and end of the study period due to the nature of how the dataset is constructed. Hence, we drop the first and last three weeks for visualization purposes (137 total weeks left). The full plot is available in supplementary material S.1. We also provide an additional version without the active user graph with a rescaling of the y-axis there. The first vertical line denotes the mobilization of the Russian troops along the Ukrainian border (11th November 2021). The second line denotes the outbreak of the war (24th February 2022).

2.4 Tweeting Activity

To obtain a better understanding of how the average active Ukrainian Twitter user changes over time, we visualize the average number of published tweets by a user in each language in Figure 3a. We smooth this average to highlight general trends. From the figure, we can clearly see that there are substantial shifts. Overall, the average number of RU tweets per user decreases constantly over time (from 4.8 to 2.2), the outbreak of the war being no exception. The average number of EN tweets decreases slightly until the war, where we notice a sudden uptick (from 0.5 to 1.9) followed by a steady decline. Meanwhile, the number of UA tweets slowly but steadily rises (from 2.4 to 3.0), with steeper increases after the first signs of aggression in November 2021 and no sign of slowing down (5.3 at the end).

By combining these findings with Figure 2, we can at least partially explain the aggregate trends evident in Figure 1. While the active user sample is shrinking over time, those users that stay in (and join) the sample are tweeting more in UA. Hence, there is no decrease in the overall amount of UA tweets. We find the exact opposite for RU tweets. As the number of active users is declining, the users that stay active are tweeting less in RU, resulting in the visible decrease of aggregate RU tweets over time. Notably, so far, we do not know whether those changes in the average amount of tweets per user are simply driven by shifts in our active user sample (i.e., are those users that initially tweet a lot in RU leaving over time, and is this why we see this decrease in the average?), or whether these changes are (at least partially) driven by behavioural changes in those users that remain active on Twitter (i.e., are the same users tweeting less in RU over time?).

Figure 3: Changes in the number of tweets per user. (a) visualizes the average number of tweets over time, (b) how sample changes affected the number of tweets, (c) how behavioural changes affected this number. (a) Average number of tweets: the graphs report a smoothed average of the published number of tweets per user in each week in each language; the shaded area depicts the 95% confidence interval of the smooth fit; the non-smoothed version of the plot is available in supplementary material S.2. (b) Sample effects: the graphs report a smoothed average of the random effects of the active users in each week in each language; the shaded area depicts the 95% confidence interval of the smooth fit; the non-smoothed version of the plot is available in supplementary material S.3. (c) Behavioural effects: the graphs report the fitted global trend over all users in each week in each language; the shaded area depicts the 95% confidence interval of the fitted effect. The first vertical line denotes the mobilization of the Russian troops along the Ukrainian border (11th November 2021). The second line denotes the outbreak of the war (24th February 2022).

We address this through our tweet model described in section 4.4. We fit a generalized additive mixed model (GAMM) to predict the number of tweets made by each user in each language in each week, assuming a Poisson distribution. By incorporating both a smooth global time trend for each language and user-specific random effects for each of the languages, we disentangle sample shifts (random effects) from behavioural changes (global trend). Note that, as on most other social media platforms, users have the option to create new accounts, which we cannot match to their prior ones. Hence, some of the behavioural effects might be underestimated and instead accounted for as sample effects.

Figure 3b visualizes the fitted average sample effects, i.e. the graphs depict how the average time-constant tweeting intensity in our active user sample changes over time due to user turnover. The figure shows that the average RU tweeting intensity is mostly constant over time until November 2021, when the aggression starts. From that point onward, in the span of only a few months, we see a decline of 22% in RU tweets from November 2021 to October 2022 (end of study period), solely attributed to changes in the user sample during that period. For EN, we find somewhat of an opposite effect. Similarly, there are only minor fluctuations until November 2021. But afterwards, there is a sharp increase of 104%. Taking a look at UA, we find a long-term increase of about 37% before the aggression starts. This increase comes to a halt shortly before the war, and significantly speeds up in the weeks after (+97%). All (relative) effect sizes calculated between the most relevant dates in our analysis period (start of study period, first signs of aggression, outbreak of war, end of study period) are reported in Table 1.

Table 1: Tweet Activity Effect Sizes between Key Dates

Sample Effects
Language    Start - Aggression   Aggression - War   War - End Study   Aggression - End Study
English     +1.36%               +51.45%            +34.82%           +104.19%
Ukrainian   +36.54%              -4.99%             +107.33%          +96.97%
Russian     -5.42%               -19.92%            -2.92%            -22.26%

Behavioural Effects
Language    Start - Aggression   Aggression - War   War - End Study   Aggression - End Study
English     -36.75%              +130.11%           -39.98%           +38.09%
Ukrainian   +5.71%               +35.72%            +15.184%          +56.32%
Russian     -50.58%              +4.68%             -23.86%           -20.30%

Notes: Effect sizes for both sample and behavioural changes extracted from the tweet model described in section 4.4 between key dates. All effect sizes are relative increases in the number of tweets between the two respective dates. Start: start of the study period—13th January 2020. Aggression: first official US report of a mobilization of the Russian troops along the Ukrainian border—11th November 2021. War: outbreak of the war—24th February 2022. End Study: end of the study period—10th October 2022.
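Because both effect types are exponentiated differences of fitted model terms (see section 4.4), relative effects over adjacent periods compose multiplicatively, which offers a quick consistency check on the table. For the English sample effects, for instance,

$(1 + 0.5145) \times (1 + 0.3482) - 1 = 2.0419 - 1 \approx +104.2\%,$

in line with the +104.19% reported in the Aggression - End Study column.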
Next, we will investigate behavioural changes using Figure 3c. The graphs depict how the tweeting behaviour of the active users changes throughout the study period, when controlling for the user turnover (sample effects). Starting with RU, we notice that users are tweeting less and less over time. From January 2020 to November 2021, users tweet 51% less in RU due to behavioural changes. Subsequently, we see a small rise with the outbreak of the war (+5%), followed by an even steeper decline (-24%). In contrast, UA is reasonably consistent in its use up until the start of the aggression. From there, we observe a surge (+36%) until the outbreak of the war, followed by a gentler increase (+15%) after. Finally, looking more closely at EN tweeting behaviour, we can observe a general downward trend (-37%) until November 2021. Once the aggression starts, there is a huge spike (+130%), as users are tweeting a lot more in EN. After the outbreak of the war, this somewhat reverses (-40%), however without dropping back down to pre-aggression levels. A full breakdown of all changes is reported in Table 1.

Overall, we can conclude that there are only minor sample shifts pre-dating the aggression that affected tweeting activity, but major shifts thereafter. In terms of behaviour, we can already see steady changes early on, which significantly intensify with the war. However, as of yet, we cannot exactly pinpoint where those changes come from. Are users that already tweet in UA simply tweeting more with the outbreak of the war, or is it possible that users are actively switching the language they are tweeting in?

2.5 Choice of Language

We analyze the choice of language more closely in the following. As we are interested in shifts between the individual languages, we look at the pairwise probability to tweet in one language over another over time. Hence, the probability reports how likely it is that a user tweets in language one (e.g. UA) over language two (e.g. EN). With three languages, this pairwise evaluation gives us a total of three different language pairs (UA over RU, UA over EN, RU over EN), where the order in which we specify each pair is irrelevant.

Figure 4a visualizes how these pairwise probabilities evolved for an average user over time. For RU over EN, the probability is mostly constant (80% to tweet in RU) until the aggression starts, from where it continuously drops down to 55%. For UA over EN, we see small increases over time (68% to 74%). With the mobilization of the Russian troops, we see a drop (63%), followed by a rise back to pre-aggression levels months into the war. Finally, for UA over RU, we see a completely different pattern. Initially, the probability to tweet in UA is low (33%), from where it continues to rise consistently. In the weeks leading up to the war, there is a significant speed-up in this shift, resulting in a probability of 77% to tweet in UA over RU towards the end of the analysis period in October 2022.

Similarly to before, we can disentangle sample shifts from behavioural changes through statistical modelling. In summary, we fit a GAMM to model users' pairwise language probability to tweet over time, assuming a binomial distribution. As before, we include a smooth global time trend and user-specific random effects in the model. We fit such a model for all three aforementioned language pairs. A full description is provided in section 4.5. Figure 4b visualizes the fitted average sample effects across all three models, i.e.
the graphs depict how the average time-constant tweeting probabilities in the active user sample change over time. As we are working with coefficients of a logistic regression, changes must be interpreted with respect to changes in the odds. The figure shows that for RU over EN, initially, there are no relevant sample shifts (on average). However, as we approach the outbreak of the war, we can report a large drop in the odds, as users are 64% less likely to tweet in RU over EN than before, with further decreases thereafter (-24%). For UA over EN, we find a small to moderate increase until the aggression (+29%) due to sample shifts, followed by a large drop until the war outbreak (-58%), which is recovered in the months after (+64%). Finally, for UA over RU, there is a constant increase in the odds over time (+50%), which significantly speeds up once the aggression starts (+101% until October 2022). Table 2 details all changes.

Combining this with the results from the previous section, we can conclude that the user turnover in the first 1.5 years shifts the sample such that users are more likely to tweet in UA (than RU or EN), but not at the expense of either of the two other languages, as tweet levels are (mostly) steady for both. As we approach the outbreak of the war, this drastically changes. Then, the user sample clearly shifts away from RU, as users are instead tweeting more in EN (initially) and UA (long-term). Upon further investigation (supplementary material S.4 and S.5), we find that users tweeting in RU start leaving around November 2021 (start of the aggression), with EN users joining. The former continue to leave as the war unfolds, with some of the latter also slowly leaving the sample again over time. This is also reflected in the increase of the UA odds over time (UA over RU consistently, UA over EN as the war continues).

Figure 4: Changes in the choice of language per user. (a) visualizes the average probability to tweet in one language over another, (b) how sample changes affected the probability, (c) how behavioural changes affected the probability. (a) Average language probability: the graphs report a smoothed average of the probability to tweet in language one over language two per user in each week for the three language pairs; the shaded area depicts the 95% confidence interval of the smooth fit; the non-smoothed version of the plot is available in supplementary material S.2. (b) Sample effects: the graphs report a smoothed average of the random effects of the active users in each week for all three language pairs (hence for all three language GAMMs); the shaded area depicts the 95% confidence interval of the smooth fit; the non-smoothed version of the plot is available in supplementary material S.3. (c) Behavioural effects: the graphs report the fitted global trend over all users in each week for all three language pairs (hence for all three language GAMMs); the shaded area depicts the 95% confidence interval of the fitted effect. The first vertical line denotes the mobilization of the Russian troops along the Ukrainian border (11th November 2021). The second line denotes the outbreak of the war (24th February 2022).

Figure 4c reports behavioural language changes across all three language pairs, when controlling for the user turnover. For RU over EN, we see a constant decline in the odds over time (-33% to tweet in RU), which further speeds up once the aggression starts (-55%). For UA over EN, we see the exact opposite, as over time users are more likely to tweet in UA (+81% in odds).
This change reverses with the start of the aggression and the outbreak of the war (-40%), but subsequently reaches pre-aggression levels as the war unfolds. Finally, we can see a clear shift from RU to UA even early on (+129%). This switch becomes even more striking with the outbreak of the war, as users are actively changing their behaviour such that the average user is 250% more likely to tweet in UA over RU in the span of a single year. Table 2 reports all relevant changes.

Table 2: Language Choice Effect Sizes between Key Dates

Sample Effects
Language     Start - Aggression   Aggression - War   War - End Study   Aggression - End Study
UA over RU   +49.58%              +13.37%            +77.01%           +100.68%
UA over EN   +28.66%              -58.47%            +63.79%           -31.98%
RU over EN   -6.66%               -63.71%            -24.36%           -72.55%

Behavioural Effects
Language     Start - Aggression   Aggression - War   War - End Study   Aggression - End Study
UA over RU   +130.99%             +52.08%            +129.24%          +248.63%
UA over EN   +63.41%              -33.61%            +92.663%          +27.90%
RU over EN   -38.89%              -38.69%            -20.659%          -51.36%

Notes: Effect sizes for both sample and behavioural changes extracted from the language model described in section 4.5 between key dates. All effect sizes are relative increases in the odds between the two respective dates. Start: start of the study period—13th January 2020. Aggression: first official US report of a mobilization of the Russian troops along the Ukrainian border—11th November 2021. War: outbreak of the war—24th February 2022. End Study: end of the study period—10th October 2022.

Connecting these language shifts with the results on tweeting activity, we find that the initial decline in EN and RU tweeting activity is not limited to monolingual users. Instead, users are actively shifting towards UA by reducing their amount of RU and EN tweets (with a stronger shift from RU than from EN). Similarly, the temporary increase in EN tweeting behaviour leading up to the war can be linked to both UA and RU users. Finally and most importantly, the decline of RU and the rise of UA tweeting behaviour that manifests with the war is strongly driven by a major language shift (2.5x) from RU to UA.

We visualize and demonstrate this substantial behavioural language shift from RU to UA in Figure 5. Figure 5a plots the language proportion of each user (UA to RU; from 0 to 1) that tweets in either language before (y-axis) and after the war (x-axis). Hence, along the straight black line through the origin we have users that do not switch language (top right UA, bottom left RU); users above the line switch to RU, below the line to UA, with users switching completely from one language to the other being located in either the top left (all tweets in UA to all in RU) or the bottom right corner. Statistically significant (p < 0.05) language shifts from before to after the war outbreak for each user are marked. From the figure it becomes evident that there are many users that do not switch language (in both UA and RU), as well as many users clearly switching from RU to UA at various levels, whereas there are only very few switching from UA to RU.

In this sample of users who tweet in either RU or UA both before and after the outbreak of the war (3237 users), we have 1363 users who predominately tweet in RU (>80% of tweets) before the war. Of those, 839 (61.6%) tweet more in UA after the war, with 566 (41.5%) reporting a significant behavioural change (p < 0.05). Out of those 839 users, 341 (25%) even switch to predominately tweeting in UA (>80% of tweets), i.e. perform a "hard-switch", with 296 (21.7%) statistically significant hard-switches (p < 0.05).
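The per-user significance test is a standard two-sided comparison of the UA share before and after 24th February 2022. A minimal R sketch under assumed names; when the continuity correction is disabled, prop.test reports the chi-squared statistic that equals the squared z-statistic of the two-proportion z-test:

# ua_pre/ru_pre and ua_post/ru_post are one user's UA and RU tweet counts
# before and after the outbreak of the war (hypothetical inputs).
shift_p_value <- function(ua_pre, ru_pre, ua_post, ru_post) {
  prop.test(x = c(ua_pre, ua_post),
            n = c(ua_pre + ru_pre, ua_post + ru_post),
            correct = FALSE)$p.value
}

shift_p_value(ua_pre = 5, ru_pre = 45, ua_post = 40, ru_post = 10)
# p far below 0.05: a significant shift towards Ukrainian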
We pick those 296 users and plot their weekly language proportion over time in Figure 5b. Red points denote 100% of the tweets being phrased in RU, blue points denote the same in UA. From the figure, we can clearly see a substantial break and change in behaviour around the time the war breaks out (second black line), as most of the users switch from RU to UA around this mark.

On the Ukrainian side, we have 1172 users who predominately tweet in UA (>80% of tweets) before the war. Of those, 471 (40.2%) tweet more in RU after the war, with only 83 (7.1%) reporting a significant behavioural change (p < 0.05). More importantly, we only observe 35 (3%) hard-switches, out of which 20 (1.7%) are significant (p < 0.05). Hence, there are only very few UA-tweeting users for which we can report a significant switch towards RU after the war.

Figure 5: Language proportion scatterplots of users. The language proportion ranges from [0, 1], with 0 being defined as 100% of a user's tweets being in RU, and 1 as 100% of tweets in UA. Only RU and UA tweets of each user are considered. (a) Scatterplot of users' language proportions before and after the outbreak of the war. We only consider users who tweet in either RU or UA (or both) before and after (n = 3237). The points are colored with respect to each user's shift in language (1 denotes a complete shift to UA, -1 a complete shift to RU, 0 no shift). The straight line through the origin covers all points without a shift. Significant shifts (p < 0.05) are denoted through full (non-empty) points. Significance was calculated by individually comparing each user's language proportion through a two-sided z-test before and after the war outbreak (24th February 2022). n = 1808 (821 significant) shifts towards Ukrainian, n = 818 (106 significant) shifts towards Russian. (b) Scatterplot of users' language proportion in each week over time. Each row (on the y-axis) denotes one of the n = 295 users with a statistically significant hard-switch from RU to UA. The points are colored with respect to each user's language proportion in the respective week (145 total weeks). Missing points indicate that a user was not tweeting in the respective week. The first vertical line denotes the mobilization of the Russian troops along the Ukrainian border (11th November 2021). The second line denotes the outbreak of the war (24th February 2022).

Finally, we analyze potential differences between those RU users that perform a hard-switch to UA and those that do not. We find that there are significant differences (p < 0.05) in the median of various user characteristics between the two groups. Users switching have more followers (+54.5%), a higher tweet frequency (+47.7%) as well as a higher like frequency (+48.9%), and published more Ukraine geo-tagged tweets during the study period (+49.1%), whereas there are only small, non-significant differences in account age (+9.7%; p = 0.13) and followings (+13.8%; p = 0.15). For more information and a full breakdown see section 5.2.

3 Discussion

We collected geo-tagged tweets from Ukraine and analyzed tweeting activity and language choice before and during the Russian-Ukrainian War from 9th January 2020 to 12th October 2022. Due to the nature of our longitudinal dataset, in which we observe the same set of users across the study period, we were able to disentangle shifts in the user sample, arising from user turnover, from behavioural changes of the actively tweeting users.
We find there is a steady long-term shift away from Russian towards Ukrainian already before the war, as the Ukrainian tweet probability rises substantially (vs. Russian; 33% to 47%). This shift can be largely attributed to behavioural changes. The actively tweeting users reduce their number of Russian tweets in favour of Ukrainian over time. This finding is in line with trends observed between the 1989 census and the last conducted census in 2001 (Stebelsky, 2009) and more recently across surveys (Kulyk, 2018), where the share of people reporting Ukrainian as their native language perpetually rose over time. Notably, with the Euromaidan protests and the subsequent Russian military intervention in 2014, this shift seems to have sped up, as citizens' ethnonational identification and everyday language use substantially shifted towards Ukrainian.

The pattern we observe on Ukrainian Twitter is relatively similar. We find a gradual but substantial language shift already pre-war, which drastically accelerates with the start of the Russian aggression in November 2021 and the subsequent outbreak of the war. In the span of a few months, the Ukrainian tweet probability rises from 47% to a remarkable 76%. While some of this increase can be explained by Russian-tweeting users leaving and Ukrainian users joining (+101% in odds to tweet in Ukrainian), the major factor is a behavioural change (+249% in odds to tweet in Ukrainian), with a rise in Ukrainian (+56%) and a decrease in Russian tweeting activity (-20%). Notably, we show that out of those users predominately tweeting in Russian before the war, roughly half of them tweet more in Ukrainian after. Strikingly, around a quarter of them switch to predominately tweeting in Ukrainian, i.e. perform a hard-switch. It is worth noting that we do not observe more than a handful of switches in the other direction. This shift from Russian to Ukrainian is in line with recent reports and small-scale surveys outlining the war as the cause for the recent shifts in language use across Ukraine (Harding, 2023; Warner, 2022). Our work confirms these findings at a large scale on social media and pinpoints this substantial change exactly to the outbreak of the war.

Russian users that perform a hard-switch to Ukrainian seem to be more active on Twitter and have a larger follower base, despite the overall number of followers being fairly low (median of 119 vs. 77). Nonetheless, we find these differences to be statistically significant. While these would not be deemed influencer accounts, their behaviour could be attributed to a form of signalling their opposition to the war to their user base.

Furthermore, we find a long-term behavioural shift away from English tweeting activity up until November 2021. This could be interpreted as a reduction in talking to a broader international audience during that time (Smith, 2015; Christiansen, 2015; Moreno-Fernández and Mella, 2022), due to the fact that English is the most widely understood language on the internet by far (Statista, 2022a). However, not surprisingly, with the mobilization of the Russian troops along the Ukrainian border and specifically in the weeks leading up to the war, with a spike during the outbreak, we observe a substantial shift towards English, as we hypothesize users wanted to let the world know what was happening and called for aid. While we record a large influx of English-speaking users during that time, we can also observe a substantial behavioural shift.
Already active users tweet substantially more in English, independent of the language they normally tweet in. As the war continues to unfold, this somewhat reverses, with some of the newly joined English users leaving and behaviour reverting, although not to pre-aggression levels. With the world being more aware of the situation, and the international community supporting Ukraine in various ways (European Commission, 2023; White House, 2023), we hypothesize users have fewer reasons to continue tweeting in English. Instead, they return to intra-national discussions and thus their native language(s).

We recognize that our study provides a foundation towards a better understanding of how the Ukrainian population reacted to the Russian invasion both on- and offline. Future work could potentially take a closer look at the content and sentiment of tweets through multilingual topic modelling and sentiment analyses. This could be augmented through the use of media objects attached to tweets such as images and videos. An investigation of retweet and follower networks could provide additional information on user characteristics as well as interactions, in order to find differences between the users that are shifting language and those that are not. Naturally, any analysis could be extended to other social media platforms.

In summary, our work investigated tweeting activity and language choice on Ukrainian Twitter before and during the Russian-Ukrainian War through a large-scale longitudinal study. We observe a substantial shift away from the Russian language to Ukrainian, with more than half of the predominately Russian-tweeting users shifting towards Ukrainian, and a quarter of them even performing a hard-switch to Ukrainian, as the war broke out. We may interpret this as citizens' increasing opposition to Russia and a return to the country's linguistic roots, as well as a push towards a conscious self-definition of being Ukrainian. We deem this a powerful political message to send to a global audience.

4 Methods

This study was ethically approved by the ethics commission of the faculty of mathematics, computer science and statistics at Ludwig-Maximilians-Universität (LMU) München, Germany. The reference identifier is EK-MIS-2022-127.

4.1 Data Collection

The original Twitter dataset obtained from the 1% stream consisted of 4,102,982 tweets (see section 2.1 for details). As we began cleaning, we noticed gaps with missing tweets, most likely due to server and internet outages during the real-time data collection process. Hence, we retrospectively identified and filled all gaps. To do this, we first identified all time windows > 10 min without any tweet and added them to our download queue. Days with more than two such time windows were added to the queue as a whole. We then queried the Twitter Research API 2.0 using the tweets/search/all endpoint to obtain tweets with Ukrainian geo-information for all time windows in this queue and added the newly obtained tweets to our original dataset. Finally, we repeated this process for the 15 days with the fewest tweets in our dataset. After removing all duplicates, we added a total of 350,359 additional tweets to our dataset this way. We perform our sensitivity analysis (see section 4.2) after this step. We clean this dataset by removing spam as well as potential spam bots and accounts, as described in section 4.3.
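The gap detection step admits a simple sketch. Assuming a vector of stream timestamps (the names below are ours), windows of more than 10 minutes without any tweet can be collected for the re-download queue as follows:

ts  <- sort(tweets$created_at)                 # POSIXct timestamps
gap <- as.numeric(difftime(ts[-1], ts[-length(ts)], units = "mins"))

# Start and end of every window > 10 minutes, to be re-queried via the
# tweets/search/all endpoint of the Twitter Research API 2.0.
windows <- data.frame(from = ts[-length(ts)][gap > 10],
                      to   = ts[-1][gap > 10])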
4.2 Sensitivity Analysis

After the collection of tweets as described in section 4.1, we evaluate the completeness of the dataset, i.e. whether we were able to recover most of the tweets published in Ukraine during that time, using the following strategy. We draw a random subset of 29 days from our analysis period and draw tweets from the Twitter Research API 2.0 using the tweets/search/all endpoint, which returns all historic tweets that have not been deleted since. We report a coverage of 98.24% (SD: 3.09%), i.e. our dataset contains 98.24% of the tweets still retrievable via the Research API. More importantly, in the opposite direction we are only able to report a coverage of 77.67% (SD: 9.55%), i.e. only 77.67% of the tweets in our dataset are still retrievable retrospectively. Hence, employing our strategy using the real-time stream offers substantially more tweets, namely those which have been deleted since (for more information on tweet deletion and its effects see Pfeffer et al. (2022)). Moreover, this suggests we were able to recover most of the geo-tagged tweets from Ukraine using our strategy.

4.3 Data Cleaning & Pre-processing

For cleaning our dataset, we first train a Twitter bot detection model using a random forest (RF), as described in Yang et al. (2020). We use the exact same model as described in the authors' work (except for removing the attribute profile_use_background_image, which is no longer available from the Twitter API), using the training datasets botometer-feedback, celebrity, political-bots, as well as 100 manually labelled Twitter accounts from our dataset. To evaluate performance, we first set up a nested cross-validation (CV) routine, with a 5-fold CV in both the inner and the outer loop. The inner CV is used for hyperparameter tuning, tuning both the number of trees and the minimum node size of the RF, whereas the outer loop is used for evaluating model performance. This results in an average area under the receiver operating characteristic curve (AUROC) of 0.9837 and an average area under the precision-recall curve (AUPRC) of 0.7707. For our final model, we replicate this procedure by setting up a 5-fold CV on the entire dataset to find the best performing hyperparameters. We then train our RF on the entire dataset and use this model to identify bots and spam accounts in our dataset.

As we are only interested in removing the most prevalent spam, we opt for a conservative removal strategy so as to not falsely remove too many real and non-spam users. Hence, we only remove users with a predicted bot probability > 50% and more than 10 tweets since account creation, as well as users with a predicted bot probability > 30% and more than 10,000 tweets. While thresholds of 50% and 30%, respectively, might not seem conservative, in the given setting, in which the bot class is heavily underrepresented (3.7% of observations in the training dataset), an F1-optimizing threshold on the training dataset would lie far below that. We are somewhat less conservative with users that published over 10,000 tweets, as in most cases they are spam accounts (e.g. related to Bitcoin or NFTs). We do not remove users with fewer than 11 tweets, as even for a human it becomes incredibly difficult to determine whether a user is a bot with such a limited amount of information to draw from. At the same time, we noticed a large influx of new users after the outbreak of the war who exclusively called for help in a short span of time, a behaviour which can easily be mistaken for a bot. Notably, we do not tune the optimal classification threshold, as the outbreak of the war in Ukraine represents an unprecedented event, with an unusual amount of new users joining (see section 2.3).
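The two-threshold removal rule itself is simple. A sketch, assuming a per-user frame users with the RF score bot_prob and the lifetime tweet count n_tweets (names are ours):

# Conservative rule: high-scoring accounts with more than 10 tweets, and
# moderately scoring accounts with more than 10,000 lifetime tweets.
flag <- with(users, (bot_prob > 0.5 & n_tweets > 10) |
                    (bot_prob > 0.3 & n_tweets > 10000))
users_clean <- users[!flag, ]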
Given this unprecedented setting, we expect the distribution between the target label (bot or human) and our features to differ between the bot training dataset and our Ukrainian dataset. Unfortunately, an extensive manual labelling strategy and more elaborate bot detection are beyond the scope of this work and would warrant their own paper. In summary, with this strategy we remove a total of 2,021 users and their tweets from our dataset.

To further identify and remove potential spam accounts, we identify all accounts with more than 100 tweets on a single day (the mean is ∼4.4 and the median is 2), and remove those 257 users from the dataset. We also noticed an unusual amount of tweets containing the word "BTS" (45,579; referring to the Korean K-pop band, see Lee and Nguyen (2020) for more information) with spikes on specific days, which we subsequently filter out. Next, we identify and remove any tweets published by the same user that contain the exact same text as their previous tweet if both tweets were published within a one-minute window. Fifth and finally, we filter out any tweets with the source attribute not being equal to Instagram or Twitter. That way, we discard any tweets automatically published by social media schedulers such as dlvr, which are often used by news agencies or other companies.

4.4 Tweet Modelling

We define the number of tweets $Y_{t,u,l}$ made in week $t$ by user $u$ in language $l$. As tweets are count data, we model $Y_{t,u,l}$ to follow a Poisson distribution with intensity $\lambda_{t,u,l}$, where

$$\lambda_{t,u,l} = \exp(\mu + s_l(t) + W_{u,l}).$$

Here, $\mu$ is a general time-constant intercept, which captures the average tweet intensity over all users, languages and weeks. The $W_{u,l}$ are language-specific time-constant random intercepts for each user $u$, assumed to be normally distributed. They capture by how much the average tweeting behaviour (more or fewer tweets) of each user in each language differs from the general mean $\mu$. Finally, $s_l(t)$ denotes a smooth global time trend for each language $l$ (Ukrainian, Russian, English) and captures changes in the tweeting behaviour over all users over time. Hence, with the latter, we can measure behavioural changes of the users over time (e.g. are users tweeting more with the outbreak of the war?), whereas the random intercepts measure changes in the user sample over time (e.g. are users that enter the platform after the war tweeting more on average?).

We fit the model with the R package mgcv v1.8.41 (Wood, 2017) using bam, the GAM implementation for very large datasets. To speed up the estimation, we use the discrete option, which discretizes covariates to ease storage and increase efficiency. For fitting $s_l(t)$, we employ thin plate regression splines. Our estimation sample consists of $y$ = 1,045,245 observations, with $t$ = 143 weeks, $l$ = 3 languages and $u$ = 13,643 users. For our fitted model, we report an explained deviance of 71.3%.

The effect sizes in the main text are calculated as follows. For the behavioural effects, we derive the change in $s_l(t)$ between two respective dates $t_1$ and $t_2$ and take the exponential, i.e. $\exp(s_l(t_2) - s_l(t_1))$ for each language $l$. The result is the change in expected tweeting activity due to behavioural changes, when controlling for the in- and outflux of users. The sample effects are derived by averaging the random effects of the active users at the two respective dates and taking the exponential, i.e. $\exp(\overline{W}_{t_2,l} - \overline{W}_{t_1,l})$, where $\overline{W}_{t,l}$ is defined as the average random effect in language $l$ over all users $u$ active at time point $t$.
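A minimal sketch of such a fit with mgcv's bam is given below. The data frame d and all column names are our assumptions (one row per user, language and active week); only the ingredients named in the text (thin plate splines, language-specific smooths, user-language random intercepts, the discrete option) are taken from the paper:

library(mgcv)

fit <- bam(
  tweets ~ language                        # language-specific mean levels
    + s(week, by = language, bs = "tp")    # smooth global trends s_l(t)
    + s(user, language, bs = "re"),        # random intercepts W_{u,l}
  family = poisson(link = "log"),
  data = d, discrete = TRUE, method = "fREML"
)

# Behavioural effect exp(s_l(t2) - s_l(t1)), e.g. for RU between two weeks.
# The user value below is arbitrary, as only the smooth's column of the
# term-wise prediction is used; t1 and t2 are illustrative week indices.
t1 <- 96; t2 <- 143
nd <- data.frame(week = c(t1, t2),
                 language = factor("RU", levels = levels(d$language)),
                 user = factor(levels(d$user)[1], levels = levels(d$user)))
tr  <- predict(fit, newdata = nd, type = "terms")
col <- grep("s(week):languageRU", colnames(tr), fixed = TRUE)
exp(tr[2, col] - tr[1, col])               # relative change in intensity

The sample effects follow analogously by averaging the fitted random intercepts of the users active at the two dates before exponentiating the difference.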
The quantity $\exp(\overline{W}_{t_2,l} - \overline{W}_{t_1,l})$ captures the averaged change in expected tweeting activity due to a change in the average tweeting intensity of the active users, when controlling for behavioural changes.

4.5 Language Modelling

To model users' pairwise language probabilities, we refrain from a multinomial modelling strategy, as even with a weekly setup our dataset is particularly large. (To the best of our knowledge, a package with a parallel estimation routine for large datasets that can fit a GAMM for a multinomial distribution does not exist.) Instead, we model each pairwise probability separately through a binomial distribution. Our pairwise evaluation gives us a total of three different language pairs (UA over RU, UA over EN, RU over EN), for which we model the probability $\pi$ to tweet in language one (subsequently $l_1$) over language two (subsequently $l_2$). The order in which we specify these pairs is irrelevant, as the probability to tweet in $l_2$ over $l_1$ is simply $1 - \pi$.

More specifically, we define $X_{t,u}$ as the number of tweets made in week $t$ by user $u$ in $l_1$. We assume $X_{t,u} \sim \text{Binomial}(n_{t,u}, \pi_{t,u})$, where $n_{t,u}$ denotes the total number of tweets made by user $u$ in week $t$ (sum of tweets in $l_1$ and $l_2$) and $\pi_{t,u}$ corresponds to the probability to tweet in $l_1$ over $l_2$. We assume that $n_{t,u}$ is known and instead model $\pi_{t,u}$ by setting

$$\pi_{t,u} = f(\mu + s(t) + W_u),$$

where $f(\cdot)$ is the logistic function. Similarly to before, $\mu$ is a general time-constant intercept, which captures the average mean probability over all users and weeks to tweet in $l_1$ over $l_2$. Again, the $W_u$ are time-constant random intercepts for each user $u$ that capture by how much the average probability differs from the general mean $\mu$, and are assumed to be normally distributed. The smooth global time trend $s(t)$ captures changes in the probability over all users over time. Hence, as before, we can measure behavioural changes of the users over time with the latter (are users actively changing the language they are tweeting in?), whereas the random intercepts measure changes in the sample over time (how does the language probability of users entering/leaving the platform evolve?).

We estimate this model specification for all three aforementioned language pairs with the R package mgcv v1.8.41 (Wood, 2017) using bam, the GAM implementation for very large datasets. To speed up the estimation, we use the discrete option, which discretizes covariates to ease storage and increase efficiency. For fitting $s(t)$, we employ thin plate regression splines. Users not tweeting in either of the two languages of the respective language pair need to be discarded by definition. Hence, for UA over RU our estimation sample consists of $x$ = 194,178 observations, with $t$ = 143 weeks and $u$ = 10,531 users. For UA over EN: $x$ = 146,984, $t$ = 143, $u$ = 9,133. For RU over EN: $x$ = 170,853, $t$ = 143, $u$ = 10,777. For our fitted models, we report explained deviances of 85.8% (UA over RU), 90.5% (UA over EN) and 90% (RU over EN).

The coefficients of a logistic regression, as employed here, must be interpreted with respect to changes in the odds (also known as the odds ratio). The odds are defined as odds = $p/(1-p)$ and describe how likely an event is to happen compared to not happen. In this setting, they describe how likely it is to tweet in language 1 over language 2; for example, a probability of 0.77 to tweet in UA over RU corresponds to odds of $0.77/0.23 \approx 3.35$. The effect sizes in the main text are calculated as follows. For the behavioural effects, we derive the change in $s(t)$ between two respective dates $t_1$ and $t_2$ and take the exponential, i.e. $\exp(s(t_2) - s(t_1))$ for each of the three models. The result is the change in odds to tweet in $l_1$ over $l_2$ due to behavioural changes, when controlling for the in- and outflux of users. The sample effects are derived by averaging the random effects of the active users at the two respective dates and taking the exponential, i.e. $\exp(\overline{W}_{t_2} - \overline{W}_{t_1})$ for each of the three models, where $\overline{W}_t$ is defined as the average random effect over all users $u$ active at time point $t$. This captures the averaged change in odds due to a change in the average tweeting probability of the active users, when controlling for behavioural changes.
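Under the same assumptions as before, one pairwise model (here UA over RU) can be sketched as a binomial bam fit, with the weekly counts entering via the usual two-column response:

library(mgcv)

# p: one row per user and active week, with weekly counts n_ua and n_ru
# (assumed names); user-weeks without UA or RU tweets are discarded.
fit_ua_ru <- bam(
  cbind(n_ua, n_ru) ~ s(week, bs = "tp")   # smooth global trend s(t)
    + s(user, bs = "re"),                  # random intercepts W_u
  family = binomial(link = "logit"),
  data = subset(p, n_ua + n_ru > 0),
  discrete = TRUE, method = "fREML"
)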
5 Extended Data

5.1 Language Distribution

Figure 6: Relative distribution of the top 10 languages across the entire sample after preprocessing and cleaning (n = 2,845,670 tweets). "Undefined" consists of tweets that are too short, contain only hashtags, contain only mentions or only have media (links), for all of which a language is not available.

5.2 Differences in User Characteristics for Russian Users

We evaluate differences in user characteristics among the 1,363 users who predominately tweet in Russian (>80% of tweets), with respect to their language shift with the outbreak of the war, in Table 3. Column 2 reports the median of the respective user characteristic for those 1,067 Russian users that do not perform a statistically significant (p < 0.05) hard-switch to Ukrainian (>80% of tweets) with the outbreak of the war, column 3 for the 296 users that do. To determine significance, we employ a two-sided z-test on each user's language proportion (% tweets in UA) before and after the outbreak of the war. Column 4 reports the relative difference from the switch group to the no-switch group, with bold values indicating significant differences between the two groups (p < 0.05). Column 5 reports the p-value of the two-sided statistical significance test on the difference in median between the two groups using a chi-squared test. Column 6 reports the chi-squared statistic.

Table 3: Median % Differences in User Characteristics

User Characteristic      No Switch   Switch   Difference   P-Value   χ2
Followers                77          119      +123.61%     0.004     8.223
Tweet Frequency          0.79        1.16     +47.73%      0.021     5.352
Like Frequency           0.84        1.25     +48.93%      0.021     5.352
# of Tweets in Ukraine   57          85       +49.12%      0.001     10.639
Account Age (Month)      98.28       107.84   +9.73%       0.127     2.326
Followings               116         132      +13.9%       0.155     2.023

Notes: n = 1,067 users in the no switch group, n = 296 users in the switch group. Followers are the number of accounts that follow a user. The tweet frequency reports the number of tweets per day. The like frequency reports the number of liked tweets (by the user) per day. "# of Tweets in Ukraine" reports the number of tweets in our dataset. The account age reports the number of months a user account has existed from account creation to their latest tweet in our dataset. Followings report the number of accounts a user is following. All user characteristics (except # of Tweets in Ukraine) are derived from the Twitter API, using the provided fields accompanying the user's latest tweets.

Funding Statement

This work is supported by the Helmholtz Association under the joint research school "Munich School for Data Science - MUDS". This work is also supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. [ERC-2016-StG-714087], Acronym: So2Sat).

References

Allcott, H., Gentzkow, M., 2017. Social media and fake news in the 2016 election. Journal of Economic Perspectives 31, 211–236.
Barbieri, F., Anke, L.E., Camacho-Collados, J., 2022. Xlm-t: Multilingual language models in twitter for sentiment analysis and beyond, in: Proceedings of the Thirteenth Language Resources and Evaluation Conference, pp. 258–266.

Bigg, M.M., 2022. Russia invaded Ukraine more than 200 days ago. Here is one key development from every month of the war. URL: https://www.nytimes.com/article/ukraine-russia-war-timeline.html. Retrieved 2023-01-14.

Christiansen, T.W., 2015. The rise of english as the global lingua franca. is the world heading towards greater monolingualism or new forms of plurilingualism? Lingue e Linguaggi, 129–154.

Dabbish, L., Farzan, R., Kraut, R., Postmes, T., 2012. Fresh faces in the crowd: turnover, identity, and commitment in online groups, in: Proceedings of the ACM 2012 conference on computer supported cooperative work, pp. 245–248.

Davidson, B.I., Jones, S.L., Joinson, A.N., Hinds, J., 2019. The evolution of online ideological communities. PloS one 14, e0216932.

Dowd, C., Justino, P., Kishi, R., Marchais, G., 2020. Comparing 'new' and 'old' media for violence monitoring and crisis response: evidence from kenya. Research & Politics 7.

Dwarakanath, L., Kamsin, A., Rasheed, R.A., Anandhan, A., Shuib, L., 2021. Automated machine learning approaches for emergency response and coordination via social media in the aftermath of a disaster: A review. IEEE Access 9, 68917–68931.

Dwivedi, Y.K., Ismagilova, E., Rana, N.P., Raman, R., 2021. Social media adoption, usage and impact in business-to-business (b2b) context: A state-of-the-art literature review. Information Systems Frontiers, 1–23.

Dzogang, F., Lansdall-Welfare, T., Cristianini, N., 2016. Seasonal fluctuations in collective mood revealed by wikipedia searches and twitter posts, in: 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), IEEE. pp. 931–937.

Eichstaedt, J.C., Weidman, A.C., 2020. Tracking fluctuations in psychological states using social media language: A case study of weekly emotion. European Journal of Personality 34, 845–858.

Euronews, 2021. US alleges 'unusual' Russian troop movements near Ukrainian border. URL: https://www.euronews.com/2021/11/11/us-alleges-unusual-russian-troop-movements-near-ukrainian-border. Retrieved 2023-03-30.

European Commission, 2023. EU-Ukraine: Standing together. URL: https://eu-solidarity-ukraine.ec.europa.eu/eu-ukraine-standing-together_en. Retrieved 2023-03-30.

Flamino, J., Galeazzi, A., Feldman, S., Macy, M.W., Cross, B., Zhou, Z., Serafino, M., Bovet, A., Makse, H.A., Szymanski, B.K., 2023. Political polarization of news media and influencers on twitter in the 2016 and 2020 us presidential elections. Nature Human Behaviour, 1–13.

Harding, L., 2023. 'A generational shift': war prompts Ukrainians to embrace their language. The Guardian. URL: https://www.theguardian.com/world/2023/mar/06/russia-ukrainians-embrace-language-war. Retrieved 2023-03-29.

Hu, Y., Wang, R.Q., 2020. Understanding the removal of precise geotagging in tweets. Nature Human Behaviour 4, 1219–1221.

Jurgens, M., Helsloot, I., 2018. The effect of social media on the dynamics of (self) resilience during disasters: A literature review. Journal of Contingencies and Crisis Management 26, 79–88.

Kaufhold, M.A., Rupp, N., Reuter, C., Habdank, M., 2020. Mitigating information overload in social media during conflicts and crises: design and evaluation of a cross-platform alerting system. Behaviour & Information Technology 39, 319–342.

Kulyk, V., 2018.
Shedding russianness, recasting ukrainianness: The post-euromaidan dynamics of ethnonational identifications in ukraine. Post-Soviet Affairs 34, 119–138. Laitin, D.D., 2000. Language conflict and violence: the straw that strengthens the camel’s back. European Journal of Sociology/Archives Européennes de Sociologie 41, 97–137. Lamb, W., 2022. Rebuilding Ukraine will cost at least $349 billion, a new report estimates. The New York Times URL: https://www.nytimes.com/live/2022/09/10/world/ukraine-russia-war# rebuilding-ukraine-349-billion-dollars. Retrieved 2023-04-14. Lee, J.H., Nguyen, A.T., 2020. How music fans shape commercial music services: A case study of bts and army., in: ISMIR, pp. 837–845. Mäkinen, M., Wangu Kuira, M., 2008. Social media and postelection crisis in kenya. The interna- tional journal of press/politics 13, 328–335. Marples, D.R., 2021. The War in Ukraine’s Donbas: Origins, Contexts, and the Future. Central European University Press. Marshall, C.A., 2002. Post-soviet language policy and the language utilization patterns of kyivan youth. Language Policy 1, 237–260. Moreno-Fernández, F., Mella, H.Á., 2022. Reexamining the international importance of languages. HCIAS Working Papers on Ibero-America . Morozov, E., 2012. The net delusion: The dark side of Internet freedom. PublicAffairs. Mosleh, M., Pennycook, G., Arechar, A.A., Rand, D.G., 2021. Cognitive reflection correlates with behavior on twitter. Nature communications 12, 921. Soldiers, 2022. NDTV, Russia-Ukraine soldiers-separatists-sanctions-a-timeline-of-the-russia-ukraine-crisis-2782377. Retrieved 2023-03-30. Crisis. URL: Separatists, NDTV Sanctions: The Timeline https://www.ndtv.com/world-news/ Of A 19 OHCHR, 2023. Ukraine: civilian casualty update 24 April 2023. URL: https://www.ohchr.org/en/ news/2023/04/ukraine-civilian-casualty-update-24-april-2023. Retrieved 2023-04-26. Panek, E., Hollenbach, C., Yang, J., Rhodes, T., 2018. The effects of group size and time on the for- mation of online communities: Evidence from reddit. Social Media+ Society 4, 2056305118815908. Pavlenko, A., 2008. Multilingualism in post-soviet countries: Language revival, language removal, and sociolinguistic theory. International journal of bilingual education and bilingualism 11, 275– 314. Pfeffer, J., Mooseder, A., Hammer, L., Stritzel, O., Garcia, D., 2022. This sample seems to be good enough! assessing coverage and temporal reliability of twitter’s academic api. arXiv preprint arXiv:2204.02290 . Ransbotham, S., Kane, G.C., 2011. Membership turnover and collaboration success in online com- munities: Explaining rises and falls from grace in wikipedia. Mis Quarterly , 613–627. Ratten, V., 2022. The ukraine/russia conflict: Geopolitical and international business strategies. Thunderbird International Business Review . Reuter, C., Hughes, A.L., Kaufhold, M.A., 2018. Social media in crisis management: An evaluation and analysis of crisis informatics research. International Journal of Human–Computer Interaction 34, 280–294. Rogstadius, J., Vukovic, M., Teixeira, C.A., Kostakos, V., Karapanos, E., Laredo, J.A., 2013. Cri- sistracker: Crowdsourced social media curation for disaster awareness. IBM Journal of Research and Development 57, 4–1. Sacco, V., Bossio, D., 2015. Using social media in the news reportage of war & conflict: Opportu- nities and challenges. The journal of media innovations 2, 59–76. Sadri, A.M., Hasan, S., Ukkusuri, S.V., Cebrian, M., 2018. Crisis communication patterns in social media during hurricane sandy. 
Transportation research record 2672, 125–137. Saroj, A., Pal, S., 2020. Use of social media in crisis management: A survey. International Journal of Disaster Risk Reduction 48, 101584. Smagulova, J., 2006. Kazakhstan: Language, identity, and conflict. Innovation: The European Journal of Social Science Research 19, 303–320. Smith, L.E., 2015. English as an international language: No room for linguistic chauvinism. Journal of English as a Lingua Franca 4, 165–171. Statista, 2022a. Infographic: English Is the Internet’s Universal Language. URL: https://www. statista.com/chart/26884/languages-on-the-internet. Retrieved 2023-03-27. Statista, 2022b. Most popular social media by age Ukraine 2021. URL: https://www.statista. com/statistics/1256255/most-popular-social-media-by-age-ukraine/. Retrieved 2023-03- 28. Stebelsky, I., 2009. Ethnic self-identification in ukraine, 1989–2001: why more ukrainians and fewer russians? Canadian Slavonic Papers 51, 77–100. Steinert-Threlkeld, Z.C., Chan, A.M., Joo, J., 2022. How state and protester violence affect protest dynamics. The Journal of Politics 84, 798–813. Stewart, P., Ali, I., 2021. Pentagon says it continues to see unusual Russian military ac- Reuters URL: https://www.reuters.com/world/europe/ tivity near Ukraine border. pentagon-says-it-continues-see-unusual-russian-military-activity-near-ukraine-2021-11-15/. UNHCR, 2023. Ukraine refugee situation. URL: https://data.unhcr.org/en/situations/ ukraine. Retrieved 2023-04-14. 2022. A., survey Warner, use, war-in-ukraine-spurs-decline-in-russian-language-use-survey-shows/. 2023-03-29. in ukraine Multilingual russian-language https://multilingual.com/ Retrieved decline shows. URL: spurs War in 20 White House, of Supporting Ukraine. https://www.whitehouse.gov/briefing-room/statements-releases/2023/02/21/ fact-sheet-one-year-of-supporting-ukraine/. Retrieved 2023-03-30. FACT SHEET: One Year 2023. URL: Wong, A., Ho, S., Olusanya, O., Antonini, M.V., Lyness, D., 2021. The use of social media and online communications in times of pandemic covid-19. Journal of the Intensive Care Society 22, 255–260. Wood, S.N., 2017. Generalized additive models: an introduction with R. CRC press. World Bank, 2023. Ukraine rapid damage and needs assessment: February 2022 - february 2023 (english). Washington, D.C. : World Bank Group. URL: http://documents.worldbank.org/ curated/en/099184503212328877/P1801740d1177f03c0ab180057556615497. Wright, S., 2012. Language policy, the nation and nationalism. Cambridge University Press. Cam- bridge Handbooks in Language and Linguistics, p. 59–78. Yang, K.C., Varol, O., Hui, P.M., Menczer, F., 2020. Scalable and generalizable social bot detection through data selection, in: Proceedings of the AAAI conference on artificial intelligence, pp. 1096–1103. Zhuravskaya, E., Petrova, M., Enikolopov, R., 2020. Political effects of the internet and social media. Annual review of economics 12, 415–438. 21
ai_researcher
1
Gamification_and_Simulation_Distance_Education_for_Industrial_Engineering_Students.pdf
Distributed Applications in Gamification of the Learning Process

Martin Zagar
Web and Mobile Computing Department
RIT Croatia
Zagreb, Croatia
[email protected]

Nikola Draskovic
International Business Department
RIT Croatia
Zagreb, Croatia
[email protected]

Matija Sipek
Web and Mobile Computing Department
RIT Croatia
Zagreb, Croatia
[email protected]

Branko Mihaljevic
Web and Mobile Computing Department
RIT Croatia
Zagreb, Croatia
[email protected]

Abstract— Driven by the fact that many of us experienced a softer or not-so-soft lockdown, a couple of instructors at our university set out to develop a collaborative tool that could help with the online delivery and gamification of two courses delivered in the Business and IT curriculums we offer to our students. That tool can be described as a decentralized web application that simulates Internet marketing principles and helps in the gamification of the learning process for our students. We planned our web application for Internet marketing simulation as a gamification of the learning process, one of the foundations of active learning: for International Business students in the Internet Marketing course, it provides new class activities in which teams compete in an online simulation of Internet marketing principles; for IT students, it provides practice in developing the web application and in adopting Blockchain technologies for the distributed reports, which need the consensus of all teams included in the simulation. The proposed solution includes the design of the business-logic simulation and the use of four main digital marketing tools – social networking, content creation and sharing, search engine marketing, and display advertising – in hands-on online class exercises.

Keywords— Distributed applications in learning, gamification of the learning process, online delivery of classes, Internet marketing.

I. INTRODUCTION

Studying, planning, and developing distributed applications differs significantly from work on traditional, centralized software applications, namely in managing internal persistent memory and the operations conducted with it. From an educational point of view, this process is not yet properly covered. IT students lack knowledge of how to build these specific applications due to a lack of business background and related experience. On the other hand, Business students may have the knowledge of the business logic an Internet simulation would apply, but they have no technical knowledge needed for the application development. For the IT students, the switch from centralized application deployment to a decentralized approach introduces new options regarding different principles in software development, where access management, integrity, and immutability are imperative. Business students need to understand Internet marketing principles comprehensively, so there is a need for shifting the educational process for both IT and Business students. Also, according to the common rationale, a motivating learning environment will result in better-performing students and a higher level of satisfaction with both the delivered courses and the instructors [1]. Therefore, it is essential for curriculum designers to develop programs and assignments that will challenge students but also provide them with valuable experiences, especially real-life experiences [2]. According to [3], simulated real-world experience allows students to test their skills before leaving the educational environment.
Based on current students' feedback on the courses Internet Marketing (in the Business curriculum) and Web App Development (in the IT curriculum), students would like more real-world and real-time examples and as much real-world experience as possible [4]. The main purpose of the Internet marketing simulation is to provide both groups of students with course-specific knowledge and experience. In other words, the aim of the Internet simulation is for students to develop a better understanding of theoretical fundamentals and course topics in a more interesting and dynamic way. Having in mind the collaborative nature of this project, the proposed simulation enables active learning in two courses within two different curriculums. The simulation supports the idea of topic delivery in a blended way: lectures can be delivered face-to-face, while hands-on activities for both student sections can be delivered through the online Internet marketing simulation. This learning-by-doing approach already exists in different Internet marketing simulations, such as the Stukent Mimic Pro Simulation [5], Markstrat [6], or Anylogic [7], which we used as benchmarks to initially set the parameters of our application. The main problem with the existing solutions is that, unlike our application, they are not fully adaptable to all market dependencies, which is a prerequisite for fully online delivery of the courses.

II. APPROACH AND METHODS

The initial version of the Internet Marketing simulation comprised a simple business environment where students were supposed to manage digital marketing efforts for a hypothetical smartphone brand with a limited range of three devices with different technical specifications and target markets. The simulation features an administration panel that provides the instructor with an option to modify various simulation parameters. Once the simulation is set up, the first round is a test round, which gives students an opportunity to get familiar with the interface. Both test and regular rounds consist of planning the budgets per digital platform and decisions about other parameters (e.g., keyword strategy). After each round, activity reports (i.e., digital platform insights/analytics) are generated, providing students with valuable feedback on their actions. Additionally, students have an option to review market reports (containing the competition's numbers and consumer preferences), but these reports require spending a certain amount of virtual money. Based on inputs from the various reports, students have to manage digital marketing activities, such as content creation and content sharing/promotion via social media channels. The simulation gives students an option to decide how much of the virtual money should be spent per turn. The simulation is designed as a three-step process:

A. Planning stage

In the planning stage, budgets, message strategy, keywords, and targeting options are set. Each student or team has to make a decision on budget allocation and spending per turn. The total simulation budget is predefined by the instructor. The weekly budget has to be split between products/brands and different communication channels/activities. The spending threshold per channel/activity is predefined (i.e., spending less than the threshold value will have no impact). Furthermore, more focused spending (e.g., limited to one product and/or fewer channels/activities) is more efficient, like in reality.
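To make the planning-stage rules concrete, the sketch below expresses the two mechanics just described: allocations below the per-channel threshold have no impact, and more focused plans earn an efficiency bonus. This is an illustration only, not the paper's implementation (the actual front end, described next, is built in JavaScript); the channel names, threshold values, and the 10% bonus factor are all hypothetical.

```python
# Hypothetical planning-stage rules: thresholds and focus bonus are illustrative.
THRESHOLDS = {"website_content": 100, "seo": 100, "fb_content": 100}

def validate_plan(plan, weekly_budget):
    """Return warnings for allocations that would have no impact in the simulation."""
    warnings = []
    if sum(plan.values()) > weekly_budget:
        warnings.append("plan exceeds the weekly budget")
    for channel, amount in plan.items():
        if 0 < amount < THRESHOLDS.get(channel, 0):
            warnings.append(f"{channel}: {amount} EUR is below the threshold, no impact")
    return warnings

def focus_bonus(plan):
    """More focused spending (fewer active channels) is more efficient."""
    active = sum(1 for v in plan.values() if v > 0)
    return 1.0 + 0.10 * (len(plan) - active)  # hypothetical 10% per unused channel

plan = {"website_content": 500, "seo": 200, "fb_content": 0}
print(validate_plan(plan, weekly_budget=2400), focus_bonus(plan))
```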
This stage is completely developed and deployed in JavaScript technologies and is accessible through the front end of our application.

B. Execution of the plan

This step is done on a daily or weekly turn, depending on the class delivery type. In the execution of the plan, students can make some low-scale changes to optimize the promotional effort. Each student or team has to select the keyword (or even multiple keywords, depending on the channel) per channel/activity. The keyword is the main topic their communication is focused on (more details about the keywords in Section III.B). Keywords have a different impact on the efficiency of promotional activities depending on the product features, target audience preferences, and the proposed budget. Synergy across channels can be achieved if communication is more focused (e.g., the same keyword is used across all communication channels). In other words, consistency is crucial. Students/teams are limited here by a drop-down menu.

In order to optimize the overall promotional effort, students can precisely define their target audience(s). Targeting primarily refers to the promotional activities, and it is platform-specific. A proper combination of message strategy and targeting is essential for success. Appropriate targeting is also correlated with content creation and its quality score in the context of search marketing. In other words, a higher quality score, together with a consistent bidding strategy, will result in a more efficient search marketing campaign. This stage is completed at the basic level: at the moment, the basic principles of the correlations have been implemented. However, as part of future development, further and more detailed development of the correlations between the various planning variables is expected.

C. Reporting

Once the round/turn is finished, students receive distributed formal reports (digital platform insights/analytics) and some less formal ones (salesforce feedback). For the market report (containing the competition's numbers and consumer preferences), students have to spend a certain amount of their overall marketing budget. Instead of one centralized database, this backend process is designed using distributed Blockchain technologies, on which we have developed the reporting. Initially, we encountered difficulties with payment options when we tried to implement this in the standard Ethereum Virtual Machine [8], which is part of the Ethereum network (the Blockchain is used to distribute the reports, but also to prove the integrity of the data and to reach an overall consensus of all parties in the reporting), so we plan to translate our solution to another Blockchain technology – the HashNet network [9] – where we can run our simulation reporting for free.

Our next step was adding an additional feature for the weekly budgeting that has to be split between products and different communication channels/activities, to enable a multi-user mode in which students (or teams of students) are able to compete with each other. Execution of the weekly plan based on the weekly budgeting enables students to make some low-scale changes in order to optimize the promotional effort.
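The core idea behind the distributed reports above is that every party can independently verify the same report, so agreement on its content serves as a simple integrity check. The sketch below illustrates that idea only; the actual system records reports via Solidity smart contracts on an Ethereum-style network (Section III.C), and the report fields shown are hypothetical.

```python
# Minimal sketch of the report-consensus idea (not the paper's Solidity code):
# every team hashes the same canonical report, so matching hashes imply integrity.
import hashlib
import json

def report_hash(report):
    # Canonical JSON (sorted keys) so every team hashes identical bytes.
    payload = json.dumps(report, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def consensus(team_hashes):
    """All teams must derive the same hash for the turn report to be accepted."""
    return len(set(team_hashes)) == 1

turn_report = {"week": 3, "page_views": 1520, "post_engagement": 310}
h = report_hash(turn_report)
print(consensus([h, h, h]))  # True: every team agrees on the report's integrity
```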
III. SYSTEM DESCRIPTION AND OUTCOMES

In this section, we describe the goals of the simulation and how to interact with it through its three main tabs. The current simulation provides the main features of Internet marketing principles, which could easily be extended upon further feedback.

A. Goals of simulation

From the perspective of users, there are several goals in this simulation:
• The main goal is to increase sales for each of the three smartphone models;
• Additionally, students should increase brand loyalty/image, because it will have a positive impact on sales in the next simulation turns (long-term perspective).

Furthermore, Business students/teams are faced with the following set of specific challenges:
• In order to increase sales, students have to appropriately allocate the marketing budget and set up all the correlated variables (e.g., keywords, post promotions).
• After each turn, students have to take into consideration the outcomes from previous turns and adjust the variables accordingly in order to optimize the performance of the digital marketing activities.

For IT students, the goals are related to the technical aspects of the simulation. Therefore, IT students are dedicated to the collection of real-time inputs from the users for the further development of application features.

Students/teams have to manage digital marketing efforts for three product models (i.e., low-end, mid-end, and high-end products/brands). By default, the smartphone product category has been set up. However, the instructor can customize this and manually change the product category. Since the marketing budget is limited, it is not possible to optimize the promotional effort for all three products at the same time. Therefore, students have to make decisions about certain trade-offs. Ideally, only one model should be promoted within a turn (day or week) or over a period of a few turns (if promoted for two or three weeks in a row, 20% better results should be expected). However, spending too much budget and focus on just one product can potentially have a negative impact on the other models' sales (if promoted four weeks in a row, there is no additional impact; for the fifth week, there is a 25% penalty; for the sixth week a 45% penalty, and so on). Additionally, the available budget would not support the full utilization of all channels within one turn. Therefore, students/teams need to carefully develop their strategies and prioritize. Players should not try to utilize all channels at the same moment because the budget cannot support that.

B. Simulation interface

The first tab of the simulation is the planning stage. Each team first has to decide how much money to spend in one turn. The total simulation budget is predefined. The weekly budget has to be split between products and different communication channels/activities; an example is shown in Table I. The spending threshold per channel/activity is also predefined, e.g., spending less than the threshold value will have no impact. Furthermore, more focused spending (e.g., limited to one product and/or fewer channels/activities) is more efficient. The numbers in the table are just for illustrational purposes. Students are responsible for all inputs.

TABLE I. BUDGET SPLIT PER CHANNEL/ACTIVITY IN COMMUNICATION PLATFORMS

Communication platforms        Phone 1   Phone 2   Phone 3   Budget per channel/activity (EUR)
Web site content               50%       40%       10%       500
SEO                            60%       30%       10%       200
Facebook content production    40%       40%       20%       500
Facebook page promotion        60%       30%       10%       100
Facebook content promotion     70%       30%       0%        200
Youtube content production     100%      0%        0%        500
Youtube content promotion      100%      0%        0%        100
Instagram content production   50%       40%       10%       300
TOTAL                                                        2400

The second tab of the simulation is the message strategy, described by one or more keywords.
Here, students have to select a keyword (or even multiple keywords, depending on the channel) per channel/activity. This is the main topic the communication focuses on. Obviously, synergy across channels can be achieved if communication is more focused (e.g., one keyword is used in all channels). In other words, consistency is crucial. Students are limited here by a drop-down menu. The current list of keywords for the initial simulation consists of some simple points such as Product features in general, Photography, Memory, Distinctive design, Practical design, Technical support reminder, Brand image related, Product differentiation, and Sales promotion support. Keywords have a different impact on the efficiency of promotional activities depending on the product features, target audience preferences, and the proposed budget.

The last tab of the simulation is focused on targeting. Targeting primarily refers to the promotional activities, and it is platform-specific. A proper combination of message strategy and targeting is essential for success. For content creation, targeting will improve its quality score. The content quality score can improve promotional efficiency.

C. Technologies

Our system comprises several interconnected technologies that together create a usable, precise, and responsive system with agility for further implementations. One of the main goals was to focus on privacy guarantees for both students and professors, such as decentralization and differential privacy. The system holds three main sectors that have explicit responsibilities and are interconnected by an intermediary level that manipulates and shares data. A key factor in choosing these technologies was the limited scope of the academic environment we were working in. Thus, we created a working prototype that uses underlying technologies which allow free transactions, in contrast to the transaction costs of the EVM (Ethereum Virtual Machine) [8]. Nevertheless, the system can be migrated to any EVM-based blockchain network by just changing the targeted network and adjusting some communication variables.

Blockchain supporting technologies:
• Truffle Suite – blockchain development environment
• Ganache – a local in-memory blockchain
• Web3.js – a web plug-in for Ethereum nodes
• MetaMask – Ethereum browser extension/crypto wallet
• Solidity – a programming language for smart contracts

Truffle Suite [10] is a development environment that provides a set of tools enabling an easier development lifecycle for blockchain development. It can also be used as a testing framework and an asset pipeline for all blockchains targeting the Ethereum Virtual Machine (EVM). The environment offers built-in smart contract compilation, linking, and deployment to multiple networks, an interactive debugger, many libraries, and automated testing, along with scriptable deployment and migration frameworks, which allows accessible and straightforward integration with different distributed ledger systems. As dependencies in our deployment, we used Truffle v5.0.2 (core: v5.0.2) and truffle-contract v3.0.6.

Ganache is part of the Truffle Suite and presents a simulated personal blockchain environment replicating the behavior of real-world distributed ledgers. It can be used throughout the whole development cycle, allowing the deployment and testing of dApps in a safe but still analogous environment to the real concept.
Ganache gives developers 10 mock accounts holding a certain amount of fake cryptocurrency, and with them we were able to simulate a cohort of individual parties acting within the system. Each blockchain interaction on Ganache is saved with its TX hash, as well as the type of transaction, the sender address, the value (presented in the given currency), the gas used, the gas price and limit, and the block number in which the transaction was mined. For development we used Ganache v2.4.0 with a JSON Remote Procedure Call (RPC) server setup; the price of each unit of gas is set at 20000000000 Wei, the gas limit is 6721975 units, and the hard fork is set to Petersburg.

Web3.js is a set of libraries that allow communication with Ethereum nodes, both local and remote, via different network protocols. To connect Truffle Suite and Ganache to the browser, we used web3.js v1.3.0 as a JavaScript web provider.

MetaMask allows websites to request Ethereum accounts, thus allowing them to operate Ethereum dApps. It does that by adding a provider object that represents an Ethereum user, resulting in an Application Programming Interface (API) that can read data from the blockchains. In our system, we used MetaMask v4.0.2, which was a part of the user interface (UI) as well as a transaction communicator between the user and the simulation. Accordingly, when a distributed application wants to write a transaction to the blockchain, the user gets a secure interface to inspect the transaction before deciding to approve or decline it. Consequently, our simulation asks the user for approval at key system checkpoints through blockchain transactions. The setup of the system requires three central items: detecting the Ethereum provider, detecting the Ethereum network to which the user account is connected, and finally getting the user's Ethereum account.

Solidity is an object-oriented, high-level language for the development of smart contracts. The language utilizes a simple single-slot database in which you can query and modify the code by calling the functions that manage the database. Our system uses Solidity v0.5.0, with which we deploy contracts to the Ganache in-memory representation of the Ethereum blockchain.

Lastly, among the technologies which are not directly connected to the blockchain reporting process but are still a key part of the simulation, we used HTML5, CSS, Bootstrap, JavaScript, and C# for the API. The API business-logic calculation subsystem uses C# with Microsoft (R) Build Engine version 16.2.32702+c4012a063 for the .NET Core version, and the project SDK Microsoft.NET.Sdk.Web with the netcoreapp2.2 target framework. The API processes the largest amount of data and is the main salesforce calculations mechanism. Through the simulation, JavaScript calls are made to the API via specialized controllers, which can handle user interaction with or without the blockchain, to make the initial calculations as well as to finalize the weekly turn report.
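The paper performs this handshake in the browser with web3.js v1.3.0 and MetaMask; as a language-neutral illustration of the same RPC flow against a local Ganache node, a rough equivalent can be sketched with the web3.py library. This assumes web3.py v6's snake_case API; the endpoint (Ganache's conventional RPC port 8545), the account choice, and the transferred value are assumptions for illustration.

```python
# Sketch of the RPC handshake described above, using web3.py v6 instead of the
# paper's in-browser web3.js: connect to a local Ganache RPC server, pick two of
# its mock accounts, and push a checkpoint transaction.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # assumed Ganache endpoint
assert w3.is_connected(), "no Ethereum provider detected"

sender, receiver = w3.eth.accounts[0], w3.eth.accounts[1]  # two mock accounts
tx_hash = w3.eth.send_transaction({
    "from": sender,
    "to": receiver,
    "value": w3.to_wei(0.001, "ether"),
    "gasPrice": 20_000_000_000,  # 20 Gwei, matching the paper's Ganache setup
})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print(receipt.blockNumber, receipt.gasUsed)  # block in which the tx was mined
```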
D. System Communication Flow

At the start of the simulation, the admin enters pre-determined data, which is the starting benchmark of globally defined values that define the success of each user's overall situation. These elements include the social media network's number of likes, post engagement, page views, average post reach, etc., and they are equally set for all users. This is the first interaction with the blockchain reporting mechanism, as transparency and immutability of this data between all users are imperative.

Secondly, the user starts defining the communication platform activities given in Table I. This data is gathered on the client side and transformed into the JavaScript Object Notation (JSON) data-interchange format. The data is then pushed directly to the API at the beginning of a weekly turn and cannot be changed anymore for that week; on the next weekly turn, the student enters new choices. The API immediately creates objects and calculates dependencies, which are used to produce the rudimentary figures that impact the core strategy results (platform insight analytics and salesforce feedback) throughout the turn. At the end of the weekly turn, the remaining user choice data regarding Brand content, Mobile content, Google ads, and Facebook, Instagram, and YouTube ads keywords is sent to the API in order to complete the analytics for the given turn. In parallel, the API takes the global, blockchain-held information that influences the current week's results. Afterward, the finalized weekly calculations from the API are combined with the blockchain-saved global data in order to produce weekly progress reports, as well as to present the total results at the end of the simulation.
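The exact JSON schema and controller routes are not given in the paper, so the sketch below only illustrates the client-to-API step just described: a weekly plan is packaged as JSON and posted to a turn controller, after which it is locked for the week. The field names and route are hypothetical; in the actual system the client is JavaScript and the API is a C#/.NET Core service.

```python
# Illustrative client-side packaging of a weekly turn (hypothetical schema/route).
import json
from urllib import request

weekly_plan = {
    "week": 3,
    "team_id": "team-07",
    "budget_split": {  # percentages per phone plus EUR amount, as in Table I
        "website_content": {"phone1": 50, "phone2": 40, "phone3": 10, "eur": 500},
        "seo": {"phone1": 60, "phone2": 30, "phone3": 10, "eur": 200},
    },
    "keywords": {"facebook": "Distinctive design", "google_ads": "Photography"},
}

req = request.Request(
    "http://localhost:5000/api/turn",  # hypothetical controller route
    data=json.dumps(weekly_plan).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
try:
    # Once accepted, the plan cannot be changed for the rest of the week.
    with request.urlopen(req, timeout=2) as resp:
        print(resp.status)
except OSError as err:  # no simulation API running locally
    print("submission skipped:", err)
```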
E. System Testing results

Fig. 1 presents the total amount spent per user per simulation cycle. It is important to emphasize that Ethereum transaction fees vary constantly, and as Ethereum is the second-highest-valued cryptocurrency, these fee costs are not insignificant for the user. The testing was conducted for the period from November 12, 2020, to November 17, 2020.

[Fig. 1. Preliminary user spending per simulation cycle results – total amount spent (roughly 0.06686–0.06694 ETH) per user account, for accounts 1–10.]

The first four user accounts were tested within a space of a couple of hours, so the resulting prices fluctuate slightly. Fig. 1 presents data from Blockchair, a tool that provides different data regarding Ethereum variables [8]. The slight fall in the transaction costs of user accounts 5 and 6 can be seen in Fig. 2. As of the 14th, the average daily price was 0.00259 ETH, and on the 15th 0.00241 ETH, as compared to the previous and following days, when it circulated around 0.0032 ETH. Finally, the last four user accounts were tested successively, with the time between transactions being around 30 seconds.

[Fig. 2. Ethereum average transaction fee variation.]

Fig. 3 presents the average time per user's transaction. As can be seen, the time varies between 3 and 6 seconds per transaction, which is within the average time needed for transactions in Ethereum [8]. From the perspective of simulation reporting, it shows that reports are quickly distributed among the students/teams, and in this way they have a distributed reporting system that can be verified by each team (which is important from the integrity perspective). The time to finality of a transaction on the Ethereum network could be a problem for a cryptocurrency application, but for proving the integrity of report transactions in our simulation it is not imperative. In this way, students can run the simulation in multi-user mode to compete with each other and still get consistent reporting, whose integrity is proved and shared by the complete network.

[Fig. 3. Average time per all users' transactions.]

IV. CONCLUSIONS AND SUMMARY

Our system comprises several interconnected technologies that together create a usable, precise, and responsive system with agility for further operations. We covered Internet marketing principles and distributed Web application development based on Blockchain reporting. One of the main goals was to focus on privacy guarantees for both students and professors, such as decentralization and differential privacy. The system holds three main sectors which have explicit responsibilities and are interconnected by an intermediary level that manipulates and shares data.

Our application provides a high level of customization through the admin tools accessible by the instructors, but also through the user panel and different options based on the distributed Blockchain reports. Also, our application can run in multi-user mode, so students (or teams of students) are able to compete with each other. This enables the gamification of the learning process, which is one of the foundations of active learning and one of the pillars of our transition to digital delivery of teaching and learning activities. International Business students get a better understanding of Internet marketing principles not only by using this application but also by partly designing the business logic (the other parts fall to the instructor of this course), so they construct the knowledge and understanding themselves. IT students build a real-world application (and not, as usual, some in-class application no one uses after they complete the course) and are able to interact with real users (in this case, online, with their counterpart International Business students) about the user experience of their application (usually they are only able to get the instructor's feedback, which is more theoretical). Both boost the way they construct their knowledge and understanding. With this overall simulation approach, their later probability of failure in their businesses will be lower. This option is also tightly related to changes in IT technologies for real-world applications and online course deliveries, and we used the COVID-19 outbreak to boost the capabilities and competencies of our international students that are now online.

ACKNOWLEDGMENT

This work is supported by the RIT PLIG grant.

REFERENCES

[1] M. Žagar, S. J. Zilora, and B. Mihaljević, "International student cooperation in Capstone research project," Proceedings of the 2019 IEEE Global Engineering Education Conference (EDUCON), pp. 5-7, 2019. doi: 10.1109/EDUCON.2019.8725158
[2] M. Žagar and N. Drašković, "Work-in-Progress: Internet Marketing Simulation for Project-Based Learning," Proceedings of the 2020 IEEE Global Engineering Education Conference (EDUCON), pp. 8-11, 2020. doi: 10.1109/EDUCON45650.2020.9125319
[3] L. Barack, "Real-world experiences are crucial for students," Education Dive, Industry Dive, 12 Sept. 2018. [Online]. Available: https://www.educationdive.com/news/real-world-experiences-are-crucial-for-students/531960/
[4] M. Žagar, S. J. Zilora, and B. Mihaljević, "Assessment of Student Learning in Capstone Project," EDULEARN19 Proceedings, pp. 1207-1211, 2019. doi: 10.21125/edulearn.2019.0371
[5] Stukent, Mimic Pro Simulation, 2020. [Online]. Available: https://www.stukent.com/mimic-pro-simulation/
[6] StratX Simulations, Markstrat Simulation, 2020. [Online]. Available: https://web.stratxsimulations.com/simulation/strategic-marketing-simulation/
[7] Anylogic, Anylogic Simulation Software, 2020. [Online]. Available: https://www.anylogic.com/
[8] A. Maiboroda, "Ethereum Virtual Machine (EVM)," 2020. [Online]. Available: https://ethereum.org/en/developers/docs/evm/
[9] J. Maričević, "Hashnet - a transaction superhighway," Tolar.io, 2020. [Online]. Available: https://tolar.io/hashnet
[10] Truffle, Truffle Suite for Smart Contracts, 2020. [Online]. Available: https://www.trufflesuite.com/
ai_researcher
3
Building_the_“_Popper_Machine_”__A_Necessary_Research_Goal_for_Systems_Biology.pdf
arXiv:2201.03702v2 [cs.AI] 25 Jan 2022

Learning Logic Programs From Noisy Failures

John Wahlig
St Hilda's College
Supervised by Dr Andrew Cropper
University of Oxford

A thesis submitted for the degree of Master of Science
Computer Science
Trinity 2021

Acknowledgements

I first would like to thank Andrew Cropper for his incredible guidance while supervising this project. His insight and expertise were immeasurable. I would next like to thank Rolf Morel for all of his help, advice, and morale-boosting conversations. I would also like to thank Brad Hunter for his insightful discussions and for the pleasure of working beside him. I give utmost thanks to my parents for affording me this incredible opportunity and for their endless support and encouragement. Nothing I have achieved would be possible without their generosity and sacrifice. I would lastly like to thank Stephen, Brian, and Abby for inspiring me every day to be the best that I can be, and Alba for just about everything.

Abstract

Inductive Logic Programming (ILP) is a form of machine learning (ML) which, in contrast to many other state-of-the-art ML methods, typically produces highly interpretable and reusable models. However, many ILP systems lack the ability to naturally learn from noisy or partially misclassified training data. We introduce the relaxed learning from failures approach to ILP, a noise-handling modification of the previously introduced learning from failures (LFF) approach [13], which is incapable of handling noise. We additionally introduce the novel Noisy Popper ILP system, which implements this relaxed approach and is a modification of the existing Popper system [13]. Like Popper, Noisy Popper takes a generate-test-constrain loop to search its hypothesis space, wherein failed hypotheses are used to construct hypothesis constraints. These constraints are used to prune the hypothesis space, making the hypothesis search more efficient. However, in the relaxed setting, constraints are generated in a more lax fashion so as to avoid allowing noisy training data to lead to hypothesis constraints which prune optimal hypotheses. Constraints unique to the relaxed setting are generated via hypothesis comparison. Additional constraints are generated by weighing the accuracy of hypotheses against their sizes to avoid overfitting, through an application of the minimum description length. We support this new setting through theoretical proofs as well as experimental results which suggest that Noisy Popper improves the noise-handling capabilities of Popper but at the cost of overall runtime efficiency.

Contents

1 Introduction
  1.1 Motivation
  1.2 Contribution
  1.3 Paper Organization
2 Related Work
  2.1 No Noise Handling
  2.2 Set Covering
  2.3 Sampling Examples
  2.4 Branch and Bound
  2.5 Neural Networks
  2.6 Applications to Popper
3 Learning from Failures (LFF) Framework
  3.1 Logic Programming Preliminaries
    3.1.1 First Order Logic
  3.2 LFF Problem Setting
    3.2.1 Declaration Bias
    3.2.2 Hypothesis Constraints
    3.2.3 Problem Setting
    3.2.4 Generalizations and Specializations
  3.3 Hypothesis Constraints
    3.3.1 Generalization Constraints
    3.3.2 Specialization Constraints
    3.3.3 Elimination Constraints
  3.4 Summary
4 Relaxed LFF Framework
  4.1 Relaxed LFF Problem Setting
  4.2 Relaxed LFF Hypothesis Constraints
    4.2.1 Hypothesis Constraints Applications
  4.3 Relaxed LFF Hypothesis Constraints with Hypothesis Size
    4.3.1 Hypothesis Constraints with Hypothesis Size Applications
  4.4 Summary
5 Noisy Popper Implementation
  5.1 Normal Popper Implementation
    5.1.1 Hypothesis to ASP Encoding
    5.1.2 Generalization Constraints
    5.1.3 Specialization Constraints
    5.1.4 Elimination Constraints
    5.1.5 Banish Constraints
    5.1.6 Normal Popper Worked Example
  5.2 Anytime Algorithm
  5.3 Minimal Constraints
  5.4 Sound Hypothesis Constraints
  5.5 Sound Constraints with Hypothesis Size
  5.6 Noisy Popper Worked Example
  5.7 Summary
6 Experimental Results
  6.1 Noisy Popper vs. Normal Popper
    6.1.1 Experiment 1: East-West Trains
    6.1.2 Experiment 2: List Manipulations
    6.1.3 Experiment 3: IGGP Problems
  6.2 Noisy Popper Enhancements
    6.2.1 Experiment 1: East-West Trains
    6.2.2 Experiment 2: List Manipulations
    6.2.3 Experiment 3: IGGP Problems
  6.3 Summary
7 Conclusions
  7.1 Summary and Evaluation
  7.2 Future Work
Bibliography

List of Figures

1.1 Michalski's original east-west trains problem
6.1 East-West Trains predictive accuracy and learning time (in seconds) for program h1 when varying percentage of noisy training data. Standard error is depicted by bars.
6.2 East-West Trains predictive accuracy and learning time (in seconds) for program h2 when varying percentage of noisy training data. Standard error is depicted by bars.
6.3 IGGP Minimal Decay task predictive accuracy and time when varying percentage of noisy training data. Standard error is depicted by bars.
6.4 IGGP RPS task predictive accuracy and time when varying percentage of noisy training data. Standard error is depicted by bars.
6.5 East-West Trains predictive accuracies and learning times of Noisy Popper variants (in seconds) for program h1 when varying percentage of noisy training data. Standard error is depicted by bars.
6.6 East-West Trains predictive accuracies and learning times of Noisy Popper variants (in seconds) for program h2 when varying percentage of noisy training data. Standard error is depicted by bars.
6.7 Predictive accuracies of maintained best programs for Noisy Popper variants versus the number of programs generated by each system on the evens dataset with 5% training noise. Standard error is depicted by bars.
6.8 IGGP minimal decay task predictive accuracy and time of Noisy Popper variants (in seconds) when varying percentage of noisy training data. Standard error is depicted by bars.
6.9 IGGP rps task predictive accuracy and time of Noisy Popper variants (in seconds) when varying percentage of noisy training data. Standard error is depicted by bars.

Chapter 1

Introduction

1.1 Motivation

A major goal for AI is to achieve human-like intelligence through the imitation of our cognitive abilities [23]. To this end, AI systems often aim to mimic our automatic inductive capacity, in which previous (background) knowledge and prior observations are used to infer upon new observations [30], a complex task having applications in numerous domains such as image classification [16] and autonomous navigation [5]. Notably, humans have the innate ability to filter out outlying or incorrect observations, naturally and accurately handling noisy data or incomplete sets of background knowledge. Many machine learning (ML) systems have been implemented to achieve this inductive behavior, capable of identifying patterns among millions of often noisy datapoints. However, traditional ML methods such as neural networks are typically incapable of expressing their models in forms which are easily comprehensible to humans. Additionally, the knowledge learned by many of these systems lacks transferability and cannot be applied to similar problems. For example, an AI system such as AlphaGo [44] which has learned to effectively play the game of Go on a standard 19 × 19 board may struggle greatly on a board of a different size. Without comprehensibility and transferability, these systems fail to achieve true levels of human cognition [23, 31]. Inductive Logic Programming [32], however, has been an approach more capable of meeting these additional requirements.

Inductive logic programming (ILP) is a form of ML wherein a logic program which defines a target predicate is learned given positive and negative examples of the predicate and background knowledge (BK). The target predicate, BK, and examples are represented as logical statements, typically as logic programs in a language such as Prolog, whose notation we will use throughout this paper.
More precisely, BK defines predicates and ground-truth atoms that the system may use to define the target predicate program. The aim of the system is to learn a program (or hypothesis) that correctly generalizes as many examples as possible, i.e., entails as many positive examples as possible and entails as few negative examples as possible.

Example 1 (ILP Problem) Consider Michalski's trains problem, consisting of 5 trains moving eastbound and 5 moving westbound. We will use this example throughout the paper, and Figure 1.1 below visually depicts the original problem. Each train is comprised of a locomotive pulling a variable number of cars, each with distinct characteristics such as length, number of wheels, shape, number of cargo loads, shape of loads, etc. The goal of the problem is to identify a set of one or more rules which distinguishes the eastbound trains from the westbound. For instance, a solution to the original problem shown in Figure 1.1 would be the following rule: if a train has a car which is short, has two wheels, and its roof is closed, then it is eastbound, and otherwise it is westbound. This problem is easily described in an ILP setting by letting one set of trains, say eastbound, represent positive examples and westbound trains represent negative examples. BK here defines each train and its characteristics with logical predicates for length, number of wheels, shape, etc. Hypotheses for these problems can be easily described using these logical predicates. For example, the rule above would be written as:

eastbound(A) :- has_car(A,B), short(B), two_wheels(B), roof_closed(B).

Here, the hypothesis is claiming that any eastbound train A must have a car B, and B must be short, have two wheels, and its roof must be closed. While most modern ILP systems are able to effectively learn classifiers to solve the original trains problem, more complicated variations have been used to compare system capabilities and predictive accuracies.

A strong motivation behind ILP methods is the high level of comprehensibility possessed by the logic programs that they return when compared to traditional ML models. Programs described by programming languages possess semantics interpretable by both humans and computers. This falls in line with Michie's [28] strong criterion for machine learning and highly explainable AI, whereas traditional ML methods often focus solely on improving the weak criterion, i.e., predictive accuracy. Work in this area of ultra-strong machine learning [36] has demonstrated how human understanding of a system can be improved by better understanding the program, thus motivating this desire for comprehensibility.

Figure 1.1: Michalski's original east-west trains problem

The symbolic nature of the logic programs not only increases their comprehensibility but also allows for ease of lifelong learning and knowledge transfer [46, 27, 9, 10], an essential criterion for human-like AI. Models can be easily reused, and complex models can be built up from solving smaller problems whose logic programs are simply placed in the BK. ILP also constitutes a form of code synthesis, having applications in code completion, automated programming, and novel program invention. Additionally, unlike many traditional machine learning approaches, ILP can generalize exceptionally well with just a few examples [11].
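As a concrete illustration of the problem setup in Example 1, the sketch below encodes trains as lists of car-property records and the eastbound rule as a coverage check over positive (eastbound) and negative (westbound) examples. This is only an illustration: the two trains shown are hypothetical instances, not Michalski's actual dataset, and real ILP systems operate on logical clauses rather than Python functions.

```python
# Hypothetical instances illustrating Example 1's coverage check.
def eastbound(train):
    # eastbound(A) :- has_car(A,B), short(B), two_wheels(B), roof_closed(B).
    return any(car["short"] and car["wheels"] == 2 and car["roof_closed"]
               for car in train)

pos_example = [{"short": True, "wheels": 2, "roof_closed": True},
               {"short": False, "wheels": 3, "roof_closed": False}]
neg_example = [{"short": True, "wheels": 2, "roof_closed": False}]

# The hypothesis should entail the positive example and not the negative one.
print(eastbound(pos_example), eastbound(neg_example))  # True False
```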
At its core, the problem of ILP is efficiently searching the set of all possible hypotheses, the hypothesis space, which is typically very large. Current ILP systems take many approaches to this problem, including but not limited to set covering techniques [40, 33, 1, 4, 45], meta-level encodings [14, 35, 20], and neural networks [17], each with various tradeoffs. Simultaneously, as most machine learning methods must, these systems often navigate the issue of noisy datasets (i.e., misclassified examples). While some systems such as TILDE [4], Progol [33], and Aleph [45] are naturally built to withstand noise to varying degrees, they struggle with building recursive programs, often vital for constructing optimal solutions. Others such as ILASP [25] and Metagol [35] are inherently incapable of generalizing with noisy data - noise being commonly ignored by ILP systems in exchange for soundness or optimality. However, in exchange, ILASP and Metagol are both capable of generating recursive programs and possess varying levels of predicate invention, wherein new predicates are created by the system and defined using BK. Both are useful in constructing compact and often optimal solutions. Handling noise is a fundamental issue in machine learning, as real-world data is typically endowed with misclassifications and human errors. As such, machine learning approaches should be able to handle this noise as well as possible, though this problem is never trivial to solve.

Popper

Popper [13] is an ILP system which takes a unique approach of learning from failures (LFF). Popper uses a three-stage loop: generate, test, and constrain. However, unlike other three-stage approaches [24], Popper uses theta-subsumption [38] in conjunction with failed hypotheses to constrain the hypothesis space. Rather than building up a solution to the problem, Popper essentially narrows down the possible solutions significantly enough that the hypothesis space can be efficiently searched. In the generate stage, Popper generates a program or hypothesis from the hypothesis space which satisfies a set of hypothesis constraints and subsequently tests the hypothesis against the training examples in the test stage. Successful hypotheses are returned as solution programs, while failed ones are used to generate further hypothesis constraints for subsequent iterations. While Popper's approach has notable strengths over some existing techniques, such as ease of generating recursive programs, it is completely unable to handle noise. Programs generated by Popper necessarily entail all positive examples and no negative ones, clearly overfitting in the presence of any misclassified data. It is the objective of this project to modify Popper's approach to allow it to generalize better to noisy datasets without compromising its overall performance capabilities.

1.2 Contribution

The main contribution of this project is an extension to Popper which can handle noise in exchange for the optimality guarantees of the system. For simplicity, this paper will refer to the original version of Popper as Normal Popper and the novel noise-handling version of Popper as Noisy Popper. The main hypotheses of this project are:
• Noisy Popper typically generalizes better than Normal Popper with noisy datasets, being able to return more accurate hypotheses as solutions than Normal Popper, which may be incapable of returning any program at all.
• Noisy Popper does not lose out in its ability to generalize well with noiseless datasets; however, it performs less efficiently than Normal Popper in these environments.

Noise Handling Approach

Noisy Popper makes several modifications and algorithmic changes to Normal Popper to allow it to better generalize to noise. These changes are as follows:
• The first contribution is to alter the LFF framework to one which handles noise. This altered setup is called the Relaxed LFF Framework, and it relaxes the definitions of solutions and optimality, thus also changing how strictly the hypothesis space is searched.
• The second contribution is to introduce theoretically sound constraints in this framework. These sound constraints are used to prune suboptimal hypotheses under the new setting. Some of these use the minimum description length principle to help the system avoid hypotheses which overfit the data. These constraints are described and proved in Propositions 4-14 in Chapter 4.
• The third contribution is Noisy Popper, which implements this relaxed LFF framework in addition to some enhancements to improve noise-handling capacity, including an anytime algorithm approach and efficient minimal constraints.
• The final contribution is an experimental evaluation of Noisy Popper which demonstrates that Noisy Popper on average generalizes better to noisy datasets than Normal Popper, that Noisy Popper generalizes as well as Normal Popper to noiseless datasets, and how each enhancement used to construct Noisy Popper affects its overall performance.

1.3 Paper Organization

This dissertation consists of seven chapters, including this brief introduction to the project. The subsequent chapters will be as follows:
• Chapter 2 will review related works in the field of ILP, including systems which are unable to handle noise, systems which can handle noise, their methods, and how effective they are in practice.
Because of the strict nature of these metarules, MIL systems like Metagol often possess higher inductive bias when compared to predicate declarations which simply define which predicates may appear in the head or body of a clause. These predicate declarations are what Popper uses as its language bias (restrictions which define the initial hypothesis space). Metagol additionally allows for automatic predicate invention wherein novel predicates are created using existing predicates and can be used to simplify hypothesis construction. The major drawback of such an approach however is the need for domain expertise as a user typically needs to define the metarules to be used by the system. Additionally, like Popper, Metagol only returns solutions which entail all positive examples and no negative examples, meaning that the system is naturally incapable of generalizing to noise. ILASP [25] is another ILP system which cannot handle noise, but takes an Answer Set Programming (ASP) approach. With ASP approaches, the problem itself is en- 7 CHAPTER 2. RELATED WORK 8 coded as an ASP problem using logical statements or rules. These rules form a type of constraint satisfaction problem which is then solved by a state of the art ASP solver, generating an answer set solution which satisfies the given problem constraints. While effective, these methods carry drawback as all machine learning approaches do. ILASP works in a similar loop to Popper, generating hypothesis and using them to construct ASP constraints to improve the search in subsequent iterations. These constraints are in the form of boolean formulas over the given set of rules. ILASP pre-computes all of these rules using an ASP encoding for the given ILP problem, constructs additional ASP constraints from these encodings, and finally solves an ad- ditional ASP problem with these new constraints. Pre-computing all rules is not only computationally expensive, but the system also struggles to learn rules with many body literals. Additionally, ILASP does not typically scale as well as other systems as it requires a large amount of grounding with the programs it generates. With noise, a similar issue to Metagol and Popper exists where the system continues to constrain hypotheses until a solution is found which covers all positive examples and no negative examples. HEXMIL [20] is an approach which combines the MIL and ASP settings using a HEX- formalism to encode MIL with external sources, reducing the bottleneck produced by the need to ground all atoms. Like the others, this approach fundamentally cannot handle noise as returned hypotheses must entail all positive examples and no negative ones. Contrasting to these approaches, in this project we introduce Noisy Popper which is capable of generalizing to sets of noisy examples as returned solutions may not perfectly entail all positive examples and no negative ones. 2.2 Set Covering One popular approach to ILP is to use set covering algorithms which progressively learn hypotheses by adding one logical clause at a time, covering a number of positive examples with each. Perhaps the most influential ILP system and one which imple- ments a set covering technique is Progol [33]. Its intuitive approach selects a positive example that has not yet been entailed by the program and generates the bottom clause or the most specific clause that entails that example using the minimal Her- brand model of the set of atoms. 
It then attempts to make this clause as general as possible so that, when added to the program constructed so far, it entails as many new positive examples as possible while avoiding entailing negative examples. However, the model contains a noise parameter which controls the quantity of negative examples that are allowed to be entailed. In this way, the system may avoid overfitting the data. However, this hyperparameter leaves much of the noise handling procedure to the user and is not a default mechanism of the system. Significant fine-tuning is required for Progol to adequately generalize to noisy datasets, a noticeable burden on the user.

Aleph [45] is a popular system based on Progol but built in Prolog. Aleph uses a scoring metric to determine how general to make the bottom clauses. This score can be user-defined. As such, it can be selected so that the system is adaptable to noise: a returned solution may not perfectly entail all positive examples and no negative examples, thus avoiding overfitting. However, like Progol, setting up the hyperparameter environment to accurately learn from noisy data is cumbersome.

TILDE [4] uses an approach of top-down induction of decision trees [39] combined with first-order logic to construct a solution as a decision tree. As with traditional binary decision trees, they are constructed to correctly classify the given set of training examples, with the left and right splits corresponding to conjunctions in the logical statements constructed, though the model produced is not required to cover all given examples. A tree construction where each example corresponds to a single leaf node/classification is entirely possible and would constitute a form of overfitting, so the system takes steps to avoid this as in a traditional machine learning setting. However, this method again requires fine-tuning. Under-pruning the tree can lead to significant overfitting, while over-pruning results in small decision trees which do not fit the data well. Noisy Popper, however, requires no such noise parameters and naturally generalizes to noisy datasets without requiring fine-tuning.

2.3 Sampling Examples

MetagolNT [34] is a noise tolerant extension of the Metagol system, simply acting as a wrapper around the original Metagol algorithm. MetagolNT first generates a random subset of the training examples. It then learns a program which perfectly fits this data using the original Metagol system and finds the accuracy of the resulting program on the remaining unused training data. The system repeats this loop several times and simply returns the program which obtained the highest accuracy. In this way, the returned program will not always perfectly fit all training data as Metagol would, but will often better generalize to noisy datasets. This approach has shown decent results, having even been used accurately for some image classification problems [34]. However, in that same work, the authors acknowledge that the approach has a limited grasp on noise handling and often fails if the noise concentration is too high. The system is largely dependent on a number of factors, including the number of training examples, the size of the random subsets, and the number of candidate programs generated. If too much noise is present in each subset, no program will capture the true underlying data pattern.
If instead the subsets are made too small in order to ensure at least some contain relatively little noise, a similar issue may occur where there are not enough examples to generalize from. Tuning these hyperparameters is not always trivial. Like systems such as Aleph, MetagolNT also requires a difficult-to-use noise parameter which determines how many negative examples are permitted to be entailed. As with most systems, these hyperparameters are difficult to tune effectively, whereas Noisy Popper lacks them entirely.

2.4 Branch and Bound

ILASP3 [26] is a noise tolerant version of ILASP taking the form of a branch and bound approach to ILP. Like ILASP, ILASP3 uses an ASP encoding to constrain the search space in a similar generate, test, constrain loop to Popper. With each hypothesis generated, the system tests which positive examples are not covered, determines why, and uses these failed examples to generate additional ASP constraints for the next iteration, pruning the search space. Unlike ILASP, ILASP3 assigns weights to each example. The system then searches for an optimal program which entails the highest possible sum of weights, rather than simply trying to entail all positive examples and no negative examples. In this way, ILASP3 is designed to handle noisy data. Weights can also be used to correct imbalances in the ratio between positive and negative examples. For example, if there are twice as many positive examples as negative, the negative examples may be weighted twice as much to avoid being ignored as noise by the system; without such reweighting, the system would focus only on entailing as many positive examples as possible, since the total negative weight would be comparatively negligible. However, like the original ILASP system, ILASP3 still pre-computes all ASP rules at each iteration, leading to large computational cost and causing it to struggle when scaling to large datasets.

2.5 Neural Networks

An alternative to these previous ILP approaches is to take a continuous rather than discrete approach to the problem through the use of neural networks. The ∂ILP [17] system uses continuous weight values to determine probability distributions of atoms over clauses. The system uses stochastic gradient descent to alter these weights and minimize the cross-entropy loss of each classified example. Like most standard neural network approaches, the system can be tuned with hyperparameters such as a learning rate and initial clause weights. In this way, the system can be trained to handle some amount of noise, as the returned program may not have zero loss. As with the previous systems, however, this hyperparameter tuning is not always intuitive. Additionally, ∂ILP requires program templates to constrain the set of programs searched, which is another user-defined parameter, requiring some amount of brute-force work in order to generate an efficient search space.

2.6 Applications to Popper

While the noise handling approaches for these ILP systems are worth studying in their own right, the unique LFF framework of Popper means that we cannot apply many of these techniques directly. The general concept of scoring hypotheses used in ILASP3 is one which can be applied to Popper as a means to compare more than just the accuracy of hypotheses in order to prevent overfitting, e.g., we may want to score a short and highly accurate hypothesis higher than a massive but perfectly accurate one.
Ultimately, however, a novel approach to noise handling must be taken with Popper, though we aim to show that the theoretical results used can be extended to other systems in the future, regardless of whether they fall under the LFF framework. Additionally, many of these noise tolerant systems achieve their tolerance through hyperparameters which often make them cumbersome to use and ineffective under default conditions, i.e., significant tuning is usually required to allow the systems to effectively generalize to noisy data. As such, a goal of the Noisy Popper implementation is to make it as natural an extension of Normal Popper as possible, requiring little to no hyperparameters.

Chapter 3 Learning from Failures (LFF) Framework

This chapter will provide an overview of the LFF framework used by Normal Popper and modified by Noisy Popper. First, we will briefly cover the logic programming preliminaries and notation necessary for the rest of the paper, though we will assume some prior knowledge of Boolean logic on the part of the reader. Using this notation, we will formally define the ILP problem setting. We will conclude by explaining the LFF framework and its definitions.

3.1 Logic Programming Preliminaries

To understand the LFF framework, it is necessary to review the framework of logic programming. This section will briefly cover the necessary definitions, based on those found in [8, 11]. We will assume some familiarity with the topic; for a comprehensive overview, interested readers are encouraged to reference [37].

3.1.1 First Order Logic

We will refer to the following definitions from [11] throughout the paper:

• A variable is a character string which starts with an uppercase letter, e.g., A, B, Var.
• A function symbol is a character string which starts with a lowercase letter, e.g., f, eastbound, last, head.
• A predicate symbol is a character string which starts with a lowercase letter, like a function symbol. The arity n of a predicate symbol represents the number of arguments that it takes and is denoted as p/n, e.g., f/1, eastbound/1, last/1, head/2.
• A constant symbol is a function or predicate symbol which has arity 0.
• A term is a variable or constant symbol, or a function or predicate symbol of arity n that is immediately followed by a tuple of n terms.
• We call a term ground if it contains no variables.
• An atom is a logical formula p(t1, t2, ..., tn), where p is a predicate symbol of arity n and ti is a term for i ∈ {1, 2, ..., n}, e.g., eastbound(train), where eastbound is a predicate symbol of arity 1 and train is a constant symbol.
• An atom is ground if all of its terms are ground, like the example in the definition above.
• We represent the negation symbol as ¬.
• A literal is an atom A (a positive literal) or its negation ¬A (a negative literal), e.g., eastbound(train1) is both an atom and a literal, while ¬eastbound(train1) is only a literal, as atoms do not contain the negation symbol.

Clauses

We can use these previous definitions as building blocks to construct the logic programs and constraints we will be using.

Definition 1 (Clause) A clause is a finite (possibly empty) disjunction of literals.

For instance, the following set of literals constitutes a clause:

{eastbound(A), ¬has_car(A,B), ¬two_wheels(B), ¬roof_closed(B)}

We assume all variables in a clause are universally quantified, so explicit quantifiers are omitted. As with terms and atoms, clauses are ground if they contain no variables, so the example above is not ground.
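As an aside, these definitions can be made concrete with a small sketch. The tuple-based encoding below is an illustrative choice of ours, not anything prescribed by Popper, and it simply encodes the example clause above:

```python
# A literal is modeled as (sign, predicate, args); args are variable or
# constant names. Following Section 3.1.1, variables start with an
# uppercase letter.
def is_variable(term):
    return term[:1].isupper()

def is_ground(clause):
    # A clause (a set of literals) is ground if it contains no variables.
    return all(not is_variable(t) for _, _, args in clause for t in args)

# The example clause {eastbound(A), ¬has_car(A,B), ¬two_wheels(B),
# ¬roof_closed(B)}:
clause = {
    (True,  'eastbound',   ('A',)),
    (False, 'has_car',     ('A', 'B')),
    (False, 'two_wheels',  ('B',)),
    (False, 'roof_closed', ('B',)),
}
assert not is_ground(clause)  # the clause contains variables A and B
```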
In logic programming, clauses are typically written in reverse implication form:

h ← b1 ∧ b2 ∧ ... ∧ bn.

Put verbally, the above clause states that the literal h, known as the head literal, is true only if all the literals bi, known as body literals, are true. All the bi literals together form the body of the clause. Note that the head literal must always be a positive literal. We often use shorthand, replacing ← with :- and ∧ with , to ease writing clauses and make them similar to actual Prolog notation; e.g., the above clause we would write as:

h :- b1, b2, ..., bn.

For simplicity, we will use this Prolog notation throughout this paper. We define a clausal theory as a set of clauses. In the LFF setting, we restrict clauses to those which contain no function symbols and where every variable which appears in the head of a clause also appears in its body. These clauses are known as Datalog clauses, and a set of Datalog clauses constitutes a Datalog theory. We also define a Horn clause as a clause with at most one positive literal, as is the case with the above example. We restrict our setting to only Horn clauses and Horn theories, which are sets of Horn clauses. Definite clauses are Horn clauses with exactly one positive literal, while a definite logic program is a set of definite clauses. The logic programs which form our ILP hypothesis spaces will consist only of Datalog definite logic programs.

Substitution

Substitution is an essential logic programming concept and is simply the act of replacing variables v0, v1, ..., vn with terms t0, t1, ..., tn. Such a substitution is denoted by:

θ = {v0/t0, v1/t1, ..., vn/tn}.

For instance, applying the substitution θ = {A/train} to

eastbound(A) :- has_car(A,B), two_wheels(B), roof_closed(B)

yields

eastbound(train) :- has_car(train,B), two_wheels(B), roof_closed(B).

In this example, eastbound(train) would be true if train possesses some car B such that B has two wheels and its roof is closed. A substitution θ unifies atoms A and B if Aθ = Bθ, i.e., applying substitution θ to atoms A and B yields equivalent results.

3.2 LFF Problem Setting

This section formally introduces the definitions of the LFF problem setting. Most of the definitions are taken from [13]. Interested readers should refer to this paper for a more thorough explanation.

3.2.1 Declaration Bias

The LFF problem setting is based on the ILP learning from entailment setting [40], whose goal, as stated in the first chapter, is to take as input sets of positive and negative examples, BK, and a target predicate, and return a hypothesis or logic program which in conjunction with the BK entails all positive examples and no negative examples. All ILP approaches boil down to searching a hypothesis space for such a program. For each ILP problem, the hypothesis space is restricted by a language bias. Though several language biases exist in ILP, our LFF framework uses predicate declarations, which declare which predicates are permitted to appear in the head of a clause in a hypothesis and which are permitted to appear in the body. The declarations are defined as follows:

Definition 2 (Head Declaration) A head declaration is a ground atom of the form head_pred(p, a) where p is a predicate symbol of arity a [13].

For example, for our running trains problem, we would have head_pred(eastbound,1).
Definition 3 (Body Declaration) A body declaration is a ground atom of the form body_pred(p, a) where p is a predicate symbol of arity a [13].

For example, for the trains example, we would have body_pred(has_car,2), body_pred(two_wheels,1), and body_pred(roof_closed,1), among others.

We can then define a declaration bias D as a pair (Dh, Db) where Dh is a set of head declarations and Db is a set of body declarations. The LFF hypothesis space then must be comprised only of programs whose clauses conform to these declaration biases. We define the notion of a declaration consistent clause:

Definition 4 (Declaration Consistent Clause) Let D = (Dh, Db) be a declaration bias and C = h ← b1, b2, ..., bn be a definite clause. We say that C is declaration consistent with D if and only if:

• h is an atom of the form p(X1, X2, ..., Xn) such that head_pred(p, n) ∈ Dh.
• every bi is a literal of the form p(X1, X2, ..., Xm) such that body_pred(p, m) ∈ Db.
• every Xi is a first-order variable.

[13]

Example 2 (Clause Declaration Consistency) Let D = ({head_pred(eastbound,1)}, {body_pred(has_car,2), body_pred(two_wheels,1), body_pred(roof_closed,1)}) be a declaration bias. The following clauses would be declaration consistent with D:

eastbound(A) :- has_car(A,B).
eastbound(A) :- has_car(A,B), two_wheels(A).
eastbound(A) :- has_car(A,B), roof_closed(B).

Conversely, the following clauses are declaration inconsistent with D:

eastbound(A, B) :- has_car(A,B).
eastbound(A) :- has_car(A,B), eastbound(A).
eastbound(A) :- has_car(A,B), has_load(B,C).

With this definition, we can fully define the declaration consistent hypotheses which populate our hypothesis space:

Definition 5 (Declaration Consistent Hypothesis) Let D = (Dh, Db) be a declaration bias. A declaration consistent hypothesis H is a set of definite clauses where each clause C ∈ H is declaration consistent with D [13].

Example 3 (Hypothesis Declaration Consistency) Again, let D be the same declaration bias as in the example above. Then the following hypotheses are declaration consistent:

h1 = { eastbound(A) :- has_car(A,B), two_wheels(B). }

h2 = { eastbound(A) :- has_car(A,B), two_wheels(B).
       eastbound(A) :- has_car(A,B), roof_closed(B). }
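Definitions 4 and 5 amount to simple membership checks, as the following sketch shows. The (head, body) clause encoding is an illustrative assumption of ours, not Popper's internal representation, and the variable condition of Definition 4 is omitted for brevity:

```python
def declaration_consistent_clause(clause, head_decls, body_decls):
    # Definition 4 (ignoring the variable condition): the head
    # predicate/arity must be declared in Dh and each body
    # predicate/arity in Db. A clause is ((pred, args), [(pred, args), ...]).
    (hp, hargs), body = clause
    return ((hp, len(hargs)) in head_decls and
            all((p, len(args)) in body_decls for p, args in body))

def declaration_consistent(hypothesis, head_decls, body_decls):
    # Definition 5: every clause of the hypothesis must be consistent.
    return all(declaration_consistent_clause(c, head_decls, body_decls)
               for c in hypothesis)

# The declaration bias of Example 2:
Dh = {('eastbound', 1)}
Db = {('has_car', 2), ('two_wheels', 1), ('roof_closed', 1)}
ok  = (('eastbound', ('A',)), [('has_car', ('A', 'B'))])
bad = (('eastbound', ('A',)), [('has_car', ('A', 'B')),
                               ('eastbound', ('A',))])
assert declaration_consistent([ok], Dh, Db)
assert not declaration_consistent([bad], Dh, Db)
```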
3.2.2 Hypothesis Constraints

While declaration biases are how we restrict the initial hypothesis space, the LFF framework revolves around pruning the hypothesis space through hypothesis constraints, which we define as in [13]. We first precisely define a constraint:

Definition 6 (Constraint) A constraint is a Horn clause without a head, i.e., a denial. We say that a constraint is violated if all of its body literals are true [13].

We can proceed with a general definition of a hypothesis constraint:

Definition 7 (Hypothesis Constraint) Let L be a language that defines hypotheses, i.e., a meta-language. Then a hypothesis constraint is a constraint expressed in L [13].

Example 4 (Hypothesis Constraints) In both Normal and Noisy Popper, the meta-language used to encode programs takes a form like this:

head_literal(Clause,Pred,Arity,Vars)

which denotes that the clause Clause possesses a head literal with predicate symbol Pred which has an arity of Arity and whose arguments are defined by Vars (note: Vars would be represented by a tuple of size equal to Arity). The following atom:

body_literal(Clause,Pred,Arity,Vars)

analogously defines a body literal appearing in Clause. We can then construct an example of a hypothesis constraint:

:- head_literal(C,p,1,_), body_literal(C,p,1,_)

where the '_'s represent wildcards. This constraint simply states that a clause C cannot contain a predicate symbol p which appears both in the head and the body of the clause, e.g., the clause C = p(A) :- p1(A,B), p(B).

Like with declaration consistent hypotheses, we can now define a hypothesis which is consistent with all hypothesis constraints:

Definition 8 (Constraint Consistent Hypothesis) Let C be a set of hypothesis constraints written in a language L. A set of definite clauses H is consistent with C if, when written in L, H does not violate any constraint in C [13].

3.2.3 Problem Setting

Now that we have defined declaration bias and hypothesis constraints, we can fully define the LFF hypothesis space, which takes a similar form to most ILP hypothesis spaces:

Definition 9 (Hypothesis Space) Let D be a declaration bias and C be a set of hypothesis constraints. Then, the hypothesis space H_{D,C} is the set of all declaration and constraint consistent hypotheses. We refer to any element in H_{D,C} as a hypothesis [13].

We additionally can define the precise LFF problem:

Definition 10 (LFF Problem Input) Our problem input is a tuple (B, D, C, E+, E−) where:

• B is a Horn program denoting background knowledge
• D is a declaration bias
• C is a set of hypothesis constraints
• E+ is a set of ground atoms denoting positive examples
• E− is a set of ground atoms denoting negative examples

[13]

As in [13], we will also define several hypothesis outcomes or types commonly used in the ILP literature [37] which we will refer to moving forward.

Definition 11 (Hypothesis Types) Let (B, D, C, E+, E−) be an input tuple and H ∈ H_{D,C} be a hypothesis. Then H is:

• Complete when ∀e ∈ E+, H ∪ B |= e
• Consistent when ∀e ∈ E−, H ∪ B ⊭ e
• Incomplete when ∃e ∈ E+, H ∪ B ⊭ e
• Inconsistent when ∃e ∈ E−, H ∪ B |= e
• Totally Incomplete when ∀e ∈ E+, H ∪ B ⊭ e
• Totally Inconsistent when ∀e ∈ E−, H ∪ B |= e

[13]

This terminology also helps us define an LFF solution and LFF failed hypothesis:

Definition 12 (LFF Solution) Given an input tuple (B, D, C, E+, E−), a hypothesis H ∈ H_{D,C} is a solution when H is complete and consistent [13].

Definition 13 (LFF Failed Hypothesis) Given an input tuple (B, D, C, E+, E−), a hypothesis H ∈ H_{D,C} fails (or is a failed hypothesis) when H is either incomplete or inconsistent [13].

These definitions correspond to many of the ILP system settings we discussed in Chapters 1 and 2, where a solution entails all positive examples and no negative examples.
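Definitions 11-13 can be summarized operationally in a short sketch. Here entails(e) is a stand-in of ours for a query deciding H ∪ B |= e (in Popper, a Prolog call):

```python
def outcomes(entails, pos, neg):
    # Definition 11's hypothesis types for a hypothesis H, given an
    # entailment oracle entails(e) deciding H ∪ B |= e.
    ep = [entails(e) for e in pos]
    en = [entails(e) for e in neg]
    return {
        'complete':             all(ep),
        'consistent':           not any(en),
        'incomplete':           not all(ep),
        'inconsistent':         any(en),
        'totally_incomplete':   not any(ep),
        'totally_inconsistent': all(en),
    }

def is_lff_solution(entails, pos, neg):
    # Definitions 12-13: a solution is complete and consistent;
    # anything incomplete or inconsistent is a failed hypothesis.
    o = outcomes(entails, pos, neg)
    return o['complete'] and o['consistent']
```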
Optimality

For a given LFF problem, there can naturally be several solutions. For example, consider an east-west trains problem where all eastbound trains are those which possess a car with two wheels and all other trains are westbound. Consider the hypotheses:

h1 = { eastbound(A) :- has_car(A,B), two_wheels(B). }

h2 = { eastbound(A) :- has_car(A,B), two_wheels(B).
       eastbound(A) :- has_car(A,B), two_wheels(B), roof_closed(B). }

Both hypotheses would correctly identify all trains. Note that the second clause in h2 entails nothing beyond the first clause, making it redundant. Naturally, we would rather return hypothesis h1, as it is simpler, lacking this redundant clause.

Though deciding between two solutions is a common and non-trivial problem in ILP, systems often define optimality in terms of length, returning the solution with the fewest clauses [35, 20] or literals [7, 25]. While many ILP systems are not guaranteed to return optimal solutions [33, 45, 4], Normal Popper [13] is guaranteed to return optimal solutions with a minimal number of total literals. Noisy Popper also appeals to this description of optimality, as it works closely with the minimum description length (MDL) principle [42], which is used to justify several claims later in this paper. As such, we will formally define hypothesis size and solution optimality:

Definition 14 (Hypothesis Size) The function size(H) returns the total number of literals in the hypothesis H [13].

Definition 15 (LFF Optimal Solution) Given an input tuple (B, D, C, E+, E−), a hypothesis H ∈ H_{D,C} is an optimal solution when two conditions hold:

• H is a solution
• ∀H′ ∈ H_{D,C} such that H′ is a solution, size(H) ≤ size(H′)

[13]

3.2.4 Generalizations and Specializations

In the LFF framework, the hypothesis constraints are learned from the generalizations and specializations of failed hypotheses. In this way, large sections of the hypothesis space can be pruned for each hypothesis generated and tested. To understand generalizations and specializations, we need to define the notion of θ-subsumption [38], which we refer to simply as subsumption.

Definition 16 (Clausal Subsumption) A clause C1 subsumes a clause C2 if and only if there exists a substitution θ such that C1θ ⊆ C2 [13].

Example 5 (Clausal Subsumption) Let C1 and C2 be defined as:

C1 = eastbound(A) :- has_car(A,B).
C2 = eastbound(X) :- has_car(X,Y), two_wheels(Y).

We say that C1 subsumes C2 since, with θ = {A/X, B/Y}, we have C1θ ⊆ C2.

Importantly, subsumption implies entailment [37], though the converse does not necessarily hold. Thus, if clause C1 subsumes C2, then C1 must entail at least everything that C2 does. [29] extends this idea of subsumption to clausal theories:

Definition 17 (Theory Subsumption) A clausal theory T1 subsumes a clausal theory T2, denoted T1 ⪯ T2, if and only if ∀C2 ∈ T2, ∃C1 ∈ T1 such that C1 subsumes C2 [13].

Example 6 (Theory Subsumption) Let h1, h2, and h3 be defined as:

h1 = { eastbound(A) :- has_car(A,B), two_wheels(B). }

h2 = { eastbound(A) :- has_car(A,B), two_wheels(B), roof_closed(B). }

h3 = { eastbound(A) :- has_car(A,B), two_wheels(B).
       eastbound(A) :- has_car(A,B), roof_closed(B). }

Then we can say h1 ⪯ h2, h3 ⪯ h2, and h3 ⪯ h1.

[13] also proves the following proposition regarding theory subsumption:

Proposition 1 (Subsumption Implies Entailment) Let T1 and T2 be clausal theories. If T1 ⪯ T2 then T1 |= T2 [13].

That is, using the programs above, any example that h1 entails is also entailed by h3. Using Definition 17 for theory subsumption, we can define the notion of a generalization:

Definition 18 (Generalization) A clausal theory T1 is a generalization of a clausal theory T2 if and only if T1 ⪯ T2 [13].

For example, again using the programs above, h3 is a generalization of h1, which is a generalization of h2. Likewise, we can define the notion of a specialization:

Definition 19 (Specialization) A clausal theory T1 is a specialization of a clausal theory T2 if and only if T2 ⪯ T1 [13].

Again using the previous programs as examples, h2 is a specialization of h1, which is a specialization of h3.
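Because the clauses in this setting are function-free, θ-subsumption can be decided by brute force over variable mappings. The following is a minimal sketch of Definitions 16 and 17 under our own tuple encoding of literals (it is not Popper's implementation), checked against Example 5:

```python
from itertools import product

def clause_vars(clause):
    # Variables start with an uppercase letter (Section 3.1.1).
    return {t for _, _, args in clause for t in args if t[:1].isupper()}

def substitute(clause, theta):
    # Apply a substitution (a dict from variables to terms) to a clause.
    return {(s, p, tuple(theta.get(t, t) for t in args))
            for s, p, args in clause}

def subsumes(c1, c2):
    # Definition 16: c1 subsumes c2 iff some substitution theta maps c1
    # into a subset of c2. Brute force over mappings from c1's variables
    # to the terms occurring in c2; fine for tiny clauses like these.
    vs = sorted(clause_vars(c1))
    targets = sorted({t for _, _, args in c2 for t in args})
    if not vs:
        return c1 <= c2
    return any(substitute(c1, dict(zip(vs, choice))) <= c2
               for choice in product(targets, repeat=len(vs)))

def theory_subsumes(t1, t2):
    # Definition 17: T1 ⪯ T2 iff every clause of T2 is subsumed by some
    # clause of T1.
    return all(any(subsumes(c1, c2) for c1 in t1) for c2 in t2)

# Example 5: C1 subsumes C2 via theta = {A/X, B/Y}, but not vice versa.
c1 = {(True, 'eastbound', ('A',)), (False, 'has_car', ('A', 'B'))}
c2 = {(True, 'eastbound', ('X',)), (False, 'has_car', ('X', 'Y')),
      (False, 'two_wheels', ('Y',))}
assert subsumes(c1, c2) and not subsumes(c2, c1)
```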
With these definitions, we can describe in the next section how Normal Popper generates hypothesis constraints using generalizations and specializations of failed hypotheses.

3.3 Hypothesis Constraints

The Normal Popper system breaks down the ILP problem into three separate stages: generate, test, and constrain. Unlike many ILP approaches which refine a clause [40, 33, 4, 45, 1] or hypothesis [6, 3, 35], Normal Popper refines the hypothesis space itself by learning hypothesis constraints. In the generate stage, Normal Popper generates a hypothesis which satisfies all current hypothesis constraints. These constraints determine the syntactic form a hypothesis may take. In the subsequent test stage, this hypothesis is then tested against the positive and negative examples provided to the system. Should a hypothesis fail, i.e., it is either incomplete or inconsistent, the system continues on to the constrain stage. Here, the system learns additional hypothesis constraints from the failed hypothesis to further prune the hypothesis space for future hypothesis generation.

There are two general types of constraints that both Normal Popper and Noisy Popper are concerned with: generalizations and specializations. We will discuss both here, in addition to a third particular type of constraint called elimination constraints.

3.3.1 Generalization Constraints

Consider a hypothesis H being tested against E−. If H is inconsistent, that is, it entails some or all of the examples in E−, we can conclude that H is too general. That is, H is entailing too many examples and not being restrictive enough. Thus, any solution to the ILP problem is necessarily more restrictive than H, i.e., it entails less than H. We can prune all generalizations of H, as these too must be inconsistent [13], since they can only entail additional examples beyond those of H. This leads us to the definition of a generalization constraint:

Definition 20 (Generalization Constraint) A generalization constraint only prunes generalizations of a hypothesis from the hypothesis space [13].

Example 7 (Generalization Constraints) Suppose we have the following defined:

E− = {eastbound(train1).}
h = { eastbound(A) :- has_car(A,B), two_wheels(B). }

Additionally, suppose the BK contains the facts:

has_car(train1,car1). two_wheels(car1).

We can see how h entails the only negative example, indicating that it is too general. As such, all generalizations of h can be pruned, e.g., programs such as:

h1 = { eastbound(A) :- has_car(A,B), two_wheels(B).
       eastbound(A) :- has_car(A,B), roof_closed(B). }

h2 = { eastbound(A) :- has_car(A,B), two_wheels(B).
       eastbound(A) :- has_car(A,B), has_load(B,C), circle(C).
       eastbound(A) :- has_car(A,B), short(B). }

Because h1 ⪯ h and h2 ⪯ h, both h1 and h2 must also entail this one negative example and therefore cannot be LFF solutions. Note that given hypotheses h and h′, if h ⊆ h′ then h′ is a generalization of h.

3.3.2 Specialization Constraints

Next, consider a hypothesis H being tested against E+. If H is incomplete, that is, it entails only some or none of the examples in E+, we can conclude that H is too specific. That is, H is entailing too few examples and being overly restrictive. Thus, any solution to the ILP problem is necessarily less restrictive than H, i.e., it entails more than H.
We can prune all specializations of H, as these too must be incomplete [13], since they can only entail fewer examples than H. This leads us to the definition of a specialization constraint:

Definition 21 (Specialization Constraint) A specialization constraint only prunes specializations of a hypothesis from the hypothesis space [13].

Example 8 (Specialization Constraints) Suppose we have the following defined:

E+ = {eastbound(train2).}
h = { eastbound(A) :- has_car(A,B), two_wheels(B), roof_closed(B). }

Additionally, suppose the BK contains the facts:

has_car(train2,car2). two_wheels(car2).

We can see how h does not entail the only positive example, since train2 only contains a car which has two wheels but is not known to have its roof closed. This indicates that the hypothesis is too specific. As such, all specializations of h can be pruned, e.g., programs such as:

h1 = { eastbound(A) :- has_car(A,B), two_wheels(B), roof_closed(B), short(B). }

h2 = { eastbound(A) :- has_car(A,B), two_wheels(B), roof_closed(B), has_load(B,C), circle(C). }

Because h ⪯ h1 and h ⪯ h2, both h1 and h2 must also fail to entail this positive example and therefore cannot be LFF solutions.

3.3.3 Elimination Constraints

Finally, we can consider a specific case where a hypothesis H is totally incomplete. In addition to the normal specialization constraint, we can prune a particular set of hypotheses which contain a version of H within themselves. To precisely define these hypotheses, we will need an additional definition:

Definition 22 (Separable) A separable hypothesis G is one where no predicate symbol in the head of a clause in G occurs in the body of a clause in G [13].

Example 9 (Non-separable Hypotheses) Consider the following hypothesis:

h = { eastbound(A) :- has_car(A,B), f(B).
      f(B) :- two_wheels(B), roof_closed(B). }

Hypothesis h is non-separable because the predicate symbol f appears in both the head of a clause and the body of a clause.

[13] shows that if a hypothesis H is totally incomplete, then neither H nor any specialization of H can appear inside any separable optimal solution. Thus, all separable hypotheses containing a specialization of H can be pruned. This leads us to the definition of an elimination constraint:

Definition 23 (Elimination Constraint) An elimination constraint only prunes separable hypotheses that contain specializations of a hypothesis from the hypothesis space [13].

Example 10 (Elimination Constraints) Consider the set of positive examples:

E+ = {eastbound(train1)., eastbound(train2).}

and consider the candidate hypothesis h:

h = { eastbound(A) :- has_car(A,B), two_wheels(B), roof_closed(B). }

Additionally, suppose the BK contains the facts:

has_car(train1,car1). two_wheels(car1).
has_car(train2,car2). short(car2).

Clearly, h is totally incomplete, and as such, Popper will add an elimination constraint which will prune all separable hypotheses that contain h or any of its specializations, such as:

h1 = { eastbound(A) :- has_car(A,B), two_wheels(B), roof_closed(B).
       eastbound(A) :- has_car(A,B), has_load(B,C), circle(C). }

h2 = { eastbound(A) :- has_car(A,B), two_wheels(B), roof_closed(B).
       eastbound(A) :- has_car(A,B), short(B).
       eastbound(A) :- has_car(A,B), two_wheels(B). }

h3 = { eastbound(A) :- has_car(A,B), two_wheels(B), roof_closed(B), short(B).
       eastbound(A) :- has_car(A,B), long(B). }

Note that elimination constraints may prune solutions from the hypothesis space. If E− is empty in the example above, the hypothesis h2 above would be a solution to the problem, as it entails all positive examples. However, this hypothesis is not optimal, and elimination constraints will never prune optimal solutions. An optimal solution to this example, for instance, would instead be:

h4 = { eastbound(A) :- has_car(A,B), short(B).
       eastbound(A) :- has_car(A,B), two_wheels(B). }
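The pruning semantics of generalization and specialization constraints can be sketched as an explicit filter over candidate hypotheses. This is purely illustrative: the real system never enumerates the hypothesis space, instead emitting ASP constraints in the constrain stage, and elimination constraints are omitted here for brevity. The subsumes_theory argument is assumed to implement Definition 17 (e.g., the earlier subsumption sketch):

```python
def prune_after_failure(candidates, failed, tp, tn, num_pos, num_neg,
                        subsumes_theory):
    # Keep only hypotheses not excluded by the constraints learned from a
    # failed hypothesis `failed` scoring (tp, tn) against the examples.
    kept = []
    for h in candidates:
        # Generalization constraint (Definition 20): `failed` entailed a
        # negative example, so every generalization of it (h ⪯ failed)
        # is also inconsistent.
        if tn < num_neg and subsumes_theory(h, failed):
            continue
        # Specialization constraint (Definition 21): `failed` missed a
        # positive example, so every specialization of it (failed ⪯ h)
        # is also incomplete.
        if tp < num_pos and subsumes_theory(failed, h):
            continue
        kept.append(h)
    return kept
```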
These basic hypothesis constraints allow Normal Popper to perform exceptionally well on many datasets, even those with very few examples. However, these constraints heavily rely on the absence of noise in the example sets. As the next chapter will discuss, incorrectly labelled examples can cause these hypothesis constraints to prune valuable sections of the hypothesis space.

3.4 Summary

In this chapter, we summarized the LFF framework for ILP problem solving. In doing so, we reviewed the necessary logic programming concepts and notation, as well as formalized terminology we will use frequently moving forward. We additionally discussed the crucial concepts of subsumption, generalization, and specialization, on which both Noisy and Normal Popper base their hypothesis constraints. Finally, we discussed the constraints Normal Popper implements: generalization constraints, specialization constraints, and elimination constraints, giving examples of each. In the next chapter, we discuss the modified problem setting for Noisy Popper, which we refer to as the Relaxed LFF Framework.

Chapter 4 Relaxed LFF Framework

The LFF setting described in Chapter 3 has a limitation when handling noise in that solutions must perfectly fit the given examples. Its hypothesis constraints may remove highly accurate hypotheses which do not overfit any noisy data in favor of an LFF solution which does overfit. One approach to avoid this is to ignore or relax all hypothesis constraints and take a brute force approach, enumerating all hypotheses until one of adequate accuracy is found. This has an obvious inefficiency limitation. The aim of this project is to find a middle ground between the two approaches which better generalizes to noisy data.

From this chapter onward, we focus on presenting the novel contributions of the project. Here, we first outline the altered Relaxed LFF Framework on which Noisy Popper is built. We start by defining the altered noisy problem setting. We then describe and prove the sound hypothesis constraints within this new setting. Finally, we describe how we can apply the MDL principle to prune overfitting hypotheses through additional sound constraints which take hypothesis size into account.

4.1 Relaxed LFF Problem Setting

In contrast to the LFF problem setting, in the general relaxed setting we do not necessarily wish to find hypotheses that entail all positive examples and no negative examples. Rather, we wish to find hypotheses which optimize some other metric or score. In this manner, we can define the relaxed LFF problem input:

Definition 24 (Relaxed LFF Problem Input) Our problem input is a tuple (B, D, C, E+, E−, S) where:

• B is a Horn program denoting background knowledge
• D is a declaration bias
• C is a set of hypothesis constraints
• E+ is a set of ground atoms denoting positive examples
• E− is a set of ground atoms denoting negative examples
• S is a scoring function which takes as input B, E+, E−, as well as a hypothesis H ∈ H_{D,C}

Note that the hypothesis space in this relaxed setting is unchanged from the LFF setting. From here, we can define a solution in this new setting:

Definition 25 (Relaxed LFF Solution) Given an input tuple (B, D, C, E+, E−, S), a hypothesis H ∈ H_{D,C} is a solution when ∀H′ ∈ H_{D,C}, S(H, B, E+, E−) ≥ S(H′, B, E+, E−).

Note that it is possible to model the LFF setting from Chapter 3 with this definition. To do so, the scoring function S would be defined as:

S(H, B, E+, E−) = 1 if H is complete and consistent, and 0 otherwise.

As in the LFF setting, we define optimality similarly using the size of a hypothesis:

Definition 26 (Relaxed LFF Optimal Solution) Given an input tuple (B, D, C, E+, E−, S), a hypothesis H ∈ H_{D,C} is an optimal solution when two conditions hold:

• H is a solution in the relaxed LFF setting
• ∀H′ ∈ H_{D,C} such that H′ is a solution, size(H) ≤ size(H′)
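Read operationally, Definitions 25 and 26 select the highest-scoring hypotheses and break ties by size. The sketch below makes this concrete over an explicitly enumerated hypothesis space, which the real system never constructs; score and size are assumed callables:

```python
def relaxed_solutions(hypotheses, score):
    # Definition 25: a solution maximizes the scoring function S.
    best = max(score(h) for h in hypotheses)
    return [h for h in hypotheses if score(h) == best]

def relaxed_optimal_solutions(hypotheses, score, size):
    # Definition 26: among solutions, the optimal ones have minimal size.
    solutions = relaxed_solutions(hypotheses, score)
    smallest = min(size(h) for h in solutions)
    return [h for h in solutions if size(h) == smallest]
```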
With these definitions, we can lay out the theoretical contributions of this project through new hypothesis constraints which remain sound in this new setting.

4.2 Relaxed LFF Hypothesis Constraints

The main difficulty Normal Popper has when dealing with noise is its strict constraints which prune the hypothesis space. If a hypothesis fails to entail even just a single positive example, it is rejected and all of its specializations are pruned. Similarly, if a hypothesis entails just one negative example, all of its generalizations are pruned. While this works extremely well under the normal LFF setting, in the presence of noise, being so strict may prune relaxed LFF solutions which fit only the underlying patterns and not the noisy data. We can illustrate this type of overpruning through an example:

Example 11 (Overpruning) Consider Normal Popper trying to learn the program:

h = { eastbound(A) :- has_car(A,B), short(B), two_wheels(B). }

Assume that all examples are correctly labelled except one noisy example: eastbound(train1) ∈ E+, where train1 only possesses a single long car. That is, in the BK we have, among others, the facts:

has_car(train1,car1). long(car1).

Suppose we generate the hypothesis:

h1 = { eastbound(A) :- has_car(A,B), short(B). }

This program will entail all positive examples in E+ except for the single noisy example, as train1 does not possess a short car. As such, in the LFF setting, h1 is categorized as too specific. Thus, all specializations of h1 are pruned, which includes the desired solution h.

This overly strict pruning can clearly lead Normal Popper to overfit any noisy dataset, as even a single incorrectly labelled example can cause heavy pruning of the hypothesis space, potentially eliminating relaxed LFF solutions. This can be avoided by not applying any LFF hypothesis constraints. However, we still wish to improve the efficiency of the hypothesis search by removing hypotheses which cannot be relaxed LFF solutions, thus motivating relaxed LFF hypothesis constraints. Any hypothesis constraints used in this setting should be sound under some scoring function. Here we define the accuracy scoring function used in this section.
We first define the notions of true positive, true negative, false positive, and false negative.

Definition 27 (True Positive) Given an input tuple (B, D, C, E+, E−, S) and a hypothesis H ∈ H_{D,C}, we define the true positive function as tp(H, B, E+) = |{e | e ∈ E+ and H ∪ B |= e}|.

Definition 28 (True Negative) Given an input tuple (B, D, C, E+, E−, S) and a hypothesis H ∈ H_{D,C}, we define the true negative function as tn(H, B, E−) = |{e | e ∈ E− and H ∪ B ⊭ e}|.

Definition 29 (False Positive) Given an input tuple (B, D, C, E+, E−, S) and a hypothesis H ∈ H_{D,C}, we define the false positive function as fp(H, B, E−) = |{e | e ∈ E− and H ∪ B |= e}|. Equivalently, we may choose to write fp(H, B, E−) = |E−| − tn(H, B, E−).

Definition 30 (False Negative) Given an input tuple (B, D, C, E+, E−, S) and a hypothesis H ∈ H_{D,C}, we define the false negative function as fn(H, B, E+) = |{e | e ∈ E+ and H ∪ B ⊭ e}|. Equivalently, we may choose to write fn(H, B, E+) = |E+| − tp(H, B, E+).

We will use these functions throughout the remainder of the paper. With these, we can define the method with which hypotheses are scored in this section:

Definition 31 (Accuracy Score) Given an input tuple (B, D, C, E+, E−, S_ACC) and a hypothesis H ∈ H_{D,C}, the function S_ACC(H, B, E+, E−) = tp(H, B, E+) + tn(H, B, E−).

Note that this scoring function measures training accuracy rather than test accuracy. We now aim to determine situations in which certain hypotheses are known to be suboptimal under this scoring method in the relaxed LFF setting. First, we will consider constraints constructed by comparing two hypotheses. We motivate this through an example:

Example 12 (Learning by Comparing Hypotheses) Suppose we have relaxed LFF input (B, D, C, E+, E−, S_ACC) and previously observed the following hypothesis:

h1 = { eastbound(A) :- has_car(A,B), short(B). }

which has tp(h1, B, E+) = 5 and tn(h1, B, E−) = 3. Now, consider a generalization of this hypothesis:

h2 = { eastbound(A) :- has_car(A,B), short(B).
       eastbound(A) :- has_car(A,B), two_wheels(B). }

which identically has tp(h2, B, E+) = 5 and tn(h2, B, E−) = 3. We can conclude that the clause eastbound(A) :- has_car(A,B), two_wheels(B). is redundant, as it entails no positive examples beyond those of the clause eastbound(A) :- has_car(A,B), short(B). As such, it is not worthwhile to consider any non-recursive generalizations of h2 which add additional clauses to h2, as we could simply add these same clauses to h1, producing a hypothesis that scores the same but is smaller, lacking the redundant clause. To illustrate, consider this generalization of h2:

h′2 = { eastbound(A) :- has_car(A,B), short(B).
        eastbound(A) :- has_car(A,B), two_wheels(B).
        eastbound(A) :- has_car(A,B), three_wheels(B). }

which we assume scores tp(h′2, B, E+) = 8 and tn(h′2, B, E−) = 2. Now, consider this generalization of h1:

h′1 = { eastbound(A) :- has_car(A,B), short(B).
        eastbound(A) :- has_car(A,B), three_wheels(B). }

which must also score tp(h′1, B, E+) = 8 and tn(h′1, B, E−) = 2, since the redundant clause contributes nothing to either count. Since h′1 and h′2 score the same, it is redundant to consider both of them, and since size(h′1) < size(h′2), we can safely prune h′2 as a less optimal hypothesis than h′1. Thus, all generalizations of this form can be safely pruned from the hypothesis space.

This example illustrates how we can compare previously observed hypotheses to any new hypotheses in order to identify hypotheses which are suboptimal under the S_ACC scoring. The remainder of this section will outline and prove such suboptimal circumstances.
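Definitions 27-31 amount to simple counting, as the following sketch shows; entails(e) again stands in for deciding H ∪ B |= e:

```python
def tp(entails, pos):
    # Definition 27: positive examples entailed by H ∪ B.
    return sum(1 for e in pos if entails(e))

def tn(entails, neg):
    # Definition 28: negative examples not entailed by H ∪ B.
    return sum(1 for e in neg if not entails(e))

def fp(entails, neg):
    # Definition 29: fp = |E-| - tn.
    return len(neg) - tn(entails, neg)

def fn(entails, pos):
    # Definition 30: fn = |E+| - tp.
    return len(pos) - tp(entails, pos)

def s_acc(entails, pos, neg):
    # Definition 31: the (training) accuracy score S_ACC = tp + tn.
    return tp(entails, pos) + tn(entails, neg)
```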
We start by defining some useful propositions relating generalizations and specializations to scoring:

Proposition 2 (Scores of Generalizations) Given problem input (B, D, C, E+, E−, S), let H and H′ be hypotheses in H_{D,C} where H′ is a generalization of H. Then:

• tp(H′, B, E+) ≥ tp(H, B, E+) and
• tn(H′, B, E−) ≤ tn(H, B, E−)

Proof. Follows immediately from Proposition 1, as subsumption implies entailment.

Proposition 3 (Scores of Specializations) Given problem input (B, D, C, E+, E−, S), let H and H′ be hypotheses in H_{D,C} where H′ is a specialization of H. Then:

• tp(H′, B, E+) ≤ tp(H, B, E+) and
• tn(H′, B, E−) ≥ tn(H, B, E−)

Proof. Follows immediately from Proposition 1, as subsumption implies entailment.

We now prove when relaxed generalization and specialization hypothesis constraints may be applied:

Proposition 4 Given problem input (B, D, C, E+, E−, S_ACC) and hypotheses H1, H2, and H3 in H_{D,C} where H3 is a generalization of H1, if S_ACC(H2, B, E+, E−) − S_ACC(H1, B, E+, E−) > fn(H1, B, E+), then S_ACC(H2, B, E+, E−) > S_ACC(H3, B, E+, E−).

Proof. The improvement in score from H1 to H2 can be quantified by:

S_ACC(H2, B, E+, E−) − S_ACC(H1, B, E+, E−) > fn(H1, B, E+)
S_ACC(H2, B, E+, E−) − [tp(H1, B, E+) + tn(H1, B, E−)] > |E+| − tp(H1, B, E+)
S_ACC(H2, B, E+, E−) > |E+| + tn(H1, B, E−)
                     ≥ tp(H3, B, E+) + tn(H3, B, E−)
                     = S_ACC(H3, B, E+, E−)

The second-to-last line follows from Proposition 2 and the fact that H3 is a generalization of H1, in addition to the fact that |E+| ≥ tp(H3, B, E+).

Proposition 5 Given problem input (B, D, C, E+, E−, S_ACC) and hypotheses H1, H2, and H3 in H_{D,C} where H3 is a specialization of H1, if S_ACC(H2, B, E+, E−) − S_ACC(H1, B, E+, E−) > fp(H1, B, E−), then S_ACC(H2, B, E+, E−) > S_ACC(H3, B, E+, E−).

Proof. The improvement in score from H1 to H2 can be quantified by:

S_ACC(H2, B, E+, E−) − S_ACC(H1, B, E+, E−) > fp(H1, B, E−)
S_ACC(H2, B, E+, E−) − [tp(H1, B, E+) + tn(H1, B, E−)] > |E−| − tn(H1, B, E−)
S_ACC(H2, B, E+, E−) > tp(H1, B, E+) + |E−|
                     ≥ tp(H3, B, E+) + tn(H3, B, E−)
                     = S_ACC(H3, B, E+, E−)

The second-to-last line follows from Proposition 3 and the fact that H3 is a specialization of H1, in addition to the fact that |E−| ≥ tn(H3, B, E−).
Proposition 6 Given problem input (B, D, C, E+, E−, S_ACC) and hypotheses H1 and H2 in H_{D,C} where H2 is a generalization of H1, if tp(H1, B, E+) = tp(H2, B, E+), then given any non-recursive hypothesis H′2 = H2 ∪ C for some non-empty set of clauses C, there exists a generalization of H1, say H′1, such that S_ACC(H′1, B, E+, E−) ≥ S_ACC(H′2, B, E+, E−).

Proof. Let H′1 = H1 ∪ C. Also let n = tn(H1, B, E−) − tn(H2, B, E−), i.e., S_ACC(H2, B, E+, E−) = S_ACC(H1, B, E+, E−) − n (note, n ≥ 0 by Proposition 2). Assume that C entails p additional positive examples beyond H2 and n′ more negative examples. That is, p = tp(H′2, B, E+) − tp(H2, B, E+) and n′ = tn(H2, B, E−) − tn(H′2, B, E−) (again noting that n′ ≥ 0 and p ≥ 0 by Proposition 2). Since H′1 contains C, it must also entail the p additional positive examples beyond H1 and at most n + n′ additional negative examples, as the only negative examples it can entail beyond those of H1 are among the n entailed by H2 and the n′ entailed by C. Thus, we have S_ACC(H′2, B, E+, E−) = S_ACC(H1, B, E+, E−) + p − n − n′ and S_ACC(H′1, B, E+, E−) ≥ S_ACC(H1, B, E+, E−) + p − n − n′. Thus, S_ACC(H′1, B, E+, E−) ≥ S_ACC(H′2, B, E+, E−).

Proposition 7 Given problem input (B, D, C, E+, E−, S_ACC) and hypotheses H1 and H2 where H1 ⊆ H2, if tp(H1, B, E+) = tp(H2, B, E+), then given any non-recursive specialization of H2, say H′2, there exists a specialization of H1, say H′1, such that S_ACC(H′1, B, E+, E−) ≥ S_ACC(H′2, B, E+, E−).

Proof. We can write H2 = H1 ∪ C for some set of clauses C. Since tp(H1, B, E+) = tp(H2, B, E+), the clauses in C entail no additional positive examples beyond H1, making them redundant to the clauses of H1. That is, every positive example entailed by the clauses in C is already entailed by the clauses in H1. We can construct H′1 as follows: if a clause c is in both H1 and H′2, it is also in H′1. If a clause c is in H1 but not in H′2, having instead been replaced by a specialized version of the clause, call it c′, then c′ is also in H′1. Any clauses in C, or specialized versions of these clauses, are not in H′1. In this way, H′1 makes the same specializations to the clauses of H1 as were made in forming H′2 from H2. Since H′1 makes the same specializations to the clauses of H1 as H′2 does, and since C entails no extra positive examples, nor will any of its specialized clauses in H′2, tp(H′1, B, E+) = tp(H′2, B, E+). Additionally, by Proposition 2, tn(H1, B, E−) ≥ tn(H2, B, E−). Since H′1 makes the same specializations as H′2, tn(H′1, B, E−) ≥ tn(H′2, B, E−), noting that at best any specializations H′2 makes to the clauses in C can only mean C entails no negative examples in H′2. Thus, S_ACC(H′1, B, E+, E−) ≥ S_ACC(H′2, B, E+, E−).

Proposition 8 Given problem input (B, D, C, E+, E−, S_ACC) and hypotheses H1 and H2 in H_{D,C} where H2 is a specialization of H1, if tn(H1, B, E−) = tn(H2, B, E−), then given any non-recursive hypothesis H′2 = H2 ∪ C for some set of clauses C, there exists a generalization of H1, say H′1, such that S_ACC(H′1, B, E+, E−) ≥ S_ACC(H′2, B, E+, E−).

Proof. Let H′1 = H1 ∪ C. Then, since tn(H1, B, E−) = tn(H2, B, E−), we know that tn(H′1, B, E−) = tn(H′2, B, E−). Since H2 is a specialization of H1, by Proposition 3, tp(H1, B, E+) ≥ tp(H2, B, E+), which means that tp(H′1, B, E+) ≥ tp(H′2, B, E+). Thus, S_ACC(H′1, B, E+, E−) ≥ S_ACC(H′2, B, E+, E−).
We demonstrate this through an example: Example 13 (Non-recursive Case Motivation) Consider trying find a program for the target predicate alleven/1 which when given a list of integers returns True if all of them are even and False otherwise. For instance, alleven([2,4,10,8]). evaluates to True and alleven([1,2,3]). evaluates to False. Assume that all examples are noiseless and suppose that the BK B contains the following: head([H| ],H)., i.e., returns True only if H is the first element of the given list tail([ |T],T)., i.e., returns True only if T is the given list with the first element removed empty([])., i.e. returns True only if the given list is empty zero(0)., i.e., returns True only if the given integer is 0 even(A) :- 0 is A mod 2, i.e. returns True only if the given integer is even Suppose we have previously seen hypothesis h1 = {alleven(A) :- head(A,B), even(B).} which entailed all positive examples and some negative examples. Now, consider a second hypothesis h2: h2 = (cid:26) alleven(A) :- head(A,B), even(B). (cid:27) alleven(A) :- empty(A). which likewise entails all positive examples and the same number of negative examples. By Proposition 7, since h1 ⊆ h2 and tp(h1,B, E+) = tp(h2,B, E+), all non-recursive specializations of h2 are not SACC-optimal. We must specify non-recursive as the recursive specialization h3 where: h3 = (cid:26) alleven(A) :- head(A,B), even(B), tail(A,C), alleven(C). (cid:27) alleven(A) :- empty(A). is a solution and would entail all positive examples and no negative examples, thus being SACC-optimal. Similar results hold for Propositions 6 and 8 where pruning generalizations may remove SACC-optimal recursive programs. Additionally, we can identify some sound constraints that do not rely on comparing hypotheses: Proposition 9 Given problem input (B, D, C, E+, E−, SACC) and hypothesis H ∈ HD,C where tp(H, B, E+) = |E+|, if hypothesis H (cid:48) ∈ HD,C is a generalization of H, then SACC(H, B, E+, E−) ≥ SACC(H (cid:48), B, E+, E−). CHAPTER 4. RELAXED LFF FRAMEWORK 35 Proof. Since H (cid:48) is a generalization of H, by Proposition 2, tp(H, B, E+) ≤ tp(H (cid:48), B, E+) ⇒ tp(H (cid:48), B, E+) = |E+| and tn(H, B, E−) ≥ tn(H (cid:48), B, E−). Thus, SACC(H, B, E+, E−) ≥ SACC(H (cid:48), B, E+, E−). Proposition 10 Given problem input (B, D, C, E+, E−, SACC) and hypothesis H ∈ HD,C where tn(H, B, E−) = |E−|, if hypothesis H (cid:48) ∈ HD,C is a specialization of H, then SACC(H, B, E+, E−) ≥ SACC(H (cid:48), B, E+, E−). Proof. Since H (cid:48) is a specialization of H, by Proposition 3, tn(H, B, E−) ≤ tn(H (cid:48), B, E−) ⇒ tn(H (cid:48), B, E−) = |E−| and tp(H, B, E+) ≥ tp(H (cid:48), B, E+). Thus, SACC(H, B, E+, E−) ≥ SACC(H (cid:48), B, E+, E−). 4.2.1 Hypothesis Constraints Applications These propositions all apply in any ILP setting, however, in our relaxed LFF setting, they determine sets of programs which should be pruned should certain conditions hold: • Proposition 4 implies that if a hypothesis H’s accuracy score is at least f n(H (cid:48), B, E+) greater than that of some hypothesis H (cid:48), we may prune all generalizations of H (cid:48) as they cannot be SACC-optimal. • Proposition 5 implies that if a hypothesis H’s accuracy score is at least f p(H (cid:48), B, E−) greater than that of some hypothesis H (cid:48), we may prune all specializations of H (cid:48) as they cannot be SACC-optimal. 
Additionally, we can identify some sound constraints that do not rely on comparing hypotheses:

Proposition 9 Given problem input (B, D, C, E+, E−, S_ACC) and a hypothesis H ∈ H_{D,C} where tp(H, B, E+) = |E+|, if hypothesis H′ ∈ H_{D,C} is a generalization of H, then S_ACC(H, B, E+, E−) ≥ S_ACC(H′, B, E+, E−).

Proof. Since H′ is a generalization of H, by Proposition 2, tp(H, B, E+) ≤ tp(H′, B, E+) ⇒ tp(H′, B, E+) = |E+|, and tn(H, B, E−) ≥ tn(H′, B, E−). Thus, S_ACC(H, B, E+, E−) ≥ S_ACC(H′, B, E+, E−).

Proposition 10 Given problem input (B, D, C, E+, E−, S_ACC) and a hypothesis H ∈ H_{D,C} where tn(H, B, E−) = |E−|, if hypothesis H′ ∈ H_{D,C} is a specialization of H, then S_ACC(H, B, E+, E−) ≥ S_ACC(H′, B, E+, E−).

Proof. Since H′ is a specialization of H, by Proposition 3, tn(H, B, E−) ≤ tn(H′, B, E−) ⇒ tn(H′, B, E−) = |E−|, and tp(H, B, E+) ≥ tp(H′, B, E+). Thus, S_ACC(H, B, E+, E−) ≥ S_ACC(H′, B, E+, E−).

4.2.1 Hypothesis Constraints Applications

These propositions all apply in any ILP setting; however, in our relaxed LFF setting, they determine sets of programs which should be pruned should certain conditions hold:

• Proposition 4 implies that if a hypothesis H's accuracy score is more than fn(H′, B, E+) greater than that of some hypothesis H′, we may prune all generalizations of H′, as they cannot be S_ACC-optimal.
• Proposition 5 implies that if a hypothesis H's accuracy score is more than fp(H′, B, E−) greater than that of some hypothesis H′, we may prune all specializations of H′, as they cannot be S_ACC-optimal.
• Proposition 6 implies that if a hypothesis H is a generalization of a hypothesis H′ and both have equal tp values, we may prune all non-recursive supersets of H, as they cannot be S_ACC-optimal.
• Proposition 7 implies that if a hypothesis H is a superset of a hypothesis H′ and both have equal tp values, we may prune all non-recursive specializations of H, as they cannot be S_ACC-optimal.
• Proposition 8 implies that if a hypothesis H is a specialization of a hypothesis H′ and both have equal tn values, we may prune all non-recursive generalizations of H, as they cannot be S_ACC-optimal.
• Proposition 9 implies that if a hypothesis H entails all positive examples, we may prune all larger generalizations of H, as they cannot be S_ACC-optimal.
• Proposition 10 implies that if a hypothesis H entails no negative examples, we may prune all larger specializations of H, as they cannot be S_ACC-optimal.
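The trigger conditions in the list above reduce to a handful of arithmetic checks. A hedged sketch follows; in Noisy Popper the resulting pruning is encoded as ASP constraints rather than evaluated like this:

```python
def may_prune_generalizations_of(s_h, s_h1, fn_h1):
    # Proposition 4: H beats H1 by more than fn(H1), so no generalization
    # of H1 can reach H's accuracy score.
    return s_h - s_h1 > fn_h1

def may_prune_specializations_of(s_h, s_h1, fp_h1):
    # Proposition 5: H beats H1 by more than fp(H1), so no specialization
    # of H1 can reach H's accuracy score.
    return s_h - s_h1 > fp_h1

def complete_hypothesis(tp_h, num_pos):
    # Proposition 9: H entails all positives, so larger generalizations
    # of H cannot be S_ACC-optimal.
    return tp_h == num_pos

def consistent_hypothesis(tn_h, num_neg):
    # Proposition 10: H entails no negatives, so larger specializations
    # of H cannot be S_ACC-optimal.
    return tn_h == num_neg
```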
Most importantly, such a hypothesis would not generalize at all to data outside of the training set: unless any inputs given to the program were in the training set, the program will always return false. Though constructing such hypotheses is trivial, it will overfit any noisy data, is not useful, and is impractically large given large E+. Application of Minimum Description Length Principle This provides clear motivation why optimality in ILP is typically tied to the size of programs. With the presence of noise, the relaxed LFF setting has to be particularly cognizant of overfitting and avoid exceptionally large programs or programs where single clauses entail one single positive example. To this end, we look to apply the minimum description length (MDL) principle, a common method used for machine learning model selection [41, 2, 19]. At a high level, the MDL principle states that the optimal hypothesis or theory is one where the sum of the theory length and the length of the training data when encoded with that theory is a minimum. If we apply this idea to Example 14 above, hypothesis h2 fits all four examples perfectly, but has a size of seven literals. Hypothesis h1 however fits three of the examples with a size of only three literals. Taking the sum of correctly fit examples and programs length yields totals of 11 for h1 and 6 for h1. Under the MDL setting, we would claim h1 is more optimal than h2 for this particular encoding. An interested reader should consult [42] and [19] for more details on MDL and its applications to machine learning. CHAPTER 4. RELAXED LFF FRAMEWORK 38 In order to apply the MDL principle, we define an alternative hypothesis scoring method which takes into account the hypothesis size, recalling from Definition 14 size(H) equals the number of literals in hypothesis H. Definition 32 (MDL Score) Given an input tuple (B, D, C, E+, E−) and a hy- pothesis H ∈ HD,C, the function SM DL = tp(H, B, E+) + tn(H, B, E−) − size(H). Note that this definition is similar to what the Aleph refers to as its compression evaluation function [45]. With this definition, we can define several additional cir- cumstances when hypotheses are known to be suboptimal under SM DL scoring. First, we consider comparing two hypotheses with one another: Proposition 11 Given problem input (B, D, C, E+, E−, SM DL) and hypotheses H1, H2, and H3 in HD,C where H3 is a generalization of H1, if SM DL(H2, B, E+, E−)−SM DL(H1, B, E+, E−) > f n(H1, B, E+)−(size(H3)−size(H2)), then SM DL(H2, B, E+, E−) > SM DL(H3, B, E+, E−). Proof. The improvement in the MDL score from H1 to H2 can be quantified by: SM DL(H2, B, E+, E−) − SM DL(H1, B, E+, E−) > f n(H1, B, E+) − (size(H3) − size(H1)) SM DL(H2, B, E+, E−) − [tp(H1, B, E+) + tn(H1, B, E−) − size(H1)] > |E+| − tp(H1, B, E+) − (size(H3) − size(H1)) SM DL(H2, B, E+, E−) > |E+| + tn(H1, B, E−) − size(H3) ≥ tp(H3, B, E+) + tn(H3, B, E−) − size(H3) = SM DL(H3, B, E+, E−) The second to last line following from Proposition 2 and that H3 is a generalization of H1 in addition to the fact |E+| ≥ tp(H3, B, E+) Given SM DL(H2, B, E+, E−) and SM DL(H1, B, E+, E−), we can quantify the exact size of H3 when this holds: SM DL(H2, B, E+, E−) > |E+| + tp(H1, B, E+) − size(H3) SM DL(H2, B, E+, E−) − |E+| − tn(H1, B, E−) > −size(H3) |E+| + tn(H1, B, E−) − SM DL(H2, B, E+, E−) < size(H3) Proposition 12 Given problem input (B, D, C, E+, E−, SM DL) and hypotheses H1, H2, and H3 in HD,C where H3 is a specialization of H1, if CHAPTER 4. 
First, we consider comparing two hypotheses with one another:

Proposition 11 Given problem input (B, D, C, E+, E−, SMDL) and hypotheses H1, H2, and H3 in HD,C where H3 is a generalization of H1, if

SMDL(H2, B, E+, E−) − SMDL(H1, B, E+, E−) > fn(H1, B, E+) − (size(H3) − size(H1)),

then SMDL(H2, B, E+, E−) > SMDL(H3, B, E+, E−).

Proof. The improvement in the MDL score from H1 to H2 can be quantified by:

SMDL(H2, B, E+, E−) − SMDL(H1, B, E+, E−) > fn(H1, B, E+) − (size(H3) − size(H1))
SMDL(H2, B, E+, E−) − [tp(H1, B, E+) + tn(H1, B, E−) − size(H1)] > |E+| − tp(H1, B, E+) − (size(H3) − size(H1))
SMDL(H2, B, E+, E−) > |E+| + tn(H1, B, E−) − size(H3)
                    ≥ tp(H3, B, E+) + tn(H3, B, E−) − size(H3)
                    = SMDL(H3, B, E+, E−)

The second-to-last line follows from Proposition 2, the fact that H3 is a generalization of H1, and the fact that |E+| ≥ tp(H3, B, E+).

Given SMDL(H2, B, E+, E−) and SMDL(H1, B, E+, E−), we can quantify the exact size of H3 for which this holds:

SMDL(H2, B, E+, E−) > |E+| + tn(H1, B, E−) − size(H3)
SMDL(H2, B, E+, E−) − |E+| − tn(H1, B, E−) > −size(H3)
|E+| + tn(H1, B, E−) − SMDL(H2, B, E+, E−) < size(H3)

Proposition 12 Given problem input (B, D, C, E+, E−, SMDL) and hypotheses H1, H2, and H3 in HD,C where H3 is a specialization of H1, if

SMDL(H2, B, E+, E−) − SMDL(H1, B, E+, E−) > fp(H1, B, E−) − (size(H3) − size(H1)),

then SMDL(H2, B, E+, E−) > SMDL(H3, B, E+, E−).

Proof. The improvement in the MDL score from H1 to H2 can be quantified by:

SMDL(H2, B, E+, E−) − SMDL(H1, B, E+, E−) > fp(H1, B, E−) − (size(H3) − size(H1))
SMDL(H2, B, E+, E−) − [tp(H1, B, E+) + tn(H1, B, E−) − size(H1)] > |E−| − tn(H1, B, E−) − (size(H3) − size(H1))
SMDL(H2, B, E+, E−) > |E−| + tp(H1, B, E+) − size(H3)
                    ≥ tp(H3, B, E+) + tn(H3, B, E−) − size(H3)
                    = SMDL(H3, B, E+, E−)

The second-to-last line follows from Proposition 3, the fact that H3 is a specialization of H1, and the fact that |E−| ≥ tn(H3, B, E−).

Given SMDL(H2, B, E+, E−) and SMDL(H1, B, E+, E−), we can quantify the exact size of H3 for which this holds:

SMDL(H2, B, E+, E−) > |E−| + tp(H1, B, E+) − size(H3)
SMDL(H2, B, E+, E−) − |E−| − tp(H1, B, E+) > −size(H3)
|E−| + tp(H1, B, E+) − SMDL(H2, B, E+, E−) < size(H3)

Additionally, we can identify some situations that do not rely on comparing hypotheses. Recall that for a hypothesis H, we can write fn(H, B, E+) = |E+| − tp(H, B, E+) and similarly fp(H, B, E−) = |E−| − tn(H, B, E−).

Proposition 13 Given problem input (B, D, C, E+, E−, SMDL) and hypotheses H and H′ in HD,C where H′ is a generalization of H, if size(H′) > fn(H, B, E+) + size(H), then SMDL(H, B, E+, E−) > SMDL(H′, B, E+, E−).

Proof. By Proposition 2, the maximum value for SMDL(H′, B, E+, E−) is:

SMDL(H′, B, E+, E−) = |E+| + tn(H, B, E−) − size(H′)
                    < |E+| + tn(H, B, E−) − [|E+| − tp(H, B, E+) + size(H)]
                    = tp(H, B, E+) + tn(H, B, E−) − size(H)
                    = SMDL(H, B, E+, E−)

Proposition 14 Given problem input (B, D, C, E+, E−, SMDL) and hypotheses H and H′ in HD,C where H′ is a specialization of H, if size(H′) > fp(H, B, E−) + size(H), then SMDL(H, B, E+, E−) > SMDL(H′, B, E+, E−).

Proof. By Proposition 3, the maximum value for SMDL(H′, B, E+, E−) is:

SMDL(H′, B, E+, E−) = |E−| + tp(H, B, E+) − size(H′)
                    < |E−| + tp(H, B, E+) − [|E−| − tn(H, B, E−) + size(H)]
                    = tp(H, B, E+) + tn(H, B, E−) − size(H)
                    = SMDL(H, B, E+, E−)

4.3.1 Hypothesis Constraints with Hypothesis Size Applications

These propositions all apply in any ILP setting; however, in our relaxed LFF setting, they determine sets of programs of specific lengths which should be pruned should certain conditions hold (a small illustration of the last two bounds follows the list):

• Proposition 11 implies that given hypotheses H1 and H2, we may prune any hypothesis of size greater than |E+| + tn(H1, B, E−) − SMDL(H2, B, E+, E−) which is also a generalization of H1, as they cannot be SMDL-optimal.

• Proposition 12 implies that given hypotheses H1 and H2, we may prune any hypothesis of size greater than |E−| + tp(H1, B, E+) − SMDL(H2, B, E+, E−) which is also a specialization of H1, as they cannot be SMDL-optimal.

• Proposition 13 implies that given a hypothesis H, we may prune all generalizations of H with size greater than fn(H, B, E+) + size(H) as they cannot be SMDL-optimal.

• Proposition 14 implies that given a hypothesis H, we may prune all specializations of H with size greater than fp(H, B, E−) + size(H) as they cannot be SMDL-optimal.
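The following sketch (illustrative, with made-up tp/tn values) computes the size cutoffs of Propositions 13 and 14, above which generalizations and specializations of a hypothesis can be pruned:

    def gen_size_cutoff(tp, size_h, n_pos):
        # Proposition 13: generalizations of H larger than fn(H) + size(H),
        # with fn(H) = |E+| - tp(H), cannot beat H under the MDL score.
        return (n_pos - tp) + size_h

    def spec_size_cutoff(tn, size_h, n_neg):
        # Proposition 14: the symmetric bound fp(H) + size(H) for specializations.
        return (n_neg - tn) + size_h

    # A hypothetical hypothesis with tp = 4 of 5, tn = 5 of 5, and 3 literals:
    print(gen_size_cutoff(tp=4, size_h=3, n_pos=5))   # prune generalizations of size > 4
    print(spec_size_cutoff(tn=5, size_h=3, n_neg=5))  # prune specializations of size > 3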
4.4 Summary

In this chapter, we introduced the novel contribution of the relaxed LFF setting and its problem definitions. We explained how, in order to avoid overpruning, we relax how hypothesis constraints should be used. We next introduced and proved the original and sound hypothesis constraints in Propositions 4-10. We then demonstrated how the MDL principle can be applied in this setting to avoid overfitting by taking program length into account. We concluded by introducing Propositions 11-14, which describe sound hypothesis constraints that take hypothesis size into account under this MDL scoring. In the next chapter, we discuss Noisy Popper's implementation of the relaxed LFF framework, describing preliminaries of Normal Popper's implementation as needed.

Chapter 5

Noisy Popper Implementation

In this chapter, we will discuss the implementation of Noisy Popper using the relaxed LFF approach. We start by discussing the implementation of Normal Popper, as it is necessary to understand Noisy Popper. After this, we will explain the specific implementation differences used by Noisy Popper, including its anytime algorithm approach, its use of minimal constraints to efficiently prune the search space, and finally the implementation of the sound hypothesis constraints under the SACC scoring and the sound hypothesis constraints with hypothesis size under the SMDL scoring discussed in Chapter 4.

5.1 Normal Popper Implementation

It is necessary to discuss the Normal Popper implementation as Noisy Popper is an extension of it and uses the same structure and functions with slight modifications. In implementation, Normal Popper combines use of the Prolog logic programming language with ASP in a three-stage generate-test-constrain loop. Algorithm 1 [13] below illustrates these three stages of the Normal Popper algorithm. We will discuss the generate, test, and constrain loop implementation here and provide illustrative examples; however, an interested reader should consult [13] for full details.

Algorithm 1 Normal Popper [13]
Input: E+, E−, BK, D, C, max_vars, max_literals, max_clauses (where D is a declaration bias and C is a set of constraints)
Output: LFF solution or empty set
1: num_literals ← 1
2: while num_literals ≤ max_literals do
3:   program ← generate(D, C, max_vars, num_literals, max_clauses)
4:   if program = 'space exhausted' then
5:     num_literals ← num_literals + 1; continue
6:   end if
7:   (tp, tn) ← test(E+, E−, BK, program)
8:   if tp = |E+| and tn = |E−| then
9:     return program
10:  end if
11:  C ← C + learn_constraints(program, tp, tn)
12: end while
13: return {}

Generate The generate function in the Popper algorithm takes as inputs the declaration bias and the current set of hypothesis constraints, along with upper bounds on the number of literals allowed within clauses and the number of clauses allowed within a hypothesis. The hypothesis constraints determine the syntax of each valid hypothesis, e.g., the number of clauses allowed, which predicates can appear together, which clauses are allowed, etc. Hypothesis constraints are constructed as ASP constraints. A defined meta-language is used to encode programs from Prolog to ASP, which are then used to construct constraints. Collectively, these constraints form an ASP problem whose answer sets consist of programs which satisfy the given constraints, i.e., are consistent with all declaration and hypothesis constraints. Such an approach was discussed with other ILP systems [7, 24, 20, 25, 26, 43] in Chapter 2. The generate function returns an answer set to the current ASP problem. This answer set represents a candidate definite program which the system considers as a potential solution. Normal Popper additionally removes invalid hypotheses, such as recursive programs without a base case, as well as redundant hypotheses in which one clause subsumes another. If there is no answer set to the ASP problem with the specified number of body literals num_literals, then 'space exhausted' is returned rather than a candidate program, and the number of body literals allowed is incremented by one. In this way, Normal Popper searches the hypothesis space in order of increasing program size.
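The control flow of Algorithm 1 can be paraphrased in a few lines of Python. The sketch below is a simplification: popper_loop and its callable parameters are names invented here, with generate, test, and learn_constraints standing in for Popper's ASP solving, Prolog testing, and constraint learning; their exact signatures are assumptions of this sketch.

    def popper_loop(generate, test, learn_constraints, n_pos, n_neg, max_literals):
        constraints = set()
        num_literals = 1
        while num_literals <= max_literals:
            program = generate(constraints, num_literals)
            if program is None:              # 'space exhausted' at this program size
                num_literals += 1
                continue
            tp, tn = test(program)
            if tp == n_pos and tn == n_neg:  # entails all of E+ and none of E-
                return program               # an LFF solution
            constraints |= learn_constraints(program, tp, tn)  # assumed to return a set
        return None                          # no LFF solution exists in the space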
Test After the generate stage of the Normal Popper loop, in the test function, Normal Popper converts the provided answer set from the generate stage into a Prolog program. The system tests this candidate program against the positive and negative examples with the provided BK to determine which examples are entailed and which are not. Table 5.1 [13] below illustrates the possible outcomes of the program as well as the constraints which will later be generated from them. Note that we are using shorthand where tp = tp(program, B, E+) and tn = tn(program, B, E−). Outcomes are tuples consisting of a true positive and a true negative score. For example, an outcome of (5, |E−|) indicates that the given hypothesis entails only five positive examples and does not entail any negative examples (i.e., is consistent).

Outcome          tn = |E−|                     tn < |E−|
tp = |E+|        n/a                           Generalization
0 < tp < |E+|    Specialization                Specialization, Generalization
tp = 0           Specialization, Elimination   Specialization, Generalization, Elimination

Table 5.1: The possible outcomes of testing a hypothesis with the constraints learned by Normal Popper. Note that an outcome where tp = |E+| and tn = |E−| indicates the hypothesis is a solution.

If the hypothesis is found to be a solution, that is, if tp = |E+| and tn = |E−|, the program is returned by the system. Otherwise, Normal Popper continues on to the constrain stage, which uses the failed hypothesis outcome in order to generate additional hypothesis constraints and further prune the hypothesis space.

Constrain In the case of a failed hypothesis, Normal Popper uses the hypothesis outcome to determine which hypothesis constraints to apply. Table 5.1 depicts the constraints associated with each possible outcome. We will describe how a hypothesis is encoded as an ASP constraint, as Noisy Popper uses modified versions of these encodings, though an interested reader should consult [13] for a more detailed explanation. We will explain the general form these ASP constraints take and give examples of generalization, specialization, and elimination constraints based on their definitions from Chapter 3, as well as banish constraints which, while less essential to the base version of Normal Popper, are critical for Noisy Popper's implementation.
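The outcome-to-constraint mapping of Table 5.1 amounts to three independent tests, as in the following illustrative sketch (not the actual implementation):

    def outcome_constraints(tp, tn, n_pos, n_neg):
        # Which constraint kinds a failed hypothesis' outcome implies (Table 5.1).
        kinds = set()
        if tp < n_pos:
            kinds.add("specialization")   # misses a positive: prune specializations
        if tn < n_neg:
            kinds.add("generalization")   # entails a negative: prune generalizations
        if tp == 0:
            kinds.add("elimination")      # totally incomplete: also prune separable supersets
        return kinds

    print(sorted(outcome_constraints(tp=0, tn=3, n_pos=5, n_neg=5)))
    # ['elimination', 'generalization', 'specialization']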
5.1.1 Hypothesis to ASP Encoding

As mentioned in Example 4 in Section 3.2.2, the meta-language encodes atoms to ASP programs using either head_literal(Clause,Pred,Arity,Vars) for head literals or body_literal(Clause,Pred,Arity,Vars) for body literals, where Clause denotes the clause containing the literal, Pred defines the predicate symbol of the literal, Arity defines the arity of the predicate, and Vars is a tuple containing the input variables to the predicate. To do this in Normal Popper, two functions are called: encodeHead and encodeBody, defined here as they are in [13]:

encodeHead(Clause,Pred(Var0,...,Vark)) :=
  head_literal(Clause,Pred,k + 1,(encodeVar(Var0),...,encodeVar(Vark)))

encodeBody(Clause,Pred(Var0,...,Vark)) :=
  body_literal(Clause,Pred,k + 1,(encodeVar(Var0),...,encodeVar(Vark)))

where encodeVar converts variables to an ASP encoding.

Example 15 (Atom Encoding) If we are trying to encode eastbound(A) as a head literal and has_car(A,B) as a body literal, both in a clause C1, we would call encodeHead(C1,eastbound(A)) and encodeBody(C1,has_car(A,B)), which would return the ASP programs head_literal(C1,eastbound,1,(V0)) and body_literal(C1,has_car,2,(V0,V1)) respectively.

Encoding entire clauses naturally builds from encoding literals using the function encodeClause, defined as it is in [13]:

encodeClause(Clause,(head:-body1,...,bodyn)) :=
  encodeHead(Clause,head), encodeBody(Clause,body1),...,encodeBody(Clause,bodyn),
  assertDistinct(vars(head) ∪ vars(body1) ∪ ... ∪ vars(bodyn))

where the assertDistinct function simply imposes a pairwise inequality constraint on all variables. For instance, if the three encoded variables are V0, V1 and V2, this function simply returns the constraints: V0!=V1, V0!=V2, V1!=V2.

Since clauses can appear in multiple hypotheses, Normal Popper uses the clauseIdent function, which maps clauses to ASP constraints using a unique identifier. This identifier is used in the ASP literal included_clause(Clause,Id), which indicates that a clause Clause includes all literals of the clause identified by Id. This leads to the inclusionRule function as defined in [13]:

inclusionRule(head:-body1,...,bodyn) :=
  included_clause(Cl,clauseIdent(head:-body1,...,bodyn)):-
    encodeClause(Cl,(head:-body1,...,bodyn)).

This function's head is true if all of the literals of the provided clause appear simultaneously in a clause. Note that this may hold true even if additional literals not in the provided clause are present. To ensure that a provided clause appears exactly in a program, we define exactClause as in [13]:

exactClause(Clause,(head:-body1,...,bodyn)) :=
  included_clause(Clause,clauseIdent(head:-body1,...,bodyn)),
  clause_size(Clause,n)

where the function clause_size(Clause,n) asserts true only if the given clause contains exactly n body literals.

Example 16 (Clause Encoding) If we are trying to encode the clause eastbound(A) :- has_car(A,B), two_wheels(B). and we suppose that clauseIdent(eastbound(A) :- has_car(A,B), two_wheels(B).) = id1, then we define an inclusion rule with the function inclusionRule(eastbound(A):-has_car(A,B),two_wheels(B)), which returns:

included_clause(Cl,id1) :-
  head_literal(Cl,eastbound,1,(V0)),
  body_literal(Cl,has_car,2,(V0,V1)),
  body_literal(Cl,two_wheels,1,(V1)),
  V0!=V1.

and if we wish to ensure this clause appears exactly in a hypothesis, we use the function exactClause(C1,eastbound(A):-has_car(A,B),two_wheels(B)), which returns:

included_clause(C1,id1), clause_size(C1,2).

Now that we have defined the ASP encoding used by Normal and Noisy Popper, we can define the exact forms which constraints take.
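To illustrate the string form these encodings take, the following Python sketch mimics encodeHead, encodeBody, and encodeClause (it omits encodeVar and assumes variables are already given as ASP-ready names such as V0; the function names are inventions of this sketch):

    def encode_head(clause, pred, variables):
        return f"head_literal({clause},{pred},{len(variables)},({','.join(variables)}))"

    def encode_body(clause, pred, variables):
        return f"body_literal({clause},{pred},{len(variables)},({','.join(variables)}))"

    def encode_clause(clause, head, body):
        # head and each body item are (predicate, variables) pairs; the trailing
        # inequalities play the role of assertDistinct.
        literals = [encode_head(clause, *head)] + [encode_body(clause, *b) for b in body]
        vs = sorted({v for _, vars_ in [head, *body] for v in vars_})
        distinct = [f"{a}!={b}" for i, a in enumerate(vs) for b in vs[i + 1:]]
        return ", ".join(literals + distinct)

    print(encode_clause("Cl", ("eastbound", ["V0"]),
                        [("has_car", ["V0", "V1"]), ("two_wheels", ["V1"])]))
    # head_literal(Cl,eastbound,1,(V0)), body_literal(Cl,has_car,2,(V0,V1)),
    # body_literal(Cl,two_wheels,1,(V1)), V0!=V1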
5.1.2 Generalization Constraints

By Definition 18, a generalization of a hypothesis H is a program which contains exactly all of H's exact clauses [13]. Thus, the generalization constraints of Definition 20 can be defined as in [13]:

generalizationConstraint({Clause1, Clause2,...,Clausen}) :=
  inclusionRule(Clause1),..., inclusionRule(Clausen).
  :- exactClause(Cl1,Clause1),..., exactClause(Cln,Clausen).

Example 17 (Generalization Constraint Encoding) The ASP encoding for the inclusion rule and generalization constraint of the hypothesis h = {eastbound(A) :- has_car(A,B), two_wheels(B).} would be:

included_clause(Cl,id1) :-
  head_literal(Cl,eastbound,1,(V0)),
  body_literal(Cl,has_car,2,(V0,V1)),
  body_literal(Cl,two_wheels,1,(V1)),
  V0!=V1.
:- included_clause(Cl0,id1), clause_size(Cl0,2).

5.1.3 Specialization Constraints

By Definition 19, a specialization of a hypothesis H is a program which contains all of H's clauses, which may be specialized, and no additional clauses [13]. Thus, the specialization constraints of Definition 21 can be defined as in [13]:

specializationConstraint({Clause1, Clause2,...,Clausen}) :=
  inclusionRule(Clause1),..., inclusionRule(Clausen).
  :- included_clause(Cl1,clauseIdent(Clause1)),...,
     included_clause(Cln,clauseIdent(Clausen)),
     assertDistinct({Cl1,...,Cln}), not clause(n).

The not clause(n) literal is only satisfied if there are no more than the n distinct clauses given in the constraint. Note that, since all clauses may be specialized, we do not require clauses to be exact as we do in generalization constraints.

Example 18 (Specialization Constraint Encoding) The ASP encoding for the inclusion rules and specialization constraint of the hypothesis:

h = { eastbound(A) :- has_car(A,B), two_wheels(B).
      eastbound(A) :- has_car(A,B), short(B). }

would be:

included_clause(Cl,id2) :-
  head_literal(Cl,eastbound,1,(V0)),
  body_literal(Cl,has_car,2,(V0,V1)),
  body_literal(Cl,two_wheels,1,(V1)),
  V0!=V1.
included_clause(Cl,id3) :-
  head_literal(Cl,eastbound,1,(V0)),
  body_literal(Cl,has_car,2,(V0,V1)),
  body_literal(Cl,short,1,(V1)),
  V0!=V1.
:- included_clause(Cl0,id2), included_clause(Cl1,id3), Cl0!=Cl1, not clause(2).

5.1.4 Elimination Constraints

As outlined in Section 3.3.3, in the case where a hypothesis H is totally incomplete, we wish to prune all separable hypotheses which contain all clauses of H, where any of them may be specialized. Before we can describe the ASP encoding for such a constraint, we require the following logic programs to determine separability [13]:

non_separable :- head_literal(_,P,A,_), body_literal(_,P,A,_).
separable :- not non_separable.

With this, we can define the encoding for an elimination constraint as is done in [13]:

eliminationConstraint({Clause1, Clause2,...,Clausen}) :=
  inclusionRule(Clause1),..., inclusionRule(Clausen).
  :- included_clause(Cl1,clauseIdent(Clause1)),...,
     included_clause(Cln,clauseIdent(Clausen)), separable.

Example 19 (Elimination Constraint Encoding) The ASP encoding for the elimination constraint for the hypothesis h = {eastbound(A) :- has_car(A,B), two_wheels(B).} would be:

included_clause(Cl,id4) :-
  head_literal(Cl,eastbound,1,(V0)),
  body_literal(Cl,has_car,2,(V0,V1)),
  body_literal(Cl,two_wheels,1,(V1)),
  V0!=V1.
:- included_clause(Cl0,id4), separable.

5.1.5 Banish Constraints

The final type of hypothesis constraint Normal Popper implements removes a single hypothesis and is known as a banish constraint. While Normal Popper only used this constraint for testing purposes and not in its full implementation, Noisy Popper makes extensive use of it, since in the relaxed setting it is common that a failed hypothesis does not generate a hypothesis constraint which prunes itself from the hypothesis space.
A banish constraint simply asserts that each clause of a given hypothesis appears in the program, non-specialized, and with no additional clauses. The exact encoding as seen in [13] is as follows:

banishConstraint({Clause1, Clause2,...,Clausen}) :=
  inclusionRule(Clause1),..., inclusionRule(Clausen).
  :- exactClause(Cl1,Clause1),..., exactClause(Cln,Clausen), not clause(n).

Example 20 (Banish Constraint Encoding) The ASP encoding for the banish constraint for the hypothesis h = {eastbound(A) :- has_car(A,B), two_wheels(B).} would be:

included_clause(Cl,id5) :-
  head_literal(Cl,eastbound,1,(V0)),
  body_literal(Cl,has_car,2,(V0,V1)),
  body_literal(Cl,two_wheels,1,(V1)),
  V0!=V1.
:- included_clause(Cl0,id5), clause_size(Cl0,2), not clause(1).

5.1.6 Normal Popper Worked Example

To illustrate clearly how Normal Popper works, we will consider a modified example from [13] using the east-west trains problem. Assume we are trying to find the hypothesis eastbound(A) :- has_car(A,B), short(B), two_wheels(B). and will only consider a small initial hypothesis space, H1:

h1 = { eastbound(A) :- has_car(A,B),long(B). }
h2 = { eastbound(A) :- has_car(A,B),long(A),two_wheels(B). }
h3 = { eastbound(A) :- has_car(A,B),roof_closed(B). }
h4 = { eastbound(A) :- has_car(A,B),short(B),two_wheels(B). }
h5 = { eastbound(A) :- has_car(A,B),long(B),roof_closed(B). }
h6 = { eastbound(A) :- has_car(A,B),roof_closed(B).
       eastbound(A) :- has_car(A,B),short(B). }
h7 = { eastbound(A) :- has_car(A,B),roof_closed(B).
       eastbound(A) :- has_car(A,B),long(C,D),three_wheels(B). }
h8 = { eastbound(B) :- has_car(A,B),short(B),three_wheels(B).
       eastbound(A) :- has_car(A,B),has_load(B,C),triangle(C). }
h9 = { eastbound(A) :- has_car(A,B),long(B).
       eastbound(A) :- has_car(A,B),short(B),three_wheels(D,B).
       eastbound(A) :- has_car(A,B),short(B),two_wheels(B). }

We will also assume we have the following set of positive examples:

E+ = {eastbound(train1)., eastbound(train2).}

and the following set of negative examples:

E− = {eastbound(train3)., eastbound(train4).}

with the BK containing the following facts about the trains:

has_car(train1, car1)., short(car1)., two_wheels(car1)., roof_closed(car1).
has_car(train2, car2)., short(car2)., two_wheels(car2)., jagged_roof(car2).
has_car(train2, car3)., three_wheels(car3)., roof_closed(car3).
has_car(train3, car4)., roof_closed(car4)., three_wheels(car4)., short(car4).
has_car(train4, car5)., has_load(car5,load1)., circle(load1)., two_wheels(car5).

Note that train2 has two cars. Normal Popper will first generate the simplest hypothesis from the search space:

h1 = { eastbound(A) :- has_car(A,B),long(B). }

We can see that since neither train1 nor train2 contains a long car, both will return false when input to this hypothesis. This makes h1 a failed hypothesis, as it is totally incomplete, which implies h1 is too specific. Normal Popper will generate the following specialization constraint:

included_clause(Cl,id1) :-
  head_literal(Cl,eastbound,1,(V0)),
  body_literal(Cl,has_car,2,(V0,V1)),
  body_literal(Cl,long,1,(V1)),
  V0!=V1.
:- included_clause(Cl0,id1), not clause(1).

which prunes all specializations of h1 from H1, namely h2 and h5.
Since h1 is totally incomplete, Normal Popper will also generate the following elimination constraint:

:- included_clause(Cl0,id1), separable.

which prunes all separable hypotheses which contain all of the clauses of h1, where each clause may be specialized. That means that h9 is pruned from the hypothesis space. After this pruning, our hypothesis space is left as:

h3 = { eastbound(A) :- has_car(A,B),roof_closed(B). }
h4 = { eastbound(A) :- has_car(A,B),short(B),two_wheels(B). }
h6 = { eastbound(A) :- has_car(A,B),roof_closed(B).
       eastbound(A) :- has_car(A,B),short(B). }
h7 = { eastbound(A) :- has_car(A,B),roof_closed(B).
       eastbound(A) :- has_car(A,B),long(C,D),three_wheels(B). }
h8 = { eastbound(B) :- has_car(A,B),short(B),three_wheels(B).
       eastbound(A) :- has_car(A,B),has_load(B,C),triangle(C). }

The next hypothesis Normal Popper will generate is:

h3 = { eastbound(A) :- has_car(A,B),roof_closed(B). }

When we test this hypothesis, we find that it does entail both positive examples, as both trains contain a car with a closed roof. However, h3 also entails the negative example train3. This implies that the hypothesis is too general, and thus Normal Popper will generate the following generalization constraint:

included_clause(Cl,id2) :-
  head_literal(Cl,eastbound,1,(V0)),
  body_literal(Cl,has_car,2,(V0,V1)),
  body_literal(Cl,roof_closed,1,(V1)),
  V0!=V1.
:- included_clause(Cl0,id2), clause_size(Cl0,2).

which prunes all generalizations of h3, namely h6 and h7. Now, our hypothesis space is left as:

h4 = { eastbound(A) :- has_car(A,B),short(B),two_wheels(B). }
h8 = { eastbound(B) :- has_car(A,B),short(B),three_wheels(B).
       eastbound(A) :- has_car(A,B),has_load(B,C),triangle(C). }

Finally, Normal Popper will generate hypothesis h4, which successfully entails all positive examples and no negative examples, making it a solution to the problem; it is thus returned.

The following sections will discuss how these constraints are adapted into Noisy Popper and the modifications Noisy Popper makes to better handle noise.

5.2 Anytime Algorithm

The first large obstacle Normal Popper presents when trying to handle noise can be observed in Theorem 1 from [13]:

Theorem 1 (Optimality) [Normal] Popper returns an optimal solution if one exists [13].

Thus, given any set of examples, Normal Popper will either return a solution which entails all examples in E+ and no examples in E−, overfitting if noise is present, or no hypothesis at all, which equates to an empty hypothesis, i.e., a program which always returns true. To avoid returning no hypothesis, Noisy Popper is constructed as an anytime algorithm, in which a hypothesis can be returned by the system at any point in its runtime, regardless of whether or not that hypothesis is an optimal solution. The approach taken in Noisy Popper consists of maintaining the best hypothesis seen so far. That is, each hypothesis generated by Popper is scored by the SACC function from Definition 31, and the hypothesis of highest score is maintained by the system.
In the case that the Popper algorithm is halted early, or the entirety of the hypothesis space is exhausted without finding a solution, the best hypothesis so far is returned. Otherwise, if an LFF solution is found, which necessarily has maximum SACC score, that solution is returned as it would be in Normal Popper. This change to the Normal Popper algorithm can be seen in Algorithm 2 below.

Algorithm 2 Noisy Popper
Input: E+, E−, B, D, C, t, max_vars, max_literals, max_programs, max_clauses (where B is a set of background knowledge, D is a declaration bias, C is a set of constraints, and t is the minimal constraint threshold)
Output: Hypothesis-constraint-consistent logic program or empty set
1: num_literals ← 1
2: num_programs ← 1
3: best_program ← null
4: program_list ← []
5: while num_literals ≤ max_literals and num_programs ≤ max_programs do
6:   program ← generate(D, C, max_vars, num_literals, max_clauses)
7:   if program = 'space exhausted' then
8:     num_literals ← num_literals + 1; continue
9:   end if
10:  (tp, tn) ← test(E+, E−, B, program)
11:  if tp = |E+| and tn = |E−| then
12:    return program
13:  end if
14:  if SACC(program, B, E+, E−) > SACC(best_program, B, E+, E−) then
15:    best_program ← program
16:  end if
17:  if tp > t|E+| then
18:    tp ← |E+|
19:  end if
20:  if tn > t|E−| then
21:    tn ← |E−|
22:  end if
23:  C ← C + learn_constraints(program, tp, tn)
24:  C ← C + learn_sound_constraints(program, program_list, B, E+, E−, tp, tn)
25:  C ← C + learn_size_constraints(program, program_list, B, E+, E−, tp, tn)
26:  append(program_list, program)
27: end while
28: return best_program

Note that Noisy Popper takes a max_programs parameter, which essentially gives a timeout to the algorithm, forcing it to return whatever the current best hypothesis is after that many programs have been considered.

5.3 Minimal Constraints

An effective strategy is to apply constraints minimally and only in cases where we intuitively know that the hypotheses considered are poor. For instance, if a hypothesis entails no positive examples, we can confidently conclude that the hypothesis is too specific, even if a portion of those examples are noisy. Likewise, if a hypothesis entails all negative examples, we can conclude that it is much too general, regardless of noise. To this end, given a hypothesis H, we consider applying the typical hypothesis constraints of Normal Popper as follows:

• if H is totally incomplete (i.e., tp = 0), prune all specializations of H and all separable hypotheses which contain a specialization of H;

• if H is totally inconsistent (i.e., tn = 0), prune all generalizations of H;

• otherwise, only prune H from the hypothesis space.

Though these constraints are not sound, as they may prune SACC-optimal hypotheses, they have proven very effective in practice, greatly improving the efficiency of the system by pruning significant chunks of the hypothesis space.

Minimal Constraint Threshold These minimal constraints arbitrarily, though reasonably, set the threshold at which to prune at 0, i.e., Noisy Popper should prune normally only if the tp or tn scores equal zero. However, Noisy Popper implements this threshold as an optional hyperparameter 0 ≤ t ≤ 1, representing the fraction of positive (resp. negative) examples which, if entailed (resp. not entailed) by a hypothesis, cause no pruning to occur. More specifically, the typical constraints of Normal Popper are applied for a hypothesis H as follows (a small illustration of this relaxation follows the list):

• if tp ≤ t|E+|, prune all specializations of H; if tp = 0, also prune all separable hypotheses which contain a specialization of H;

• if tn ≤ t|E−|, prune all generalizations of H;

• otherwise, only prune H from the hypothesis space.
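The threshold relaxation in lines 17-22 of Algorithm 2 reduces to clamping the outcome, as in this illustrative sketch:

    def relax_outcome(tp, tn, n_pos, n_neg, t=0.0):
        # If a hypothesis entails more than t|E+| positives, treat it as if it
        # entailed all of them, so no specialization pruning fires; likewise
        # for negatives and generalization pruning.
        if tp > t * n_pos:
            tp = n_pos
        if tn > t * n_neg:
            tn = n_neg
        return tp, tn

    print(relax_outcome(tp=2, tn=1, n_pos=3, n_neg=2))  # (3, 2): only a banish constraint
    print(relax_outcome(tp=0, tn=2, n_pos=3, n_neg=2))  # (0, 2): specializations still pruned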
This implementation can be seen in lines 17-22 of the Noisy Popper algorithm, which alter the values of tp and tn to |E+| and |E−| respectively, should they exceed the threshold amounts. These modified tp and tn values are used as arguments for the standard learn_constraints function on line 23, which produces hypothesis constraints as it did in Algorithm 1. It is common that a program may have tp and tn altered to |E+| and |E−| respectively, i.e., complete relaxation of Normal Popper's constraints. In these situations, the learn_constraints function only generates a banish constraint to ensure that the hypothesis is removed from the search space. Without this, the algorithm might consider that same hypothesis indefinitely. Noisy Popper was designed to limit the use of hyperparameters, as they make many ILP systems cumbersome to use effectively. By default, t = 0, which is often the most effective setting.

5.4 Sound Hypothesis Constraints

The learn_sound_constraints function in line 24 of the Noisy Popper algorithm is implemented in Algorithm 3 below. The algorithm compares previously generated programs with the newly generated one to build up a set of hypothesis constraints, which are ultimately added to the ASP constraint set C. For simplicity, we will often refer to these specific constraints as sound constraints. Note that |E+| − 1 is used to essentially convey "some but not all positive examples" and |E−| − 1 likewise conveys "some but not all negative examples".

Algorithm 3 Learn Sound Hypothesis Constraints
Input: program, program_list, B, E+, E−, tp, tn (where B is a set of background knowledge, tp = tp(program, B, E+) and tn = tn(program, B, E−) as calculated in Algorithm 2)
Output: Set of hypothesis constraints (may be empty)
1: constraints ← {}
2: for p in program_list do
3:   if SACC(program, B, E+, E−) − SACC(p, B, E+, E−) > |E+| − tp(p, B, E+) then
4:     constraints ← constraints + learn_constraints(p, |E+|, |E−| − 1)
5:   end if
6:   if SACC(program, B, E+, E−) − SACC(p, B, E+, E−) > |E−| − tn(p, B, E−) then
7:     constraints ← constraints + learn_constraints(p, |E+| − 1, |E−|)
8:   end if
9:   if is_generalization(program, p) and tp(p, B, E+) = tp then
10:    if p ⊆ program then
11:      constraints ← constraints + learn_constraints_non_rec(program, |E+| − 1, |E−| − 1)
12:    else
13:      constraints ← constraints + learn_constraints_non_rec(program, |E+|, |E−| − 1)
14:    end if
15:  end if
16:  if is_specialization(program, p) and tn(p, B, E−) = tn then
17:    constraints ← constraints + learn_constraints_non_rec(program, |E+|, |E−| − 1)
18:  end if
19: end for
20: if tp = |E+| then
21:   constraints ← constraints + learn_constraints(program, |E+|, |E−| − 1)
22: end if
23: if tn = |E−| then
24:   constraints ← constraints + learn_constraints(program, |E+| − 1, |E−|)
25: end if
26: return constraints

Lines 3-5 correspond to Proposition 4 and prune generalizations of a previously seen hypothesis if they cannot have an SACC score higher than the new hypothesis. Likewise, lines 6-8 correspond to Proposition 5 and prune specializations of a previously seen hypothesis if they cannot have an SACC score higher than the new hypothesis. Lines 9-11 correspond to Propositions 6 and 7 and prune non-recursive supersets and non-recursive specializations of the new hypothesis when it has a true positive value equal to a previous hypothesis and is a superset of it. Lines 12-14 pertain just to Proposition 6, as we cannot prune specializations of the new hypothesis if it is not a superset of the previous hypothesis. Lines 16-18 correspond to Proposition 8 and prune generalizations of the new hypothesis when it has a true negative value equal to a previously seen hypothesis and is a specialization of it. Lastly, lines 20-25 correspond to Propositions 9 and 10 and prune all generalizations of the new hypothesis if it entails all positive examples, and all specializations of the new hypothesis if it entails no negative examples.
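The comparison tests on lines 3-8 can be written directly from Propositions 4 and 5; the sketch below is illustrative, with hypotheses reduced to (tp, tn) outcome pairs and made-up numbers:

    def sacc(tp, tn):
        return tp + tn

    def prunes_generalizations(new, old, n_pos):
        # Proposition 4: if the new program out-scores an earlier one by more
        # than fn(old), the earlier program's generalizations cannot be optimal.
        return sacc(*new) - sacc(*old) > n_pos - old[0]

    def prunes_specializations(new, old, n_neg):
        # Proposition 5: the symmetric test with false positives.
        return sacc(*new) - sacc(*old) > n_neg - old[1]

    print(prunes_generalizations(new=(5, 4), old=(4, 2), n_pos=5))  # True: 3 > 1
    print(prunes_specializations(new=(5, 4), old=(2, 4), n_neg=5))  # True: 3 > 1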
In Algorithm 3, while the standard learn_constraints function is used to generate ASP constraints as normal, a specific variant function learn_constraints_non_rec is also used to learn constraints which specifically do not prune recursive hypotheses. In the ASP encoding, this simply requires adding not recursive to the constraint. So, a generalization constraint which does not prune recursive hypotheses would be defined as:

generalizationConstraintNonRec({Clause1, Clause2,...,Clausen}) :=
  inclusionRule(Clause1),..., inclusionRule(Clausen).
  :- exactClause(Cl1,Clause1),..., exactClause(Cln,Clausen), not recursive.

and likewise, an analogous specialization constraint would be defined as:

specializationConstraintNonRec({Clause1, Clause2,...,Clausen}) :=
  inclusionRule(Clause1),..., inclusionRule(Clausen).
  :- included_clause(Cl1,clauseIdent(Clause1)),...,
     included_clause(Cln,clauseIdent(Clausen)),
     assertDistinct({Cl1,...,Cln}), not clause(n), not recursive.

Additionally, the is_generalization and is_specialization functions are specially implemented for Noisy Popper and check for a version of subsumption as Definitions 18 and 19 outline, i.e., if every clause in H1 is subsumed by some clause in H2, then H2 is a generalization of H1 and H1 is a specialization of H2. Notably, checking subsumption is NP-complete [21], and these functions do not check for variable substitutions.

5.5 Sound Constraints with Hypothesis Size

The learn_size_constraints function in line 25 of the Noisy Popper algorithm is implemented below in Algorithm 4. Like Algorithm 3, this algorithm builds a list of hypothesis constraints using hypothesis size by comparing previously generated programs with the newly generated one. We will often refer to these constraints simply as size constraints. These constraints are ultimately added to the ASP constraint set C in the Noisy Popper algorithm and used to prune the hypothesis space of particularly large hypotheses.

Algorithm 4 Learn Sound Hypothesis Constraints with Hypothesis Size
Input: program, program_list, B, E+, E−, tp, tn (where B is a set of background knowledge, tp = tp(program, B, E+) and tn = tn(program, B, E−) as calculated in Algorithm 2)
Output: Set of hypothesis constraints (may be empty)
1: constraints ← {}
2: for p in program_list do
3:   gen_size ← |E+| + tn(p, B, E−) − SMDL(program, B, E+, E−)
4:   constraints ← constraints + learn_constraints_with_size(p, |E+|, |E−| − 1, gen_size)
5:   spec_size ← |E−| + tp(p, B, E+) − SMDL(program, B, E+, E−)
6:   constraints ← constraints + learn_constraints_with_size(p, |E+| − 1, |E−|, spec_size)
7: end for
8: gen_size ← |E+| − tp + size(program)
9: constraints ← constraints + learn_constraints_with_size(program, |E+|, |E−| − 1, gen_size)
10: spec_size ← |E−| − tn + size(program)
11: constraints ← constraints + learn_constraints_with_size(program, |E+| − 1, |E−|, spec_size)
12: return constraints
Lines 3-4 correspond to Proposition 11 and prune all generalizations of a previously seen hypothesis above a particular size, as they cannot have an SMDL score greater than that of the newly generated program and are thus not SMDL-optimal. Likewise, lines 5-6 correspond to Proposition 12 and prune all specializations of a previously seen hypothesis above a particular size, as they cannot be SMDL-optimal. Note that even in the case where the new hypothesis performs exceptionally poorly, these will still generate hypothesis constraints and remove exceptionally large generalizations and specializations from the hypothesis space. Lines 8-9 correspond to Proposition 13 and prune all generalizations of the new hypothesis H with size greater than fn(H, B, E+) + size(H), as these can never have a greater SMDL score than H and are thus not SMDL-optimal. Likewise, lines 10-11 correspond to Proposition 14 and prune all specializations of the new hypothesis H with size greater than fp(H, B, E−) + size(H), as these cannot be SMDL-optimal.

Like Algorithm 3, Algorithm 4 also introduces a new modified version of learn_constraints, called learn_constraints_with_size, which generates variants of the typical ASP constraints taking an additional size argument. These ASP constraints only prune hypotheses which have a size greater than the given size argument. A generalization constraint which only prunes hypotheses above a particular size would be defined as:

generalizationConstraintWithSize({Clause1, Clause2,...,Clausen}, size) :=
  inclusionRule(Clause1),..., inclusionRule(Clausen).
  :- exactClause(Cl1,Clause1),..., exactClause(Cln,Clausen),
     program_size(N), size < N.

where program_size(N) holds true only if the number of body literals in the given program equals N. Likewise, an analogous specialization constraint would be defined as:

specializationConstraintWithSize({Clause1, Clause2,...,Clausen}, size) :=
  inclusionRule(Clause1),..., inclusionRule(Clausen).
  :- included_clause(Cl1,clauseIdent(Clause1)),...,
     included_clause(Cln,clauseIdent(Clausen)),
     assertDistinct({Cl1,...,Cln}), not clause(n),
     program_size(N), size < N.

Learning from All Previous Hypotheses The learn_sound_constraints and learn_size_constraints algorithms take a list of programs with which to compare the most recently generated hypothesis. From line 26 of the Noisy Popper algorithm, we can see that this list is composed of all previously seen hypotheses that the system has generated up to that point.
While the original motivation behind the sound constraints was to learn from the best hypothesis as it was continuously being maintained, Propositions 4-14 all hold when comparing any two hypotheses. Thus, we can generate additional constraints by comparing any new hypothesis with a running list of all previously encountered and scored hypotheses. We motivate this through an example:

Example 21 (Comparing to All Previous Hypotheses) Consider an east-west trains problem with BK B and |E+| = |E−| = 5. Assume we have already observed the following hypotheses:

h1 = { eastbound(A) :- has_car(A,B),short(B). }
h2 = { eastbound(A) :- has_car(A,B),long(B). }

where tp(h1, B, E+) = 2, tn(h1, B, E−) = 0, tp(h2, B, E+) = 0, and tn(h2, B, E−) = 2. Now, consider that the next hypothesis generated is h3 = {eastbound(A) :- has_car(A,B), three_wheels(B).}, which has SACC(h3, B, E+, E−) = 6.

Since SACC(h3, B, E+, E−) − SACC(h1, B, E+, E−) > |E+| − tp(h1, B, E+), by Proposition 4 we may prune all generalizations of h1. Similarly, since SACC(h3, B, E+, E−) − SACC(h2, B, E+, E−) > |E−| − tn(h2, B, E−), by Proposition 5 we may prune all specializations of h2. If we had not maintained both h1 and h2, we would only have been able to identify one of these hypothesis constraints.

Thus, the number of hypothesis constraints we can generate increases as more hypotheses are maintained for comparison. Comparing to all previous hypotheses provides the system with a noticeable improvement in practice, though at the cost of significant inefficiencies, as repeated subsumption checks are taxing. Steps are taken in the Noisy Popper implementation to avoid redundant constraint generation as much as possible. Lists are maintained to keep track of programs which have had their generalizations and specializations pruned, as sketched below. Should a program have its generalizations pruned, it is removed from the respective list to ensure it is not checked again, and similar actions are taken for specializations. Programs removed from the generalizations list additionally cannot generate any generalization constraints with hypothesis size, as these would be similarly redundant, and likewise for specialization constraints with hypothesis size. Even with these changes, we still may loop over every previously seen hypothesis in each generate-test-constrain iteration of Algorithm 2. Thus, given N hypotheses in the hypothesis space, we may make O(N²) total hypothesis comparisons in both Algorithm 3 and Algorithm 4. Each comparison may additionally check for incomplete subsumption as specified previously, which, if we assume each program has at most C clauses each with at most L literals, takes O((CL)²). Thus, the additional code used to modify Normal Popper into Noisy Popper has a worst-case runtime of O((NCL)²). Again, checking subsumption in full is NP-complete [21], and we are using incomplete subsumption checks here.
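The bookkeeping just described can be sketched as follows. The sketch is illustrative: hypotheses are reduced to named (tp, tn) scores, the Scored class and sound_pass function are inventions of this sketch, and a (tp, tn) split consistent with SACC(h3) = 6 from Example 21 is assumed.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Scored:
        name: str
        tp: int
        tn: int

    def sound_pass(new, gen_list, spec_list, n_pos, n_neg, emit):
        # Compare the new program against every maintained one; once a
        # program's generalizations (specializations) are pruned, drop it from
        # the respective list so the same constraint is never re-derived.
        for old in list(gen_list):
            if (new.tp + new.tn) - (old.tp + old.tn) > n_pos - old.tp:  # Prop. 4
                emit(f"prune generalizations of {old.name}")
                gen_list.remove(old)
        for old in list(spec_list):
            if (new.tp + new.tn) - (old.tp + old.tn) > n_neg - old.tn:  # Prop. 5
                emit(f"prune specializations of {old.name}")
                spec_list.remove(old)

    h1, h2 = Scored("h1", 2, 0), Scored("h2", 0, 2)
    gen, spec = [h1, h2], [h1, h2]
    sound_pass(Scored("h3", 4, 2), gen, spec, n_pos=5, n_neg=5, emit=print)
    # prune generalizations of h1
    # prune specializations of h2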
5.6 Noisy Popper Worked Example

To illustrate how Noisy Popper works, we will consider another east-west trains problem. Again assume we are trying to find the hypothesis eastbound(A) :- has_car(A,B), short(B), two_wheels(B). We will also assume that t = 0 is the minimal constraint threshold used and consider only a small initial hypothesis space, H2:

h1 = { eastbound(A) :- has_car(A,B),long(B). }
h2 = { eastbound(A) :- has_car(A,B),long(A),two_wheels(B). }
h3 = { eastbound(A) :- has_car(A,B),short(B). }
h4 = { eastbound(A) :- has_car(A,B),short(B),two_wheels(B). }
h5 = { eastbound(A) :- has_car(A,B),long(B),roof_closed(B). }
h6 = { eastbound(A) :- has_car(A,B),roof_closed(B),three_wheels(B).
       eastbound(A) :- has_car(A,B),short(B). }
h7 = { eastbound(A) :- has_car(A,B),short(B),two_wheels(B).
       eastbound(A) :- has_car(A,B),roof_closed(B),two_wheels(B). }
h8 = { eastbound(A) :- has_car(A,B),long(B).
       eastbound(A) :- has_car(A,B),short(B),three_wheels(D,B).
       eastbound(A) :- has_car(A,B),short(B),two_wheels(B). }

We will also assume similar sets of examples as in the worked example in Section 5.1.6, but with the addition of a noisy positive example eastbound(train5).:

E+ = {eastbound(train1)., eastbound(train2)., eastbound(train5).}
E− = {eastbound(train3)., eastbound(train4).}

where the BK is again the same as in Section 5.1.6, but with the added facts for train5:

has_car(train5, car6)., two_wheels(car6)., roof_closed(car6).

Noisy Popper will first generate the hypothesis h1 = {eastbound(A):-has_car(A,B),long(B).} from the hypothesis space. Since no train contains a long car, no example is entailed, positive or negative, giving SACC = 2. Being the first hypothesis considered, this is saved as the best hypothesis. Because h1 entails no positive examples and no negative examples, the minimal constraints will not alter the outcome (tp = 0, tn = 2). This means that a specialization constraint and an elimination constraint are generated as they are in Section 5.1.6, pruning all specializations of h1, namely h2 and h5, as well as all separable hypotheses which contain specializations of h1, namely h8 (line 23 of Algorithm 2).

Note that the learn_sound_constraints function would normally produce an identical specialization constraint, since tn(h1, B, E−) = |E−| (lines 23-24 of Algorithm 3), but this is avoided in implementation as it would be redundant. Similarly, the learn_size_constraints function would produce constraints pruning all generalizations of h1 with size greater than |E+| − tp(h1, B, E+) + size(h1) = 5 (lines 8-9 of Algorithm 4) and pruning all specializations of h1 with size greater than |E−| − tn(h1, B, E−) + size(h1) = 2 (lines 10-11 of Algorithm 4), but this last constraint is also avoided in implementation as it is redundant. These functions produce no other constraints, as there are no previous programs with which to compare h1. This leaves the hypothesis space as:

h3 = { eastbound(A) :- has_car(A,B),short(B). }
h4 = { eastbound(A) :- has_car(A,B),short(B),two_wheels(B). }
h6 = { eastbound(A) :- has_car(A,B),roof_closed(B),three_wheels(B).
       eastbound(A) :- has_car(A,B),short(B). }
h7 = { eastbound(A) :- has_car(A,B),short(B),two_wheels(B).
       eastbound(A) :- has_car(A,B),roof_closed(B),two_wheels(B). }

Noisy Popper will next generate hypothesis h3 = {eastbound(A):-has_car(A,B),short(B).}. Since train1, train2, and train3 contain short cars, two positive examples and one negative example will be entailed, giving SACC(h3, B, E+, E−) = 3. Noisy Popper will replace h1 with h3 as the new best hypothesis. Since tp(h3, B, E+) > 0 and tn(h3, B, E−) > 0, in Normal Popper the generalizations and specializations of h3 would be pruned, including the true solution h4.
However, due to the constraint relaxation, the values of tp and tn are changed to |E+| and |E−| respectively (lines 17-22 of Algorithm 2), so only a banish constraint is generated by learn_constraints in line 23 of Algorithm 2. No constraints are generated by learn_sound_constraints and learn_size_constraints, as these are all again redundant or do not affect our hypothesis space. The only program removed is h3, through the banish constraint, leaving the hypothesis space as:

h4 = { eastbound(A) :- has_car(A,B),short(B),two_wheels(B). }
h6 = { eastbound(A) :- has_car(A,B),roof_closed(B),three_wheels(B).
       eastbound(A) :- has_car(A,B),short(B). }
h7 = { eastbound(A) :- has_car(A,B),short(B),two_wheels(B).
       eastbound(A) :- has_car(A,B),roof_closed(B),two_wheels(B). }

The next hypothesis generated is h4 = {eastbound(A):-has_car(A,B),short(B),two_wheels(B).}, which entails no negative examples and all but the single noisy positive example. This gives SACC(h4) = 4, and thus it will be maintained as the new best hypothesis, though since it does not entail all positive and no negative examples, it is not immediately returned. Again, due to the constraint relaxation, the values of tp and tn are changed to |E+| and |E−| respectively, avoiding pruning all specializations of h4 as would be done in Normal Popper (lines 17-22 of Algorithm 2). However, all specializations of h4 are pruned regardless, as it entails no negative examples (lines 23-24 of Algorithm 3). learn_size_constraints will generate a constraint which prunes all generalizations of the previously seen hypothesis h3 with size greater than |E+| + tn(h3, B, E−) − SMDL(h4, B, E+, E−) = 4 (lines 3-4 of Algorithm 4). The generalization-with-size constraint created would be:

included_clause(Cl,id1) :-
  head_literal(Cl,eastbound,1,(V0)),
  body_literal(Cl,has_car,2,(V0,V1)),
  body_literal(Cl,short,1,(V1)),
  V0!=V1.
:- included_clause(Cl0,id1), clause_size(Cl0,2), program_size(N), 4 < N.

which prunes h6 from the hypothesis space. learn_size_constraints would also generate a constraint which prunes all generalizations of h4 of size greater than |E+| − tp(h4, B, E+) + size(h4) = 4 (lines 8-9 of Algorithm 4). The generalization-with-size constraint generated would be similar to the one above, but with one additional body literal encoded. This prunes h7 from the hypothesis space, which is notable, as this hypothesis perfectly fits the data but would overfit, since it entails the noisy positive example. Lastly, h4 itself is pruned from the hypothesis space via a banish constraint, leaving H2 empty. Since no solution was found, the best maintained hypothesis is returned instead, meaning that h4 is correctly returned.
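The score bookkeeping of this example can be replayed in a few lines (illustrative only; the tp/tn values are taken from the text above):

    E_POS, E_NEG = 3, 2            # {train1, train2, noisy train5} / {train3, train4}

    def s_acc(tp, tn):             # the SACC score of Definition 31
        return tp + tn

    scores = {
        "h1": s_acc(0, 2),         # no train has a long car: nothing entailed
        "h3": s_acc(2, 1),         # short car: train1, train2, and negative train3
        "h4": s_acc(2, 2),         # short + two wheels: misses only noisy train5
    }
    # No hypothesis reaches the solution score |E+| + |E-| = 5, so the best
    # maintained hypothesis (h4) is returned when the space is exhausted.
    assert all(s < E_POS + E_NEG for s in scores.values())
    print(max(scores, key=scores.get), scores)   # h4 {'h1': 2, 'h3': 3, 'h4': 4}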
5.7 Summary

In this chapter, we discussed the implementation details of the Normal Popper system, including its generate-test-constrain loop and how it encodes programs and hypothesis constraints into an ASP problem. We then discussed how Noisy Popper modifies Normal Popper into an anytime algorithm and demonstrated how it implements unsound minimal constraints to prune typically suboptimal hypotheses. Next, we described the algorithm which generates sound hypothesis constraints, or sound constraints, based on Propositions 4-10 in Chapter 4, including the necessary changes to the ASP constraint encodings. We likewise described the algorithm which generates sound hypothesis constraints that take into account hypothesis size, or size constraints, under the MDL scoring, based on Propositions 11-14. Finally, we demonstrated Noisy Popper through a worked example. In the next chapter, we will describe the experiments and results used to compare Noisy Popper to Normal Popper, as well as experiments which compare the effectiveness of the individual components of Noisy Popper.

Chapter 6

Experimental Results

In this chapter, we will empirically explore the capabilities of Noisy Popper. Namely, we aim to determine the validity of the claims made in Chapter 1, which stated that Noisy Popper is more capable of generalizing to noisy data than Normal Popper and that, without noise, Noisy Popper still generalizes well, though less efficiently than Normal Popper. To this end, the following experimental questions are formulated:

Q1. How well does Noisy Popper generalize to datasets with varying levels of noise in comparison to Normal Popper?

To answer this question, we compare the two systems directly on several problems commonly found in the literature. We will compare purely their accuracies over various amounts of noise, including without noise. Though we will briefly compare the two systems as they are, this is a slightly unfair comparison, as Normal Popper will typically return an empty solution in the presence of noise. Thus, for most of the experiments we will enhance Normal Popper as an anytime algorithm, as we did for Noisy Popper.

Q2. How inefficient is Noisy Popper in comparison to Normal Popper?

To answer this question, we will again test the two systems against several datasets, this time measuring the time it takes for the systems to complete. A natural question regarding the effectiveness of Noisy Popper is to what degree each enhancement impacts learning, both in speed and accuracy. Thus, the following question should be posed:

Q3. How significantly does each enhancement within Noisy Popper impact its learning capabilities and efficiency?

To answer this question, we will evaluate Noisy Popper as a whole against versions of Noisy Popper without (i) minimal constraints, (ii) sound constraints, and (iii) size constraints, in addition to a completely relaxed brute-force version of Normal Popper, Enumerate. The accuracies and completion times of each system will be compared to determine the impact of each in various settings.

6.1 Noisy Popper vs. Normal Popper

The purpose of this first set of experiments is to evaluate how well Noisy Popper generalizes to noisy and noiseless datasets in comparison to Normal Popper, measuring both the predictive accuracy of the systems as well as the time it takes both to run. We will evaluate both systems over several diverse problem sets commonly found in the literature: two variants of Michalski's east-west trains problem, several program-synthesis list transformation problems, and two inductive general game playing (IGGP [12]) problems. These datasets will also be used for the experiments in the following sections.

6.1.1 Experiment 1: East-West Trains

This series of problems consists of learning eastbound target predicates for two variations on Michalski's east-west trains problem, as described in Chapter 1 and used throughout this paper. Such problems are easy for a system to overfit and will help determine how effective Noisy Popper is at generalizing to noisy data.
Materials Noisy Popper and Normal Popper will be evaluated on two different east-west trains problems with data generated from the following ground-truth hypotheses:

h1 = { eastbound(A) :- has_car(A,C),long(C),roof_closed(C),has_car(A,B),three_wheels(B). }
h2 = { eastbound(A) :- has_car(A,C),roof_open(C),has_car(A,B),roof_closed(B). }

Both systems are given identical BK containing the descriptions of all 999 trains via the relations has_car/2, has_load/2, short/1, long/1, two_wheels/1, three_wheels/1, etc. The language biases for Normal and Noisy Popper restrict hypotheses to at most six unique variables, at most six body literals per clause, and at most three clauses. The systems are given types and directions (i.e., input or output) for the arguments of each predicate. The minimal constraint threshold for Noisy Popper is set to its default t = 0.

For one experiment, Normal Popper will be run as is, with no modification. This will adequately demonstrate Normal Popper's inability to generalize at all to noisy data, as it will be unable to find an LFF solution before the given system timeout. Out of fairness, the rest of the experiments here and moving forward will run Normal Popper enhanced as an anytime algorithm which, like Noisy Popper, maintains its best seen hypothesis and returns it if no LFF solution is found. Normal Popper will still generate constraints as normal with this enhancement.

Methods For each hypothesis above, 50 positive and 50 negative randomly selected examples will be generated for training, while 200 positive and 200 negative randomly selected examples will be generated for testing. Example trains are selected randomly using the two hypotheses from the pool of 999 defined in the BK. Should a system fail to return any hypothesis, we will assume that all examples are entailed, giving a default predictive accuracy of 50% in this instance. A timeout of ten minutes and a limit of 200 generated programs are enforced per task, which ensures both systems can learn from the same number of programs. The predictive accuracy and learning times are recorded for each task, and each experiment repeats the task ten times, with the means and standard errors plotted. Each experiment will be repeated for training noise levels from 0% to 40% in increments of 10%, with 5% additionally being tested. The test sets will remain noiseless.

Results and Analysis Table 6.1 below shows that, when compared to Normal Popper unenhanced by an anytime algorithm, Noisy Popper far exceeds its predictive accuracy. Normal Popper achieving 50% accuracy indicates that the system was unable to find an LFF solution to the task within the 200-program limit allotted and is thus given the default predictive accuracy. This is an expected and uninteresting result, as Normal Popper's inability to return non-LFF solutions has already been discussed.

Target Program   Training Noise (%)   Normal Popper   Noisy Popper
h1               0                    100±0           100±0
h1               5                    50±0            100±0
h1               10                   50±0            100±0
h1               20                   50±0            100±0
h2               0                    100±0           100±0
h2               5                    50±0            100±0
h2               10                   50±0            100±0
h2               20                   50±0            100±0

Table 6.1: East-West Trains predictive accuracy for programs h1 and h2 with Normal Popper not enhanced as an anytime algorithm. Errors are standard.
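One plausible reading of "x% training noise" is label flipping: moving that fraction of examples across E+ and E−. The sketch below implements this reading; the thesis' exact noise generator is not specified here, so treat the procedure (and the add_label_noise name) as an assumption for illustration only.

    import random

    def add_label_noise(pos, neg, noise, seed=0):
        # Swap a fraction `noise` of examples between E+ and E-, keeping the
        # set sizes fixed (an assumed protocol, for illustration only).
        rng = random.Random(seed)
        k_pos, k_neg = round(noise * len(pos)), round(noise * len(neg))
        flip_p = set(rng.sample(range(len(pos)), k_pos))
        flip_n = set(rng.sample(range(len(neg)), k_neg))
        new_pos = [e for i, e in enumerate(pos) if i not in flip_p]
        new_neg = [e for i, e in enumerate(neg) if i not in flip_n]
        new_pos += [neg[i] for i in flip_n]
        new_neg += [pos[i] for i in flip_p]
        return new_pos, new_neg

    pos = [f"eastbound(train{i})" for i in range(50)]
    neg = [f"eastbound(train{i})" for i in range(50, 100)]
    noisy_pos, noisy_neg = add_label_noise(pos, neg, noise=0.10)
    print(len(noisy_pos), len(noisy_neg))   # 50 50, with 5 examples swapped each way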
In terms of predictive accuracy, Noisy Popper outperforms Nor- mal Popper at all noise levels greater than 0% for both problems and ties as expected at 0% noise with perfect predictive accuracy. A McNemar’s [22] test on the Noisy and Normal Popper predictive accuracy additionally confirmed the significance at the p < 0.001 level for both problems. For most noise levels, Normal Popper typically either overfits the data, returning a hypothesis with several extra clauses, or underfits the data having pruned the correct solution from the hypothesis space early on. Noisy Popper’s relaxed setting helps avoid this over pruning. Over 30% noise, Noisy Popper also begins overfitting the data though still produces higher predictive accuracy than Normal Popper. Normal Popper consistently ran much quicker than Noisy Popper, running in under two seconds regardless of noise level. Noisy Popper’s learning time is much more volatile, dependent on the number of constraints Noisy Popper generates. Precise reasons for this inefficiency are discussed in the second section of experiments. However, with no noise, both systems take roughly the same amount of time to learn the correct solution. Overall, these results suggest that the answer to Q1 is that Noisy Popper generalizes better than Normal Popper under and generalizes as well as Normal Popper when no noise is present. It also suggests that the answer is Q2 is that Noisy Popper is significantly less efficient than Normal Popper and can be at most around 20 times slower than Normal Popper with these datasets. ) % ( y c a r u c c A e v i t c i d e r P 100 95 90 85 ) s d n o c e s ( i e m T g n i n r a e L 20 10 0 0 Normal Popper Noisy Popper 10 20 Noise (%) 30 40 Normal Popper Noisy Popper 0 10 20 Noise (%) 30 40 Figure 6.1: East-West Trains predictive accuracy and learning time (in seconds) for program h1 when varying percentage of noisy training data. Standard error is depicted by bars. CHAPTER 6. EXPERIMENTAL RESULTS 69 ) % ( y c a r u c c A e v i t c i d e r P 100 90 80 ) s d n o c e s ( i e m T g n i n r a e L 20 10 0 0 Normal Popper Noisy Popper 10 20 Noise (%) 30 40 Normal Popper Noisy Popper 0 10 20 Noise (%) 30 40 Figure 6.2: East-West Trains predictive accuracy and learning time (in seconds) for program h2 when varying percentage of noisy training data. Standard error is depicted by bars. 6.1.2 Experiment 2: List Manipulations This series of problems consists of learning target predicates which manipulate or check certain properties of number lists and serves as an example of program synthesis. These problems are difficult typically requiring recursive solutions. Learning recursive programs has often been considered a difficult though important task for ILP systems [11]. Materials Noisy Popper and Normal Popper will be evaluated on the nine list manipulation problems seen below in Table 6.2 from [13]. These have been shown to be challenging for many ILP systems unless strong inductive biases are provided with the exception of Normal Popper which has demonstrated near perfect accuracy on each task in a noiseless setting [13]. Both systems are given identical BK containing some of the monadic (i.e. one ar- gument) relations empty, even, odd, one, and zero, dyadic (i.e., two argument) relations decrement, head, geq, increment, member and tail, and triadic (i.e., three argument) relations append and prepend in order to construct solutions. 
The language biases for Normal and Noisy Popper restrict hypotheses to at most five unique variables, at most five body literals per clause, and at most two clauses. The systems are again given types and directions for the arguments of each predicate, as well as a timeout to prevent non-terminating programs from running infinitely. The minimal constraint threshold for Noisy Popper is set to its default t = 0. Normal Popper will also be enhanced with an anytime algorithm approach, as is done in Experiment 1 above.

addhead: Prepend head of list three times.
    addhead(A,B):-head(A,C),cons(C,A,D),cons(C,D,E),cons(C,E,B).
droplast: Drop the last element of the list.
    droplast(A,B):-tail(A,B),empty(B).
    droplast(A,B):-tail(A,C),droplast(C,D),head(A,E),cons(E,D,B).
evens: Check all elements are even.
    evens(A):-empty(A).
    evens(A):-head(A,B),even(B),tail(A,C),evens(C).
finddup: Find duplicate elements.
    finddup(A,B):-head(A,B),tail(A,C),member(B,C).
    finddup(A,B):-tail(A,C),finddup(C,B).
last: Last element of list.
    last(A,B):-tail(A,C),empty(C),head(A,B).
    last(A,B):-tail(A,C),last(C,B).
len: Calculates list length.
    len(A,B):-empty(A),zero(B).
    len(A,B):-tail(A,C),len(C,D),increment(D,B).
member: Member of the list.
    member(A,B):-head(A,B).
    member(A,B):-tail(A,C),member(C,B).
sorted: Checks if list is sorted.
    sorted(A):-tail(A,B),empty(B).
    sorted(A):-head(A,B),tail(A,C),head(C,D),geq(D,B),sorted(C).
threesame: First three elements are identical.
    threesame(A):-head(A,B),tail(A,C),head(C,B),tail(C,D),head(D,B).

Table 6.2: List manipulation problems with descriptions and example solutions [13].

Normal Popper will also be enhanced with an anytime algorithm approach as is done in Experiment 1 above.

Methods For each task, 20 positive and 20 negative randomly generated examples are used for training, while 1000 positive and 1000 negative randomly generated examples are used for testing. List elements are sampled uniformly from the set {1, 2, ..., 100}. A timeout of ten minutes and a limit of 500 generated programs is enforced per task, even if this prevents a system from finding a more accurate solution. The predictive accuracy and learning times are recorded for each task, and each experiment will repeat the task ten times with the means and standard errors plotted. Each experiment will be repeated for training noise levels of 0%, 5%, 10%, and 20%, though the test sets will remain noiseless.

Results and Analysis Tables 6.3 and 6.4 below depict the predictive accuracy and learning times respectively for each of the list manipulation tasks with training data noise levels of 0%, 5%, 10%, and 20%. For most problems, both systems were able to find correct solutions, with the exceptions of finddup, on which both systems struggled, and member, on which Normal Popper struggled with added noise. This indicates that even Normal Popper enhanced with an anytime algorithm is capable of generalizing well to noisy data, which makes the cases where Noisy Popper outperforms Normal Popper notable. Both systems, however, can struggle on particular problems or datasets, though it is possible both systems could have found correct solutions to the finddup task if given additional time. Normal Popper consistently ran significantly faster than Noisy Popper. This is due to the large number of constraints Noisy Popper generates, which the ASP solver must use, and to the number of programs that must be compared to generate these constraints. As discussed in Chapter 5, this number of comparisons is quadratic in the total number of hypotheses generated, as the following sketch illustrates.
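A schematic view of this quadratic growth (a minimal sketch; compare is a hypothetical stand-in for Noisy Popper's pairwise hypothesis comparison and constraint generation):

    def process_new_hypothesis(new_h, seen, compare):
        """Compare each new hypothesis against every hypothesis seen so far.

        After n hypotheses have been generated, roughly n*(n-1)/2 pairwise
        comparisons have been performed in total, and each comparison may
        emit several constraints for the ASP solver.
        """
        constraints = []
        for old_h in seen:
            constraints.extend(compare(new_h, old_h))
        seen.append(new_h)
        return constraints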
For a single hypothesis generated by Normal Popper, the system may generate at most two constraints, whereas Noisy Popper can generate a multitude of constraints for every program already seen by the system. While the Noisy Popper implementation attempts to mitigate redundant constraints, this clearly produces a large bottleneck for the system, which is also affected by a large grounding issue discussed in the following experimentation section. This data indicates that an answer to Q1 is that Noisy Popper generalizes as well as Normal Popper for many noisy datasets, though typically never performs worse. An answer to Q2 is again that Noisy Popper is much more inefficient than Normal Popper and may in fact be unusable in certain cases due to its extreme inefficiencies.

Name        Noise (%)   Normal Popper   Noisy Popper
addhead     0           100±0           100±0
addhead     5           100±0           100±0
addhead     10          100±0           100±0
addhead     20          100±0           100±0
droplast    0           100±0           100±0
droplast    5           100±0           100±0
droplast    10          100±0           100±0
droplast    20          100±0           100±0
evens       0           100±0           100±0
evens       5           100±0           100±0
evens       10          100±0           100±0
evens       20          95±0            99±0
finddup     0           55±0            54±0
finddup     5           54±0            52±0
finddup     10          52±0            53±0
finddup     20          51±0            53±0
last        0           100±0           100±0
last        5           100±0           100±0
last        10          100±0           100±0
last        20          100±0           100±0
len         0           100±0           100±0
len         5           100±0           100±0
len         10          100±0           100±0
len         20          100±0           100±0
member      0           100±0           100±0
member      5           97±0            100±0
member      10          96±0            100±0
member      20          86±0            100±0
sorted      0           100±0           100±0
sorted      5           100±0           100±0
sorted      10          100±0           100±0
sorted      20          100±0           100±0
threesame   0           100±0           100±0
threesame   5           100±0           100±0
threesame   10          99±0            99±0
threesame   20          99±0            99±0

Table 6.3: Predictive accuracy for Normal and Noisy Popper on list manipulation problems. Accuracies are rounded to the nearest integer and errors to the nearest tenth. Errors are standard.

Name        Noise (%)   Normal Popper   Noisy Popper
addhead     0           0.6±0           2±0.1
addhead     5           12±3            79±4
addhead     10          9±2             74±2
addhead     20          8±3             74±3
droplast    0           34±8            81±39
droplast    5           78±9            135±45
droplast    10          80±10           142±33
droplast    20          79±8            137±36
evens       0           2±0             7±0.5
evens       5           13±3            38±1
evens       10          14±1            45±0.5
evens       20          13±1            40±3
finddup     0           8±1             39±2
finddup     5           7±0.9           36±1
finddup     10          8±0.9           39±2
finddup     20          9±1             40±2
last        0           1±0.4           15±4
last        5           3±0.4           19±0.5
last        10          3±0.4           21±0.3
last        20          3±0.4           20±0.3
len         0           0.5±0           2±0.1
len         5           3±0.4           59±6
len         10          2±0.2           56±2
len         20          2±0.1           56±2
member      0           0.4±0           0.7±0
member      5           10±3            23±3
member      10          18±6            23±3
member      20          20±5            23±3
sorted      0           4±0.4           26±3
sorted      5           8±0.4           42±0.1
sorted      10          8±0.4           43±1
sorted      20          8±0.4           43±0.2
threesame   0           0.3±0.1         0.5±0.1
threesame   5           0.8±0.1         2±0.1
threesame   10          1±0.1           3±0.3
threesame   20          1±0.1           4±0.2

Table 6.4: Learning times for Normal and Noisy Popper on list manipulation problems. Times are rounded to the nearest second if they are greater than 1 second and to the tenth otherwise. Errors are standard.

6.1.3 Experiment 3: IGGP Problems

The general game playing (GGP) competition [18] measures a system's general intelligence by giving the agent the rules to several new games described as logic programs before having the agent play each game. The competition winner is the agent which scores the best total over all games. The inductive general game playing (IGGP) [12] task inverts the GGP task, providing a system with logical traces of a game in order for the system to try to learn the rules of the game. These experiments focus on the minimal decay and rock, paper, scissors (rps) tasks, aiming to learn the target predicate next_score, which determines the score a player will have given their action and the action of the other player on a given turn.

Materials Noisy Popper and Normal Popper will be evaluated on the IGGP minimal decay and rps tasks. The BK for each system will contain facts about particular gameplay traces, i.e., specific actions players took on each turn, the actual score of a player after a turn has completed, etc. Some gameplay rules are also provided in the rps BK, such as which action beats which other, e.g., rock beats scissors. The language biases for both systems restrict hypotheses to at most five unique variables, at most five body literals per clause, and at most two clauses for the minimal decay task, and at most seven unique variables, at most six body literals per clause, and at most six clauses for the rps task. Again, types and directions are given for predicate arguments. The minimal constraint threshold for Noisy Popper is set to its default t = 0. Normal Popper will be enhanced with an anytime algorithm approach as is done in Experiment 1 above.
Methods For the minimal decay task, 5 positive and 20 negative randomly generated examples are used for training, while 5 positive and 30 negative randomly generated examples are used for testing. This allows us to observe how well each system generalizes when the number of positive and negative training examples is not equal. A timeout of ten minutes and a limit of 150 generated programs is enforced per task. For the rps task, 20 positive and 50 negative examples are randomly generated to train on, and 100 positive and 200 negative are generated for testing. A timeout of ten minutes and a limit of 200 generated programs is enforced per task. The predictive accuracy and learning times are recorded for each task, and each experiment will repeat the task ten times with the means and standard errors plotted. Each experiment will be repeated for training noise levels from 0% to 40% in increments of 10%, with 5% noise additionally being tested. Testing sets will remain noiseless.

Results and Analysis Figures 6.3 and 6.4 show the predictive accuracies and runtimes of Normal and Noisy Popper on the IGGP minimal decay and rps tasks respectively. These are notably difficult problems, and even with no noise, neither system could generate a correct solution for either problem. For minimal decay, this is largely attributed to the exceptionally small number of examples used to train. Both systems typically achieved 100% training accuracy on the noiseless minimal decay data but did not achieve 100% testing accuracy. For rps, the small program limit of 200 was the contributing factor for suboptimal noiseless accuracies, though larger program limits often led to blowup in the Noisy Popper runtime. Despite the low number of programs both systems could learn from, Noisy Popper typically achieved equal or greater predictive accuracy than Normal Popper for all noise levels, though by only a slim margin. A McNemar's [22] test on the Noisy and Normal Popper predictive accuracy additionally confirmed the significance at the p < 0.001 level for both problems, indicating that the performance differences were not random.
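Such a McNemar comparison over paired per-example predictions can be sketched as follows (a standard chi-square formulation with continuity correction, assuming SciPy for the p-value; the exact procedure used for these experiments may differ):

    from scipy.stats import chi2

    def mcnemar(y_true, pred_a, pred_b):
        """McNemar's test on the paired correctness of two classifiers.

        b: examples system A classifies correctly and system B does not;
        c: the reverse.  Returns the test statistic and its p-value.
        """
        b = sum((a == t) and (p != t) for t, a, p in zip(y_true, pred_a, pred_b))
        c = sum((a != t) and (p == t) for t, a, p in zip(y_true, pred_a, pred_b))
        if b + c == 0:
            return 0.0, 1.0
        stat = (abs(b - c) - 1) ** 2 / (b + c)
        return stat, chi2.sf(stat, df=1)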
The results for both tasks suggest that the answer to Q1 is that Noisy Popper can generalize better to noisy data than Normal Popper and as well to noiseless data, though the difference is often marginal, and Normal Popper enhanced with an anytime algorithm can often generalize well on its own. Additionally, they suggest the answer to Q2 is that Noisy Popper is significantly less efficient than Normal Popper even with extremely small program limits, an issue for problems which require learning large programs.

Figure 6.3: IGGP Minimal Decay task predictive accuracy and time when varying percentage of noisy training data. Standard error is depicted by bars.

Figure 6.4: IGGP RPS task predictive accuracy and time when varying percentage of noisy training data. Standard error is depicted by bars.

6.2 Noisy Popper Enhancements

The purpose of this set of experiments is to determine how effective each enhancement of Noisy Popper is in aiding the overall system. In each of the following experiments, the following variants of Noisy Popper will be run against one another: (i) the brute force method described in Section 4.2, which creates no hypothesis constraints (we will refer to this variant as Enumerate), (ii) Noisy Popper in its entirety, (iii) Noisy Popper without minimal constraints (which will be labelled as w/o minimal), (iv) Noisy Popper without sound constraints (which will be labelled as w/o sound), (v) Noisy Popper without size constraints (which will be labelled w/o size).

6.2.1 Experiment 1: East-West Trains

This set of experiments is identical to those in Section 6.1.1, using the same east-west trains problems.

Materials Each version of Noisy Popper will be evaluated using the same hypotheses as in Section 6.1.1. The language biases and BKs remain the same.

Methods The methods are the same as in Section 6.1.1.

Results and Analysis Figure 6.5 shows that all variants of Noisy Popper achieve similar predictive accuracy for all noise levels, which is higher than Enumerate's predictive accuracy. Noisy Popper without minimal constraints performs slightly better with 40% noise, indicating that the minimal constraints used by the other systems typically pruned a highly accurate hypothesis. However, Enumerate did not perform as well despite pruning no hypotheses, indicating that Noisy Popper without minimal constraints was only able to find its best hypothesis due to the remaining constraints it generated from the other enhancements, i.e., without the additional pruning, it would not have run long enough to find the best solution it did. Figure 6.6 shows that the predictive accuracies for all systems were roughly the same for all noise levels, but this may be attributed to h2 being an easier hypothesis to find. Here, at 40% noise, Noisy Popper without minimal constraints and without sound hypothesis constraints both perform slightly better, most likely due to the other Popper variants overfitting the data and pruning too much. The learning times from Figures 6.5 and 6.6 demonstrate that Noisy Popper actually gains some speedup from its minimal and sound constraints.
This is likely because when any hypotheses have generalizations or specializations pruned, those programs are essentially forgotten by the system and generate no further constraints. Notably, they no longer generate size constraints, which is where the greatest bottleneck lies, as evidenced by the fast runtime of Noisy Popper without size constraints. These size constraints create a blowup in the grounding, as all possible program sizes defined in the ASP constraints must be grounded, and the size constraints specify a large range of sizes in the ASP constraints. This can mean thousands of individual programs are required to be grounded by just a single ASP constraint. In practice, the vast majority of constraints generated by Noisy Popper are these size constraints, leading to the inefficiency of the system. This data suggests an answer to Q3 is that none of the enhancements individually provide significant benefits to the predictive accuracy of the system, but in conjunction they can make the system better than brute force enumeration. Minimal constraints, however, provide significant speedup to the system, and the sound hypothesis constraints additionally contribute to this improvement. The size constraints are the biggest bottleneck, however, providing little benefit in return when run for such short durations. It is possible that these size constraints would prevent the system from overfitting when run for extended periods, but their inefficiencies make running the system for too long infeasible. Further improvements to the system and testing would be needed to draw conclusions from this hypothesis.

Figure 6.5: East-West Trains predictive accuracies and learning times of Noisy Popper variants (in seconds) for program h1 when varying percentage of noisy training data. Standard error is depicted by bars.

6.2.2 Experiment 2: List Manipulations

This set of experiments is identical to those in Section 6.1.2.

Materials The materials are identical to those in Section 6.1.2.

Methods The methods are identical to those in Section 6.1.2.

Figure 6.6: East-West Trains predictive accuracies and learning times of Noisy Popper variants (in seconds) for program h2 when varying percentage of noisy training data. Standard error is depicted by bars.

Results and Analysis Table 6.5 depicts the predictive accuracy for Noisy Popper and its variants on each of the list manipulation tasks. Each performs well on most tasks, except for finddup, where again no system can find an accurate solution, sorted, where only Noisy Popper performs well, and threesame, where each variant performs slightly worse than Noisy Popper as noise increases. Enumerate notably performs poorly on several datasets, indicating that it could not find a correct solution in the time allotted while Noisy Popper and its variants could.
The sorted task gives the best indication that the minimal constraints provide the biggest impact on predictive accuracy, with sound hypothesis constraints contributing to a smaller degree and size constraints being ineffective except at higher noise levels, where they prevent Noisy Popper from overfitting. Table 6.6 again demonstrates that, typically, Noisy Popper runs much more efficiently without the size constraints, though not as quickly as Enumerate on average. However, in some tasks such as member and threesame, without noise Noisy Popper without size constraints finds the correct solution faster than any other system. We can also again see that, typically, the minimal and sound constraints provide Noisy Popper with considerable speedup. Figure 6.7 below depicts the predictive test accuracy of the best hypothesis being maintained by each system versus the total number of programs each system has generated and learned from for the evens task with 5% training noise. This plot is exemplary of many of the list manipulation tasks over all noise levels and demonstrates how Noisy Popper with and without size constraints typically requires the fewest programs to learn from to find an optimal solution. Noisy Popper without sound constraints typically requires generating several more hypotheses to accomplish the same, and Noisy Popper without minimal constraints requires yet more programs. If Enumerate finds an optimal solution, it typically requires generating the greatest number of hypotheses. Overall, this data suggests the answer to Q3 is again that the enhancements of Noisy Popper do not individually provide large benefits to the accuracy of the system except for specific datasets. However, minimal constraints and sound constraints do provide speedup to the system and help the system find correct solutions quicker than without them. They additionally reduce the total number of programs which must be generated to find an optimal solution. This indicates that both sets of constraints are beneficial to the overall system. Size constraints can provide deterrence against overfitting, but only in specific cases and at the cost of great inefficiency.

Figure 6.7: Predictive accuracies of maintained best programs for Noisy Popper variants versus the number of programs generated by each system on the evens dataset with 5% training noise. Standard error is depicted by bars.
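The anytime behaviour underlying Figure 6.7, in which each system tracks the best hypothesis seen so far and returns it at the program limit, can be sketched as follows (a minimal sketch; generate and score are hypothetical stand-ins for the solver's hypothesis generator and a scoring function such as training accuracy):

    def anytime_search(generate, score, program_limit=500):
        """Track the best-scoring hypothesis seen and return it at the limit,
        even when no LFF solution has been found."""
        best_h, best_score = None, float("-inf")
        for _ in range(program_limit):
            h = generate()   # next candidate hypothesis, or None if exhausted
            if h is None:
                break
            s = score(h)
            if s > best_score:
                best_h, best_score = h, s
        return best_h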
Name        Noise (%)   Enumerate   Noisy Popper   w/o Minimal   w/o Sound   w/o Size
addhead     0           100±0       100±0          100±0         100±0       100±0
addhead     5           100±0       100±0          100±0         100±0       100±0
addhead     10          100±0       100±0          100±0         100±0       100±0
addhead     20          100±0       100±0          100±0         100±0       100±0
droplast    0           50±0        100±0          100±0         100±0       100±0
droplast    5           50±0        100±0          100±0         100±0       100±0
droplast    10          50±0        100±0          100±0         100±0       100±0
droplast    20          50±0        100±0          100±0         100±0       100±0
evens       0           82±0        100±0          100±0         100±0       100±0
evens       5           82±0        100±0          100±0         100±0       100±0
evens       10          82±0        100±0          100±0         100±0       100±0
evens       20          81±0        99±0           100±0         100±0       100±0
finddup     0           53±0        54±0           53±0          53±0        53±0
finddup     5           52±0        52±0           52±0          52±0        52±0
finddup     10          52±0        52±0           52±0          52±0        52±0
finddup     20          50±0        53±0           50±0          49±0        50±0
last        0           100±0       100±0          100±0         100±0       100±0
last        5           100±0       100±0          100±0         100±0       100±0
last        10          100±0       100±0          100±0         100±0       100±0
last        20          100±0       100±0          100±0         100±0       100±0
len         0           100±0       100±0          100±0         100±0       100±0
len         5           100±0       100±0          100±0         100±0       100±0
len         10          100±0       100±0          100±0         100±0       100±0
len         20          100±0       100±0          100±0         100±0       100±0
member      0           100±0       100±0          100±0         100±0       100±0
member      5           100±0       100±0          100±0         100±0       100±0
member      10          100±0       100±0          100±0         100±0       100±0
member      20          100±0       100±0          100±0         100±0       100±0
sorted      0           76±0        100±0          92±0.1        76±0        100±0
sorted      5           78±0.5      100±0          79±0.2        81±1.5      100±0
sorted      10          76±0        100±0          76±0          83±0.1      93±0.1
sorted      20          75±0        100±0          75±0          83±0.1      91±0.3
threesame   0           100±0       100±0          100±0         100±0       100±0
threesame   5           99±0        100±0          100±0         100±0       100±0
threesame   10          99±0        99±0           98±0          98±0        98±0
threesame   20          99±0        99±0           99±0          99±0        99±0

Table 6.5: Predictive accuracy for Noisy Popper variants on list manipulation problems. Accuracies are rounded to the nearest integer and errors to the nearest tenth. Errors are standard.

Name        Noise (%)   Enumerate   Noisy Popper   w/o Minimal   w/o Sound   w/o Size
addhead     0           2±0.2       2±0.1          3±0.2         3±0.2       1±0.1
addhead     5           4±0.1       79±4           68±2          183±7       80±0.2
addhead     10          4±0.2       74±2           65±5          195±5       91±1
addhead     20          5±1         74±3           81±3          211±7       102±2
droplast    0           5±0         81±39          71±5          205±8       95±3
droplast    5           5±0         135±45         89±3          215±8       112±3
droplast    10          6±0.1       142±33         92±2          217±8       110±2
droplast    20          6±0.2       137±6          92±2          217±8       138±5
evens       0           6±0.4       7±0.5          14±2          17±3        2±0.1
evens       5           5±0.4       38±1           47±2          77±3        14±0.4
evens       10          6±0.3       45±0.5         58±0          102±3       15±0.2
evens       20          5±0.3       40±3           45±5          85±9        13±1
finddup     0           7±1         39±2           51±3          95±3        13±2
finddup     5           9±1         36±1           49±2          96±2        18±3
finddup     10          5±0.6       39±2           40±0.8        83±0.6      10±2
finddup     20          6±1         40±2           45±0.3        90±2        12±1
last        0           3±0.4       15±4           13±0.6        9±2         2±0.4
last        5           4±0.2       19±0.5         52±2          61±3        18±0.4
last        10          4±0.1       21±0.3         51±0.8        60±2        18±0.2
last        20          3±0.2       20±0.3         52±1          62±2        19±0.1
len         0           2±0.3       2±0.1          3±0.2         2±0.2       1±0.1
len         5           3±0.2       59±6           60±4          67±2        15±2
len         10          3±0.2       56±2           62±2          66±3        18±1
len         20          3±0.1       56±1           58±0.8        67±1        16±1
member      0           0.7±0.1     0.7±0          0.6±0.2       1±0.1       0.5±0
member      5           2±0.1       23±3           33±0.5        35±0.2      22±0.1
member      10          2±0.2       23±3           30±0          26±0.2      20±0.1
member      20          2±0.2       23±3           25±1          28±0.5      16±0.5
sorted      0           5±0.3       26±4           43±7          71±2        11±0.6
sorted      5           4±0.8       46±0.1         55±2          69±0.5      15±0.3
sorted      10          5±0.2       43±0.9         50±2          69±0.4      15±0.7
sorted      20          5±0.2       43±0.2         47±0.7        72±0.5      14±0.4
threesame   0           0.5±0.1     0.5±0.1        0.6±0.1       0.6±0.1     0.4±0
threesame   5           5±0.2       2±0.1          82±5          5±0         2±0.1
threesame   10          5±0.1       3±0.3          78±2          2±0.1       0.8±0
threesame   20          5±0.2       4±0.2          76±2          3±0.3       1±0.1

Table 6.6: Learning times for Noisy Popper variants on list manipulation problems. Times are rounded to the nearest second if they are greater than 1 second and to the tenth otherwise. Errors are standard.

6.2.3 Experiment 3: IGGP Problems

This set of experiments is identical to those in Section 6.1.3, using the two IGGP problems minimal decay and rps.

Materials The materials are identical to those in Section 6.1.3.

Methods The methods are identical to those in Section 6.1.3.
Results and Analysis Figures 6.8 and 6.9 both demonstrate that on both IGGP tasks, each Noisy Popper variant obtains roughly equal predictive accuracy for these levels of noise. For both the minimal decay and rps tasks, Noisy Popper without size constraints again performs significantly more efficiently than the other variants, again due to the grounding blowup previously discussed. This data again suggests that the answer to Q3 is that the enhancements of Noisy Popper do not necessarily improve its predictive accuracy over a brute force approach and that, due to the grounding blowup of the size constraints, Noisy Popper performs much less efficiently than this brute force approach.

Figure 6.8: IGGP minimal decay task predictive accuracy and time of Noisy Popper variants (in seconds) when varying percentage of noisy training data. Standard error is depicted by bars.

Figure 6.9: IGGP rps task predictive accuracy and time of Noisy Popper variants (in seconds) when varying percentage of noisy training data. Standard error is depicted by bars.

6.3 Summary

In this chapter, we empirically evaluated Noisy Popper's performance against Normal Popper's, as well as the effects of the individual enhancements of the Noisy Popper system. Noisy Popper was shown to better generalize to noisy datasets than Normal Popper for some tasks, but many experiments suggested that Normal Popper enhanced with an anytime algorithm approach can often generalize very well to these datasets. The minimal constraints and sound hypothesis constraints are effective at pruning the hypothesis space and aiding the system in finding optimal solutions by generating fewer hypotheses than the brute force Enumerate method requires. However, this comes at the cost of the expected inefficiency when compared to Normal Popper and Enumerate. The grounding blowup of the size constraints makes generating several thousand programs to learn from infeasible for the system. The following chapter will discuss work to be done in the future to mitigate these inefficiencies and other additions which may be considered for the system. We will also give a brief summary of this project, its findings and contributions, as well as its limitations.

Chapter 7

Conclusions

This paper has discussed the theoretical background, implementation details, and empirical analysis of the Noisy Popper ILP system, an extension of the Normal Popper system [13] which is capable of generalizing to noisy datasets. The following sections give a critical summary and analysis of the work and address future work which could improve Noisy Popper's capabilities.

7.1 Summary and Evaluation

Handling misclassified training examples is an important task in machine learning, though many ILP systems are not naturally capable of doing so. We have shown that the learning from failures (LFF) approach which Normal Popper takes to prune its hypothesis search space is not naturally conducive to this task.
The relaxed LFF setting introduced in this paper takes a less strict approach to hypothesis search and in doing so demonstrates better theoretical capabilities of finding hypotheses which generalize well to noisy data. We proved several theoretical claims over how comparing hypotheses can identify sets of hypotheses which perform suboptimally in this relaxed setting under two scoring measures: S_ACC, which measures the training accuracy of a hypothesis, and S_MDL, which weighs training accuracy against the size of the hypothesis. In implementation, Noisy Popper adapts the approach taken by Normal Popper, relaxing the ASP hypothesis constraints which prune the hypothesis space and instead typically only generating constraints which are sound in the relaxed setting. In this way, the theoretical claims made over the relaxed LFF system are translated into hypothesis constraints which reduce the hypothesis space during the system's search. Many of these constraints, however, create a large cost in the logical grounding required by the system, which leads to significant runtime inefficiencies. The experimental work demonstrated that Noisy Popper never generalizes worse than Normal Popper for both noisy and non-noisy datasets and is capable of exceeding the predictive accuracy of Normal Popper on several datasets. However, enhancing Normal Popper with an anytime algorithm approach makes the system very capable of generalizing to noisy data on its own. The main deficiency of Noisy Popper is its inefficiency, which future work must address to make the system viable in practice. Despite these shortcomings, Noisy Popper shows promise of being a useful ILP system which accurately and efficiently generalizes to noisy datasets.

7.2 Future Work

Recursive Cases Several of the theoretical claims proved in this paper only discussed the suboptimality of non-recursive programs. Generalizing these claims or constructing new ones which discuss the suboptimality of recursive programs would make these claims more complete. Such claims could also be used in practice to greatly improve the efficiency of hypothesis search.

Scoring Metrics In this paper, only two scoring functions were discussed, S_ACC and S_MDL, and all theoretical claims were derived under these two settings. Additional theory should be explored under both of these scorings and additional scorings, such as those which measure coverage or entropy. Whether these claims manifest as new noise handling systems or as additions to Noisy Popper itself, determining new cases for hypothesis pruning can be used by many ILP systems moving forward and can better improve the searching efficiency of such systems.

Grounding Bottleneck The largest bottleneck for Noisy Popper currently is the need to ground all programs for all applicable sizes for the hypothesis constraints with program sizes. This leads to a blowup in the number of required groundings, which is the expected cause of the significant learning time difference from Normal Popper to Noisy Popper. Changing the ASP encodings for these size constraints to eliminate this blowup should massively improve the overall efficiency of the system and make it viable for much larger problems than those tested in Chapter 6. Insufficient time led to this problem remaining unresolved upon completion of this project.
Subsumption Checking Another large efficiency issue present in the implementation is the naive method in which Noisy Popper compares programs and checks for subsumption. Rather than maintaining all previously seen programs as a list, data structures such as subsumption lattices may be used to reduce the overall number of subsumption checks needed. Given how expensive subsumption (or rather the incomplete subsumption used in the implementation) is to check, this is an improvement that would again boost the overall efficiency of the system and allow it to better handle checking exceptionally large hypotheses.

Parallelization Work to make use of multi-core machines and parallelize the Popper system has been completed and has been shown to greatly improve the learning rate of the system. Combining the noise handling approaches used in Noisy Popper with this parallelized approach should vastly improve the efficiency of the system. Such work, however, is not trivial, and determining new theoretical claims about such an environment is necessary for a sound implementation.

Bibliography

[1] John Ahlgren and Shiu Yin Yuen. Efficient program synthesis using constraint satisfaction in inductive logic programming. J. Mach. Learn. Res., 14(1):3649–3682, 2013.

[2] Alexessander Alves, Rui Camacho, and Eugénio C. Oliveira. Improving numerical reasoning capabilities of inductive logic programming systems. In Christian Lemaître, Carlos A. Reyes García, and Jesús A. González, editors, Advances in Artificial Intelligence - IBERAMIA 2004, 9th Ibero-American Conference on AI, Puebla, Mexico, November 22-26, 2004, Proceedings, volume 3315 of Lecture Notes in Computer Science, pages 195–204. Springer, 2004.

[3] Duangtida Athakravi, Domenico Corapi, Krysia Broda, and Alessandra Russo. Learning through hypothesis refinement using answer set programming. In Gerson Zaverucha, Vítor Santos Costa, and Aline Paes, editors, Inductive Logic Programming - 23rd International Conference, ILP 2013, Rio de Janeiro, Brazil, August 28-30, 2013, Revised Selected Papers, volume 8812 of Lecture Notes in Computer Science, pages 31–46. Springer, 2013.

[4] Hendrik Blockeel and Luc De Raedt. Top-down induction of first-order logical decision trees. Artif. Intell., 101(1-2):285–297, 1998.

[5] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, and Karol Zieba. End to end learning for self-driving cars, 2016.

[6] Ivan Bratko. Refining complete hypotheses in ILP. In Saso Dzeroski and Peter A. Flach, editors, Inductive Logic Programming, 9th International Workshop, ILP-99, Bled, Slovenia, June 24-27, 1999, Proceedings, volume 1634 of Lecture Notes in Computer Science, pages 44–55. Springer, 1999.

[7] Domenico Corapi, Alessandra Russo, and Emil Lupu. Inductive logic programming in answer set programming. In Stephen Muggleton, Alireza Tamaddoni-Nezhad, and Francesca A. Lisi, editors, Inductive Logic Programming - 21st International Conference, ILP 2011, Windsor Great Park, UK, July 31 - August 3, 2011, Revised Selected Papers, volume 7207 of Lecture Notes in Computer Science, pages 91–97. Springer, 2011.

[8] Andrew Cropper. Efficiently learning efficient programs. PhD thesis, Imperial College London, UK, 2017.

[9] Andrew Cropper. Playgol: Learning programs through play.
In Sarit Kraus, editor, Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 6074–6080. ijcai.org, 2019.

[10] Andrew Cropper. Forgetting to learn logic programs. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 3676–3683. AAAI Press, 2020.

[11] Andrew Cropper, Sebastijan Dumancic, and Stephen H. Muggleton. Turning 30: New ideas in inductive logic programming. In Christian Bessiere, editor, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 4833–4839. ijcai.org, 2020.

[12] Andrew Cropper, Richard Evans, and Mark Law. Inductive general game playing. Mach. Learn., 109(7):1393–1434, 2020.

[13] Andrew Cropper and Rolf Morel. Learning programs by learning from failures. Mach. Learn., 110(4):801–856, 2021.

[14] Andrew Cropper, Rolf Morel, and Stephen Muggleton. Learning higher-order logic programs. Mach. Learn., 109(7):1289–1322, 2020.

[15] Andrew Cropper and Sophie Tourret. Logical reduction of metarules. Mach. Learn., 109(7):1323–1369, 2020.

[16] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pages 248–255. IEEE Computer Society, 2009.

[17] Richard Evans and Edward Grefenstette. Learning explanatory rules from noisy data. J. Artif. Intell. Res., 61:1–64, 2018.

[18] Michael R. Genesereth and Yngvi Björnsson. The international general game playing competition. AI Mag., 34(2):107–111, 2013.

[19] Peter Grünwald and Teemu Roos. Minimum description length revisited. International Journal of Mathematics for Industry, 11(01):1930001, Dec 2019.

[20] Tobias Kaminski, Thomas Eiter, and Katsumi Inoue. Exploiting answer set programming with external sources for meta-interpretive learning. Theory Pract. Log. Program., 18(3-4):571–588, 2018.

[21] Deepak Kapur and Paliath Narendran. NP-completeness of the set unification and matching problems. In Jörg H. Siekmann, editor, 8th International Conference on Automated Deduction, Oxford, England, July 27 - August 1, 1986, Proceedings, volume 230 of Lecture Notes in Computer Science, pages 489–495. Springer, 1986.

[22] Peter A. Lachenbruch. McNemar test. Wiley StatsRef: Statistics Reference Online, 2014.

[23] Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. CoRR, abs/1604.00289, 2016.

[24] Mark Law. Inductive learning of answer set programs. PhD thesis, Imperial College London, UK, 2018.

[25] Mark Law, Alessandra Russo, and Krysia Broda. Inductive learning of answer set programs. In Eduardo Fermé and João Leite, editors, Logics in Artificial Intelligence - 14th European Conference, JELIA 2014, Funchal, Madeira, Portugal, September 24-26, 2014, Proceedings, volume 8761 of Lecture Notes in Computer Science, pages 311–325. Springer, 2014.

[26] Mark Law, Alessandra Russo, and Krysia Broda. Inductive learning of answer set programs from noisy examples. CoRR, abs/1808.08441, 2018.

[27] Dianhuan Lin, Eyal Dechter, Kevin Ellis, Joshua B.
Tenenbaum, and Stephen Muggleton. Bias reformulation for one-shot function induction. In Torsten Schaub, Gerhard Friedrich, and Barry O'Sullivan, editors, ECAI 2014 - 21st European Conference on Artificial Intelligence, 18-22 August 2014, Prague, Czech Republic - Including Prestigious Applications of Intelligent Systems (PAIS 2014), volume 263 of Frontiers in Artificial Intelligence and Applications, pages 525–530. IOS Press, 2014.

[28] Donald Michie. Machine learning in the next five years. In Derek H. Sleeman, editor, Proceedings of the Third European Working Session on Learning, EWSL 1988, Turing Institute, Glasgow, UK, October 3-5, 1988, pages 107–122. Pitman Publishing, 1988.

[29] Herman Midelfart. A bounded search space of clausal theories. In Saso Dzeroski and Peter A. Flach, editors, Inductive Logic Programming, 9th International Workshop, ILP-99, Bled, Slovenia, June 24-27, 1999, Proceedings, volume 1634 of Lecture Notes in Computer Science, pages 210–221. Springer, 1999.

[30] Tom M. Mitchell. Machine learning, International Edition. McGraw-Hill Series in Computer Science. McGraw-Hill, 1997.

[31] Tom M. Mitchell, William W. Cohen, Estevam R. Hruschka Jr., Partha P. Talukdar, Bo Yang, Justin Betteridge, Andrew Carlson, Bhavana Dalvi Mishra, Matt Gardner, Bryan Kisiel, Jayant Krishnamurthy, Ni Lao, Kathryn Mazaitis, Thahir Mohamed, Ndapandula Nakashole, Emmanouil A. Platanios, Alan Ritter, Mehdi Samadi, Burr Settles, Richard C. Wang, Derry Wijaya, Abhinav Gupta, Xinlei Chen, Abulhair Saparov, Malcolm Greaves, and Joel Welling. Never-ending learning. Commun. ACM, 61(5):103–115, 2018.

[32] Stephen Muggleton. Inductive logic programming. New Gener. Comput., 8(4):295–318, 1991.

[33] Stephen Muggleton. Inverse entailment and Progol. New Gener. Comput., 13(3&4):245–286, 1995.

[34] Stephen Muggleton, Wang-Zhou Dai, Claude Sammut, Alireza Tamaddoni-Nezhad, Jing Wen, and Zhi-Hua Zhou. Meta-interpretive learning from noisy images. Mach. Learn., 107(7):1097–1118, 2018.

[35] Stephen H. Muggleton, Dianhuan Lin, and Alireza Tamaddoni-Nezhad. Meta-interpretive learning of higher-order dyadic datalog: predicate invention revisited. Mach. Learn., 100(1):49–73, 2015.

[36] Stephen H. Muggleton, Ute Schmid, Christina Zeller, Alireza Tamaddoni-Nezhad, and Tarek R. Besold. Ultra-strong machine learning: comprehensibility of programs learned with ILP. Mach. Learn., 107(7):1119–1140, 2018.

[37] Shan-Hwei Nienhuys-Cheng and Ronald De Wolf. Foundations of inductive logic programming, volume 1228. Springer Science & Business Media, 1997.

[38] Gordon Plotkin. Automatic methods of inductive inference. PhD thesis, The University of Edinburgh, 1972.

[39] J. Ross Quinlan. Induction of decision trees. Mach. Learn., 1(1):81–106, 1986.

[40] J. Ross Quinlan. Learning logical definitions from relations. Mach. Learn., 5:239–266, 1990.

[41] J. Ross Quinlan and Ronald L. Rivest. Inferring decision trees using the minimum description length principle. Inf. Comput., 80(3):227–248, 1989.

[42] Jorma Rissanen. Modeling by shortest data description. Autom., 14(5):465–471, 1978.

[43] Peter Schüller and Mishal Benz. Best-effort inductive logic programming via fine-grained cost-based hypothesis generation - the inspire system at the inductive logic programming competition. Mach. Learn., 107(7):1141–1169, 2018.

[44] David Silver, Aja Huang, Chris J.
Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Vedavyas Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nat., 529(7587):484–489, 2016.

[45] Ashwin Srinivasan. The Aleph manual. Machine Learning at the Computing Laboratory, Oxford University, 2001.

[46] Lisa Torrey, Jude W. Shavlik, Trevor Walker, and Richard Maclin. Relational macros for transfer in reinforcement learning. In Hendrik Blockeel, Jan Ramon, Jude W. Shavlik, and Prasad Tadepalli, editors, Inductive Logic Programming, 17th International Conference, ILP 2007, Corvallis, OR, USA, June 19-21, 2007, Revised Selected Papers, volume 4894 of Lecture Notes in Computer Science, pages 254–268. Springer, 2007.
ai_researcher
3
Argumentative_Experience_Reducing_Confirmation_Bias_on_Controversial_Issues_through_LLM-Generated_Multi-Persona_Debates.pdf
ArgU: A Controllable Factual Argument Generator
Sougata Saha and Rohini Srihari
State University of New York at Buffalo
Department of Computer Science and Engineering
{sougatas, rohini}@buffalo.edu
arXiv:2305.05334v1 [cs.CL] 9 May 2023

Abstract

Effective argumentation is essential towards a purposeful conversation with a satisfactory outcome. For example, persuading someone to reconsider smoking might involve empathetic, well-founded arguments based on facts and expert opinions about its ill effects and the consequences on one's family. However, the automatic generation of high-quality factual arguments can be challenging. Addressing existing controllability issues can make the recent advances in computational models for argument generation a potential solution. In this paper, we introduce ArgU: a neural argument generator capable of producing factual arguments from input facts and real-world concepts that can be explicitly controlled for stance and argument structure using Walton's argument scheme-based control codes. Unfortunately, computational argument generation is a relatively new field and lacks datasets conducive to training. Hence, we have compiled and released an annotated corpus of 69,428 arguments spanning six topics and six argument schemes, making it the largest publicly available corpus for identifying argument schemes; the paper details our annotation and dataset creation framework. We further experiment with an argument generation strategy that establishes an inference strategy by generating an "argument template" before actual argument generation. Our results demonstrate that it is possible to automatically generate diverse arguments exhibiting different inference patterns for the same set of facts by using control codes based on argument schemes and stance.

1 Introduction

Although arguing is an innate human quality, formulating convincing arguments is an art. A successful narrative aiming to persuade someone should be rhetorically appealing, trustworthy, factually correct, and logically consistent, which makes formulating good arguments challenging. Incorporating neural language models, the relatively new field of computational argument generation has shown promise in assisting with argument synthesis. Argument generators like Project Debater (Slonim et al., 2021) have successfully formulated convincing arguments across different domains, including legal, politics, education, etc., and can potentially find new argumentative connections. However, lacking explicit control mechanisms, neural argument generators often render illogical and inappropriate arguments, reducing their trustworthiness and applicability for practical use. Furthermore, training such models requires a considerable amount of quality data, which is hard to collect and annotate. Hence, we propose ArgU, a controllable neural argument generator trained on a curated and quality-controlled corpus of annotated argument texts from abortion, minimum wage, nuclear energy, gun control, the death penalty and school uniform.

Figure 1: Generating stance and argument scheme controlled factual arguments using ArgU.

ArgU strives to enable effective, scalable and appealing argument generation. As depicted in Figure 1, it takes as input worldly knowledge and concepts as variables and coherently combines them to generate an argument that exhibits the desired pro/con stance and inference structure.
Using control codes to regulate argument stance and reasoning, ArgU generates a variety of argument texts for the same set of facts, thus providing diverse response options. Internally, ArgU implements a two-step generation process, where it first generates an "argument template", which depicts the structure of the final argument based on the control codes, and finally yields the argument text by modifying the template to include the augmented input fact variables. We ground our work on prominent theoretical foundations, where the inference structure-based control codes derive from six of Walton's argument schemes: "Means for Goal", "Goal from Means", "From Consequence", "Source Knowledge", "Source Authority", and "Rule or Principle".

Since human annotation is expensive and time-consuming, we devise a multi-phased annotation framework for systematically leveraging human and automatic annotation mechanisms to yield a curated dataset of 69,428 examples for controllable argument synthesis. We release our curated corpus to facilitate further research; an example constitutes an argument text, a set of real-world concepts and knowledge from which the argument derives, and the stance and argument scheme of the text. We further detail and analyze our annotation framework and share variants of topic-independent computational models for automatically annotating factual spans from argument text and identifying the asserted argument schemes. We summarize our contributions below:

• We propose an argument generator that methodically generates factual arguments following a specified stance and argument scheme (Sec. 4).

• We share a quality-controlled annotated dataset conducive to training such generators. To our knowledge, this is the largest available corpus that identifies argument schemes from argument text (Sec. 3.2.4).

• We share our annotation framework and release domain-independent computational models that automatically identify factual spans and argument schemes in argument text from any topic (Sec. 3).

2 Related Work

Argument schemes are typical inference patterns found in arguments. Walton provided an in-depth study of argument schemes (Walton et al., 2008) and defined 60 such schemes prevalent in everyday argument text. Based on Walton's argumentation schemes, Kondo et al. (2021) proposed representing the reasoning structure of arguments using Bayesian networks and defined abstract network fragments termed idioms, which we use here.

Advances in neural methods for language modelling have enabled the field of computational argument generation. Hua and Wang (2018) introduced a factual argument generator that generates opposite-stance arguments by yielding a set of talking-point key phrases, followed by a separate decoder to produce the final argument text. Hua et al. (2019) proposed Candela, a framework for counter-argument generation similar to Hua and Wang (2018), which also controls for style. Schiller et al. (2021) introduced Arg-CTRL: a language model for generating sentence-level arguments using topic, stance, and aspect-based control codes (Keskar et al., 2019). Khatib et al. (2021) constructed argumentation-related knowledge graphs and experimented with using them to control argument generation. Alshomary et al. (2021) explored a novel pipelined approach to generating counter-arguments that first identifies a weak premise and then attacks it with a neurally generated counter-argument.
Hypothesizing that the impact of an argument is strongly affected by prior beliefs and morals, Alshomary et al. (2022) studied the feasibility of the automatic generation of morally framed argument text and proposed an argument generator that follows the moral foundation theory. Syed et al. (2021) introduced the task of generating informative conclusions from arguments. They compiled argument text and conclusion pairs and experimented with extractive and abstractive models for conclusion generation using control codes. Chakrabarty et al. (2021) experimented with argument text re-framing for positive effects. They created a suitable corpus and trained a controllable generator with a post-decoding entailment component for re-framing polarizing and fearful arguments such that it can reduce the fear quotient. Our work best aligns with Arg-CTRL and Candela, where we use control codes to regulate argument generation and implement a multi-step decoding pipeline to generate the final argument. However, unlike Arg-CTRL, we control for the argument scheme, and unlike Candela, our multi-step decoding utilizes an argument template as an intermediate step.

Most argumentation datasets identify argumentative components (claims, premises, etc.), making them better suited for argument-mining tasks (Stab and Gurevych, 2014; Peldszus, 2015; Ghosh et al., 2016; Hidey et al., 2017; Chakrabarty et al., 2019). Further, existing argument scheme annotated corpora are either very restricted in domain and size (Reed et al., 2008; Feng and Hirst, 2011; Green, 2015; Musi et al., 2016; Visser et al., 2022; Jo et al., 2021) or only provide guidelines and tools for annotations (Visser et al., 2018; Lawrence et al., 2019). Hence, we use the BASN dataset (Kondo et al., 2021), which contains sizeable examples spanning six topics and identifies argument schemes.

3 Argument Generation Corpus

Training a factual argument generator controlled for stance and argument scheme requires examples that identify such features from the text: such a corpus is lacking. Hence, we introduce a two-phased annotation framework that yields a corpus of 69,428 examples which (i) identifies argument schemes and factual spans in argument text and (ii) grounds the spans to a knowledge base (KB). In the first phase, we employ human annotators to identify factual spans from a subset of an existing dataset of 2,990 arguments which already identifies argument schemes. We further train computational models to annotate the remaining corpus for factual spans and perform extensive quality checks. In the second phase, we train models from the resultant Phase 1 dataset to automatically annotate a larger parallel corpus for both argument scheme and factual spans, yielding an annotated corpus¹ of 69,428 arguments for training argument generators.

3.1 Phase 1 (P1): Initial Corpus Creation

Kondo et al. (2021) introduced the BASN dataset comprising 2,990 pairs of arguments and abstract network fragments derived from six of Walton's argumentation schemes ("Means for Goal", "Goal from Means", "From Consequence", "Source Knowledge", "Source Authority", and "Rule or Principle"), plus an "Others" category. They utilized a knowledge base (KB) of 205 facts (termed variables) spanning the topics of abortion, minimum wage, nuclear energy, gun control, the death penalty and school uniform to define the idioms.
Figure 6 (Appendix A) illustrates an example from the BASN dataset where variables from the KB formulate a pro-stance argument following the "Means for Goal" argument scheme. We perform two annotation tasks in P1: (i) Span Detection: annotate arguments by identifying (highlighting) non-overlapping factual spans in argument text. (ii) Span Grounding: ground the identified factual spans to the available KB variables, or to "Others" if the span is unrelated to any available variable.

¹ All dataset and code to be released post acceptance.

We annotate 1,153 randomly sampled examples spanning all six topics and train a model for automatically annotating the remaining examples. We further perform human evaluations to determine the correctness of the automatic annotations.

3.1.1 Human Expert Annotation

Using Doccano (Nakayama et al., 2018), we annotated 1,153 examples from the BASN corpus for both the tasks of span detection and grounding, where each sample comprised an argument and a minimum of 2 to a maximum of 5 fact variables from the KB. Figure 5 (Appendix A) contains a screenshot from our Doccano annotation task. We employed two expert annotators with a background in computational linguistics and computer science for the annotation task. To be efficient with resources, each annotator independently annotated non-overlapping examples. Further, to ensure consistency across annotations, we computed inter-annotator agreement over 66 samples, which resulted in a Cohen's Kappa score of 0.79, indicating substantially high agreement.

3.1.2 Automatic Annotation: ArgSpan

We train ArgSpan, a Roberta-based tagger (Liu et al., 2019), on the annotated examples for automatically annotating the rest of the BASN dataset for both tasks. Figure 2 illustrates ArgSpan's architecture. ArgSpan inputs concatenated argument and fact variables and encodes them using a Roberta-based encoder. It reduces the hidden representation for each fact variable by passing the beginning-of-string (BOS) token through a fully connected neural network layer. Finally, it uses a biaffine layer to capture the interaction between the argument text and each variable. The model is trained end-to-end by minimizing the cross-entropy loss between the predicted logit for each argument token and the actual BIO scheme-encoded target label. Appendix A.1 contains further training details.

Figure 2: ArgSpan Architecture.
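The biaffine interaction described above can be sketched in PyTorch roughly as follows (a minimal sketch under assumed dimensions; the class name, projection size, tanh activations, and output shapes are illustrative assumptions, not the released implementation):

    import torch
    import torch.nn as nn

    class BiaffineSpanGrounder(nn.Module):
        """Scores every (argument token, fact variable) pair over BIO tags."""
        def __init__(self, hidden=768, proj=256, n_tags=3):
            super().__init__()
            self.tok_proj = nn.Linear(hidden, proj)
            self.var_proj = nn.Linear(hidden, proj)  # applied to each variable's BOS vector
            # one bilinear interaction matrix per BIO tag
            self.bilinear = nn.Parameter(torch.randn(n_tags, proj, proj) * 0.02)

        def forward(self, tok_states, var_bos):
            # tok_states: (B, T, hidden) encoded argument tokens
            # var_bos:    (B, K, hidden) one BOS representation per fact variable
            t = torch.tanh(self.tok_proj(tok_states))   # (B, T, P)
            v = torch.tanh(self.var_proj(var_bos))      # (B, K, P)
            # biaffine interaction -> per-token, per-variable, per-tag logits
            return torch.einsum("btp,npq,bkq->btkn", t, self.bilinear, v)

    scores = BiaffineSpanGrounder()(torch.randn(2, 20, 768), torch.randn(2, 4, 768))
    print(scores.shape)  # torch.Size([2, 20, 4, 3])

Cross-entropy against the BIO-encoded targets, as described above, would then be applied to these logits.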
(2021) used crowd-sourcing to create the BASN dataset, where crowd workers formu- lated argument text from a knowledge base com- prising a limited number of premise-conclusion pairs (fact variables). Although such an approach resulted in a considerable number of arguments, using approximately 34 fact variables per topic, it lacks variety. Training an argument generator on such a corpus would limit its generalizability and use. Hence, we expand the P1 dataset with a parallel corpus (PC) of 66,180 examples from the Aspect-Controlled Reddit and CommonCrawl corpus by Schiller et al. (2021), and 733 combined examples from the Sentential Argument Mining, Arguments to Key Points and the debate portal- based Webis datasets (Stab et al., 2018; Friedman et al., 2021; Bar-Haim et al., 2020; Ajjour et al., 2019). Since the PC examples do not identify fac- tual spans and argument schemes, we use the fully annotated P1 dataset to train ArgSpanScheme: a Roberta-based model that identifies factual spans and argumentation schemes from argument text. We automatically annotate the PC using ArgSpan- Scheme and combine them with the P1 dataset, to yield the P2 dataset. 3.2.1 ArgSpanScheme Architecture Illustrated in Figure 3, we experiment with two vari- ants of ArgSpanScheme to jointly extract factual spans and predict argument schemes from argu- ment text. Both architectures use a Roberta-based encoder to encode an input argument text and differ in the final prediction layers, as detailed below. Parallel Architecture Here we use two indepen- dent classification heads: (i) A span detection head which uses a linear layer to extract factual spans by classifying each encoded argument token as be- longing to one of the three BIO tags. (ii) A scheme detection head which uses a linear layer to pre- dict argument schemes by performing a multi-label (six labels including “Others”) classification on the mean pooled encoded argument tokens. Pipelined Architecture Argument schemes repre- sent structures of inference and are invariant to the constituent facts. For example, although both ar- guments A: “Increase in the minimum wage is not favourable as it can increase unemployment”, and B: “Increase in gun laws are favourable as it re- duces gun violence”, are from different topics, they follow a similar structure “X is/are (not) favourable as it Y”, exhibiting “From Consequences” argu- ment scheme. As depicted in Figure 3, we model this by performing selective multi-headed attention. We mask the factual spans predicted by the span de- tection head and apply two layers of multi-headed self-attention on the remaining tokens. Finally, we pass the BOS token representation through a linear layer to predict the argument schemes. Appendix A.2 contains further training details. Figure 3: ArgSpanScheme Architectures. 3.2.2 Modelling Results and Evaluation For both tasks of span and scheme detection, we compare the F1 score of the parallel and pipelined architectures across different data splits. We per- form a 5-fold Cross Validation (CV) by randomly splitting the resultant dataset from P1 into 93% training and 7% validation split. We further assess the generalizability of ArgSpanScheme by training and validating on examples from non-overlapping topics. 
3.2.2 Modelling Results and Evaluation
For both tasks of span and scheme detection, we compare the F1 score of the parallel and pipelined architectures across different data splits. We perform a 5-fold Cross Validation (CV) by randomly splitting the resultant dataset from P1 into a 93% training and 7% validation split. We further assess the generalizability of ArgSpanScheme by training and validating on examples from non-overlapping topics. As illustrated in Figure 8 (Appendix A), we set up five data splits (ids 1 to 5) comprising three combination ratios of training-validation topics (5:1, 4:2, and 2:4), which increases the difficulty by reducing the number of training topics.

Evaluating Span Prediction: For span detection we compute the F1 score at three levels of overlap: (i) Partial Overlap: A span level metric where a predicted span is true positive if at least 50% of its tokens overlap with the actual span. (ii) Full Overlap: A span level metric where a predicted span is true positive if all of its tokens overlap with the actual span. (iii) Overall: A token level metric which compares the predicted and actual token BIO labels. Table 1 shares the CV and combination ratio aggregated results for span detection. We observe similar performance for both ArgSpanScheme versions across all three levels of overlap.

Table 1: ArgSpanScheme span and scheme prediction results for Parallel / Pipelined versions (the best performing model for each data split and task was originally highlighted in bold).

Span                              CV         5:1        4:2        2:4
  Partial                         0.92/0.91  0.77/0.78  0.84/0.84  0.82/0.80
  Full                            0.86/0.85  0.70/0.70  0.76/0.77  0.74/0.74
  Overall                         0.89/0.89  0.81/0.81  0.85/0.85  0.82/0.80

Scheme                            CV         5:1        4:2        2:4
  From Consequence                0.94/0.93  0.65/0.65  0.60/0.71  0.63/0.73
  From Source Authority           0.92/0.91  0.68/0.85  0.67/0.70  0.69/0.67
  From Source Knowledge           0.88/0.90  0.48/0.48  0.49/0.49  0.50/0.49
  Goal From Means/Means for Goals 0.96/0.95  0.48/0.56  0.45/0.47  0.47/0.46
  Rule or Principle               0.97/0.96  0.64/0.66  0.49/0.55  0.73/0.77
  Other                           0.88/0.86  0.46/0.46  0.49/0.49  0.46/0.46
  Overall                         0.95/0.94  0.68/0.69  0.75/0.82  0.70/0.77

Evaluating Scheme Prediction: We compare scheme-wise and overall F1 scores and share the results in Table 1. We observe that the parallel architecture slightly outperforms the pipelined version in CV, whereas the pipelined version almost always performs better for the non-overlapping splits. The results indicate that for scheme detection, incorporating a generalizable architecture by emphasizing the argument structure rather than the factual spans does lead to better results on unseen topics.

3.2.3 Automatic Annotation & Human Eval.
Based on the analysis of automatic evaluation results, we train a final pipelined version of ArgSpanScheme on the P1 dataset and perform inference on the PC to automatically annotate it for factual spans and argument schemes. We randomly sample 200 annotations and perform a human evaluation using one evaluator to ascertain the annotation quality.
Evaluating Span Prediction: We present the human evaluator with an argument text along with the model predicted spans and ask them to rate each example using two custom metrics: (i) Span Precision: On a continuous scale of 1 (low) to 5 (high), how sensible are the identified spans? Spans which are unnecessarily long or abruptly short are penalized. This metric evaluates whether the identified spans adequately convey meaningful information. (ii) Span Recall: On a continuous scale of 1 (low) to 5 (high), how well does the model perform in identifying all factual spans? Examples which fail to identify spans conveying real-world concepts and factual knowledge are penalized. We observe an average score of 4.1 (median 4.7) for Span Precision and 3.9 (median 4.4) for Span Recall, indicating the reliability of the automatic annotations.
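The three span-overlap criteria defined in Section 3.2.2 can be made concrete with a small scoring routine. This is a sketch under assumptions the paper does not spell out: spans are represented as (start, end) token offsets with an exclusive end, and recall is approximated by counting matched predictions against the number of gold spans.

    def span_f1(pred_spans, gold_spans, level="partial"):
        """Span-level F1 for the Partial (>= 50% token overlap) and
        Full (all tokens overlap) criteria."""
        def overlap(p, g):
            inter = max(0, min(p[1], g[1]) - max(p[0], g[0]))
            if level == "full":
                return inter == p[1] - p[0]      # every predicted token overlaps
            return inter >= 0.5 * (p[1] - p[0])  # at least half of them do

        tp = sum(any(overlap(p, g) for g in gold_spans) for p in pred_spans)
        precision = tp / len(pred_spans) if pred_spans else 0.0
        recall = tp / len(gold_spans) if gold_spans else 0.0
        denom = precision + recall
        return 2 * precision * recall / denom if denom else 0.0

The Overall metric would instead compare predicted and gold BIO labels token by token, which reduces to a standard classification F1 over the label sequence.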
Evaluating Scheme Prediction: Since identifying argument schemes is a much more difficult task, we first measure the evaluator's competency by presenting 30 random arguments from the BASN dataset and asking them to label each argument text with the most likely argument scheme. We compared the evaluator-assigned labels with the golden labels and found them to be matching in 53.3% of cases, with most matches belonging to the "from consequences", "rule or principle", and "means for goal" schemes. Although the labels largely agree, the fair amount of disagreement testifies to the task difficulty. Further, Table 5 (Appendix A) lists a few examples where we believe the evaluator labels are more accurate than the actual ones. Post-assessment, we asked the evaluator to evaluate the predicted argument schemes of the previously sampled 200 examples with a binary flag, where 1 signifies agreement and 0 signifies disagreement, and observe a fair agreement rate of 73%.

3.2.4 Dataset Post-processing
The PC initially contains 1,272,548 examples, which we automatically annotate for span and argument scheme using ArgSpanScheme. We persist samples where an argument scheme's predicted probability is at least 20% of the scheme's average probability and discard examples with the scheme predicted as "Others".
To make the PC consistent with the P1 data, we implement the following steps to normalize and ground the ArgSpanScheme-identified factual spans to the existing KB comprising fact variables from BASN, or expand the KB with new knowledge wherever applicable. (i) Direct Mapping: Using sentence transformer embedding-based cosine similarity (Reimers and Gurevych, 2019) and a threshold of 0.85, we associate factual spans from the annotated PC with their most similar fact variable from the KB. (ii) Indirect Mapping: We use the sentence transformer-based community detection clustering algorithm to cluster similar factual spans from the annotated PC. For directly unmapped spans, we associate the KB fact variable of the nearest neighbour in its cluster. Figure 9 (Appendix A) further illustrates each step in detail.
We apply a series of filtering steps to ensure the quality of the final corpus. We only keep examples containing a maximum of 30% unnormalized factual spans and add those facts to the KB. Next, we discard instances containing more than 150 words in the argument text and persist examples containing 1-4 fact variables, with each variable present 2-4 times. Finally, to ensure argumentativeness, we parse the argument text using the Dialo-AP argument parser (Saha et al., 2022) and keep examples containing at least one claim. We combine the filtered PC with the P1 dataset to yield 69,428 examples, which we use for argument generation.
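The Direct Mapping step above is straightforward to sketch with the sentence-transformers library; the 0.85 threshold is from the paper, while the specific encoder checkpoint ("all-MiniLM-L6-v2") is an assumption, as no model name is given here.

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

    def direct_map(spans, kb_variables, threshold=0.85):
        """Attach each factual span to its most similar KB fact variable
        when cosine similarity clears the threshold; others stay unmapped."""
        span_emb = model.encode(spans, convert_to_tensor=True)
        kb_emb = model.encode(kb_variables, convert_to_tensor=True)
        sims = util.cos_sim(span_emb, kb_emb)  # (n_spans, n_vars)
        mapping = {}
        for i, span in enumerate(spans):
            j = int(sims[i].argmax())
            if float(sims[i][j]) >= threshold:
                mapping[span] = kb_variables[j]
        return mapping

For the Indirect Mapping step, the same library's util.community_detection function clusters the leftover spans, after which each unmapped span would inherit the KB variable of its nearest mapped cluster neighbour.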
4 Controllable Argument Generation
Arguments based on similar facts but structured differently might lead to dissimilar consequences by exerting different perlocutionary effects. For example, consider argument A: "Reproductive rights advocates say enabling access to abortion is important towards reproductive rights", which exhibits the "From Source Authority" argument scheme, and B: "Access to abortion is important towards reproductive rights", which expresses "From Consequence". Although both arguments share the same view regarding the role of abortion in reproductive rights, argument A, backed by reproductive rights advocates who are experts, might lead to a favourable outcome in a situation that demands authority. To assist the formulation of arguments exhibiting heterogeneous viewpoints and reasoning, we experiment with BART-based (Lewis et al., 2020) neural argument generators capable of generating factual argument text with distinct stances and argument schemes using control codes.

4.1 Model Architecture
4.1.1 Encoder
The model inputs a concatenated representation I₁ of the argument topic and the required KB fact variables. We prefix each variable with a token <VAR_X> where X ∈ [0, 3] is an incremental id enforcing a random ordering over the variables. The representation I₁ is passed through a BART encoder E to yield a hidden representation H.
4.1.2 Decoder
A BART-based decoder inputs H along with a set of control codes to generate the final argument A. We experiment with two types of decoding:
Single Step Decoding, ArgU-Mono: As depicted in Figure 4, following the standard decoding strategy of an encoder-decoder architecture, the decoder D₁ inputs H along with three control codes (DI₁₁) comprising the desired stance, argument scheme, and the argument text BOS token '<argument>', and learns the distribution P(A | I₁, DI₁₁).
Dual Step Decoding, ArgU-Dual: An argument generally exhibits structured reasoning by coherently combining variables using appropriate connectives and clauses. For example, the variables A: "introduce death penalty" and B: "reduce crime" can be combined as "A has shown evidence in B", resulting in a pro-death penalty argument "Introducing the death penalty has shown evidence in reducing crime". Following the same template of "A has shown evidence in B", the variables A: "enforce gun laws" and B: "reduce gun violence" can be combined to form an argument "Enforcing gun laws has shown evidence in reduction of gun violence". The ArgU-Dual architecture implements "argument templates" to model this property, where distinct argument texts exhibit similar structure and reasoning over variables.
To condition the argument generation on its template, we train decoder D₂ to create an argument template T before generating the actual argument A. As depicted in Figure 4, D₂ inputs H and a set of three control codes (DI₂₁) comprising the desired stance, argument scheme, and the template BOS token '<pattern>', to learn the probability distribution P(T | I₁, DI₂₁). Next, we suffix T with the argument BOS token '<argument>', and pass it through D₂ to generate the final argument text, learning the distribution P(A | T, DI₂₂).
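The input and control-code layout of Section 4.1 can be sketched as plain string assembly. Exact separators and the concatenation order are assumptions; the <VAR_X>, stance, scheme, <pattern> and <argument> tokens are the ones listed in Table 6 (Appendix A).

    def build_example(topic, variables, stance, scheme, dual=False):
        """Sketch of the ArgU input layout: encoder input I1 concatenates the
        topic with <VAR_X>-prefixed variables; the decoder is primed with
        stance and scheme control codes plus the relevant BOS token."""
        # Scheme names follow Table 6, e.g. "from_consequence".
        var_str = " ".join(f"<VAR_{i}> {v}" for i, v in enumerate(variables))
        encoder_input = f"{topic} {var_str}"
        stance_code = "<pro>" if stance == "pro" else "<con>"
        bos = "<pattern>" if dual else "<argument>"  # ArgU-Dual decodes a template first
        decoder_prefix = f"{stance_code} <{scheme}> {bos}"
        return encoder_input, decoder_prefix

    enc, dec = build_example("death penalty",
                             ["introduce death penalty", "reduce crime"],
                             stance="pro", scheme="from_consequence")

In the dual-step case, the generated template T would then be suffixed with '<argument>' and fed back through the same decoder to produce the final argument, mirroring P(A | T, DI₂₂).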
4.2 Training, Experiments and Results
Figure 4 illustrates our encoder-decoder based model architecture, which we discuss below. We use the resultant P2 dataset for our experiments and create random train-test sets of 67,728 and 1,700 examples.

Figure 4: ArgU-Mono and Dual End-to-end Architectures.

To analyze the effect of each type of control code, we also perform an ablation analysis and train two model variants: ArgU-Stance and ArgU-Scheme. Both implementations follow the same encoding and decoding steps as ArgU-Mono, with the only difference being the absence of scheme- or stance-based control codes in the respective architectures. Training details are in Appendix A.3.

4.2.1 Automatic Evaluation Results
Apart from comparing standard metrics like corpus BLEU (Papineni et al., 2002) and Rouge-L (Lin, 2004), we define the following metrics to evaluate each model. (i) Fact Faithfulness (Fact): This evaluates fact faithfulness by measuring the similarity between the input variables and the generated argument. We use the sentence transformer's semantic textual similarity to compute the average cosine similarity between the embeddings of the input variables and the model-generated argument, where a higher score correlates with better utilization of the fact variables. (ii) Entailment (Entail) & Contradiction (Contra): This evaluates the relatedness between the original and generated argument. We use AllenNLP's (Gardner et al., 2018) Roberta-based textual entailment model pre-trained on the SNLI dataset (Bowman et al., 2015) to determine whether a generated argument entails (higher better) or contradicts (lower better) the original argument with at least 0.8 probability.

Table 2: Argument generation automatic evaluation results (best model per metric originally highlighted).

Model    BLEU   RougeL  Fact   Entail  Contra
Mono     0.399  0.379   0.150  0.641   0.140
Dual     0.406  0.381   0.158  0.641   0.144
Stance   0.400  0.375   0.151  0.641   0.133
Scheme   0.360  0.377   0.151  0.642   0.191

We share our results in Table 2 and observe that, compared to the others, ArgU-Dual mostly yields better BLEU and RougeL scores and attains the best entailment results, indicating a better correlation with the original argument. On the contrary, using only scheme- or only stance-based control codes generally performs worse. We also observe that ArgU-Mono performs almost at par with ArgU-Stance across all metrics, whereas ArgU-Scheme contradicts the original argument the most. The results not only indicate the benefit of using both stance and scheme-based control codes but also indicate the superiority of the Dual architecture compared to Mono.
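The Entail and Contra rates can be sketched with an off-the-shelf NLI classifier. The model below ("roberta-large-mnli", an MNLI model) is a stand-in assumption for the AllenNLP SNLI model the paper actually uses; the 0.8 probability cut-off is from the paper.

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "roberta-large-mnli"  # stand-in for the AllenNLP SNLI model
    tok = AutoTokenizer.from_pretrained(name)
    nli = AutoModelForSequenceClassification.from_pretrained(name)

    def entail_contra_rates(originals, generated, p=0.8):
        """Fraction of generated arguments that entail / contradict their
        original with probability >= p (premise = generated, hypothesis = original)."""
        entail = contra = 0
        for orig, gen in zip(originals, generated):
            inputs = tok(gen, orig, return_tensors="pt", truncation=True)
            probs = torch.softmax(nli(**inputs).logits, dim=-1)[0]
            # roberta-large-mnli label order: 0 contradiction, 1 neutral, 2 entailment
            contra += bool(probs[0] >= p)
            entail += bool(probs[2] >= p)
        n = len(originals)
        return entail / n, contra / n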
4.2.2 Human Evaluation Results
We perform a human evaluation study using the evaluators from Section 3.1.1. We created a worksheet with 50 random examples from the test set, where an example constitutes the argument topic, input KB variables, desired stance and argument scheme, the original argument from the dataset, and the generated argument text from each of the four models. The evaluators were asked to rate each generated argument text on the following five metrics. (i) Fluency: On a scale of 1 (low) to 5 (high), this scores the fluency and grammatical correctness of an argument. (ii) Stance Appropriateness (Stance): On a binary scale, this determines if the stance exhibited by a generated argument aligns with the desired stance passed as control code. (iii) Scheme Appropriateness (Scheme): On a binary scale, this determines if the argument scheme exhibited by a generated argument aligns with the desired scheme passed as control code. (iv) Fact Faithfulness (Fact): On a scale of 1 (low) to 5 (high), this determines how well the generated argument incorporates the input variables. Ignoring variables or including additional facts (hallucination) are penalized. (v) Logical Coherence (Logic): A subjective metric that rates the overall sensibleness of the logic portrayed by the generated argument text on a scale of 1 (low) to 5 (high).

Table 4: Argument generation human evaluation results (best model per metric originally highlighted in bold; an asterisk marks models at least 5% below the best).

Metric (agreement)   Mono   Dual   Stance  Scheme
Fluency (K=0.61)     4.99   4.86   4.95    4.98
Stance (K=0.87)      0.78*  0.80*  0.84    0.65*
Scheme (K=0.90)      0.83   0.83   0.79*   0.79*
Fact (K=0.68)        3.89   3.88   3.85    3.81
Logic (K=0.71)       4.01   4.06   3.98*   4.17

We measure inter-annotator agreement by computing Cohen's kappa (K) and observe substantial to high agreement across all metrics. Table 4 shares the averaged ratings from both evaluators. For each metric, we highlight the best performing model(s) and mark with an asterisk the model(s) where the difference from the best is at least 5%. The fluency and fact metric results indicate that all models are fluent in generating arguments while incorporating the input variables, with ArgU-Mono performing the best. Trained with only stance-based control codes, ArgU-Stance yields the best results for stance appropriateness, while trained with only scheme-based control codes, ArgU-Scheme rates the lowest. In contrast, ArgU-Scheme attains the highest rating for generating logically coherent arguments, whereas ArgU-Stance achieves the lowest rating, indicating the usefulness of using stance and scheme-based control codes for argument text generation. The ArgU-Dual and Mono variants rate similarly for both metrics, and rate high for scheme appropriateness, indicating that using control codes, the stance and scheme of an argument can be successfully controlled in tandem.
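Agreement statistics like the K values reported in Table 4 are available off the shelf; the judgements below are toy data for illustration only.

    from sklearn.metrics import cohen_kappa_score

    # Toy example: binary stance-appropriateness judgements from two evaluators
    rater_a = [1, 1, 0, 1, 1, 0, 1, 1]
    rater_b = [1, 1, 0, 1, 0, 0, 1, 1]
    print(cohen_kappa_score(rater_a, rater_b))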
4.3 Discussion
Table 3 contains arguments generated by ArgU-Dual.

Table 3: ArgU Generated Samples (ArgU-Dual).
Topic: Death Penalty; variables: <VAR_0> human rights around the world, <VAR_1> mandatory death sentence.
1. From Source Authority, Pro. Template: "<VAR_0> supporters of the bill say it is a step toward <VAR_1>". Argument: "human rights supporters of the bills say it is a step towards a mandatory death sentence".
2. From Source Authority, Con. Template: "<VAR_0> advocates have long argued that <VAR_1>". Argument: "human rights advocates have long advocated that mandatory death sentences should be abolished".
3. Rule or Principle, Pro. Template: "<VAR_1> is not a violation of <VAR_0>". Argument: "mandatory death sentence is not a violation of human rights".
4. Rule or Principle, Con. Template: "<VAR_1> is a violation of <VAR_0>". Argument: "mandatory death sentence is a violation to international human rights law".
Topic: Abortion; variables: <VAR_0> reproductive health and rights advocates, <VAR_1> stop people from having abortions.
5. From Consequence, Pro. Template: "<VAR_1> is an important step toward <VAR_0>". Argument: "banning abortion is an important stepping toward reproductive rights".
6. From Consequence, Con. Template: "<VAR_1> does nothing to <VAR_0>". Argument: "banning abortion does nothing to advance women s reproductive rights".
7. From Source Knowledge, Pro. Template: "<VAR_1> has been proven to be effective in <VAR_0>". Argument: "restricting access to abortion has been proved to be ineffective in protecting women s reproductive rights".
8. From Source Knowledge, Con. Template: "<VAR_1> is not the answer to <VAR_0>". Argument: "banning abortion is not the solution to women s reproductive rights".
Comments: examples 1-4 incorporate the input control codes, variables and generated argument template; in examples 5-6 the pro and con arguments are swapped; in examples 7-8 the generated argument template was modified during argument generation.

Examples 1 and 2 show the model's capability of generating authoritative argument text with the correct stance by referring to human rights advocates and supporters, thus exhibiting the "From Source Authority" argument scheme. Similarly, examples 3 and 4 denote the model's capability of generating appropriate argument text following the "Rule or Principle" argument scheme for both stances. Examples 5 and 6 depict a scenario where the generator demonstrates shallow understanding and inanely combines the input variables, yielding contrasting stance arguments. Examples 7 and 8 highlight cases where the argument decoder modifies the generated argument template, which in example 7 changes the meaning of the argument.

5 Conclusion
Here we propose ArgU: a neural factual argument generator that systematically generates arguments following a specified stance and argument scheme. We devise a multi-step annotation framework to yield two gold and silver standard annotated datasets that we further use to train multiple ArgU variants. Implementing automatic and human evaluation, we thoroughly analyze ArgU's generation capabilities. Our findings indicate ArgU's applicability for aiding users to formulate situation-specific arguments by controlling the argument stance and scheme using control codes.

Limitations
As depicted in Table 3, there are scenarios where ArgU demonstrates a lack of understanding and instead paraphrases the input variables to generate an incorrect response. It seems likely that the model associates negation with Con. However, in examples 5 and 6, the model does not factor in the word "stop" in Variable 1, leading to arguments that contradict the intended stance. Further, in examples 7 and 8, the argument decoder seems to modify the generated template, which changes the overall meaning of example 7. Such scenarios might reduce trust in the model, hurting its practical use. All experiments involving ArgSpan, ArgSpanScheme, and ArgU only pertain to abortion, minimum wage, nuclear energy, gun control, the death penalty and school uniforms. The model performance on any other topics is unknown. Although we test ArgSpanScheme on out-of-domain test sets, it is still confined to the six topics. Since ArgU is trained only on argument sentences with less than 150 tokens, it is more geared towards generating shorter arguments of less than 50 tokens. We further do not benchmark ArgU's inference time for practical use.

Ethics Statement
We acknowledge that all experiments were performed ethically and purely from an academic point of view. Although this research revolves around arguments from six sensitive topics, the argument generators were not explicitly trained to be discriminatory, exhibit bias, or hurt anyone's sentiments. Further, any generated text does not reflect the stance of the authors. The human evaluators were appointed and compensated as per the legal norms.

References
Yamen Ajjour, Milad Alshomary, Henning Wachsmuth, and Benno Stein. 2019. Modeling frames in argumentation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2922–2932, Hong Kong, China. Association for Computational Linguistics.
Milad Alshomary, Roxanne El Baff, Timon Gurcke, and Henning Wachsmuth. 2022. The moral debater: A study on the computational generation of morally framed arguments. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8782–8797, Dublin, Ireland. Association for Computational Linguistics.
Milad Alshomary, Shahbaz Syed, Arkajit Dhar, Martin Potthast, and Henning Wachsmuth. 2021. Counter-argument generation by attacking weak premises. In Findings.
Roy Bar-Haim, Lilach Eden, Roni Friedman, Yoav Kantor, Dan Lahav, and Noam Slonim. 2020. From arguments to key points: Towards automatic argument summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4029–4039, Online. Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Tuhin Chakrabarty, Christopher Hidey, and Smaranda Muresan. 2021. ENTRUST: Argument reframing with language models and entailment. ArXiv, abs/2103.06758.
Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy McKeown, and Alyssa Hwang. 2019. AMPERSAND: Argument mining for PERSuAsive oNline discussions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2933–2943, Hong Kong, China. Association for Computational Linguistics.
Vanessa Wei Feng and Graeme Hirst. 2011. Classifying arguments by scheme. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 987–996, Portland, Oregon, USA. Association for Computational Linguistics.
Roni Friedman, Lena Dankin, Yufang Hou, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2021. Overview of the 2021 key point analysis shared task. In Proceedings of the 8th Workshop on Argument Mining, pages 154–164, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6, Melbourne, Australia. Association for Computational Linguistics.
Debanjan Ghosh, Aquila Khanam, Yubo Han, and Smaranda Muresan. 2016. Coarse-grained argumentation features for scoring persuasive essays. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 549–554, Berlin, Germany. Association for Computational Linguistics.
Nancy Green. 2015. Identifying argumentation schemes in genetics research articles. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 12–21.
Christopher Hidey, Elena Musi, Alyssa Hwang, Smaranda Muresan, and Kathy McKeown. 2017. Analyzing the semantic types of claims and premises in an online persuasive forum. In Proceedings of the 4th Workshop on Argument Mining, pages 11–21, Copenhagen, Denmark. Association for Computational Linguistics.
Xinyu Hua, Zhe Hu, and Lu Wang. 2019. Argument generation with retrieval, planning, and realization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2661–2672, Florence, Italy. Association for Computational Linguistics.
Xinyu Hua and Lu Wang. 2018. Neural argument generation augmented with externally retrieved evidence. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 219–230, Melbourne, Australia. Association for Computational Linguistics.
Yohan Jo, Seojin Bang, Chris Reed, and Eduard Hovy. 2021. Classifying argumentative relations using logical mechanisms and argumentation schemes. Transactions of the Association for Computational Linguistics, 9:721–739.
Nitish Shirish Keskar, Bryan McCann, Lav Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL - A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
Khalid Al Khatib, Lukas Trautner, Henning Wachsmuth, Yufang Hou, and Benno Stein. 2021. Employing argumentation knowledge graphs for neural argument generation. In ACL.
Takahiro Kondo, Koki Washio, Katsuhiko Hayashi, and Yusuke Miyao. 2021. Bayesian argumentation-scheme networks: A probabilistic model of argument validity facilitated by argumentation schemes. In Proceedings of the 8th Workshop on Argument Mining, pages 112–124, Punta Cana, Dominican Republic. Association for Computational Linguistics.
John Lawrence, Jacky Visser, and Chris Reed. 2019. An online annotation assistant for argument schemes. In Proceedings of the 13th Linguistic Annotation Workshop, pages 100–107. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018. Mixed precision training. In International Conference on Learning Representations.
Elena Musi, Debanjan Ghosh, and Smaranda Muresan. 2016. Towards feasible guidelines for the annotation of argument schemes. In Proceedings of the Third Workshop on Argument Mining (ArgMining2016), pages 82–93.
Hiroki Nakayama, Takahiro Kubo, Junya Kamura, Yasufumi Taniguchi, and Xu Liang. 2018. doccano: Text annotation tool for human. Software available from https://github.com/doccano/doccano.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In ICML.
Andreas Peldszus. 2015. An annotated corpus of argumentative microtexts.
Chris Reed, Raquel Mochales Palau, Glenn Rowe, and Marie-Francine Moens. 2008. Language resources for studying argument. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Sougata Saha, Souvik Das, and Rohini K. Srihari. 2022. Dialo-AP: A dependency parsing based argument parser for dialogues. In Proceedings of the 29th International Conference on Computational Linguistics, pages 887–901, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Benjamin Schiller, Johannes Daxenberger, and Iryna Gurevych. 2021. Aspect-controlled neural argument generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 380–396, Online. Association for Computational Linguistics.
Noam Slonim, Yonatan Bilu, Carlos Alzate, Roy Bar-Haim, Ben Bogin, Francesca Bonin, Leshem Choshen, Edo Cohen-Karlik, Lena Dankin, Lilach Edelstein, et al. 2021. An autonomous debating system. Nature, 591(7850):379–384.
Christian Stab and Iryna Gurevych. 2014. Annotating argument components and relations in persuasive essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1501–1510, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.
Christian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, and Iryna Gurevych. 2018. Cross-topic argument mining from heterogeneous sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3664–3674, Brussels, Belgium. Association for Computational Linguistics.
Shahbaz Syed, Khalid Al-Khatib, Milad Alshomary, Henning Wachsmuth, and Martin Potthast. 2021. Generating informative conclusions for argumentative texts. In Findings.
Jacky Visser, John Lawrence, Chris Reed, Jean Wagemans, and Douglas Walton. 2022. Annotating argument schemes. In Argumentation Through Languages and Cultures, pages 101–139. Springer.
Jacky Visser, John Lawrence, Jean Wagemans, and Chris Reed. 2018. Revisiting computational models of argument schemes: Classification, annotation, comparison. In 7th International Conference on Computational Models of Argument, COMMA 2018, pages 313–324. IOS Press.
Douglas Walton, Christopher Reed, and Fabrizio Macagno. 2008. Argumentation schemes. Cambridge University Press.

A Appendix
A.1 ArgSpan Training Details
We initialize ArgSpan weights with pre-trained Roberta base weights, and train using 2 Nvidia RTX A5000 GPUs with mixed precision (Micikevicius et al., 2018) and a batch size of 32. Prior to the biaffine layer, we reduce the hidden representation to 600 dimensions. We use a learning rate of 1E-5 and train until the validation loss stops improving for five steps. We also clip (Pascanu et al., 2013) the gradients to a unit norm and use AdamW (Loshchilov and Hutter, 2019) with the default PyTorch parameters for optimization.
A.2 ArgSpanScheme Training Details
We initialize ArgSpanScheme weights with pre-trained Roberta base weights, and train using 1 Nvidia RTX A5000 GPU with mixed precision and a batch size of 64. We use 2 layers of multi-headed self-attention with 4 attention heads. We use a learning rate of 1E-5 and train until the validation loss stops improving for five steps. We also clip the gradients to a unit norm and use AdamW with the default PyTorch parameters for optimization.
A.3 ArgU Training Details
We initialize model weights with pre-trained BART (Lewis et al., 2020) base weights and expand the embedding layer to accommodate 13 new tokens, detailed in Table 6 (Appendix A). We train all models over 2 Nvidia RTX A5000 GPUs with mixed precision and a batch size of 24. We use a learning rate of 1E-5 and train until the validation loss stops improving for five steps. We also clip the gradients to a unit norm and use AdamW with the default PyTorch parameters for optimization. We use beam search for decoding with a beam length of 5, a maximum length of 50 tokens, and a penalty for trigram repetitions in the generated argument.

Figure 5: Doccano Annotation Screenshot.

Table 5: Annotator scheme conflicts.
1. "abortion is necessary, because, unintended pregnancies are associated with birth defects, increased risk of child abuse, and so on." Actual label: means for goal; annotator label: from consequence.
2. "most students do not believe that school uniforms are useful, so uniforms should not be required." Actual label: from source knowledge; annotator label: from source authority.
3. "the death penalty is unacceptable because of the racial bias in the criminal justice system." Actual label: rule or principle; annotator label: from source authority.
4. "the death penalty does not follow a fair criminal justice system because of its racial bias." Actual label: from source authority; annotator label: from source knowledge.
5. "it is not necessary to require school uniforms, because it is important to respect students who believe that school uniforms are not necessary." Actual label: from consequence; annotator label: means for goal.

Figure 6: Phase 1 Annotation Pipeline (example argument: "increasing the minimum wage reduces income inequality. reducing income inequality is desirable. we should increase the minimum wage.").
Figure 7: ArgSpan Evaluation.
Figure 8: ArgSpanScheme Data Splits.

Table 6: Special Tokens and Control Codes.
Argument scheme based control codes: <from_consequence>, <from_source_authority>, <from_source_knowledge>, <goal_from_means/means_for_goal>, <rule_or_principle>
Argument stance based control codes: <pro>, <con>
Variable identifiers: <VAR_0>, <VAR_1>, <VAR_2>, <VAR_3>
Decoder BOS tokens: <pattern>, <argument>

Figure 9: Phase 2 Dataset Fact Normalization Step.
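To make the A.3 decoding settings concrete, the following Hugging Face sketch wires up BART-base with the 13 special tokens from Table 6 and the stated beam-search parameters. The sample input string is illustrative only, and the control-code conditioning on the decoder side is omitted for brevity.

    from transformers import BartForConditionalGeneration, BartTokenizer

    tok = BartTokenizer.from_pretrained("facebook/bart-base")
    tok.add_tokens(["<pro>", "<con>", "<pattern>", "<argument>",
                    "<VAR_0>", "<VAR_1>", "<VAR_2>", "<VAR_3>",
                    "<from_consequence>", "<from_source_authority>",
                    "<from_source_knowledge>",
                    "<goal_from_means/means_for_goal>", "<rule_or_principle>"])
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
    model.resize_token_embeddings(len(tok))  # embedding layer grows by 13 tokens

    inputs = tok("abortion <VAR_0> reproductive rights <VAR_1> access to abortion",
                 return_tensors="pt")
    out = model.generate(**inputs, num_beams=5, max_length=50,
                         no_repeat_ngram_size=3)  # trigram repetition penalty
    print(tok.decode(out[0], skip_special_tokens=True))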
ai_researcher
1
Evaluation_of_Systematically_Developed_Gamification_Strategies_with_Game-Balance_Simulation_Tools.pdf
Highlights Personalised Serious Games and Gamification in Healthcare: Survey and Future Research Direc- tions • The largest application domains of personalised serious games and gamification are behaviour change and rehabilita- tion. • Ontologies and rule-based approaches are popular to integrate domain expertise whereas the Hexad Player Framework is most used for player modelling. • Reuse of components of personalised serious games and gamification remains underresearched as only 10 out of 31 identified articles mentioned some form of reusability. 4 2 0 2 v o N 7 2 ] C H . s c [ 1 v 0 0 5 8 1 . 1 1 4 2 : v i X r a Personalised Serious Games and Gamification in Healthcare: Survey and Future Research Directions Stéphanie Carliera, Femke De Backerea and Filip De Turcka aIDLab, iGent Tower—Department of Information Technology, Ghent University—imec, Technologiepark-Zwijnaarde 126, Ghent, B-9052, Belgium A R T I C L E I N F O A B S T R A C T Keywords: serious games gamification healthcare personalisation Serious games, games with a primary objective other than pure entertainment, and gamification, the use of game elements in non-game contexts, have shown to have positive effects on health outcomes of eHealth applications. However, research has shown that a shift towards a more personalised approach is needed, considering the diversity of users and their contexts. This introduces new challenges to the domain of serious games and gamification (SGG) as research is needed on how such personalisation is achieved. A literature search was conducted, using Web of Science and PubMed, to provide an overview of personalisation strategies applied in SGG in health. In total, 31 articles were identified, of which 22 reported on a serious game and 9 focused on gamification. Results indicate that personalised serious games and gamification have been applied most in the fields of behaviour change and rehabilitation. Furthermore, the use of machine learning and artificial intelligence (AI) for personalisation shows promise as they can find patterns and relationships in large data sets. Findings indicated that reusability is still an under-highlighted aspect in the design and development of personalised SGG, as only 10 out of 31 articles reported on some form of reuse. Future research should go towards the standardisation of the development of personalised SGG by focusing on the reusability of the different components and the use of generative AI. This standardisation holds the potential to simplify the design process and involvement of domain experts and facilitates a more detailed evaluation of different personalisation strategies. 1. Introduction The use of Serious Games and Gamification (SGG) for health care is increasingly popular as its use has shown positive effects on treatment adherence, user motivation and patient education [1–6]. Gamification is the use of game elements, such as rewards and leaderboards, in a non-gaming context. The choice of included game elements can range from a few elements to a more game-like experience [7]. Serious Games (SGs), on the other hand, are games with a primary objective other than pure entertainment, such as education or training [8, 9]. 
SGG are used in a wide range of health domains, for example, physical and cognitive rehabilitation [10–18], the education of health professionals and patients [19–23], health behaviour change, such as the cessation of substance abuse or the improvement of physical activity [24, 25] and the treatment of mental health disor- ders, such as anxiety and depression [4]. Results indicate that SGG show promise in reducing issues with treatment adherence in healthcare and that they can be effective tools for health, however, research remains in its infancy, limited by design and evaluation challenges [4, 22, 23, 25–30]. One of those challenges is that SGG might not be sus- tainable as patients and users might lose interest over time, leading again to a decrease in treatment adherence and user engagement [26]. Users of mobile applications all have their specific profile and their contexts might change and evolve, calling for a dynamic and adaptable approach to keep ∗Corresponding author [email protected] (S. Carlier) ORCID(s): 0000-0001-6150-717X (S. Carlier) motivation high. Research has indicated that the one-size- fits-all approach needs to be abandoned to shift towards more personalised SGG, that are able to re-engage the user [7, 17, 18, 28, 31–36]. Moreover, designing and implementing a personalised Serious Game (SG) is a costly and challenging process as it requires the same effort from multiple stake- holders, such as (game) developers, software engineers and domain experts, all over again for each SG [16, 37]. While the development of gamified interventions can be considered slightly less cost-intensive as it does not require the devel- opment of a full-fledged game, it should be avoided to use gamification as chocolate-dipped-broccoli, i.e. applied as an afterthought, but to integrate it from the start in the design process [38]. To create effective SGs and gamified mHealth, or mobile health, applications, domain expertise from health professionals is needed, involving them in each step of the design and development process [33, 39]. Personalised SGG, with a user-centred approach, show promise in improving performance outcomes and boosting engagement [2, 40–43]. Research exists on personalised SGG, the obtained results so far are promising and chal- lenged by the uncertainty on how personalisation can be integrated to increase health outcomes [18, 44]. Several reviews on personalised SGG exist, focusing on which player aspects are used for the individualization of SGs [31], diffi- culty adaptation and procedural content generation [45], how game elements have been chosen and used in personalised gamification [46, 47], how machine learning and Artificial Intelligence (AI) and gamification can interact [48] and how player models and adaptation methods are integrated [49]. These reviews, however, focus on either gamification or SGs and include all application domains, which often leads to a Preprint submitted to Elsevier Page 1 of 15 Personalised Serious Games and Gamification in Healthcare predominant focus on games and gamification designed for education. Approaches to personalisation might differ as the objectives of education and health care differ. To fill this gap, this paper investigates how personalisa- tion has been applied to SGG for health. Moreover, the aim is to provide a technical overview of the player modelling techniques and intelligent personalisation methods that have been used. 
Furthermore, this research examines how expert knowledge is incorporated in user modelling, which user data is used and if the reusability of specific components or transferability of expert knowledge has been facilitated to simplify the design process of personalised SGG. The remainder of the paper is structured as follows: First, Section 2 explains the search strategy, the inclusion criteria and selection procedure of the identified records. Next, Sec- tion 3 provides an overview of the player and expert models, the intelligent personalisation methods and the inclusion of reusability, Section 4 discusses future research direction, followed by the conclusions in Section 5. 2. Method The following paragraphs provide an overview of the search strategy that was used to identify the analysed articles and second, a discussion of defined inclusion criteria and how the final studies were selected. 2.1. Search Strategy The search was conducted in March 2024, using two databases, namely Web of Science and PubMed. This struc- tured literature search was preceded by an exploratory search using Google Scholar to define the keywords to be used in the search. Table 1 gives an overview of the used query and keywords with the respective number of articles that were retrieved from Web of Science and PubMed. Seven other articles were included in the results that were identified during the analysis of the found records. For the title it was required that some keyword referring to personalisation and gamification and/or SGs was included. Furthermore, for the topic of the paper, i.e. abstract and title, the domain of ‘healthcare’ is delineated by all papers that refer to the health or well-being of patients, thereby excluding educa- tion, more specifically education of healthcare professionals and education of people with specific learning disorders. The publication year spans a decade, namely 2014 to 2024. As the aim is to provide an overview of personalisation algorithms and strategies for SGG from the last decade, all review papers were excluded, as these were analyzed separately and reported upon above. 2.2. Inclusion Criteria and Study Selection Figure 1 displays the number of publications identified, screened and excluded at each stage of the literature search and selection process. The structured search resulted in 126 articles from Web of Sciences and 47 articles from PubMed. After the removal of duplicate articles, 127 arti- cles remained. These articles were screened based on title and abstract. After this first screening, the full text of the Table 1 Search query and number of articles retrieved from the different databases. Search Query Title=personali* OR adapt* OR context* OR individu* OR tailored OR intelligent OR "player model*" OR "user model*" OR ontology AND Title="serious game*" OR gamification OR gamified AND Title or abstract=health* OR rehabilitation OR treatment OR disorder OR "behavior change" OR disease OR "physical activity" OR "fitness" AND title= NOT review AND Publication Year=2014-2024 Database Web of Science PubMed Other sources Number of records 126 47 7 remaining 67 articles was analysed. Two more records were identified from other sources based on the expertise of the authors. In total, 33 articles are included in this literature review. Articles were excluded from the analysis if they described a gamified solution or serious game that was not personalised (screening n=43, full-text n=16) or if it did not include a digital intervention (screening n=1, full-text n=7). 
Furthermore, papers were excluded from the results if the topic was incorrect (screening n=12, full-text n=2) or if they were the wrong publication type, namely reviews or editorials (screening n=1, full-text n=2). Two articles were excluded due to not being available in English and of 3 articles no full text was found. Next, 6 papers were excluded after full-text assessment due to lack of details on the used personalisation strategies. Finally, 2 papers that discussed different aspects of the same research have been included as 1 entry, and for 2 papers that reported the same research, the conference paper has been excluded. This brings the total of included articles to 31. 3. Findings This section discusses the findings of this literature re- view. First, in Section 3.1 an overview and summary of the included articles are discussed, followed by an in-depth ex- planation of the identified methods for player and knowledge modelling in Section 3.2, a classification of the intelligent personalisation methods in Section 3.3, to end with a dis- cussion on reusability if personalised SGG in Section 3.4. 3.1. Overview Of the 31 included papers, 22 discuss a serious game, while the other nine articles research gamification. Six do- mains have been identified, as shown in Figure 2, namely, one serious game on health support [50], i.e. systems that support users in their day-to-day living, six papers on health education, i.e., applications that want to educate patients and users on certain disorders or diseases, of which four Preprint submitted to Elsevier Page 2 of 15 Personalised Serious Games and Gamification in Healthcare Figure 2: Six domains within health care have been identified, of which rehabilitation and behaviour change contained the most articles. Two articles combined two domains(*), namely behaviour change and health education. physical rehabilitation, for which 11 papers were included, all discussing a serious game [10, 13, 59–67], and behaviour change, 13 articles of which seven were on SGs [51, 54, 68– 71] and six were on gamification [72–78]. The papers were reviewed based on three personalisa- tion goals, more specifically, increasing user engagement, increasing treatment adherence, or improving user perfor- mance. Some articles provided more specific objectives, such as increasing knowledge on a certain topic or im- plementing sustainable behaviour change. For this review, the objectives were classified into the three aforementioned categories. Table 2 provides an overview of the included papers and their identified objectives. Furthermore, a sum- mary of the study design, study output and a detailed do- main description was provided. Most studies, namely 21 out of 31, explicitly state that they want to increase user engagement by including personalised gamification or SGs, while SGs focus on rehabilitation often not only to improve engagement but also to increase the performance of the user (8 out of 11). Behaviour change mostly focuses on increasing physical activity, changing nutritional habits or specific disorders such as Attention Deficit Hyperactivity Disorder (ADHD), Autism Spectrum Disorder (ASD) and sleep apnea, while systems for rehabilitation target a range of domains, namely neck and wrist or upper-limb rehabil- itation, neuro-rehabilitation and post-stroke patients, both cognitive and physical rehabilitation. 3.2. Player and Knowledge Models Some studies use models to structure and update specific user information. 
This information can be limited to player or user data, which can consist of personal information, such as age or game progression data, sensor data, i.e., data collected via sensors or external data, such as heart rate or contextual data such as weather reports. A last type of user information that is sometimes collected is medical data, which we define as data that has been handled, or inputted by health professionals, such as results from medical tests or a set of rehabilitation exercises. An overview of the player information included in each study included in the analysis can be found in Table 3 for interventions using gamification Figure 1: Flow chart of the literature search procedure. SGs [51–54] and two gamified solutions [55, 56]. Two papers include two domains, namely behaviour change and health education, as they not only educate users but also motivate them to implement the behaviour changes [51, 54]. One article discusses gamification for surveys for health [57] and one article discusses SGs for health in general [58]. Furthermore, the two largest categories are cognitive and Preprint submitted to Elsevier Page 3 of 15 Web of SciencePubMedarticlesWeb of Science(n=126)Other sources(n=7)articlesPubMed(n=47)SEARCH QUERY:Title contains keyword related to 'personalisation'AND title contains 'gamification' or 'serious game'AND topic (title/abstract) contains keyword related to 'health'including papers from 2013 - 2024excluding 'review' in titlearticles(n=127)removal of duplicatesscreened articles(n=67) Screened based on title + abstract excluded (n=60)              No personalised SG/gam (n=44)              Wrong topic (n=12)              Foreign language (n=2)              No digital intervention (n=1)              Wrong publication type (n=1)included articles(n=26) Full- text assessed for eligibility excluded (n=41)              No personalised SG/gam (n=21) No digital intervention (n=7) No details (n=6)              No full- text available (n=3)              Wrong topic (n=2)              Wrong publication type (n=2)included articles(n=33)explorative Google Scholar searchincluded articlesin review (n=31) - 2 papers discussed different aspects of the same research, included as 1 entry- 2 papers discussed same research, conference paper excluded Personalised Serious Games and Gamification in Healthcare Table 2 In total, 31 articles have been identified of which 9 reported on gamification and 22 on serious games. An overview is provided of their personalisation goal, study design, output type and application domain. Type Personalisation Goal Study design Output Domain (Detailed) Engag. Adh. Perform. 
behaviour change (healthy habits) behaviour change (physical activity) behaviour change (physical activity) behaviour change (nutrition) behaviour change (physical activity) behaviour change (physical activity) health education (nutrition) health education (healthy habits) surveys for health behaviour change (physical activity) behaviour change & health education (nutrition) behaviour change (sustainable behaviour + social skills children ASD) behaviour change (attention training for ADHD) behaviour change (attention training for ADHD) behaviour change (physical activity) health health education (cancer) health education (asthma) health support (cognitive impairment elderly) rehabilitation (neck) rehabilitation (wrist) rehabilitation (motor impairment) mixed method intervention (12 participants) intervention (176 participants) design & development intervention (61 children) intervention (40 participants) prototype evaluation (44 students) design & development intervention (28 participants) prototype evaluation (6 experts) intervention (29 participants) framework (FrameworkL) webapplication (CoaFeld) mHealth app (GameBus) mHealth framework (CarpeDiem app) mHealth app mHealth app chatbot app (CiboPoli) recommendation tool mobile survey app asynchronous multiplayer exergame (GardenQuest) serious game (Express Cooking Train) simulator-based validation serious games (inLife platform) experiment (16 children) KeepAttention game experiment (11 children) prototype implementation intervention (21 participants) validation with sample dataset experiment (15 children with asthma) intervention (37 participants without cognitive impairment) intervention (10 participants) experiments (4 healthy participants) design & development intervention (20 post-stroke participants) three-fold validation intervention (7 post-stroke participants, 3 therapists) design & development design discussion intervention (20 post-stroke participants) intervention (25 elderly) blind experiment (42 participants) intervention (52 participants) task-oriented design framework (KeepAttention game) conceptual architecture for smart serious games first person shooter game (PC) serious game e-learning platform (KidBreath) intelligent assistive system with AR mobile serious game RehaBot framework (VR) wrist rehabilitation robot & serious game (Nuts Catcher) serious game (ReHabGame) serious game (ReHabGame) rehabilitation (neurological) exergame-based rehabilitation system (TANGO:H) rehabilitation (cognitive & physical) serious game (Prehab) rehabilitation (post-stroke) serious game (InMotion) serious game (InMotion) tele-rehabilitation system based on serious games and in-cloud data analytics services AR serious game serious game (Wake Up For The Future!) serious game (Fruit-Collection and avatar manoeuvring) rehabilitation (upper limb) rehabilitation (upper limb) rehabilitation (post-stroke) rehabilitation (cognitive & physical) behaviour change & health education (obstructive sleep apnea) rehabilitation (neurorehabilitation) - - - - - - - - - - - - - - - - - gam gam gam gam gam gam gam gam gam SG SG SG SG SG SG SG SG SG SG SG SG SG SG SG SG SG SG SG SG SG SG - - - - - - - - - - - - - - - - - - - - - - - - - - - - - gam = gamification SG = serious game Engag. = Engagement Adh. = Adherence Perform. = Performance Ref [72] [73] [74] [75] [76] [77] [55] [56] [57] [78] [51, 79] [68] [69] [70] [71] [58] [52] [53] [50] [10] [59] [60] [61] [13] [62] [63] [64] [65] [66] [54] [67] and Table 4 for serious games. 
Additionally, these tables pro- vide an overview of the references that have included domain or expert knowledge, the applied personalisation method, which will be discussed in further detail in Section 3.3. The following paragraphs will discuss the different approaches to modelling user and expert knowledge as identified in the included articles. Hexad Player Model Five studies [55, 57, 72, 77, 78] use the Hexad Player Type Model to classify users according to their player type, which has been designed specifically for the design of game- ful systems tailored to their users [80]. Six player types are defined based on their intrinsic or extrinsic motivation, namely, achiever, free spirit, philanthropist, disruptor, player and socializer. The Hexad framework proposes an empiri- cally validated mapping of several game elements on the 6 player types, as shown in Figure 3 [81]. de Oliveira et al. [72] investigated how the user’s player type can be incorporated to include the correct game ele- ments for each player in a self-care application. The study uses the Hexad Player Model to classify the users according to their type, but they take into account that users and their preferences can change, meaning that their player type and game elements preferences can change too. To accommodate for this change, they include an artificial neural network Preprint submitted to Elsevier Page 4 of 15 Personalised Serious Games and Gamification in Healthcare profile can be adapted according to their in-game perfor- mances. Caggianese et al. [65] proposes a tele-rehabilitation system that utilises different sources of information and data, namely game data, personal user information, Microsoft Kinect sensor data and input from health professionals to provide personalised decision support to the user. To model the required expert knowledge and user information, they used an ontological model, including both game description concepts and motor rehabilitation concepts. The system also provides an interface for health professionals to define each patient’s rehabilitation goals, which include, amongst others, the anatomical problem for each motor district, e.g., left shoulder abduction. Due to the use of an ontology and hybrid production rules, i.e., the combination of ontological rules and fuzzy logic rules [84, 85], this diagnostical information can then be used in the decision support system for adapting the serious game and suggesting improvements in the offered therapy. Kinematic Chain Model and Inverse Kinematic A kinematic chain model describes the movement of a kinematic chain, which is the formulation of the trans- lation, rotation, position and velocity of a body segment interconnected by joints, for a robot or animated character, e.g. human [86–88]. Three included studies from Esfahlani et aL [60, 61, 67] make use of a kinematic chain model to represent the mechanical structure of the user. These studies then use inverse kinematics to control and plan the motion of a desired position to achieve a specific task [87]. The Microsoft Kinect sensor is used to track the user’s skeleton joints in all three studies,in addition to a foot pedal [67], a Thalmic Myo armband [60, 61, 67]. This sensor data is then fed to the personalisation methods to personalise the game and adapt to the difficulty of the conference rehabilitation exercises. The three studies investigate different approaches, namely fuzzy logic [60], Monte Carlo Tree Search [61] and a combination of fuzzy logic and an artificial neural network [67]. 
Ontology
Two studies use ontologies for player and expert knowledge modelling [51, 65]. Ontologies offer formal definitions of distinct concepts, their properties, and intricate relationships among these concepts, thereby establishing computer-readable classification systems [82, 83]. Figure 4 shows an example of such an ontology, more specifically, the recipe ontology included in the serious game on Nutrition Literacy (NL) and Food Literacy (FL) skills by Mitsis et al. [51]. Using user game information and the knowledge contained in the ontology facilitates the personalisation of the game, as recipes can be suggested based on dietary needs and preferences and, via a rule-based system, the user's cooking profile can be adapted according to their in-game performances.

Figure 4: Recipe ontology used for modelling user and expert knowledge in the serious game "Express Cooking Train" by Mitsis et al. [51].

Caggianese et al. [65] propose a tele-rehabilitation system that utilises different sources of information and data, namely game data, personal user information, Microsoft Kinect sensor data and input from health professionals, to provide personalised decision support to the user. To model the required expert knowledge and user information, they used an ontological model including both game description concepts and motor rehabilitation concepts. The system also provides an interface for health professionals to define each patient's rehabilitation goals, which include, amongst others, the anatomical problem for each motor district, e.g., left shoulder abduction. Due to the use of an ontology and hybrid production rules, i.e., the combination of ontological rules and fuzzy logic rules [84, 85], this diagnostic information can then be used in the decision support system for adapting the serious game and suggesting improvements in the offered therapy.

Kinematic Chain Model and Inverse Kinematics
A kinematic chain model describes the movement of a kinematic chain, which is the formulation of the translation, rotation, position and velocity of body segments interconnected by joints, for a robot or animated character, e.g., a human [86–88]. Three included studies from Esfahlani et al. [60, 61, 67] make use of a kinematic chain model to represent the mechanical structure of the user. These studies then use inverse kinematics to control and plan the motion towards a desired position to achieve a specific task [87]. The Microsoft Kinect sensor is used to track the user's skeleton joints in all three studies, in addition to a foot pedal [67] and a Thalmic Myo armband [60, 61, 67]. This sensor data is then fed to the personalisation methods to personalise the game and adapt the difficulty of the rehabilitation exercises. The three studies investigate different approaches, namely fuzzy logic [60], Monte Carlo Tree Search [61] and a combination of fuzzy logic and an artificial neural network [67].
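To make the inverse-kinematics idea concrete, the sketch below solves the classic two-link planar chain in closed form. Real upper-limb models such as those in [86–88] involve far more segments, joint limits and redundancy; this is only a toy illustration.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form IK for a planar two-link kinematic chain.

    Returns joint angles (theta1, theta2) that place the end effector
    at (x, y), or None if the target is outside the reachable workspace.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_t2) > 1:
        return None  # target is out of reach
    theta2 = math.acos(cos_t2)  # elbow-down solution
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

print(two_link_ik(1.0, 1.0, 1.0, 1.0))
```

Given the joint angles, the game can compare the planned motion against the tracked skeleton joints to judge how well the user performs an exercise.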
Other model approaches
The remaining gamified solutions employed three different approaches, each dependent on the specific data sources required for the construction of the user model: questionnaire responses [75], physical activity data [76], and specific domain knowledge [56]. Orte et al. [75] designed a gamified mHealth application for nutritional behaviour change that offers personalised dietary missions. To do so, information about the users' nutritional habits is gathered via questionnaires to build a nutritional behaviour profile. Schäfer et al. [76] use smartphone sensor data to derive a physical activity model for children. This model is then used to personalise the application by means of an avatar model that mirrors the children's physical activity level, i.e., sitting, standing, walking and intense. A Random Forest classifier has been used to classify the sensor data. Pardos et al. [56] designed a remote patient monitoring and care platform that offers personalised gamified recommendations. The knowledge needed to recommend healthier habits to users includes official guidelines given by, for example, the WHO and the American Heart Association, and is encoded by a set of multivariate objects and rules for each domain, as shown in Figure 5. Health professionals can then access the platform to create personalised rules for specific patients.

Figure 5: An example of a primitive element (a) and a mixed element (b) in the knowledge model of Pardos et al. [56].

Alves et al. [58] developed a first-person shooter video game that adapts the difficulty level to the mental state of the player. To this end, a classification framework is developed that reads physiological signals, namely heart rate and the beta bands of the brainwaves, and outputs the current mental state of the player, using Multilayer Perceptron (MLP) classification [89]. Next, using a state machine, the difficulty level of the game is updated according to the current mental state.

Ghorbani et al. [50] evaluate an intelligent assistive system to support the elderly in their daily life activities using Augmented Reality (AR) and SGs. To personalise the system, fuzzy rule bases are built, including the expert knowledge of therapists for each patient.

Afyouni et al. [10] introduce "Rehabot" for the adaptive generation of personalized SGs for telerehabilitation. In order to provide personalised feedback and adapt the difficulty of the exercises to the user, expert knowledge regarding postures needs to be modelled. To that end, a therapist inputs a set of correct postures for the corresponding patient. The system translates this expert knowledge to a set of joints that are compared to the movements of the user, using the Microsoft Kinect and a posture-matching algorithm.

Another example of a personalised serious game for rehabilitation is the TANGO:H platform of González-González et al. [13]. The platform creates a user model that represents a set of data characterising the user at a specific moment in time. This user data includes explicit data, i.e., provided by the user, and implicit data, i.e., provided by their interaction with the system. Included in this user model is the system's estimation of the user's skill level. To suggest exercises to the user, the user's skill level is matched with the expected skill level of the rehabilitation exercises, using a recommender system. To update the skill level of the user, a heuristic approach is used, based on a formula that captures expert intuitions on how the user's skill level should evolve over time.

Another approach to modelling the player's skill level is seen in the work of Hocine et al. [62]. To model the player and their motor abilities, they define the "ability zone", which represents the area where the patient can efficiently move on a 2D workspace, such as a graphical tablet. The ability zone is modelled using an n × m matrix which maps the physical workspace onto the virtual workspace (computer screen). Each matrix cell then includes information on the performed movements of the patient. Post-stroke patients move the computer mouse within the workspace and the system uses these mouse coordinates to calculate the resulting ability zone. During an assessment exercise, the ability zone matrix of each player is constructed and continuously updated during the playing sessions. This matrix is then used for the adaptation of the game, to identify challenging areas for the patient, as shown in Figure 6. The ability zone matrix (Figure 6-1) is transformed to an image by assigning gradients to each cell value (Figure 6-2), which is then used to compute the edge of the matrix (Figure 6-3). Targets that are situated inside this edge will be easy, while targets outside the ability zone's edge will be linked to a higher difficulty level.

Figure 6: An example of a player's "ability zone" matrix (1), the obtained image using gradients (2) and the detected edge of the ability zone (3) [62].
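A minimal numpy rendition of this gradient-and-edge pipeline might look as follows. The threshold and the use of np.gradient are illustrative choices, not the exact procedure of Hocine et al. [62].

```python
import numpy as np

def ability_zone_edge(zone, threshold=0.5):
    """Detect the boundary of the ability zone in an n x m score matrix."""
    inside = (zone > threshold).astype(float)
    gy, gx = np.gradient(inside)          # image gradients, as in Figure 6-2
    return (np.abs(gx) + np.abs(gy)) > 0  # non-zero gradient marks the edge

# Toy ability zone: a block of cells where movements were reliable.
# Targets inside the detected edge would be easy; targets outside it
# would be assigned a higher difficulty level.
zone = np.zeros((8, 8))
zone[2:6, 2:6] = 1.0
print(ability_zone_edge(zone).astype(int))
```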
Alves et al. [64] propose to include personality traits from the Five-Factor model to increase the patient's motivation for the rehabilitation process. The system aims to support patients with emotional instability, as poor rehabilitation results or criticism by the therapist might easily demotivate them. The personality traits of the patient and in-game actions trigger specific responses in the game. These adaptation rules are used by the fuzzy logic model to provide personalised in-game support. An example of such a rule is: if a patient has high Neuroticism as a personality trait and performs badly in the game, the game should respond by encouraging the patient, in a friendly way, to try again.

3.3. Personalisation Methods
The following paragraphs discuss the different personalisation methods identified in the references. Table 5 provides an overview of each of these methods, classified according to their data-driven, knowledge-driven or hybrid nature.

Data-driven techniques
The techniques listed in the following paragraphs primarily use data to extract patterns and relationships.

Artificial Neural Networks (ANN) are mathematical models that are able to detect complex non-linear correlations between data [67]. One gamified intervention [72] and three SGs [52, 58, 67] use a form of ANN to offer personalised support. de Oliveira et al. [72] use an ANN for the classification of the usage pattern of the user, to assess whether the player is still interested in the offered gamification elements or whether updating them is required. In the case of a serious game for supporting Caribbean men pre- and post-diagnosis of prostate cancer [52], an ANN is used in a computational intelligence predictor that predicts the risk of cancer for the user and then updates the offered information and support in the game based on the outcome of the predictor. Esfahlani et al. [67] use a combination of an ANN and fuzzy logic to adjust the difficulty of a serious game for neurorehabilitation. The ANN was used to detect complex non-linear correlations among player movement data and predict the player's improvement, while fuzzy logic was then used to personalise the offered rehabilitation exercises.

Recommender systems are able to suggest an appropriate item from a set of items to the user based on certain features. Different types of recommender systems exist: content-based recommender systems rely on the items themselves to make suggestions, while collaborative filtering uses the user's behaviour; hybrid approaches using a combination of both exist as well [13]. For the TANGO:H platform, a content-based recommender system, based on the player's skill and history, is used to select rehabilitation exercises of the appropriate skill level [13]. The CarpeDiem app [75] uses a nutritional recommender system to offer individualized recommendations and feedback based on questionnaire data. The gamified app uses a rule-based system to determine the user's level for each food group and which missions can be recommended to that user.

Random Forest classification is used for the classification of the activity model for children by Schäfer et al. [76], which was explained in Section 3.2. The authors compared two classification models, Support Vector Machines (SVM) and Random Forests (RF), with the latter reaching the highest accuracy.
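The model comparison reported by Schäfer et al. [76] can be sketched with scikit-learn in a few lines. The synthetic data below merely stands in for the real smartphone sensor features; dataset sizes and hyperparameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for smartphone sensor features with four
# activity classes (sitting, standing, walking, intense).
X, y = make_classification(n_samples=600, n_features=12,
                           n_informative=8, n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("Random Forest", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```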
Table 3
An overview of the research on personalised gamification. For each entry, an overview of the applied models, integrated user information, personalisation method, presence of reuse and domain is provided.

Ref | Model | Personalisation method | Domain
[72] | Hexad Player Model | artificial neural network (ANN) | behaviour change
[73] | - | reinforcement learning (RL) | behaviour change
[74] | - | interpolation | behaviour change
[75] | nutritional behaviour profile | recommender system & rule-based system | behaviour change
[76] | physical activity user model | random forest classification | behaviour change
[77] | 1. Hexad Player Model, 2. activity recognition model, 3. general info model, 4. exerciser-type model | decision trees | behaviour change
[55] | Hexad Player Model | - | health education
[56] | multivariate objects for expert knowledge | rule-based system | health education
[57] | Hexad Player Model | - | surveys for health

Table 4
An overview of the research on personalised serious games. For each entry, an overview of the applied models, integrated user information, personalisation method, presence of reuse and dynamic difficulty balancing and domain is provided.

Ref | Model | Personalisation method | Domain
[78] | Hexad Player Model | - | behaviour change
[51, 79] | ontology | rule-based system | behaviour change & health education
[68] | - | ant colony optimization | behaviour change
[69] | - | open learner model | behaviour change
[70] | - | rule-based system | behaviour change
[71] | - | deep learning & deep reinforcement learning & optimization (particle swarm optimization, genetic algorithms) | behaviour change
[58] | mental state model | ANN: multilayer perceptron (mental state model) & state machine | health
[52] | - | artificial neural network | health education
[53] | - | multi-armed bandit | health education
[50] | expert IF-THEN rules | adaptive fuzzy logic model | health support
[10] | set of postures | data mining for data prediction | rehabilitation
[59] | - | reinforcement learning (Q-learning) | rehabilitation
[60] | kinematic chain model & inverse kinematics | fuzzy logic model | rehabilitation
[61] | kinematic chain model & inverse kinematics | Monte Carlo Tree Search | rehabilitation
[13] | user model updated by heuristic | recommender system (content-based) | rehabilitation
[62] | player's motor abilities ("ability zone") | Monte Carlo Tree Search & procedural content generation | rehabilitation
[63] | - | state machine | rehabilitation
[64] | Five Factor Model | rule-based system | rehabilitation
[65] | ontology | hybrid production rules (ontological rules & fuzzy logic) | rehabilitation
[66] | - | rule-based system | rehabilitation
[54] | - | genetic algorithm for procedural content generation & rule-based system for dynamic difficulty | behaviour change & health education
[67] | kinematic chain model & inverse kinematics | artificial neural network (ANN) & fuzzy logic model | rehabilitation
Table 5
Overview of the identified personalisation methods.

Type | Method | Gam | SG
Data-driven | Artificial Neural Network | [72] | [52, 58, 67]
Data-driven | Recommender system | [75] | [13]
Data-driven | Random Forest | [76] | -
Data-driven | Deep Learning | - | [71]
Data-driven | Reinforcement Learning | [73] | [59, 71]
Data-driven | Data Mining | - | [10]
Data-driven | Interpolation | [74] | -
Data-driven | Genetic Algorithm | - | [54, 71]
Data-driven | Particle Swarm Optimization | - | [71]
Data-driven | Ant Colony Optimization | - | [68]
Data-driven | Multi-armed Bandit | - | [53]
Data-driven | Monte Carlo Tree Search | - | [61, 62]
Knowledge-driven | Rule-based | [56, 70, 75] | [51, 54, 64, 66]
Knowledge-driven | Finite State Machine | - | [58, 63]
Knowledge-driven | Decision Tree | [77] | -
Hybrid | Open learner model | - | [69]
Hybrid | Fuzzy logic | - | [50, 60, 65, 67]

Deep Learning is a form of machine learning that consists of multiple processing layers to learn complex patterns from high-dimensional data with multiple layers of abstraction [90, 91]. Ahmad et al. [71] envision a platform for smart serious games that manages large volumes of real-time sensor data to make personalised decisions. The platform can use a variety of algorithms, such as deep learning and deep reinforcement learning, to analyse the contextual data and player history. Optimization algorithms can then refine these results under specific constraints, such as age or medical history, using particle swarm optimization or genetic algorithms.

Reinforcement Learning is a machine learning technique where an intelligent agent learns from its environment to maximize rewards and minimize punishment mechanisms [59].
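Before turning to the individual studies, the bare-bones tabular Q-learning loop below illustrates this idea in a difficulty-balancing setting. The coarse player states, actions and reward are invented for illustration and are far simpler than the systems discussed next.

```python
import random

STATES = ["struggling", "comfortable", "bored"]   # coarse player state
ACTIONS = ["easier", "same", "harder"]            # difficulty adjustments
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose(state):
    if random.random() < epsilon:                 # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action):
    """Toy environment: reward keeping the player 'comfortable'."""
    nxt = random.choice(STATES)
    reward = 1.0 if nxt == "comfortable" else -0.1
    return nxt, reward

state = "comfortable"
for _ in range(1000):
    action = choose(state)
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt
```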
Andrade et al. [59] use a form of reinforcement learning, namely Q-learning, which does not need a detailed environmental model and can be interpreted as a Markov decision process with unknown probabilities and rewards. In the Nut Catcher game, Q-learning is used to balance the game difficulty by maximizing the performance function, keeping the game challenging and entertaining while the user performs repetitive rehabilitation exercises. Martinho et al. [73] use reinforcement learning for gamified coaching to increase the physical activity of the elderly. Based on the user's performance, reinforcement learning is applied to decide which health challenges should next be sent to the user by the virtual coach, and when. The platform for smart SGs of Ahmad et al. [71], explained in the previous paragraph, incorporates deep reinforcement learning, i.e., the combination of deep learning and reinforcement learning, as one of its intelligent algorithms [91].

Data Mining techniques are used to extract information from large datasets, such as patterns and relationships between input variables [10]. Afyouni et al. [10] designed a gaming platform with "Rehab bots", virtual assistants that can adjust the workout difficulty to the user's performance. Data mining is used to predict how the user will improve over different sessions by following a specific exercise schedule.

Interpolation uses the data directly to estimate values between specific data points. For the mHealth app GameBus [74], a specific formula was devised to calculate the difference between the player's current level of capability and their preferred level. The user then receives personalised tasks with an updated complexity to keep increasing their capability level and reach their goal.

Genetic Algorithms are heuristic search methods that use principles of natural selection and genetics to solve complex optimization problems [92, 93]. Genetic algorithms are used in games for procedural content generation because they are able to generate highly customized content for a game, which keeps evolving according to the progress of the user [54]. The game "Wake Up For the Future!" [54] uses procedural content generation based on a genetic algorithm to create educational content for obstructive sleep apnea. By automatically generating new Non-Player Characters (NPCs), based on the user's in-game data and choices, the game difficulty can be dynamically adapted and the educational content is personalised for each user. The platform for smart SGs of Ahmad et al. [71], which was already discussed earlier, suggests genetic algorithms can be used in its optimization module.

Particle Swarm Optimization is an optimization algorithm inspired by swarm behaviour found in nature. It differs from genetic algorithms in the lack of a selection step, as each member of the population survives [94]. Similarly to the previous paragraph, Particle Swarm Optimization can be used in the optimization module of the platform for smart SGs [71].

Ant Colony Optimization is also an optimization algorithm inspired by nature, more specifically by the behaviour of ants [95]. Semet et al. [68] apply the Ant Colony Optimization algorithm to achieve an intelligent and adaptive reward allocation system according to the performance of the user.
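For illustration, the toy genetic algorithm below evolves a three-parameter NPC genome towards an assumed target challenge level. The genome, fitness function and hyperparameters are all invented placeholders, not those of [54].

```python
import random

# Toy GA: evolve an NPC parameter vector (e.g., speed, aggression,
# stamina) whose summed "challenge" matches an assumed player skill.
TARGET_CHALLENGE = 1.8  # illustrative player skill estimate

def fitness(genome):
    return -abs(sum(genome) - TARGET_CHALLENGE)

def mutate(genome, rate=0.2):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(3)] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("best NPC genome:", [round(g, 2) for g in population[0]])
```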
Multi-Armed Bandit is a decision-making and optimization algorithm that provides a simple model of the trade-off between exploration and exploitation to maximize gain [96]. Multi-armed bandits are computationally efficient and rely on weak knowledge models; however, there is no long-term planning to find the optimal path [53, 96]. The KidBreath [53] serious game for children with asthma uses an adaptation of the Multi-Armed Bandit algorithm to personalize the content of the health education game, based on the child's progression.

Monte Carlo Tree Search (MCTS) is an optimization algorithm that takes random samples in the decision space and builds a search tree while doing so. It combines random simulation, i.e., Monte Carlo, with tree-based exploration [97, 98]. In the Rehabgame [61] and the Prehab game [62], MCTS is used to gradually control the intensity of the rehabilitation exercises based on the patient's previous performances, by generating the next set of tasks for the user's current skill level.

Knowledge-driven techniques
The knowledge-driven techniques rely on information from specific domains or experts that needs to be defined using rules or other approaches to make decisions.

Rule-based systems are expert systems that allow reasoning over predefined knowledge, often represented by if-then rules [99]. Three gamified systems and four SGs mention using a rule-based system to offer a personalised intervention to their users: rules are used to define nutritional and game information [75], domain-specific health guideline information [56], information on attention tasks [70], cooking habits [51], personality type-related game responses [64], information on the user's in-game performance [66] and information regarding NPCs and their attributes [54].

Finite State Machines consist of a set of states and transitions between these states [100]. The first-person shooter game of Alves et al. [58] uses a finite state machine that transitions through the states based on the classification of the user's mental state, more specifically, boredom, anxiety or flow, as shown in Figure 7. The InMotion rehabilitation game [63], on the other hand, uses the performance results of the user to transfer between different difficulty states, namely easy, medium and hard, as shown in Figure 8. The thresholds for entering a different difficulty state differ for each minigame and are customized for each patient.

Figure 7: A three-state machine that switches states if the user's current mental state changes [58].

Figure 8: An example of the three difficulty states for a specific mini-game in the InMotion game [63].

Decision trees consist of decision nodes that specify conditions, with outgoing branches representing the possible values resulting from each test. The leaves of the tree each specify a category or outcome [101]. Zhao et al. [77] built a 4-layered model to represent the user in their personalized fitness recommender system, discussed in Section 3.2. The recommendation engine is based on decision trees that incorporate all the user model information. The decision tree can suggest extending an existing activity, recommend other types of activities, or recommend filling some idle time with an activity. Figure 9 shows an example of such a decision tree.

Figure 9: An example of a decision tree used by Zhao et al. in the recommendation engine [77].
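A difficulty state machine in the spirit of Figures 7 and 8 reduces to a small transition table, as in the sketch below. The states, inputs and transitions are illustrative rather than the thresholds used in [58] or [63].

```python
# Illustrative three-state difficulty machine; in practice the
# transition thresholds would be customised per patient and mini-game.
TRANSITIONS = {
    ("easy", "good"): "medium",
    ("medium", "good"): "hard",
    ("medium", "poor"): "easy",
    ("hard", "poor"): "medium",
}

def next_difficulty(current: str, performance: str) -> str:
    """Move between easy/medium/hard based on the player's performance."""
    return TRANSITIONS.get((current, performance), current)

level = "easy"
for perf in ["good", "good", "poor", "good"]:
    level = next_difficulty(level, perf)
    print(perf, "->", level)
```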
Hybrid techniques
This final category of personalisation methods uses a combination of data and expert knowledge to make predictions and decisions, thereby often leveraging the advantages of both data- and knowledge-driven approaches.

Open learner models (OLM) allow users to control how an intelligent system models their knowledge, skills and interests, thereby enhancing the adaptability and precision of system decisions and supporting learning [69, 102]. The KeepAttention serious game [69] for attention training uses an OLM to introduce transparency, enabling users to reflect on their own actions by explaining the difficulty proposed by the system and the offered challenges.

Fuzzy logic incorporates, similarly to rule-based systems, human logic and rules. However, unlike rule-based systems, a gradual transition from one condition to another is possible, rather than a strict true/false condition, which makes it possible to model uncertain information [84, 85]. Fuzzy logic has been used in different SGs for rehabilitation to analyse the player's achievements and suggest suitable adjustments to the physical rehabilitation exercises [60, 65, 67] or cognitive rehabilitation exercises [50].

3.4. Reusability
Ten out of the 31 included studies address reuse to simplify the design process of SGs or to facilitate the comparison of different techniques or implementations. Each of these ten studies implemented the reuse of SGs or parts of their implementation in a different way. The following paragraphs provide an overview of the 3 gamified applications and 7 SGs and their interpretation of reuse.

Carlier et al. [57] designed a gamified app for health surveys. The app has been designed such that it can easily be reused for the gamification of other surveys. de Oliveira [72] created a framework, Framework L, that guides mobile health application developers in the creation of new mHealth apps for self-care by selecting which categories should be included in the application and which data must be collected. The mHealth GameBus tool [74] allows reuse for testing purposes, as the platform supports hosting multiple experimental designs and easy configuration of the gamification mechanisms. Mitsis et al. [51] facilitate the reuse of their recipe and game ontology by focusing on reusability, extensibility and sustainability when designing their ontology. Semet et al. [68] consider reuse when drafting the requirements for their reward algorithm, as the algorithm should be generic enough to be used by other SGs on the InLife platform in the future. The Keep Attention serious game [70] has been designed such that the tasks that consider the training objects are independent of the game elements, thereby facilitating the creation of different games. Similarly, the PRehab game decouples the game mechanics from the game graphics, so once rules and game behaviours are implemented, they can easily be reused in other games with different graphics [62]. Ahmad et al. [71] propose a modular architecture for smart SGs that ensures high cohesion and low coupling. Developers can decide which contextual data needs to be used for analysis in the game, and other personalisation strategies can easily be applied or added. The TANGO:H platform [13] allows health professionals to design different types of rehabilitation exercises and games using the Kinect. Similarly, the rehabilitation system of Caggianese et al. [65] ensures reuse by introducing an adaptive game handler component that decouples the SGs from the rest of the system. This means that new SGs can easily be added if they conform to the common interfaces.
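One lightweight way to realise such a common interface is an abstract base class that every new serious game implements. The sketch below is a generic design illustration and is not the actual architecture of any of the cited platforms.

```python
from abc import ABC, abstractmethod

class SeriousGame(ABC):
    """Common interface that decouples games from the host platform."""

    @abstractmethod
    def start_session(self, patient_profile: dict) -> None: ...

    @abstractmethod
    def report_performance(self) -> dict: ...

class BalanceGame(SeriousGame):
    def start_session(self, patient_profile: dict) -> None:
        self.difficulty = patient_profile.get("skill_level", "easy")

    def report_performance(self) -> dict:
        return {"score": 0.0, "difficulty": self.difficulty}

# The platform's adaptive handler only ever sees the interface, so new
# games can be plugged in without touching the personalisation logic.
def run(game: SeriousGame, profile: dict) -> dict:
    game.start_session(profile)
    return game.report_performance()

print(run(BalanceGame(), {"skill_level": "medium"}))
```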
4. Future Research Directions
It is clear from the results of this literature review that the future of SGG is personalisation, if sustainable engagement, treatment adherence and increased positive health outcomes are the goals. However, due to the lack of standardization of the design process in the field of SGG, many open challenges remain in achieving this personalisation. Future research should therefore focus on the reusability of the intelligent components, such as the personalisation algorithms and expert knowledge models. By introducing reusability, the laborious design and development process of SGG can be significantly reduced. Domain experts and Healthcare Professionals (HCPs) can focus on formalising domain knowledge and gaining additional insights from the data acquired from personalised SGG. Moreover, this presents the prospect of rapid prototyping and more detailed evaluation, as different algorithms or models can be quickly interchanged in similar settings to investigate the influence of varying gaming mechanisms, personalisation and adaptation strategies, or expert knowledge.

Next, current research still fails to consider the dynamic nature of people. Users might change depending on their context or advancement in the treatment. Future research should elaborate on the profile determination of users that goes beyond static player type modelling. Moreover, to provide adequate personalisation tailored to specific individuals, it might be necessary to consider more user aspects than simply defining the player type, such as personal details, pathology-specific information or health data collected by wearables and healthcare professionals.

Finally, with the rising popularity and possibilities of generative AI, future research should look into the integration of this technology in the design of SGG. Generative AI holds the potential, together with reusability, to significantly reduce the time-intensity of developing such game-based solutions, as it can be introduced into multiple steps of the design process, such as the creation of the game narrative, graphics or code generation, and support for the intelligent algorithms.

5. Conclusions
Personalised SGG for health show promise to improve user engagement and treatment adherence; however, research on how personalisation is achieved remains limited. This review provides an overview of that research from the past decade. Out of the 31 identified interventions, 22 designed a serious game while the other nine focused on a gamified mHealth app. The largest application domains for personalised SGG are behaviour change and rehabilitation.
Ontologies and rule-based reasoning are popular approaches to integrating expert or domain knowledge in health systems, whereas the Hexad Player framework is the most used player type framework for personalising gamification. AI and machine learning techniques are promising methods for personalisation, as they can find patterns and extract information from data sets, which is often used in digital interventions for health. However, due to a lack of standardization and reusability in personalised SGG design, the rapid evaluation and testing of multiple algorithms in a similar setting remains a labour-intensive task. Only 10 out of the 31 articles reported some kind of reusability of their system. Moreover, the interpretation of reuse differed across all ten articles and was mostly limited to specific use cases or even specific games or platforms.

Future work should investigate whether personalisation methods or modelling techniques used in application domains other than healthcare can be used for specific domains in health. Furthermore, user profiles should be extended beyond static player type modelling. Moreover, future work should focus on simplifying the design process of personalised SGG and the possibilities of generative AI, thereby addressing the transferability of expert knowledge and the reusability of intelligent personalisation algorithms and games.

References
[1] R. Damaševičius, R. Maskeliūnas, T. Blažauskas, Serious Games and Gamification in Healthcare: A Meta-Review, Information 14 (2023) 105.
[2] T. Alahaivala, H. Oinas-Kukkonen, Understanding persuasion contexts in health gamification: A systematic analysis of gamified health behavior change support systems literature, International Journal of Medical Informatics 96 (2016) 62–70.
[3] A. Metwally, M. Chang, Y. Wang, A. M. F. Yousef, Does Gamifying Homework Influence Performance and Perceived Gameful Experience?, Sustainability 13 (2021).
[4] M. Fitzgerald, G. Ratcliffe, Serious Games, Gamification, and Serious Mental Illness: A Scoping Review, Psychiatric Services 71 (2020) 170–183.
[5] A. J. A. Seyderhelm, K. L. Blackmore, K. Nesbitt, Towards Cognitive Adaptive Serious Games: A Conceptual Framework, in: E. van der Spek, S. Göbel, E. Y.-L. Do, E. Clua, J. Baalsrud Hauge (Eds.), Entertainment Computing and Serious Games, Springer International Publishing, Cham, 2019, pp. 331–338. doi:10.1007/978-3-030-34644-7_27.
[6] M. Graafland, M. Schijven, How Serious Games Will Improve Healthcare, 2018, pp. 139–157.
[7] G. F. Tondello, A. Mora, A. Marczewski, L. E. Nacke, Empirical validation of the Gamification User Types Hexad scale in English and Spanish, International Journal of Human-Computer Studies 127 (2019) 95–111.
[8] T. Susi, M. Johannesson, P. Backlund, Serious Games - An Overview (2015).
[9] U. Ritterfeld, M. Cody, P. Vorderer, Serious Games: Mechanisms and Effects, Routledge, 2009.
[10] I. Afyouni, A. Murad, A. Einea, Adaptive Rehabilitation Bots in Serious Games, Sensors 20 (2020).
[11] I. Afyouni, A. M. Qamar, S. O. Hussain, F. Ur Rehman, B. Sadiq, A. Murad, Motion-Based Serious Games for Hand Assistive Rehabilitation, in: Proceedings of the 22nd International Conference on Intelligent User Interfaces Companion, IUI '17 Companion, Association for Computing Machinery, New York, NY, USA, 2017, pp. 133–136. doi:10.1145/3030024.3040977.
[12] J. Aguilar, J. Altamiranda, F. Diaz, J. G. De Mesa, A. Pinto, Adaptive plot system for serious emerging games based on the ant colony optimization algorithm, in: Proceedings - 2019 45th Latin American Computing Conference, CLEI 2019, Institute of Electrical and Electronics Engineers Inc., 2019. doi:10.1109/CLEI47609.2019.235104.
[13] C. S. González-González, P. A. Toledo-Delgado, V. Muñoz-Cruz, P. V. Torres-Carrión, Serious games for rehabilitation: Gestural interaction in personalized gamified exercises through a recommender system, Journal of Biomedical Informatics 97 (2019).
[14] S. Y. J. Lau, H. Agius, A framework and immersive serious game for mild cognitive impairment, Multimedia Tools and Applications 80 (2021) 31183–31237.
[15] R. J. N. Silva, Spatial Augmented Reality in Serious Games for Cognitive Rehabilitation of the Elderly, 2020. URL: https://estudogeral.sib.uc.pt/handle/10316/92257.
[16] C. Goumopoulos, I. Igoumenakis, Ontology-Driven Mental Healthcare Applications: A Case Study on Cognitive Rehabilitation with Serious Games, in: Communications in Computer and Information Science, volume 1387, Springer, Cham, 2021, pp. 114–140. doi:10.1007/978-3-030-70807-8_7.
[17] D. Martinho, J. Carneiro, J. M. Corchado, G. Marreiros, A systematic review of gamification techniques applied to elderly care, Artificial Intelligence Review 53 (2020) 4863–4901.
[18] J. F. Vermeir, M. J. White, D. Johnson, G. Crombez, D. M. van Ryckeghem, The effects of gamification on computerized cognitive training: Systematic review and meta-analysis, JMIR Serious Games 8 (2020) e18644.
[19] G. Haoran, E. Bazakidi, N. Zary, Serious Games in Health Professions Education: Review of Trends and Learning Efficacy, 2019. URL: http://www.thieme-connect.de/DOI/DOI?10.1055/s-0039-1677904.
[20] I. Gorbanev, S. Agudelo-Londoño, R. A. González, A. Cortes, A. Pomares, V. Delgadillo, F. J. Yepes, O. Muñoz, A systematic review of serious games in medical education: quality of evidence and pedagogical strategy, 2018. URL: https://www.tandfonline.com/doi/abs/10.1080/10872981.2018.1438718.
[21] O. Abraham, S. LeMay, S. Bittner, T. Thakur, H. Stafford, R. Brown, Investigating serious games that incorporate medication use for patients: Systematic literature review, 2020. URL: https://games.jmir.org/2020/2/e16096.
[22] N. Sharifzadeh, H. Kharrazi, E. Nazari, H. Tabesh, M. E. Khodabandeh, S. Heidari, M. Tara, Health education serious games targeting health care providers, patients, and public health users: Scoping review, 2020. URL: https://games.jmir.org/2020/1/e13459.
[23] F. Ricciardi, L. T. De Paolis, A Comprehensive Review of Serious Games in Health Professions, 2014. URL: https://dl.acm.org/doi/abs/10.1155/2014/787968.
[24] R. Hervas, D. Ruiz-Carrasco, J. Bravo, T. Mondejar, Gamification mechanics for behavioral change: A systematic review and proposed taxonomy, in: ACM International Conference Proceeding Series, Association for Computing Machinery, 2017, pp. 395–404. doi:10.1145/3154862.3154939.
[25] O. A. David, C. Costescu, R. Cardos, C. Mogoase, How Effective Are Serious Games for Promoting Mental Health and Health Behavioral Change in Children and Adolescents? A Systematic Review and Meta-Analysis, Child & Youth Care Forum 49 (2020) 817–838.
[26] L. Sardi, A. Idri, J. L. Fernández-Alemán, A systematic review of gamification in e-Health, 2017. URL: https://pubmed.ncbi.nlm.nih.gov/28536062/.
[27] T. H. Thomas, V. Sivakumar, D. Babichenko, V. L. Grieve, M. L. Klem, Mapping behavioral health serious game interventions for adults with chronic illness: Scoping review, JMIR Serious Games 8 (2020).
[28] J. Hamari, J. Koivisto, H. Sarsa, Does gamification work? - A literature review of empirical studies on gamification, in: Proceedings of the Annual Hawaii International Conference on System Sciences, IEEE Computer Society, 2014, pp. 3025–3034. doi:10.1109/HICSS.2014.377.
[29] M. King, T. Marsh, Z. Akcay, A Review of Indie Games for Serious Mental Health Game Design, in: Lecture Notes in Computer Science, volume 12945 LNCS, Springer Science and Business Media Deutschland GmbH, 2021, pp. 138–152. doi:10.1007/978-3-030-88272-3_11.
[30] K. Sipiyaruk, J. E. Gallagher, S. Hatzipanagos, P. A. Reynolds, A rapid review of serious games: From healthcare education to dental education, European Journal of Dental Education 22 (2018) 243–257.
[31] P. Sajjadi, A. Ewais, O. De Troyer, Individualization in serious games: A systematic review of the literature on the aspects of the players to adapt to, 2022.
[32] M. M. van Dooren, P. Siriaraya, V. Visch, R. Spijkerman, L. Bijkerk, Reflections on the design, implementation, and adoption of a gamified eHealth application in youth mental healthcare, Entertainment Computing 31 (2019) 100305.
[33] S. Verschueren, C. Buffel, G. V. Stichele, Developing theory-driven, evidence-based serious games for health: Framework based on research community insights, 2019. URL: https://games.jmir.org/2019/2/e11565.
[34] O. De Troyer, Towards effective serious games, in: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2017, pp. 284–289. doi:10.1109/VS-GAMES.2017.8056615.
[35] S. Blatsios, I. Refanidis, Towards an Adaption and Personalisation Solution Based on Multi Agent System Applied on Serious Games, IFIP Advances in Information and Communication Technology 559 (2019) 584–594.
[36] N. Lazzaro, Why We Play Games: Four Keys to More Emotion Without Story, in: Game Developers Conference (GDC), 2004, pp. 1–8.
[37] A. Streicher, J. D. Smeddinck, Personalized and Adaptive Serious Games, in: R. Dörner, S. Göbel, M. Kickmeier-Rust, M. Masuch, K. Zweig (Eds.), Entertainment Computing and Serious Games, volume 9970 of Lecture Notes in Computer Science, Springer International Publishing, Cham, 2016, pp. 332–377. doi:10.1007/978-3-319-46152-6_14.
[38] E. Sanchez, H. van Oostendorp, J. D. Fijnheer, E. Lavoué, Gamification, in: A. Tatnall (Ed.), Encyclopedia of Education and Information Technologies, Springer International Publishing, Cham, 2019, pp. 1–11. doi:10.1007/978-3-319-60013-0_38-1.
[39] T. Korhonen, R. Halonen, T. Ravelin, J. Kemppainen, K. Koskela, A Multidisciplinary Approach To Serious Game Development in the Health Sector, The 11th Mediterranean Conference on Information Systems (MCIS), Genoa, Italy (2017) 15.
[40] C. Y. Chow, R. R. Riantiningtyas, M. B. Kanstrup, M. Papavasileiou, G. D. Liem, A. Olsen, Can games change children's eating behaviour? A review of gamification and serious games, Food Quality and Preference 80 (2020) 103823.
[41] P. Wouters, C. van Nimwegen, H. van Oostendorp, E. D. van der Spek, A meta-analysis of the cognitive and motivational effects of serious games, Journal of Educational Psychology 105 (2013) 249–265.
[42] S. V. Gentry, A. Gauthier, B. L. Ehrstrom, D. Wortley, A. Lilienthal, L. T. Car, S. Dauwels-Okutsu, C. K. Nikolaou, N. Zary, J. Campbell, J. Car, Serious gaming and gamification education in health professions: systematic review, Journal of Medical Internet Research 21 (2019) e12994.
[43] J. Wiemeyer, A. Kliem, Serious games in prevention and rehabilitation - a new panacea for elderly people?, European Review of Aging and Physical Activity 9 (2012) 41–50.
[44] A. Mora, G. F. Tondello, L. Calvet, C. González, J. Arnedo-Moreno, L. E. Nacke, The quest for a better tailoring of gameful design: An analysis of player type preferences, ACM International Conference Proceeding Series (2019).
[45] P. Paraschos, D. Koulouriotis, Game Difficulty Adaptation and Experience Personalization: A Literature Review, International Journal of Human-Computer Interaction 39 (2022) 1–22.
[46] L. Rodrigues, A. M. Toda, P. T. Palomino, W. Oliveira, S. Isotani, Personalized gamification: A literature review of outcomes, experiments, and approaches, in: Eighth International Conference on Technological Ecosystems for Enhancing Multiculturality, TEEM'20, Association for Computing Machinery, New York, NY, USA, 2021, pp. 699–706. doi:10.1145/3434780.3436665.
[47] A. C. T. Klock, I. Gasparini, M. S. Pimenta, J. Hamari, Tailored gamification: A review of literature, International Journal of Human-Computer Studies 144 (2020) 102495.
[48] A. Khakpour, R. Colomo-Palacios, Convergence of Gamification and Machine Learning: A Systematic Literature Review, Technology, Knowledge and Learning 26 (2021) 597–636.
[49] R. Hare, Y. Tang, Player Modeling and Adaptation Methods Within Adaptive Serious Games, IEEE Transactions on Computational Social Systems 10 (2023) 1939–1950.
[50] F. Ghorbani, M. F. Taghavi, M. Delrobaei, Towards an intelligent assistive system based on augmented reality and serious games, Entertainment Computing 40 (2022).
[51] K. Mitsis, K. Zarkogianni, N. Bountouni, M. Athanasiou, K. S. Nikita, An ontology-based serious game design for the development of nutrition and food literacy skills, in: 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, New York, NY, USA, 2019, pp. 1405–1408. doi:10.1109/embc.2019.8856604.
[52] D. Brown, G. Cosma, G. Acampora, S. Seymour-Smith, A. Close, An Intelligent Serious Game for Supporting African and African Caribbean Men during Pre- and Post-Diagnosis of Prostate Cancer, in: 2014 International Conference on Interactive Technologies and Games (iTAG 2014), IEEE, New York, NY, USA, 2014, pp. 20–27. doi:10.1109/iTAG.2014.9.
[53] A. Delmas, B. Clement, P.-Y. Oudeyer, H. Sauzéon, Fostering Health Education With a Serious Game in Children With Asthma: Pilot Studies for Assessing Learning Efficacy and Automatized Learning Personalization, Frontiers in Education 3 (2018).
[54] K. Mitsis, E. Kalafatis, K. Zarkogianni, G. Mourkousis, K. S. Nikita, Procedural content generation based on a genetic algorithm in a serious game for obstructive sleep apnea, in: 2020 IEEE Conference on Games (CoG), IEEE, Osaka, Japan, 2020, pp. 694–697. doi:10.1109/CoG47356.2020.9231785.
[55] A. Fadhil, A. Villafiorita, An Adaptive Learning with Gamification & Conversational UIs: The Rise of CiboPoliBot, in: Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization (UMAP'17), Association for Computing Machinery, New York, NY, USA, 2017, pp. 408–412. doi:10.1145/3099023.3099112.
[56] A. Pardos, P. Gallos, A. Menychtas, C. Panagopoulos, I. Maglogiannis, Enriching Remote Monitoring and Care Platforms with Personalized Recommendations to Enhance Gamification and Coaching, in: M. Hägglund, S. Pelayo, A. Moen, M. Blusi, S. Bonacina, L. Nilsson, I. Madsen, A. Benis, L. Lindsköld, P. Gallos (Eds.), Caring is Sharing - Exploiting the Value in Data for Health and Innovation - Proceedings of MIE 2023, volume 302 of Studies in Health Technology and Informatics, IOS Press, Amsterdam, The Netherlands, 2023, pp. 332–336. doi:10.3233/SHTI230129.
[57] S. Carlier, D. Coppens, F. De Backere, F. De Turck, Investigating the Influence of Personalised Gamification on Mobile Survey User Experience, Sustainability 13 (2021).
[58] T. Alves, S. Gama, F. S. Melo, Flow Adaptation in Serious Games for Health, in: J. Vilaca, T. Grechenig, D. Duque, N. Rodrigues, N. Dias (Eds.), 2018 IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH '18), IEEE, New York, NY, USA, 2018.
[59] K. d. O. Andrade, G. Fernandes, G. A. P. Caurin, A. A. G. Siqueira, R. A. F. Romero, R. d. L. Pereira, Dynamic Player Modelling in Serious Games applied to Rehabilitation Robotics, in: F. Osorio, R. Romero, V. Grassi, D. Wolf, K. Branco, M. Becker (Eds.), 2014 2nd Brazilian Robotics Symposium (SBR) / 11th Latin American Robotics Symposium (LARS) / 6th Robocontrol Workshop on Applied Robotics and Automation, IEEE, New York, NY, USA, 2014, pp. 211–216. doi:10.1109/SBR.LARS.Robocontrol.2014.41.
[60] S. S. Esfahlani, S. Cirstea, A. Sanaei, G. Wilson, An adaptive self-organizing fuzzy logic controller in a serious game for motor impairment rehabilitation, in: 2017 IEEE 26th International Symposium on Industrial Electronics (ISIE), IEEE, New York, NY, USA, 2017, pp. 1311–1318.
[61] S. S. Esfahlani, T. Thompson, A. D. Parsa, I. Brown, S. Cirstea, ReHabgame: A non-immersive virtual reality rehabilitation system with applications in neuroscience, Heliyon 4 (2018) e00526.
[62] N. Hocine, A. Gouaich, S. A. Cerri, D. Mottet, J. Froger, I. Laffont, Adaptation in serious games for upper-limb rehabilitation: an approach to improve training outcomes, User Modeling and User-Adapted Interaction 25 (2015) 65–98.
[63] J. F. Pinto, H. R. Carvalho, G. R. R. Chambel, J. Ramiro, A. Goncalves, Adaptive gameplay and difficulty adjustment in a gamified upper-limb rehabilitation, in: J. Vilaca, T. Grechenig, D. Duque, N. Rodrigues, N. Dias (Eds.), 2018 IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH '18), IEEE, New York, NY, USA, 2018.
[64] T. Alves, C. Martinho, R. Prada, Towards Incorporating Personality in Serious Games for Health, in: 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), IEEE, New York, NY, USA, 2019, pp. 230–233. doi:10.1109/vs-games.2019.8864521.
[65] G. Caggianese, S. Cuomo, M. Esposito, M. Franceschini, L. Gallo, F. Infarinato, A. Minutolo, F. Piccialli, P. Romano, Serious Games and In-Cloud Data Analytics for the Virtualization and Personalization of Rehabilitation Treatments, IEEE Transactions on Industrial Informatics 15 (2019) 517–526.
[66] S.-J. Eun, E. J. Kim, J. Kim, Artificial intelligence-based personalized serious game for enhancing the physical and cognitive abilities of the elderly, Future Generation Computer Systems 141 (2023) 713–722.
[67] S. Sadeghi Esfahlani, J. Butt, H. Shirvani, Fusion of Artificial Intelligence in Neuro-Rehabilitation Video Games, IEEE Access 7 (2019) 102617–102627.
[68] Y. Semet, B. Marcon, K. Demestichas, N. Koutsouris, A. Ascolese, Artificial Ant Colonies for Adaptive Rewards in Serious Games, in: H. Fellermann, J. Bacardit, A. Goñi-Moreno, R. Füchslin (Eds.), ALIFE 2019: The 2019 Conference on Artificial Life, MIT Press, Cambridge, MA, USA, 2019, pp. 533–540.
[69] N. Hocine, Personalized Serious Games for Self-regulated Attention Training, in: Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization (ACM UMAP '19 Adjunct), Association for Computing Machinery, New York, NY, USA, 2019, pp. 251–255. doi:10.1145/3314183.3323458.
[70] N. Hocine, M. Ameur, W. Ziani, Keep Attention: A Personalized Serious Game for Attention Training (????).
[71] S. Ahmad, F. Mehmood, F. Khan, T. K. Whangbo, Architecting intelligent smart serious games for healthcare applications: A technical perspective, Sensors 22 (2022).
[72] L. W. de Oliveira, S. T. de Carvalho, A Gamification-based Framework for mHealth Developers in the Context of Self-Care, in: A. De Herrera, A. Gonzalez, K. Santosh, Z. Temesgen, B. Kane, P. Soda (Eds.), 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS 2020), IEEE, New York, NY, USA, 2020, pp. 138–141. doi:10.1109/CBMS49503.2020.00033.
[73] D. Martinho, V. Crista, K. Matsui, G. Marreiros, J. M. Corchado, Effects of a gamified agent-based system for personalized elderly care: pilot usability study, JMIR Serious Games 11 (2023).
[74] R. Nuijten, P. Van Gorp, A. Khanshan, P. Le Blanc, P. van den Berg, A. Kemperman, M. Simons, Evaluating the impact of adaptive personalized goal setting on engagement levels of government staff with a gamified mHealth tool: results from a 2-month randomized controlled trial, JMIR mHealth and uHealth 10 (2022).
[75] O. Silvia, C. Migliorelli, L. Sistach-Bosch, M. Gomez-Martinez, N. Boque, A tailored and engaging mHealth gamified framework for nutritional behaviour change, Nutrients 15 (2023).
[76] H. Schäfer, J. Bachner, S. Pretscher, G. Groh, Y. Demetriou, Study on motivating physical activity in children with personalized gamified feedback, in: UMAP'18: Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization, Association for Computing Machinery, New York, NY, USA, 2018, pp. 221–226. doi:10.1145/3213586.3225227.
[77] Z. Zhao, A. Arya, R. Orji, G. Chan, Effects of a Personalized Fitness Recommender System Using Gamification and Continuous Player Modeling: System Design and Long-Term Validation Study, JMIR Serious Games 8 (2020).
[78] G. Chan, A. Alslaity, J. K. Reen, S. Anukem, R. Orji, GardenQuest: Using Hexad Player Types to Design a Step-Based Multiplayer Persuasive Game for Motivating Physical Activity, in: A. Meschtscherjakov, C. Midden, J. Ham (Eds.), Persuasive Technology, volume 13832, Springer Nature Switzerland, Cham, 2023, pp. 337–356. doi:10.1007/978-3-031-30933-5_22.
[79] K. Mitsis, K. Zarkogianni, K. Dalakleidi, G. Mourkousis, K. S. Nikita, Evaluation of a Serious Game Promoting Nutrition and Food Literacy: Experiment Design and Preliminary Results, in: 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), IEEE, Athens, Greece, 2019, pp. 497–502. doi:10.1109/BIBE.2019.00096.
[80] G. F. Tondello, R. R. Wehbe, L. Diamond, M. Busch, A. Marczewski, L. E. Nacke, The Gamification User Types Hexad Scale, in: Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, CHI PLAY '16, Association for Computing Machinery, New York, NY, USA, 2016, pp. 229–243. doi:10.1145/2967934.2968082.
[81] A. Marczewski, Even Ninja Monkeys Like to Play: Gamification, Game Thinking and Motivational Design, CreateSpace Independent Publishing Platform, 2015.
[82] M. Ashburner, C. A. Ball, J. A. Blake, D. Botstein, H. Butler, J. M. Cherry, A. P. Davis, K. Dolinski, S. S. Dwight, J. T. Eppig, M. A. Harris, D. P. Hill, L. Issel-Tarver, A. Kasarskis, S. Lewis, J. C. Matese, J. E. Richardson, M. Ringwald, G. M. Rubin, G. Sherlock, Gene Ontology: tool for the unification of biology, Nature Genetics 25 (2000) 25–29.
[83] C. Dessimoz, N. Škunca (Eds.), The Gene Ontology Handbook, volume 1446 of Methods in Molecular Biology, Springer, New York, NY, 2017. URL: http://link.springer.com/10.1007/978-1-4939-3743-1.
[84] L.-X. Wang, A Course in Fuzzy Systems and Control, Prentice Hall PTR, 1997.
[85] L. Zadeh, The role of fuzzy logic in modeling, identification and control, Modeling, Identification and Control 15 (1994).
[86] B. J. Borbély, P. Szolgay, Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations, Biomedical Engineering Online 16 (2017) 21.
[87] D. Lura, The Creation of a Robotics Based Human Upper Body Model for Predictive Simulation of Prostheses Performance, USF Tampa Graduate Theses and Dissertations (2012).
[88] E. Papaleo, L. Zollo, S. Sterzi, E. Guglielmelli, An inverse kinematics algorithm for upper-limb joint reconstruction during robot-aided motor therapy, in: 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2012, pp. 1983–1988. doi:10.1109/BioRob.2012.6290861.
[89] R. Kruse, S. Mostaghim, C. Borgelt, C. Braune, M. Steinbrecher, Multi-layer Perceptrons, in: R. Kruse, S. Mostaghim, C. Borgelt, C. Braune, M. Steinbrecher (Eds.), Computational Intelligence: A Methodological Introduction, Springer International Publishing, Cham, 2022, pp. 53–124. doi:10.1007/978-3-030-42227-1_5.
[90] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (2015) 436–444.
[91] Y. Li, Deep Reinforcement Learning, 2018. doi:10.48550/arXiv.1810.06339.
acm.org/doi/10.1145/2967934.2968082. doi:10.1145/2967934.2968082. [81] A. Marczewski, Even Ninja Monkeys Like to Play: Gamification, Game Thinking and Motivational Design, CreateSpace Independent Publishing Platform, 2015. [82] M. Ashburner, C. A. Ball, J. A. Blake, D. Botstein, H. Butler, J. M. Cherry, A. P. Davis, K. Dolinski, S. S. Dwight, J. T. Eppig, M. A. Harris, D. P. Hill, L. Issel-Tarver, A. Kasarskis, S. Lewis, J. C. Matese, J. E. Richardson, M. Ringwald, G. M. Rubin, G. Sherlock, Gene Ontology: tool for the unification of biology, Nature Genetics 25 (2000) 25–29. [83] C. Dessimoz, N. Škunca (Eds.), The Gene Ontology Hand- book, volume 1446 of Methods in Molecular Biology, Springer, New York, NY, 2017. URL: http://link.springer.com/10.1007/ 978-1-4939-3743-1. [84] L.-X. Wang, A Course in Fuzzy Systems and Control, Prentice Hall PTR, 1997. [85] L. Zadeh, The role of fuzzy logic in modeling, identification and control, Modeling, Identification and Control 15 (1994). [86] B. J. Borbély, P. Szolgay, Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations, Biomedical Engineering Online 16 (2017) 21. [87] D. Lura, The Creation of a Robotics Based Human Upper Body Model for Predictive Simulation of Prostheses Performance, USF Tampa Graduate Theses and Dissertations (2012). [88] E. Papaleo, L. Zollo, S. Sterzi, E. Guglielmelli, An inverse kine- matics algorithm for upper-limb joint reconstruction during robot- aided motor therapy, in: 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2012, pp. 1983–1988. URL: https://ieeexplore.ieee.org/document/ Preprint submitted to Elsevier Page 15 of 15
ai_researcher
4
DrugAgent_Automating_AI-aided_Drug_Discovery_Programming_through_LLM_Multi-Agent_Collaboration.pdf
DrugAgent: Automating AI-aided Drug Discovery Programming through LLM Multi-Agent Collaboration

Sizhe Liu1, Yizhou Lu1, Siyu Chen1, Xiyang Hu2, Jieyu Zhao1, Tianfan Fu3, Yue Zhao1*
1University of Southern California  2Carnegie Mellon University  3Rensselaer Polytechnic Institute
*Corresponding author
Copyright © 2025. Preliminary version.

arXiv:2411.15692v1 [cs.LG] 24 Nov 2024

Abstract

Recent advancements in Large Language Models (LLMs) have opened new avenues for accelerating drug discovery processes. Despite their potential, several critical challenges remain unsolved, particularly in translating theoretical ideas into practical applications within the highly specialized field of pharmaceutical research, limiting practitioners from leveraging the latest AI developments in drug discovery. To this end, we introduce DrugAgent, a multi-agent framework aimed at automating machine learning (ML) programming in drug discovery. DrugAgent incorporates domain expertise by identifying specific requirements and building domain-specific tools, while systematically exploring different ideas to find effective solutions. A preliminary case study demonstrates DrugAgent's potential to overcome key limitations LLMs face in drug discovery, moving toward AI-driven innovation. For example, DrugAgent is able to complete the ML programming pipeline end-to-end, from data acquisition to performance evaluation for the ADMET prediction task, and finally select the best model, where the random forest model achieves an F1 score of 0.92 when predicting absorption using the PAMPA dataset.

1 Introduction

Artificial intelligence (AI) is driving significant advancements in drug discovery (Huang et al. 2022). Due to the high cost and time required for experimentally assessing drug properties, researchers are increasingly looking for ways to accelerate all stages of drug development (Pushpakom et al. 2019). Numerous AI-ready datasets and benchmarks are now available for critical tasks in the drug discovery process, such as ADMET prediction, drug-target interaction, and high-throughput screening (Huang et al. 2021; Chen et al. 2024; Wang et al. 2024c). Recent advances in deep learning have shown particular promise in accelerating lead optimization and predicting drug-target interactions (Huang et al. 2020), potentially reducing the time and resources needed for traditional experimental methods.

Performing machine learning (ML) experiments in drug discovery requires expertise in biology, chemistry, pharmaceutical science, and computer science, which creates a significant barrier to entry. Large language models (LLMs), with their ability to reason through complex tasks, present an exciting opportunity to automate ML programming in the drug discovery process. General frameworks, e.g., MLAgentBench (Huang et al. 2024a) and AI-Scientist (Lu et al. 2024a), offer promising solutions for end-to-end ML programming. Specialized agents with domain-specific tools can further enhance the ability to handle complex tasks in chemistry or biology (Boiko et al. 2023; M. Bran et al. 2024; Inaba et al. 2023). Despite these, significant challenges remain to fully automate drug discovery research with LLMs.

Challenge 1. General-purpose LLMs often lack the specialized domain knowledge needed to accurately implement ML experiments in drug discovery.
For instance, incorrect API choices for domain-specific libraries or misunderstandings in raw biological data preprocessing steps can easily cause problems that are difficult to debug, especially given the complex codebase typically involved in drug discovery tasks. While frameworks like ChemCrow (M. Bran et al. 2024) and MultiTool-CoT (Chain of Thought) (Inaba et al. 2023) provide tools for chemistry tasks like calculating molecular weight and predicting reactions, they do not fully solve this problem. These tools are often too simple for ML programming, indicating the need for a wider set of tools, from data collection to model evaluation.

Challenge 2. In many ML tasks, LLMs are required to generate ideas rather than simply implementing a predefined plan. However, LLM-generated ideas often lack grounding in practical context (Si, Yang, and Hashimoto 2024), especially in drug discovery settings. Due to hallucination, an LLM may confidently propose an idea, yet lack the domain knowledge necessary for implementation (Huang et al. 2023). Existing strategies for exploring viable ideas, such as reasoning and acting (Huang et al. 2024a), generating diverse ideas (Lu et al. 2024a; Wang et al. 2024a), or using tree search (WecoAI 2024), are generally optimized for standard ML tasks and may be inefficient in scenarios where many proposed ideas cannot be implemented. Thus, a strategy is needed that builds on these methods while better aligning the agent's idea exploration with its practical knowledge.

Our Solutions. To address these challenges, we propose DrugAgent, a multi-agent framework to enhance ML programming in drug discovery tasks. First, we integrate workflows that identify steps requiring domain knowledge, allowing for the development of specialized tools to handle these tasks before proceeding with coding. Additionally, we introduce a dynamic idea space management approach, where diverse ideas are generated at the early stage and later updated based on experimental observations, resulting in more efficient exploration. Finally, we provide an enhanced set of tools in the form of comprehensive library documentation that supports essential AI-driven drug discovery tasks, including biological data retrieval, molecular fingerprinting, AI model development, and performance evaluation. These resources are carefully selected to meet the complex requirements of real-world programming processes.

Main Contributions. Our main contributions include:
• Significance. This paper focuses on automating AI-based drug discovery tasks, which is a life-critical and significant problem. To the best of our knowledge, this is the first attempt to automate AI programming in the context of drug discovery. Our work allows pharmaceutical scientists to use AI without a coding background and facilitates AI-based drug discovery research.
• Method. We design an automated LLM-based multi-agent system for ML programming tailored to drug discovery, which also enables automatic code running and results collection without human intervention.
• Results. DrugAgent exhibits initial success in automating a couple of representative AI-based drug discovery tasks. For example, DrugAgent can build a random forest model automatically for drug molecular absorption prediction, achieving an F1 score of 0.920 on the PAMPA dataset.
2 Related Works

2.1 LLM Agents

An LLM agent is a system that uses large language models to interact with users or other systems, perform tasks, and make decisions autonomously. Empowered by LLMs, LLM agents have the capability to perform multi-step reasoning, planning, and action execution beyond static text generation (Wang et al. 2024b). Previous works have equipped LLM agents with modules to dynamically interact with external tools, retrieve information, and adapt based on real-time feedback (Schick et al. 2023; Yoon, Kim, and Oh 2024; Qin et al. 2023; Ravuru, Sakhinana, and Runkana 2024; Lála et al. 2023). This allows them to solve complex, evolving tasks such as code writing, long-term reasoning, and decision-making in various contexts (Guo et al. 2024; Jiang et al. 2024). In this work, we tailor an LLM multi-agent framework to drug discovery tasks.

2.2 LLM for ML Programming

Recent work has focused on accelerating traditionally manual research processes by automating ML programming. AIDE acts as a data science agent, exploring a vast solution space and iteratively refining its approach to reach optimal solutions (WecoAI 2024). AutoKaggle introduces a specialized multi-agent framework for Kaggle data science competitions (Li et al. 2024b). AI-Scientist enables LLMs to conduct research autonomously, from idea generation to paper drafting, focusing on ML-related topics (Lu et al. 2024a). In parallel, benchmarks have been developed that provide a suite of 13 tasks to evaluate LLMs' capabilities in conducting ML programming (Huang et al. 2024a). However, existing works cannot handle domain-specific ML tasks requiring complex domain knowledge, e.g., AI-aided drug discovery. To address this, we design workflows to insert domain knowledge and call domain-specific tools automatically.

2.3 LLM for Biomedical Discovery

Many studies have highlighted the applications of LLMs in biomedical discovery, particularly when integrated with domain-specific tools. For instance, ChemCrow demonstrates the potential of LLM agents in organic synthesis, drug discovery, and materials design (M. Bran et al. 2024). Similarly, MMedAgent is a multimodal medical agent designed to handle complex language and multimodal tasks, demonstrating LLM versatility in medical applications (Li et al. 2024a). The multi-agent approach is exemplified by ClinicalAgent (Yue et al. 2024), which introduces a framework for clinical trial outcome prediction by decomposing it into subproblems, allowing individual agents to collaborate and generate a comprehensive outcome. However, existing ML programming agents may lack the domain-specific knowledge needed for biomedical tasks, while biomedical agents are not typically equipped with ML-specific expertise. To bridge this gap, we introduce DrugAgent, a multi-agent LLM system that integrates ML programming capabilities with biomedical knowledge, targeting the unique requirements of ML tasks in drug discovery.

3 Methodology

We introduce DrugAgent, an automated and innovative LLM multi-agent framework designed to streamline AI-aided drug discovery tasks. As illustrated in Figure 1, DrugAgent integrates two key components: the LLM Instructor (§3.2), which identifies domain-specific knowledge requirements and prepares necessary tools, and the LLM Planner (§3.3), which manages and refines the exploration of ideas to optimize task performance. Before detailing these components and their roles, we define the problem in §3.1.
3.1 Problem Formulation

We address the challenge of automating ML programming tasks in the domain of drug discovery. These tasks involve the integration of natural language instructions with computational tools to produce accurate and efficient solutions. Following Huang et al. (2024a), an ML programming task is defined by the following components:
• Task Description: A natural language specification outlining the objectives and constraints of the task.
• Starter Files: A set of initial resources, such as datasets or code templates, to support task execution.
• Evaluator: A performance metric function to assess the quality of the task output.

An agent must interpret the task description, utilize the starter files, and execute a sequence of actions to generate a solution. These actions include reading and writing files, preprocessing data, implementing ML models, and executing Python programs. The primary challenge lies in aligning abstract task descriptions with their practical implementation, particularly when domain-specific knowledge is required. The goal is to develop an autonomous system capable of efficiently handling these tasks while minimizing errors and improving success rates.

Figure 1: Framework overview of DrugAgent. Given an AI-based drug discovery task described in natural language (i.e., the user's input, e.g., design an AI model to predict Absorption (one of the ADMET properties) using the PAMPA dataset (Siramshetty, Shah et al. 2021)), the LLM Planner first produces a couple of potential ideas (e.g., GCN (graph convolutional network) (Kipf and Welling 2016), random forest, or a pretrained model (such as ChemBERTa (Chithrananda, Grand, and Ramsundar 2020))). Then, for each idea, the LLM Instructor transfers the idea into code based on domain knowledge (e.g., dataset acquisition and molecular fingerprinting). Then, the Coder debugs and implements the code and evaluates the performance. Finally, all the results are collected and the best idea is reported (e.g., random forest achieves the best performance in predicting absorption).
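To make this formulation concrete, the following Python sketch shows one way the (task description, starter files, evaluator) triple could be represented and exercised; the MLTask class and run_agent function are illustrative assumptions for exposition, not part of DrugAgent's actual codebase.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class MLTask:
        # Illustrative container for an ML programming task (hypothetical names).
        description: str                   # natural language objectives and constraints
        starter_files: Dict[str, str]      # filename -> content (datasets, code templates)
        evaluator: Callable[[str], float]  # scores a produced solution (e.g., F1 on a test split)

    def run_agent(task: MLTask, agent: Callable[[str, Dict[str, str]], str]) -> float:
        # The agent interprets the description and starter files and returns a solution,
        # which the task's own evaluator then scores.
        solution = agent(task.description, task.starter_files)
        return task.evaluator(solution)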
3.2 LLM Instructor: Domain-Specific Knowledge Identification and Tool Preparation

Motivation. Drug discovery is a highly specialized and complex domain that demands precise integration of ML and domain expertise. Although using LLMs offers significant potential to automate and accelerate ML programming in this field, we observe that LLMs often fail to bridge the gap between general-purpose reasoning and the specific needs of drug discovery tasks. This failure arises from hallucination (Huang et al. 2023, 2024b), where LLMs generate incorrect or unrealistic outputs due to a lack of understanding of domain-specific requirements. For example, inappropriate preprocessing of SMILES strings or incorrect API usage for molecular operations can lead to costly debugging and failed experiments. These limitations highlight the urgent need for a mechanism to explicitly identify and address domain-specific knowledge requirements before conducting experiments. To address this, we introduce the LLM Instructor in DrugAgent, which follows a structured process:
1. Decomposing the Problem: Break the problem into smaller, actionable substeps for systematic resolution (Wu et al. 2024; Huang et al. 2024a).
2. Identifying Knowledge Needs: Analyze substeps to determine if domain-specific expertise or tools are required, using expert-curated prompts.
3. Constructing Tools: Gather or create tools by identifying relevant APIs and validating them with unit tests.
4. Reusing Tools: Add validated tools to a reusable toolbox to improve efficiency and reduce errors in future tasks.

Each step is critical in enabling the LLM Instructor to bridge the gap between general-purpose reasoning and domain-specific requirements. The following sections provide more details on how domain-specific knowledge is identified, tools are constructed, and failures are handled to ensure the effective execution of ML tasks in drug discovery.

Domain-specific Knowledge. Domain-specific knowledge refers to specialized information, concepts, and expertise related to a particular field or subject area, such as drug discovery in our context. In ML tasks for drug discovery, the absence or incompleteness of domain-specific knowledge often leads to coding errors. We observed that LLMs often fail to recognize the need for domain-specific knowledge in certain tasks due to hallucination, resulting in the incorrect use of necessary tools. Therefore, an explicit reasoning process is essential. Gathering all relevant domain-specific knowledge and tools before starting the experiment is crucial to minimize errors and ensure the experiment aligns with the field's complexities.

Instructor. The Instructor agent is responsible for identifying substeps of the problem that require domain-specific knowledge. The process begins by decomposing the overall plan into an actionable sequence of simpler steps, an approach that has proven effective in handling complex tasks, such as ML programming (Wu et al. 2024; Huang et al. 2024a). Next, the Instructor analyzes which of these steps require domain expertise. To improve the accuracy of this identification, we utilize few-shot prompts curated by experts in drug discovery. While the approach does not guarantee the correct identification of all substeps, our analysis shows that it performs successfully in the majority of cases.

Domain Tool Construction. For each identified domain-specific need, we proceed to gather the appropriate tools. In coding tasks, creating a fixed list of tools, as seen in previous biomedical agents (Roohani et al. 2024; M. Bran et al. 2024), is challenging due to the large number of APIs within libraries. As a result, we search through documentation to identify relevant APIs and create tools, which may involve a single API or multiple APIs combined into a helper function. However, relying solely on documentation can introduce errors, especially if the documentation is outdated or lacks sufficient detail. Furthermore, machine learning problems frequently necessitate helper functions that combine several APIs in intricate ways, increasing the chance of errors. To address this, the Coder first designs unit tests to verify the correctness of the constructed tools, thus minimizing the risk of error propagation across subsequent stages. The Coder then accesses relevant library documentation to finalize the tool construction.

Tool Reusability and Failure Handling. For tools that pass unit tests, we add them to a toolbox for future use. Previous studies have shown the benefits of building a growing toolbox (Wang, Fried, and Neubig 2024). In our case, since many tasks rely on shared domain knowledge, like data acquisition, creating reusable functions can help lower costs and reduce errors. In drug discovery tasks, agents often face challenges when trying to build domain-specific tools, even with documentation support. When repeated attempts at debugging fail to resolve issues shown in unit tests, we record this outcome and report it to the Planner Agent. This process will be explained further in the next section.
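As a hedged illustration of this unit-test-gated tool construction, a toolbox could be sketched as follows; the Toolbox class, its methods, and the debugging placeholder are assumptions made for exposition rather than DrugAgent's real interfaces.

    from typing import Callable, Dict, List

    class Toolbox:
        # Illustrative registry of validated, reusable domain tools (hypothetical API).
        def __init__(self) -> None:
            self.tools: Dict[str, Callable] = {}

        def register(self, name: str, tool: Callable,
                     unit_tests: List[Callable[[Callable], bool]],
                     max_debug_rounds: int = 3) -> bool:
            # Admit a tool only if it passes every unit test; otherwise attempt
            # a bounded number of (LLM-driven) debugging rounds before giving up.
            for _ in range(max_debug_rounds):
                if all(test(tool) for test in unit_tests):
                    self.tools[name] = tool
                    return True
                tool = self._debug(tool)
            return False  # failure is recorded and reported to the Planner

        def _debug(self, tool: Callable) -> Callable:
            # Stand-in for LLM self-debugging: a real system would revise the code here.
            return tool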
3.3 LLM Planner: Idea Space Management

Motivation. Drug discovery tasks are inherently open-ended, with no single deterministic solution. Approaches often vary widely based on available data, domain requirements, and task constraints. While LLMs can generate multiple ideas, they often struggle to distinguish between feasible and infeasible solutions due to hallucinations or insufficient domain knowledge (Huang et al. 2024b). This inefficiency can lead to wasted computational resources and suboptimal performance. To address this, the LLM Planner in DrugAgent is designed to systematically manage and refine the idea space, ensuring actionable and high-performing solutions.

Idea Space. The "Idea Space" encompasses the broad range of potential approaches or solutions for a given ML task, recognizing that such tasks are inherently open-ended and lack a single, deterministic solution. Let M denote the set of all possible ideas for a task, and let N ⊆ M represent the subset of ideas that are feasible to implement based on the knowledge available to the LLM. The primary objective is to identify an idea I ∈ N that maximizes the performance metric effectively and efficiently.

Justification for the Planner. While LLMs can generate diverse ideas, they often struggle to align these suggestions with the implementable subset N, especially in domain-specific tasks like drug discovery. This misalignment is largely due to the hallucination tendencies of LLMs, where unrealistic or infeasible ideas are proposed without regard for implementation constraints (Huang et al. 2024b). To address this, we introduce a mechanism to iteratively refine the idea space using feedback derived from programming observations. By tracking successes and failures in tasks such as tool-building or data preprocessing, the Planner can learn from past attempts to improve its search process and focus on actionable solutions.

Planner. The Planner operates in two key phases: idea generation and idea refinement. During the Idea Initialization phase, the Planner generates K candidate ideas based on the problem statement. In the refinement phase, the Planner uses observations, such as tool failures or experimental outcomes, to adjust the idea set. This process involves three core operations: (1) deleting infeasible ideas, (2) modifying existing ideas to address identified limitations, or (3) introducing new ideas based on accumulated knowledge.

As shown in Figure 1, when the Planner encounters a failure in building a tool for domain-specific knowledge, this failure is logged and the associated idea is marked as infeasible. The Planner then halts further exploration of this idea and removes other ideas that depend on the same missing knowledge. This iterative process not only redirects efforts toward viable solutions but also informs future idea generation, reducing the likelihood of repeating errors and enhancing the overall efficiency of the system.
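A minimal sketch of these refinement operations, under assumed data structures (the Idea type and the shape of the failure feedback are hypothetical), might look like this:

    from dataclasses import dataclass
    from typing import List, Optional, Set

    @dataclass
    class Idea:
        name: str
        required_knowledge: Set[str]  # e.g., {"dataset_download", "molecular_graph"}

    def refine_idea_space(ideas: List[Idea], failed_knowledge: Set[str],
                          propose_new: Optional[Idea] = None) -> List[Idea]:
        # Delete ideas that depend on knowledge whose tool construction failed,
        # keep the rest, and optionally add a newly proposed idea.
        viable = [i for i in ideas if not (i.required_knowledge & failed_knowledge)]
        if propose_new is not None:
            viable.append(propose_new)
        return viable

    # Example: molecular graph construction failed, so GNN-based ideas are pruned.
    ideas = [Idea("GNN", {"dataset_download", "molecular_graph"}),
             Idea("random_forest", {"dataset_download", "fingerprinting"})]
    print([i.name for i in refine_idea_space(ideas, {"molecular_graph"})])  # ['random_forest']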
4 Experiment

4.1 AI-driven Drug Discovery Tasks

We propose three representative AI-solvable drug discovery tasks to validate the effectiveness of DrugAgent, as shown in Table 1. These tasks are well-established benchmarks that cover the three essential task categories in the Therapeutics Data Commons (TDC) Benchmark (Huang et al. 2021): single-instance prediction, multi-instance prediction, and generation tasks.

1. ADMET Prediction. ADMET (Absorption, Distribution, Metabolism, Excretion, and Toxicity) prediction exemplifies a single-instance prediction task, where the goal is to predict pharmacokinetic properties from a drug's structure. These properties are critical to a drug's efficacy, safety, and clinical success, making early ADMET assessment vital for minimizing late-stage failure risks (Niu et al. 2024; Lu et al. 2024b; Chen et al. 2021; Chen, Hao, and Van Rechem 2024).

Table 1: Task overview: ADMET, DTI, and molecule optimization.
Task | Type | Input | Impact | Data Example
ADMET Prediction | single-instance prediction | SMILES string | Prevents clinical trial failures through early and accurate ADMET profiling | Caco-2 (Wang et al. 2016)
DTI Prediction | multi-instance prediction | SMILES string and protein amino acid sequence | Reduces high-throughput screening needs and narrows down the search space | DAVIS (Davis et al. 2011)
Molecule Optimization | generation | SMILES string | Enables efficient design of molecules with desirable pharmaceutical properties | ZINC (Sterling and Irwin 2015)

In this paper, we restrict our attention to small-molecule drugs, which make up more than 90% of all approved drugs. Small-molecule drugs can be represented as SMILES strings; a SMILES string is a line notation that describes chemical compounds (e.g., drug molecules) with short ASCII strings.

Figure 2: Comparison of ReAct (a) and DrugAgent (b) on an ADMET prediction task using the PAMPA dataset. ReAct, a general-purpose framework, fails due to hallucinated API calls and an inability to self-debug, requiring human intervention to proceed. It focuses solely on fine-tuning a pretrained language model, which is suboptimal for the small dataset size. In contrast, DrugAgent systematically explores multiple approaches, including random forests, graph neural networks, and pretrained language models. DrugAgent identifies domain-specific requirements, constructs necessary tools, and prunes ineffective ideas, such as molecular graph construction. This structured workflow allows DrugAgent to deliver successful results autonomously, achieving strong performance. See more analysis in §4.3 and the Appendix for the sample code.
[Figure 2 panels: (a) a ReAct trace that makes wrong API calls (dataset loading, then ChemBERTa fine-tuning), fails to self-debug after several attempts, and requires human intervention; (b) a DrugAgent trace that generates multiple ideas (GNN, pre-trained model, random forest), downloads the dataset successfully, prunes the GNN idea after molecular graph construction fails, succeeds with the pre-trained model and with molecular fingerprinting for the random forest, and returns the best result from the random forest at the step limit.]
4.3 Case Study: Comparing DrugAgent with ReAct on ADMET Prediction Tasks To demonstrate the effectiveness of DrugAgent, we con- ducted a case study on an ADMET prediction task and com- pared its performance to ReAct, as illustrated in Fig. 2. This comparison highlights the challenges LLMs face with domain-specific tasks and the advantages of DrugAgent in overcoming these limitations. ReAct (Yao et al. 2023), a general-purpose framework, struggles with domain-specific knowledge integration. For instance, it begins by proposing to fine-tune a pretrained language model but fails at critical steps, such as down- loading the appropriate dataset or selecting the correct API, requiring human intervention to proceed. Moreover, ReAct focuses exclusively on refining a single approach, which is suboptimal for this task given the small dataset size. These limitations illustrate the gap between general-purpose LLM reasoning and the specialized needs of drug discovery tasks. In contrast, DrugAgent adopts a systematic and multi- faceted approach. It explores diverse methods, including random forests, graph neural networks (GNNs), and pre- trained language models, while identifying steps that require domain knowledge. For example, DrugAgent successfully automates tasks such as dataset downloading, molecular fin- gerprinting, and ChemBERTa (Chithrananda, Grand, and Ramsundar 2020) tokenization/model execution. Addition- ally, DrugAgent employs idea pruning to remove approaches that fail validation, such as molecular graph construction for GNN input, saving both time and computational resources. From a performance perspective, DrugAgent delivers ro- bust results across multiple models. The random forest ap- proach achieves a 0.920 F1 score and 0.817 ROC-AUC, while ChemBERTa attains a 0.916 F1 score and 0.776 ROC- AUC. These results underscore DrugAgent’s ability to not only automate domain-specific ML tasks but also select and refine the most effective approaches for the problem at hand. 5 Conclusion In this paper, we introduced DrugAgent, a multi-agent framework that represents a significant step forward in lever- aging large language models for automating critical as- pects of drug discovery. DrugAgent addresses key chal- lenges inherent in this domain, including the inability of general-purpose LLMs to handle domain-specific require- ments, inefficient exploration of idea spaces, and the ab- sence of robust domain-specific tools. By systematically generating and refining ideas, DrugAgent ensures that the exploration process is both efficient and aligned with the practical constraints of drug discovery tasks. Furthermore, integrating specialized toolsets, such as dataset handling, molecular fingerprinting, and tokenization workflows, en- ables DrugAgent to bridge the gap between generalized AI capabilities and the nuanced demands of pharmaceutical re- search. Through proof-of-concept experiments, we demon- strated that DrugAgent outperforms general-purpose frame- works like ReAct by effectively automating complex tasks and identifying optimal solutions. It is important to note that this work represents an ongoing effort to push the boundaries of AI-driven drug discovery. As the field evolves, so too will the opportunities to refine and expand DrugAgent, ensuring its continued relevance and im- pact in addressing the challenges of this dynamic domain. 6 Future Work As this is a preliminary version, several aspects of our work remain to be explored in greater depth. 
First, we plan to expand our experiments by incorporating additional state-of-the-art baselines and performing large-scale quantitative comparisons to rigorously evaluate the performance and scalability of DrugAgent across diverse drug discovery tasks. This will include testing on more challenging datasets and tasks to validate the generalizability of our framework.

Second, we aim to conduct comprehensive ablation studies to better understand the contributions of individual modules, such as the domain knowledge identification step, the idea generation and pruning process, and the effectiveness of the enhanced toolset. These studies will help isolate and quantify the impact of each component, providing deeper insights into DrugAgent's strengths and potential limitations.

Finally, we intend to explore the integration of DrugAgent with real-world drug discovery workflows, collaborating with domain experts to assess its practical utility and identify areas for refinement. This will allow us to ensure that DrugAgent is not only a theoretical advancement but also a practical tool that can meaningfully accelerate the drug discovery pipeline.

References

Boiko, D. A.; MacKnight, R.; Kline, B.; and Gomes, G. 2023. Autonomous Chemical Research with large language models. Nature, 624(7992): 570–578.
Chen, J.; Hu, Y.; Wang, Y.; Cao, X.; Lin, M.; Xu, H.; Wu, J.; Xiao, C.; Sun, J.; et al. 2024. TrialBench: Multi-Modal Artificial Intelligence-Ready Clinical Trial Datasets. arXiv:2407.00631.
Chen, L.; Lu, Y.; Wu, C.-T.; Clarke, R.; Yu, G.; Van Eyk, J. E.; Herrington, D. M.; and Wang, Y. 2021. Data-driven detection of subtype-specific differentially expressed genes. Scientific Reports, 11(1): 332.
Chen, T.; Hao, N.; and Van Rechem, C. 2024. Uncertainty Quantification on Clinical Trial Outcome Prediction. arXiv:2401.03482.
Chithrananda, S.; Grand, G.; and Ramsundar, B. 2020. ChemBERTa: large-scale self-supervised pretraining for molecular property prediction. In Machine Learning for Molecules Workshop at NeurIPS 2020.
Davis, M. I.; Hunt, J. P.; Herrgard, S.; Ciceri, P.; Wodicka, L. M.; Pallares, G.; Hocker, M.; Treiber, D. K.; and Zarrinkar, P. P. 2011. Comprehensive analysis of kinase inhibitor selectivity. Nature Biotechnology, 29(11): 1046–1051.
Fu, T.; Gao, W.; Xiao, C.; Yasonik, J.; Coley, C. W.; and Sun, J. 2022. Differentiable Scaffolding Tree for Molecular Optimization. International Conference on Learning Representations.
Gao, W.; Fu, T.; Sun, J.; and Coley, C. 2022. Sample efficiency matters: a benchmark for practical molecular optimization. Advances in Neural Information Processing Systems, 35: 21342–21357.
Guo, T.; Chen, X.; Wang, Y.; Chang, R.; Pei, S.; Chawla, N. V.; Wiest, O.; and Zhang, X. 2024. Large Language Model based Multi-Agents: A Survey of Progress and Challenges. arXiv:2402.01680.
Huang, K.; Fu, T.; Gao, W.; Zhao, Y.; Roohani, Y.; Leskovec, J.; Coley, C. W.; Xiao, C.; Sun, J.; and Zitnik, M. 2021. Therapeutics Data Commons: Machine Learning Datasets and Tasks for Drug Discovery and Development. Advances in Neural Information Processing Systems.
Huang, K.; Fu, T.; Gao, W.; Zhao, Y.; Roohani, Y.; Leskovec, J.; Coley, C. W.; Xiao, C.; Sun, J.; and Zitnik, M. 2022. Artificial intelligence foundation for therapeutic science. Nature Chemical Biology, 18: 1033.
Huang, K.; Fu, T.; Glass, L. M.; Zitnik, M.; Xiao, C.; and Sun, J. 2020. DeepPurpose: a deep learning library for drug–target interaction prediction. Bioinformatics, 36(22-23): 5545–5547.
Huang, L.; Yu, W.; Ma, W.; Zhong, W.; Feng, Z.; Wang, H.; Chen, Q.; Peng, W.; Feng, X.; Qin, B.; and Liu, T. 2023. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. arXiv preprint arXiv:2311.05232. Work in progress; 49 pages.
Huang, Q.; Vora, J.; Liang, P.; and Leskovec, J. 2024a. MLAgentBench: Evaluating Language Agents on Machine Learning Experimentation. In Thirty-eighth Conference on Neural Information Processing Systems.
Huang, Y.; Sun, L.; Wang, H.; Wu, S.; Zhang, Q.; Li, Y.; Gao, C.; Huang, Y.; Lyu, W.; Zhang, Y.; et al. 2024b. Position: TrustLLM: Trustworthiness in large language models. In International Conference on Machine Learning, 20166–20270. PMLR.
Inaba, T.; Kiyomaru, H.; Cheng, F.; and Kurohashi, S. 2023. MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 1522–1532. Toronto, Canada: Association for Computational Linguistics.
Jiang, J.; Wang, F.; Shen, J.; Kim, S.; and Kim, S. 2024. A Survey on Large Language Models for Code Generation. arXiv preprint arXiv:2406.00515.
Kipf, T. N.; and Welling, M. 2016. Semi-supervised classification with graph convolutional networks. The International Conference on Learning Representations (ICLR).
Li, B.; Yan, T.; Pan, Y.; Luo, J.; Ji, R.; Ding, J.; Xu, Z.; Liu, S.; Dong, H.; Lin, Z.; and Wang, Y. 2024a. MMedAgent: Learning to Use Medical Tools with Multi-modal Agent. arXiv preprint arXiv:2407.02483. Accepted at EMNLP 2024.
Li, Z.; Zang, Q.; Ma, D.; Guo, J.; Zheng, T.; Liu, M.; Niu, X.; Wang, Y.; Yang, J.; Liu, J.; Zhong, W.; Zhou, W.; Huang, W.; and Zhang, G. 2024b. AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions. arXiv preprint arXiv:2410.20424.
Liu, S.; Xia, J.; Zhang, L.; Liu, Y.; Liu, Y.; Du, W.; Gao, Z.; Hu, B.; Tan, C.; Xiang, H.; and Li, S. Z. 2024. FlexMol: A Flexible Toolkit for Benchmarking Molecular Relational Learning. In Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS).
Lu, C.; Lu, C.; Lange, R. T.; Foerster, J.; Clune, J.; and Ha, D. 2024a. The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. arXiv preprint arXiv:2408.06292.
Lu, Y.; Chen, T.; Hao, N.; Van Rechem, C.; Chen, J.; and Fu, T. 2024b. Uncertainty quantification and interpretability for clinical trial approval prediction. Health Data Science, 4: 0126.
Lála, J.; O'Donoghue, O.; Shtedritski, A.; Cox, S.; Rodriques, S. G.; and White, A. D. 2023. PaperQA: Retrieval-Augmented Generative Agent for Scientific Research. arXiv preprint arXiv:2312.07559.
M. Bran, A.; Cox, S.; Schilter, O.; Baldassari, C.; White, A. D.; and Schwaller, P. 2024. Augmenting large language models with Chemistry Tools. Nature Machine Intelligence, 6(5): 525–535.
Niu, Z.; Xiao, X.; Wu, W.; Cai, Q.; Jiang, Y.; Jin, W.; Wang, M.; Yang, G.; Kong, L.; Jin, X.; Yang, G.; and Chen, H. 2024. PharmaBench: Enhancing ADMET benchmarks with large language models. Scientific Data, 11(985).
Pushpakom, S.; Iorio, F.; Eyers, P. A.; Escott, K. J.; Hopper, S.; Wells, A.; Doig, A.; Guilliams, T.; Latimer, J.; McNamee, C.; et al. 2019. Drug repurposing: progress, challenges and recommendations. Nature Reviews Drug Discovery, 18(1): 41–58.
Qin, Y.; Hu, S.; Lin, Y.; Chen, W.; Ding, N.; Cui, G.; Zeng, Z.; Huang, Y.; Xiao, C.; Han, C.; Fung, Y. R.; Su, Y.; Wang, H.; Qian, C.; Tian, R.; Zhu, K.; Liang, S.; Shen, X.; Xu, B.; Zhang, Z.; Ye, Y.; Li, B.; Tang, Z.; Yi, J.; Zhu, Y.; Dai, Z.; Yan, L.; Cong, X.; Lu, Y.; Zhao, W.; Huang, Y.; Yan, J.; Han, X.; Sun, X.; Li, D.; Phang, J.; Yang, C.; Wu, T.; Ji, H.; Liu, Z.; and Sun, M. 2023. Tool Learning with Foundation Models. arXiv:2304.08354.
Ravuru, C.; Sakhinana, S. S.; and Runkana, V. 2024. Agentic Retrieval-Augmented Generation for Time Series Analysis. In Proceedings of the Undergraduate Consortium at ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD).
Roohani, Y.; Lee, A.; Huang, Q.; Vora, J.; Steinhart, Z.; Huang, K.; Marson, A.; Liang, P.; and Leskovec, J. 2024. BioDiscoveryAgent: An AI Agent for Designing Genetic Perturbation Experiments. arXiv preprint arXiv:2405.17631.
Schick, T.; Dwivedi-Yu, J.; Dessì, R.; Raileanu, R.; Lomeli, M.; Zettlemoyer, L.; Cancedda, N.; and Scialom, T. 2023. Toolformer: Language Models Can Teach Themselves to Use Tools. arXiv:2302.04761.
Si, C.; Yang, D.; and Hashimoto, T. 2024. Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers. arXiv preprint arXiv:2409.04109.
Siramshetty, V. B.; Shah, P.; et al. 2021. Validating ADME QSAR Models Using Marketed Drugs. SLAS Discovery, 26(10): 1326–1336.
Sterling, T.; and Irwin, J. J. 2015. ZINC 15–ligand discovery for everyone. Journal of Chemical Information and Modeling, 55(11): 2324–2337.
Wang, E.; Cassano, F.; Wu, C.; Bai, Y.; Song, W.; Nath, V.; Han, Z.; Hendryx, S.; Yue, S.; and Zhang, H. 2024a. Planning In Natural Language Improves LLM Search For Code Generation. arXiv preprint arXiv:2409.03733.
Wang, L.; Ma, C.; Feng, X.; Zhang, Z.; Yang, H.; Zhang, J.; Chen, Z.; Tang, J.; Chen, X.; Lin, Y.; et al. 2024b. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6).
Wang, N.; Dong, J.; Deng, Y.; Zhu, M.; Wen, M.; Yao, Z.; Lu, A.; Wang, J.; Luo, X.; and Cao, D. 2016. ADME Properties Evaluation in Drug Discovery: Prediction of Caco-2 Cell Permeability Using a Combination of NSGA-II and Boosting. Journal of Chemical Information and Modeling, 56(4): 763–773.
Wang, Y.; Fu, T.; Xu, Y.; Ma, Z.; Xu, H.; Du, B.; Gao, H.; Wu, J.; and Chen, J. 2024c. TWIN-GPT: Digital Twins for Clinical Trials via Large Language Model. ACM Transactions on Multimedia Computing, Communications and Applications.
Wang, Z.; Fried, D.; and Neubig, G. 2024. TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks. In Proceedings of the 41st International Conference on Machine Learning (ICML).
WecoAI. 2024. AIDE: The Machine Learning Engineer Agent.
Wu, S.; Zhao, S.; Huang, Q.; Huang, K.; Yasunaga, M.; Cao, K.; Ioannidis, V. N.; Subbian, K.; Leskovec, J.; and Zou, J. 2024. AvaTaR: Optimizing LLM Agents for Tool-Assisted Knowledge Retrieval.
Xia, Y.; Wang, Y.; Wang, Z.; and Zhang, W. 2024. A comprehensive review of molecular optimization in Artificial Intelligence-Based Drug Discovery. Quantitative Biology, 12(1): 15–29.
Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2023. ReAct: Synergizing Reasoning and Acting in Language Models. In International Conference on Learning Representations (ICLR).
Yoon, S.; Kim, T. E.; and Oh, Y. J. 2024. Designing and Evaluating Multi-Chatbot Interface for Human-AI Communication: Preliminary Findings from a Persuasion Task. arXiv preprint arXiv:2406.19648.
Yue, L.; Xing, S.; Chen, J.; and Fu, T. 2024. ClinicalAgent: Clinical Trial Multi-Agent with Large Language Model-based Reasoning. arXiv preprint arXiv:2404.14777.
Zhang, B.; Fu, Y.; Lu, Y.; Zhang, Z.; Clarke, R.; Van Eyk, J. E.; Herrington, D. M.; and Wang, Y. 2021. DDN2.0: R and Python packages for differential dependency network analysis of biological systems. bioRxiv, 2021–04.

A LLM-designed Code

    from tdc.single_pred import ADME
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score, f1_score
    from rdkit.Chem import AllChem
    from rdkit import Chem
    import numpy as np

    def download_and_split_dataset():
        """Downloads the specified ADMET dataset and returns the train and test splits."""
        data = ADME(name='PAMPA_NCATS')
        split = data.get_split()
        return split

    def generate_fingerprints(smiles_list, radius=2, n_bits=2048):
        """Converts a list of SMILES strings into molecular fingerprints."""
        fingerprints = []
        for smiles in smiles_list:
            mol = Chem.MolFromSmiles(smiles)
            if mol:
                fingerprints.append(
                    AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits))
            else:
                fingerprints.append(np.zeros((n_bits,)))
        return np.array(fingerprints)

    # Main Script
    if __name__ == "__main__":
        # Step 1: Download dataset and get train-test split
        split = download_and_split_dataset()

        # Step 2: Generate feature matrices and labels
        X_train = generate_fingerprints(split['train']['Drug'])
        y_train = split['train']['Y']
        X_test = generate_fingerprints(split['test']['Drug'])
        y_test = split['test']['Y']

        # Step 3: Train Random Forest Classifier
        rf_model = RandomForestClassifier(n_estimators=100, random_state=42)
        rf_model.fit(X_train, y_train)

        # Step 4: Make Predictions
        y_pred = rf_model.predict(X_test)
        y_pred_prob = rf_model.predict_proba(X_test)[:, 1]  # Probability estimates for ROC-AUC

        roc_auc = roc_auc_score(y_test, y_pred_prob)
        f1 = f1_score(y_test, y_pred)
        print(f"ROC-AUC: {roc_auc:.4f}")
        print(f"F1-Score: {f1:.4f}")

B Sample Prompts

Idea generation. The planner is tasked with generating diverse ideas for a machine learning experiment based on computer science concepts. For each idea, it is further prompted to refine the method by incorporating details and integrating computational biology concepts. Below are example outputs:

    {
      "ideas": [
        {
          "method": "Apply a graph neural network (GNN) to learn from the SMILES string structure.",
          "refined_idea": "Represent the SMILES string as a molecular graph, where nodes are atoms and edges represent bonds. Train a GNN model to learn molecular representations from this graph, which can then be used to predict the ADMET property."
        },
        {
          "method": "Use a transformer-based model to encode the SMILES string.",
          "refined_idea": "Train a transformer model to learn embeddings for SMILES strings, capturing the sequential information and relationships between atoms and functional groups. These embeddings can be used as input for a downstream prediction model."
        },
        {
          "method": "Train a random forest model with SMILES string features extracted by RDKit.",
          "refined_idea": "Utilize RDKit to extract molecular descriptors and fingerprints from the SMILES strings. These features can be used as input for a random forest model to predict the ADMET property."
        },
        {
          "method": "Utilize a support vector machine (SVM) with a radial basis function (RBF) kernel.",
          "refined_idea": "Extract molecular descriptors from the SMILES strings using RDKit and train a SVM model with an RBF kernel to predict the ADMET property."
        },
        {
          "method": "Implement a multi-task learning approach by predicting multiple ADMET properties simultaneously.",
          "refined_idea": "This idea needs further clarification. What specific ADMET properties are being considered? What is the relationship between these properties and the Caco-2 dataset?"
        },
        {
          "method": "Fine-tune a pre-trained model such as ChemBERTa for ADMET property prediction.",
          "refined_idea": "Use the ChemBERTa pre-trained model to extract molecular representations directly from SMILES strings. Fine-tune the model for specific ADMET properties, leveraging the contextual information captured during pre-training."
        }
      ]
    }

Domain knowledge Analysis. The instructor is prompted to break down the idea into substeps and evaluate whether each substep requires domain knowledge. Below is an example applied to the idea of using a random forest model.

    Step 1: Load and split the PAMPA dataset.
    - (require domain knowledge - reason: understanding the specific structure and format of the PAMPA dataset, including downloading from domain-specific sources)

    Step 2: Convert chemical compounds in the dataset to Morgan fingerprints using a chemical library (e.g., RDKit).
    - (require domain knowledge - reason: requires familiarity with RDKit or similar libraries to generate Morgan fingerprints)

    Step 3: Implement and configure the Random Forest algorithm for classification/regression.
    - (don't require domain knowledge: Random Forest is a general machine learning algorithm)

    Step 4: Train the Random Forest model on the training data.
    - (don't require domain knowledge: model training is a general computer science task)

    Step 5: Evaluate the model's performance using standard metrics like MAE, MSE, or accuracy.
    - (don't require domain knowledge: evaluation using standard metrics is a general computer science task)
ai_researcher
8
Scientific_Hypothesis_Generation_by_a_Large_Language_Model_Laboratory_Validation_in_Breast_Cancer_Treatment.pdf
arXiv:2311.05965v1 [cs.CL] 10 Nov 2023
These models present an opportunity to break down the barriers between disciplines, enabling researchers to traverse the expansive sea of information with ease and efficiency. Central to the process of knowledge discovery lies the formulation of sound hypotheses [42, 31, 1, 38]. However, a glaring gap persists in the arsenal of tools available to formally explore and evaluate hypotheses. While literature is replete with discussions on validation, it often overlooks the critical aspect of generating novel hypotheses. In light of these challenges and opportunities, this paper delves into the current state of knowl- edge discovery, examining the hurdles posed by information explosion and disciplinary iso- lation. It explores the potential transformative role of LLMs in bridging these gaps, ulti- mately emphasizing the pivotal role of hypothesis generation in the knowledge discovery process. Furthermore, it highlights the pressing need for tools and methodologies to facilitate hypothesis generation, thus propelling knowledge discovery into a new era of efficiency and innovation [13]. Currently, both ChatGPT and GPT-4 undergo ex- tensive pre-training on vast datasets and possess the capability of continuous updates. However, ensuring strict traceability of data sources be- comes a challenging task, limiting our ability to explore zero-shot hypothesis generation. The past literatures have explored scenarios of problem discovery, yet rigorous experimental designs to investigate whether LLMs can effec- tively propose genuine problems under zero-shot conditions remain lacking. To tackle this issue, we assemble a dataset of biomedicine literature spanning from January 2000 to September 2023. This dataset is partitioned into training and test- ing sets, with the training set exclusively con- taining literature published before January 2023. We construct an unseen test set using literature from August 2023 and ensure that the evaluated LLMs have been trained on corpora before that date. Additionally, we devise a multi-intelligent collaborative framework that incorporates search tools and role-playing to delve deeper into and uncover the potential for hypothesis generation. Figure 1: Illustrating an generated hypothesis uti- lizing the fine-tuned 65B LLaMA within our con- structed datasets, which closely match the findings in existing literature. Through experiments and analyses as shown in Figure 1, we draw the following findings: 1) LLMs surprisingly generate hypotheses that are untrained yet validated when tested against literature. 2) Increasing uncertainty levels can benefit by diversifying candidate generation and potentially enhanc- ing zero-shot hypothesis generation capabilities. For instance, introducing heightened uncertainty through collaborative multi-agent approaches significantly improves the model’s ability to generalize in zero-shot scenarios. However, integrating subsequent few-shot enhancements and using additional tools may reduce the model’s proficiency in generating hypotheses. This phenomenon is likely due to the reduction of uncertainty, limiting the model’s space for hypothesis generation. Consequently, it lacks consistent positive effects, underscoring the need for careful consideration of the type of external knowledge employed. The above findings also support the notion: "When nothing is sure, everything is possible." 
Specifically, our contributions are as follows: 1) To rigorously validate the zero-shot and few-shot hypothesis generation potential of LLMs, we construct temporal biomedical instruction data and devised novel and effective experiments for 2 Instruction:Youarearesearcher.Youcancomeupwithnewhypothesesbasedonyourexistingknowledge.Hypothesesaregivenagainstthefollowingbackground.Youshouldbeasdetailedaspossible.Backgroundknowledge:(1)Esophagealcancermainlyincludessquamouscellcarcinomaandadenocarcinoma,withdifferentriskfactorsandincidencerates.(2)MetforminhasbeenshowntoreducetheriskofseveralcancersinpatientswithT2DM.(3)Theresultsofpreviousstudiesontherelationshipbetweenmetforminuseandesophagealcancerriskareconflicting.Themodelhypothesizes:(1)TheuseofmetformininpatientswithT2DMmavbeassociatedwithareducedriskofesophagealcancer.(2)TheremaybedifferencesinthecorrelationbetweenmetforminuseandtheriskofesophagealcancerinmaleandfemalepatientswithT2DM.(Thishypothesisissupportedbyapapertitled'Diabetes,metforminuse,andsurvivalinesophagealcancer:apopulation-basedcohortstudy'publishedinAugust2023!) comprehensive analysis and evaluation. To the best of our knowledge, this is the first work that formally designs experiments to investigate the zero shot hypothesis generation capacity of LLMs. 2) Through validation across different models and various scenario dimensions, we surprisingly find that LLMs possess rudimentary higher-order knowledge reasoning capabilities and can propose new hypothesis statements. This provides new empirical insights and pathways for knowledge discovery. 3) For a comprehensive review of the generated hypotheses, we design metrics across four dimensions for both ChatGPT-based and human evaluations. The correlation scores between ChatGPT evaluations and manual results indicate that LLMs also play a significant role in hypothesis evaluations. 4) To efficiently explore and further harness the capability of hypothesis generation, we introduce a multi-agent system based on LLMs. Through efficient collaboration among multiple models and tool utilization, we analyze the factors influencing hypothesis generation by LLMs. 2 Process of Scientific Discovery Figure 2: The iterative experimental loop of scientific discovery: observations and data accumulated from past experiments are analyzed and used to generate new hypotheses, and in turn new experiments that will yield new data to continue to cycle. In this paper, we mainly focus on investigating whether LLMs have the zero shot generalization ability to generate new hypotheses. Scientific discovery involves key components, each crucial for advancing our understanding of the natural world: data analysis, hypothesis formulation, experiment design, execution, and observation and reflection [13] as shown in Figure 2. 1) Data Analysis: Foundational in the scientific process, it entails collecting and examining data to discern patterns and anomalies, extracting insights through statistical techniques and visualiza- tion. It initiates scientific inquiry, guiding further exploration. 2) Generating Hypotheses: Among these components, hypothesis formulation is pivotal. It entails crafting informed guesses to explain observed phenomena. Hypotheses serve as guiding frameworks, directing and focusing research by articulating specific relationships and outcomes for experimental exploration. 3) Experiment Design: Once a hypothesis is set, designing experiments becomes essential to rigorously test its validity. 
This involves defining variables, specifying control groups, and outlining methods and procedures. Well-designed experiments ensure objective hypothesis testing and yield meaningful, informative results. 4) Experiment Execution: Meticulous execution of designed experiments and careful data collection are critical. Researchers adhere precisely to experimental protocols, recording observations, measurements, and unexpected findings. Integrity in execution ensures reliable, reproducible outcomes. 5) Accumulating Observations: After experiments, scientists engage in observation and reflection. They analyze the collected data to determine whether the results support or refute the initial hypothesis. If unsupported, hypotheses may be revised or new ones formulated based on the findings. Observation and reflection permit iterative refinement of scientific understanding.

Hypotheses Pioneer Pathways: Guiding Knowledge Discovery. While all components are essential, hypothesis formulation holds a unique position. It drives the scientific endeavor, guiding research question selection, experiment design, and data analysis. Well-constructed hypotheses not only provide direction but also lay the foundation for meaningful scientific discoveries by posing rigorously testable questions. Hypothesis formulation serves as the intellectual anchor steering scientific investigation and ultimately advancing knowledge.

3 Can LLMs Truly Generate Zero-Shot Hypotheses?

In this section, we outline the methodology employed for a thorough assessment of LLMs' capacity to generate hypotheses under zero-shot conditions. To accomplish this, we begin by defining the problem of hypothesis generation in zero-shot settings. Next, we elucidate the process of dataset construction within the biomedical domain. Finally, we undertake comprehensive experiments to evaluate various instructed models across multiple dimensions, aiming to explore the factors influencing the ability of LLMs to propose better hypotheses.

3.1 Problem Definition

Following the scientific discovery process outlined in Section 2, hypothesis generation typically occurs after thorough literature analysis and examination of specific phenomena. To enhance evaluation effectiveness, we formalize this process as a text completion task. Given a dataset $D$, an instruction $I$, and text pairs $(X_i, Y_i)_{i=1}^{n}$ containing background knowledge and corresponding hypotheses extracted from medical papers, our objective is to assess a model $M$ by having it generate hypotheses based on the task instruction and background knowledge, i.e., $M(I, X_i) = Y_i$ for each $i \in \{1, \dots, n\}$. The objective function is formulated as:

$$y^{*} = \arg\max_{y_1, \dots, y_n} \prod_{t=1}^{n} P(y_t \mid y_1, \dots, y_{t-1}, I, X).$$
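To make this formulation concrete, the sketch below scores a candidate hypothesis under the objective by summing the token-level log-probabilities $\log P(y_t \mid y_{<t}, I, X)$ of the hypothesis continuation. This is an illustrative sketch, not the code used in our experiments, and the model checkpoint name is a placeholder.

```python
# Minimal sketch of the text-completion view of hypothesis generation:
# score a candidate hypothesis y by sum_t log P(y_t | y_<t, I, X).
# The checkpoint name is illustrative, not the one used in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-13b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def hypothesis_log_prob(instruction: str, background: str, hypothesis: str) -> float:
    """Sum of log P(y_t | y_<t, I, X) over the hypothesis tokens only."""
    prompt = f"{instruction}\nBackground knowledge: {background}\nHypothesis:"
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + " " + hypothesis, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits            # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]                      # realized next tokens
    token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp[0, prompt_len - 1:].sum().item()  # skip prompt positions
```

In practice the argmax over all possible hypotheses is intractable, so generation approximates it with ordinary (greedy or sampled) autoregressive decoding.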
3.2 Dataset Construction

In this section, we detail the process of constructing the datasets and ensuring the robustness of our evaluation. Prevalent LLMs, like LLaMA and ChatGPT, face challenges in tracing the origin of their knowledge due to continuous self-updating. To address this, we propose a novel approach to assess LLMs' hypothesis generation: recognizing their potential exposure to public-domain data, we construct a new biomedical literature dataset partitioned by publication date.

Figure 3: Data partition pipeline. A text pool of 10,000 medical papers from PubMed is summarized into background-hypothesis pairs; pairs from papers published before January 2023 form the training (SFT) data, and pairs from papers published after that date are held out for evaluation against golden outputs.

As depicted in Figure 3, we designated the year 2023 as the cut-off point. Our training dataset comprises literature published before January 2023, while the test dataset comprises literature published after January 2023, forming pairs of background knowledge and hypothesis proposals. Because ever more advanced LLMs continue to emerge, our evaluations focus exclusively on the unseen test set, featuring literature published in August 2023, and we selected instructed models fine-tuned before August 2023 for both evaluation and fine-tuning tests. In our experimental setup, we implemented stringent measures to ensure the models had no prior exposure to the test data, affirming the validity of our experiments. We strictly follow the standard pipeline outlined in Self-Instruct [32] for our data generation process, encompassing five key steps: 1) Compose the paper set based on the topic and content of the literature. 2) Use gpt-3.5-turbo to summarize the literature knowledge. 3) Generate background knowledge-hypothesis pairs. 4) Filter low-quality data. 5) Split the dataset according to publication time.
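As a concrete illustration of step 5, the following sketch performs the date-based split. The file name and field names are assumptions for illustration, not our released format.

```python
# Sketch of pipeline step 5: split background-hypothesis pairs by publication
# date. File name and field names ("published", ISO format) are illustrative.
import json
from datetime import date

CUTOFF = date(2023, 1, 1)   # training pairs must predate January 2023
UNSEEN = (2023, 8)          # unseen test pairs come from August 2023

def split_by_date(pairs):
    train, unseen = [], []
    for p in pairs:
        d = date.fromisoformat(p["published"])
        if d < CUTOFF:
            train.append(p)
        elif (d.year, d.month) == UNSEEN:
            unseen.append(p)
    return train, unseen

with open("bhp_pairs.jsonl") as f:  # hypothetical file name
    train, unseen = split_by_date(json.loads(line) for line in f)
```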
3.3 Dataset Analysis

In this section, we provide a comprehensive overview of the constructed dataset, encompassing the data acquisition strategy, dataset size, visibility control measures, distribution by year and month, and topic distribution. We created two datasets to maintain control over the visibility of hypotheses: 1) Seen dataset: This dataset comprises 2,700 background and hypothesis pairs sourced from literature published before January 2023, partitioned into training (2,500) and validation (200) subsets, as well as a seen test set. It is consistent with the corpora to which the LLMs have been exposed. 2) Unseen dataset: The unseen dataset consists of 200 pairs extracted from papers published in August 2023, which the LLMs have not encountered during training; it is used for testing purposes. We also provide the publication date and topic distributions of the constructed dataset in Appendix B.1.

3.4 Experiment Setup

In this section, we introduce the experimental settings for hypothesis generation and evaluation.

Models: For a fair comparison, we exclusively evaluate LLMs trained on corpora before March 2023 to ensure the test set remains unseen. We consider three categories of models: 1) API-based LLMs: mainly ChatGPT. 2) General-domain instructed LLMs: open-source models fine-tuned from Llama with general-domain instructions; we primarily choose top-tier models based on their rankings on the Alpaca Eval Leaderboard (https://tatsu-lab.github.io/alpaca_eval/). 3) Specific-domain instructed LLMs: these include PMC-LLaMA [35] and MedAlpaca [10], which are trained on a variety of sources in the medical domain, such as medical books, PMC papers, and medical dialogues. We provide detailed meta-information for the various models, including their training data sources and publication dates, in Appendix B.2.

Prompts: To ensure a consistent output format across different models, we create prompts in two formats: zero-shot and few-shot. In our experiments, we adopt a 5-shot format, selecting examples from the training set (published before January 2023) via both random sampling and similarity retrieval. We provide illustrations of zero-shot and few-shot prompts in Appendix E.

Finetuning: To assess hypothesis generation capability beyond zero-shot scenarios, we identify the top-performing open-source model through few-shot evaluation and then fine-tune the full parameters of WizardLM-13B-V1.2 on the background and hypothesis pairs. Fine-tuning runs for three epochs with a batch size of 8, a maximum sequence length of 2048 tokens, and a learning rate of 3e-5. We implement early stopping and select the best checkpoint based on performance on the seen test dataset.

Evaluation: Hypothesis generation differs from traditional text generation tasks like machine translation and summarization: it is more challenging and often involves uncertainty that extends beyond any established ground truth. We therefore evaluate from two primary perspectives, with and without golden hypotheses. In evaluations with golden hypotheses, we employ standard text generation metrics, BLEU and ROUGE from the evaluate library (https://huggingface.co/docs/evaluate/index), to assess word overlap between generated outputs and the ground truth. Because the vastness of the hypothesis space makes word overlap alone insufficient, we also devise four metrics for a more comprehensive assessment: novelty, relevance, significance, and verifiability. Inspired by recent research highlighting ChatGPT as a proficient annotator [8, 16] with strong correlation to human ratings, we employ ChatGPT for this evaluation: we ask it to score each generated hypothesis against the provided background on each aspect, on a scale from 0 to 3 (higher is better), and to furnish a step-by-step explanation substantiating the assigned score. Moreover, we conduct human evaluation on the same four metrics for the top-tier models identified in the automatic evaluation in Section 3.5; we describe this process in detail in Section 3.6.
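The word-overlap half of this evaluation can be reproduced in a few lines with the evaluate library; the prediction and reference strings below are illustrative.

```python
# Sketch of the word-overlap evaluation (BLEU and ROUGE) with the Hugging
# Face `evaluate` library; the ChatGPT-based 0-3 scoring of novelty,
# relevance, significance, and verifiability is issued as a separate prompt.
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

predictions = ["Metformin use may reduce esophageal cancer risk in T2DM patients."]
references = ["Metformin use in patients with T2DM may be associated with a "
              "reduced risk of esophageal cancer."]

print(bleu.compute(predictions=predictions, references=[[r] for r in references])["bleu"])
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
```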
3.5 Experiment Results

This section presents the results of hypothesis generation across various models in both zero-shot and few-shot settings. We primarily analyze the results from two perspectives: the impact of the zero-shot setting and the influence of introducing external knowledge on hypothesis generation.

3.5.1 Impact of zero-shot settings

The results presented in Table 1 demonstrate the significant impact of zero-shot settings in improving hypothesis generation, particularly in fostering high novelty. We analyze these results from two key perspectives as follows.

Table 1: Results of various LLMs. We assess instructed models using zero-shot and few-shot prompts to generate constrained outputs. To provide a comprehensive assessment, we report the average of the novelty, relevance, significance, and verifiability scores, denoted Avg. Results marked with an asterisk (*) indicate that the few-shot prompts are constructed by retrieving training-set samples similar to the input background.

Category   Model                          Seen           Unseen         Novelty  Relevance  Significance  Verifiability  Avg
                                          BLEU   ROUGE   BLEU   ROUGE
API-based  gpt-3.5-turbo (0-shot)         13.93  25.32   15.52  26.48   1.42     2.63       1.58          1.97           1.90
           gpt-3.5-turbo (5-shot)         16.47  27.07   16.49  26.96   1.22     2.57       1.84          2.03           1.92
           gpt-3.5-turbo (5-shot)*        17.33  27.28   17.71  27.53   1.02     2.61       1.85          2.36           1.96
General    Vicuna-33b-v1.3 (0-shot)       13.97  24.43   13.66  23.43   1.67     2.55       2.04          1.84           2.03
           Vicuna-33b-v1.3 (5-shot)       11.23  22.54   11.49  22.68   1.60     2.40       1.67          1.90           1.89
           Vicuna-33b-v1.3 (5-shot)*      12.78  24.11   13.12  23.66   1.19     2.71       2.00          2.17           2.02
           Llama-2-70b-chat (0-shot)      10.95  21.56   11.44  22.04   1.86     2.41       1.91          1.98           2.04
           Llama-2-70b-chat (5-shot)       8.17  21.09    7.63  20.70   1.95     2.58       2.06          2.22           2.20
           Llama-2-70b-chat (5-shot)*      8.40  21.65    9.66  22.43   1.43     2.50       1.94          2.15           2.01
           WizardLM-13B-V1.2 (0-shot)     11.91  23.35   12.03  23.55   1.62     2.55       1.90          1.90           1.99
           WizardLM-13B-V1.2 (5-shot)     14.00  24.30   13.82  24.38   1.33     2.54       1.81          2.23           1.97
           WizardLM-13B-V1.2 (5-shot)*    14.96  25.66   15.26  25.78   1.06     2.64       1.73          2.14           1.89
           WizardLM-70B-V1.0 (0-shot)     13.45  24.12   14.25  25.05   1.57     2.45       1.74          1.89           1.91
           WizardLM-70B-V1.0 (5-shot)     14.04  24.59   13.78  24.28   1.17     2.61       2.12          2.14           2.01
           WizardLM-70B-V1.0 (5-shot)*    14.46  24.78   15.26  25.56   0.97     2.67       1.85          1.99           1.87
           Openchat-v3.2-super (0-shot)    8.79  22.71    8.38  21.48   1.58     2.51       1.70          2.05           1.96
           Openchat-v3.2-super (5-shot)   12.46  23.60   12.58  24.21   1.06     2.64       2.09          2.20           2.00
           Openchat-v3.2-super (5-shot)*  12.37  23.93   12.88  24.78   1.16     2.76       2.10          2.23           2.07
Medicine   MedAlpaca-13B (0-shot)          6.10  22.07    5.82  20.49   0.55     1.17       1.17          1.06           0.99
           MedAlpaca-13B (5-shot)          0.99   3.84    1.08   3.84   0.98     1.32       1.32          1.49           1.28
           MedAlpaca-13B (5-shot)*         4.60   9.36    4.50   9.07   1.09     1.40       1.20          1.53           1.31
           PMC-LLaMA-13B (0-shot)         22.89  40.36   22.37  40.45   0.76     1.94       1.42          1.52           1.41
           PMC-LLaMA-13B (5-shot)          1.36   4.83    1.41   4.78   1.13     1.45       1.36          0.88           1.21
           PMC-LLaMA-13B (5-shot)*         6.21  12.39    6.16  12.13   1.73     2.17       1.88          2.09           1.97
SFT        WizardLM-13B-V1.2              19.13  27.35   19.73  27.58   0.97     2.55       1.38          2.26           1.79

Zero-shot Outperforms Few-shot. Our findings indicate that, for extra-large models like Llama-2-70b-chat and WizardLM-70B-V1.0, zero-shot performance surpasses the few-shot setting when few-shot examples are obtained by random sampling. This suggests that the capacity for hypothesis generation is constrained by the inclusion of few-shot examples, and that models exhibit stronger abilities in a zero-shot setting.

Outperforming on the Unseen Test Set Compared to the Seen Test Set. Despite the visibility of literature published before 2022 in the pre-training corpora of most LLMs, we have categorized the test set into "seen" and "unseen." Typically, LLMs might be expected to excel on the "seen" test set by memorizing hypotheses present in the pre-training corpus, yielding higher performance than on the "unseen" test set. However, our results indicate that LLMs tend to perform better on the "unseen" test set.
We speculate that this is because the complexity of hypothesis generation may hinder LLMs from effectively leveraging the dark knowledge in their parameters.

3.5.2 Influence of external knowledge

Based on the results, we observe that introducing external knowledge, such as few-shot examples, domain adaptation, and instruction fine-tuning, does not consistently enhance the ability to propose hypotheses.

Few-Shot Examples Enhance Verifiability but Decrease Novelty. In comparison to zero-shot settings, models using few-shot prompts benefit from the provided examples, resulting in very high matching rates. On word overlap metrics (BLEU and ROUGE), most models, especially the WizardLM series and Openchat-v3.2-super, show improved performance when provided with in-context examples, with retrieved examples being particularly beneficial. However, these few-shot prompts significantly increase verifiability while leading to lower novelty compared to zero-shot results.

Randomly Sampled Few-Shot Examples vs. Similarity Retrieval. Given that randomly sampled in-context examples often differ from the provided background in topic or domain, they can confuse LLMs and degrade performance. To further explore the hypothesis generation capabilities of LLMs, we retrieve examples from the training dataset based on their similarity to the given background. The results indicate that similarity retrieval can further enhance performance.

Instruction Tuning Enhances LLM Performance. After fine-tuning on a dataset of background and hypothesis pairs, WizardLM-13B-V1.2 attains superior performance on word overlap metrics, surpassing even gpt-3.5-turbo and WizardLM-70B-V1.0. This finding underscores that domain adaptation remains a valuable approach to enhance the hypothesis generation capabilities of LLMs; it also offers greater resource efficiency and supports privacy in localized deployments.

Impact of Domain Adaptation on Hypothesis Generation. We also analyze the influence of fine-tuning for domain adaptation on hypothesis generation, comparing instructed models adapted to the field of medicine. The results from MedAlpaca and PMC-LLaMA indicate that domain adaptation can significantly improve word overlap performance. However, the metrics derived from ChatGPT suggest that domain adaptation has only a minimal effect on hypothesis generation. This discrepancy between word overlap metrics and ChatGPT's evaluation highlights the need for more comprehensive and unified metrics for hypothesis generation tasks.

3.6 Human Evaluation and Case Study

In this section, we conduct a human evaluation to assess the generated hypotheses and calculate coherence scores to compare them with ChatGPT evaluation scores, guiding further evaluation efforts.

3.6.1 Evaluation Settings

Evaluation Metrics: To comprehensively evaluate the generations manually while simultaneously assessing the quality of ChatGPT's evaluations, we continue to use the four metrics outlined in Section 3.4: novelty, relevance, significance, and verifiability. Each metric ranges from 0 to 3, with higher values indicating better performance. Additionally, we calculate coherence scores between human evaluations and ChatGPT evaluations.
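A minimal sketch of this coherence computation follows, assuming per-example score arrays; the numbers below are illustrative, not our data.

```python
# Pearson and Spearman correlation between ChatGPT's averaged 0-3 scores and
# the mean of the three human annotators, computed with SciPy.
import numpy as np
from scipy.stats import pearsonr, spearmanr

chatgpt_avg = np.array([2.1, 1.7, 2.5, 1.2, 2.0])           # illustrative values
human_scores = np.array([[2, 2, 3], [1, 2, 2], [3, 2, 3],
                         [1, 1, 2], [2, 2, 2]])             # three annotators
human_avg = human_scores.mean(axis=1)

print("Pearson: %.2f" % pearsonr(chatgpt_avg, human_avg)[0])
print("Spearman: %.2f" % spearmanr(chatgpt_avg, human_avg)[0])
```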
Selection of Models: Given the cost constraints of human evaluation, our primary objective is to assess whether LLMs can produce valuable hypotheses, rather than to establish state-of-the-art performance. As a result, we perform human evaluation only on the outputs generated by the LLMs that rank highest under the automatic metrics and ChatGPT evaluation, while aiming to cover a variety of prompts and models. The final models selected for human evaluation are detailed in Table 2.

Evaluation Details: We randomly selected 100 examples from the unseen test set and had three evaluators with a biomedical background assign scores on each metric for each model.

3.6.2 Evaluation Results

As depicted in Table 2, the human evaluations exhibit a strong correlation with ChatGPT's evaluations, with Pearson and Spearman coefficients exceeding 0.7 for all models. These results strongly support our earlier findings regarding the influence of zero-shot learning and external knowledge, reinforcing our assertion that large language models can effectively propose hypotheses and significantly contribute to scientific discovery. For additional insight, we present correlation coefficients between word overlap scores and manual scores in Appendix C, revealing lower coherence and highlighting the need for more advanced evaluation metrics. We also conduct a case study showcasing the hypotheses generated by various models, including examples of step-by-step evaluations by ChatGPT; details can be found in Appendix D.

4 Can agent collaboration enhance LLMs' zero-shot generalization?

In this section, we strive to enhance the hypothesis generation ability of LLMs through multi-agent collaboration and the use of tools. Our objective is to improve hypothesis efficiency by employing multi-agent collaboration that simulates real-world research scenarios. To begin, we introduce the conceptual system of multi-agent collaboration for hypothesis generation, drawing inspiration from scientific research. Subsequently, we present the role design and the tools used in this context. Finally, we present preliminary validation results of the multi-agent system on our proposed BHP dataset.

Table 2: Results of human evaluation. The Avg coefficients assess the correlation between the average scores obtained from ChatGPT and those from human evaluation.

Category   Model                       ChatGPT   Human Eval                                             Avg Coefficient
                                       Eval.Avg  Novelty  Relevance  Significance  Verifiability  Avg   Pearson  Spearman
API-based  gpt-3.5-turbo(0-shot)       1.90      1.54     2.69       1.77          2.08           2.02  0.87     0.78
           gpt-3.5-turbo(5-shot)*      1.96      1.31     2.62       2.08          2.62           2.15  0.80     0.78
General    Llama-2-70b-chat(0-shot)    2.04      1.77     2.23       1.92          1.92           1.96  0.89     0.84
           Llama-2-70b-chat(5-shot)    2.20      2.15     2.77       2.08          2.31           2.33  0.96     0.90
           Llama-2-70b-chat(5-shot)*   2.01      1.38     2.62       2.31          2.00           2.08  0.97     0.94
           WizardLM-70B-V1.0(0-shot)   1.91      1.38     2.31       1.54          2.00           1.81  0.90     0.75
           WizardLM-70B-V1.0(5-shot)   2.01      1.15     2.69       2.46          1.77           2.02  0.85     0.89
Medicine   PMC-LLaMA-13B(0-shot)       1.41      1.00     2.62       1.92          2.00           1.88  0.73     0.73
           PMC-LLaMA-13B(5-shot)*      1.97      1.85     2.23       1.92          1.69           1.92  0.95     0.94
SFT        WizardLM-13B-V1.2           1.79      0.85     2.77       1.23          2.23           1.77  0.83     0.85

4.1 Multi-agent Framework

Inspired by the structured methodology detailed in Section 2, we introduce a comprehensive framework tailored for hypothesis formulation.
This framework encapsulates a multi-agent system in which each agent assumes a distinct role, mirroring the collaborative nature of scientific endeavors. Through a symbiotic and iterative process, these agents collaborate to craft hypotheses that are not only grounded in existing knowledge but also pave the way for novel insights. By emulating the essence of scientific discovery, our framework strives to produce hypotheses that are both innovative and scientifically robust. As depicted in Figure 4, we partition the framework into five components, encompassing four automated agents and optional human involvement in the loop.

Figure 4: The conceptual system of multi-agent collaboration for hypothesis generation. The overall prototyping process allows users to choose optional involvement, cycling through background analysis, information retrieval, hypothesis proposing, and evaluation feedback. The figure summarizes the core roles (Analyst: analyzes the research background, extracts keywords and topics, provides direction for searches; Engineer: uses keywords from the Analyst, searches for relevant information, compiles and organizes findings; Scientist: formulates hypotheses, interprets the Engineer's findings, bridges existing literature with new insights; Critic: evaluates proposed hypotheses, ensures scientific validity, provides feedback for refinement) and the fully automated system.

Role Design: In our proposed multi-agent framework, each component plays a distinct and pivotal role. The Analyst serves as the foundation, meticulously extracting and defining core elements from the research background; its primary objective is to interpret the literature, distilling it into keywords or topics that guide the Engineer's search efforts. The Engineer, leveraging these keywords, retrieves and organizes pertinent information, meticulously planning and executing detailed searches and compiling the findings in a structured manner. This organized material then lands in the domain of the Scientist, whose objective is to weave the Engineer's findings together with the original research background. Through careful interpretation, the Scientist crafts a hypothesis that is both grounded in existing knowledge and offers a fresh perspective. Before this hypothesis is finalized, however, it undergoes scrutiny by the Critic. The Critic's role is paramount in ensuring the hypothesis's robustness, coherence, and novelty: it evaluates the hypothesis against the backdrop of the research background, ensuring it stands up to academic rigor. Feedback from the Critic, if necessary, loops back to refine the hypothesis or prompts the Analyst for further insights, creating a cyclical and iterative process of refinement.
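The following is a schematic sketch of this loop, not our exact implementation: `chat` wraps a single LLM call (shown here with the legacy openai-python interface), and the role prompts, ACCEPT convention, and round limit are illustrative.

```python
# Schematic sketch of the Analyst -> Engineer -> Scientist -> Critic loop.
import openai

def chat(system_prompt: str, user_msg: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_msg}])
    return resp["choices"][0]["message"]["content"]

def propose_hypothesis(background: str, max_rounds: int = 6) -> str:
    feedback, hypothesis = "", ""
    for _ in range(max_rounds):
        keywords = chat("You are the Analyst. Extract keywords and topics.",
                        background + "\n" + feedback)
        # in the full system the Engineer would also call a search tool here
        findings = chat("You are the Engineer. Organize evidence for these keywords.",
                        keywords)
        hypothesis = chat("You are the Scientist. Formulate a hypothesis.",
                          background + "\nEvidence:\n" + findings)
        feedback = chat("You are the Critic. Assess novelty, coherence, and "
                        "validity; reply ACCEPT if the hypothesis is sound.",
                        background + "\nHypothesis:\n" + hypothesis)
        if "ACCEPT" in feedback:
            break
    return hypothesis
```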
Tool Use: To explore external knowledge beyond the inherent dark knowledge within LLMs, we integrate the Engineer agent with search engines, mainly PubMed (https://pubmed.ncbi.nlm.nih.gov/). Likewise, to control the visibility of the unseen test dataset, we filter out literature published after January 2023 from the search results. We carry out tool-use experiments using ReAct [40] and OpenAI function calling (https://openai.com/blog/function-calling-and-other-api-updates). ReAct extends the concept of Chain of Thought (CoT) [34]: the model thinks before taking action and subsequently makes observations based on feedback from the environment. In our experiments, we instruct the LLMs to first contemplate the provided background information and then decide whether to use tools. Upon receiving feedback from the tools, the LLMs are expected to identify supporting evidence in the results or make further tool requests. The LLMs are responsible for concluding the hypothesis generation process and summarizing the hypotheses independently. In the case of OpenAI function calling, we directly specify tools for publication searching and transmit them to the OpenAI APIs; this capability is implemented roughly through fine-tuning, as described in Toolformer [24].
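The date cap on the Engineer's searches can be enforced directly in the query, as in the sketch below, which uses NCBI's public E-utilities esearch endpoint with its documented mindate/maxdate parameters; the query term is illustrative.

```python
# Sketch of a date-capped PubMed search: the E-utilities esearch endpoint
# filters by publication date, so post-January-2023 papers never reach the
# agents. The query term is illustrative.
import requests

def pubmed_search(term: str, max_results: int = 10) -> list:
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": term, "retmax": max_results,
                "retmode": "json", "datetype": "pdat",
                "mindate": "1900/01/01", "maxdate": "2022/12/31"},
    )
    return resp.json()["esearchresult"]["idlist"]

print(pubmed_search("metformin esophageal cancer"))
```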
4.2 Experiment Results

Our primary focus is to investigate the impact of tool use and multi-agent collaboration on hypothesis generation. We present the experimental results in Table 3 and summarize our findings from two perspectives: tool use and role-playing.

Influence of Tool Use: We observe that tool use has minimal impact on improving the hypothesis generation ability of LLMs, in line with the findings presented in the earlier analysis of external knowledge. Notably, the ReAct-based method performs worse than OpenAI function calling. It is also evident that LLMs struggle to identify useful information and exhibit weaknesses in the thought-action-observation process, even when utilizing the official interface from OpenAI. Hypothesis generation is indeed a challenging task that necessitates iterative discussion and the exchange of ideas among various participants.

Multi-Agent Collaboration: Beyond tool use, our findings suggest that the division of labor and interaction among multiple agents can significantly enhance the model's capability to propose hypotheses by introducing uncertainty. This mirrors the dynamics of real-world scientific research, where hypotheses are formulated through iterative discussion and refutation. It is also worth noting that tool use can further enhance the performance of the multi-agent framework.

Table 3: Results of individual agents and multi-agent systems, with and without tool use, on the unseen test dataset. The results demonstrate that both multi-agent systems and the utilization of tools enhance the ability of LLMs in hypothesis generation. Models 2a and 2b are both evaluated with tool use; they differ in implementation: ReAct [40] versus OpenAI function calling, respectively. BLEU and ROUGE are automatic metrics; novelty through Avg are GPT-4 evaluations.

Model  Multi-agent  Tool use  BLEU   ROUGE  Novelty  Relevance  Significance  Verifiability  Avg
1      -            -         15.52  26.48  1.23     2.57       1.84          2.03           1.92
2a     -            yes       14.94  24.16  0.78     2.42       1.18          1.87           1.56
2b     -            yes       15.87  24.94  0.57     2.58       0.93          1.89           1.49
3      yes          -         11.71  22.11  1.35     2.85       2.05          2.10           2.09
4      yes          yes       11.18  22.04  1.52     2.66       2.06          2.05           2.07

5 Conclusion

From the hypothesis-proposer perspective, we investigated LLMs' zero-shot generalization ability in scientific research. Specifically, we first build a comprehensive corpus based on biomedical literature, split by publication date, comprising background knowledge and hypothesis pairs. This corpus then serves as the basis for fine-tuning LLMs into hypothesis proposers. To further analyze and enhance the capabilities of the hypothesis proposer, we introduce an LLM-based multi-agent collaboration system. Experimental results show that fine-tuned LLMs of various sizes can propose new hypotheses that did not appear in the training data but are confirmed by the test literature, with performance comparable to ChatGPT and in some cases even better. Notably, our study revealed that introducing uncertainty into processes and operations enhances zero-shot generalization capabilities. These findings confirm the potential of LLMs to propose new hypotheses and offer hope for unlocking future scientific discovery. In future work, we will focus on optimizing models and generating hypotheses guided by effective uncertainty assessment metrics.

Acknowledgements

We extend our gratitude to the anonymous reviewers for their insightful feedback.

References

[1] Daniil A Boiko, Robert MacKnight, and Gabe Gomes. Emergent autonomous scientific research capabilities of large language models. arXiv preprint arXiv:2304.05332, 2023.
[2] Andres M. Bran, Sam Cox, Andrew D. White, and Philippe Schwaller. ChemCrow: Augmenting large-language models with chemistry tools, June 2023. arXiv:2304.05376.
[3] Boxi Cao, Hongyu Lin, Xianpei Han, and Le Sun. The Life Cycle of Knowledge in Big Language Models: A Survey, March 2023. arXiv:2303.07616.
[4] Zhuo Chang, Jing Zhang, Yilun Liu, Huajian Gao, and Guang-Kui Xu. New Mechanical Markers for Tracking the Progression of Myocardial Infarction. Nano Letters, 23(16):7350-7357, August 2023.
[5] Xiaoyu Chen, Shenao Zhang, Pushi Zhang, Li Zhao, and Jianyu Chen. Asking Before Action: Gather Information in Embodied Decision Making with Language Models, May 2023. arXiv:2305.15695.
[6] Yulin Chen, Ning Ding, Hai-Tao Zheng, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Empowering Private Tutoring by Chaining Large Language Models, September 2023. arXiv:2309.08112.
[7] Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback, May 2023. arXiv:2305.10142.
[8] Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks. Proceedings of the National Academy of Sciences, 120(30):e2305016120, July 2023. arXiv:2303.15056.
[9] Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, and Jianfeng Gao. MindAgent: Emergent Gaming Interaction, September 2023. arXiv:2309.09971.
[10] Tianyu Han, Lisa C Adams, Jens-Michalis Papaioannou, Paul Grundmann, Tom Oberhauser, Alexander Löser, Daniel Truhn, and Keno K Bressem. MedAlpaca: An open-source collection of medical conversational AI models and training data. arXiv preprint arXiv:2304.08247, 2023.
[11] Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, and Chenglin Wu. MetaGPT: Meta Programming for Multi-Agent Collaborative Framework, August 2023. arXiv:2308.00352.
[12] Chenxu Hu, Jie Fu, Chenzhuang Du, Simian Luo, Junbo Zhao, and Hang Zhao. ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory, June 2023. arXiv:2306.03901.
[13] Moksh Jain, Tristan Deleu, Jason Hartford, Cheng-Hao Liu, Alex Hernandez-Garcia, and Yoshua Bengio. GFlowNets for AI-driven scientific discovery. Digital Discovery, 2(3):557-577, 2023.
[14] Jikun Kang, Romain Laroche, Xindi Yuan, Adam Trischler, Xue Liu, and Jie Fu. Think Before You Act: Decision Transformers with Internal Working Memory, May 2023. arXiv:2305.16338.
[15] Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society, March 2023. arXiv:2303.17760.
[16] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment, May 2023. arXiv:2303.16634.
[17] Philipp Maas, Frank Carey, Chris Wheeler, Edward Saatchi, Pete Billington, and Jessica Yaffa Shamash. SHOW-1 and Showrunner Agents in Multi-Agent Simulations. arXiv preprint, 2023.
[18] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. WebGPT: Browser-assisted question-answering with human feedback, June 2022. arXiv:2112.09332.
[19] Aaron Parisi, Yao Zhao, and Noah Fiedel. TALM: Tool Augmented Language Models, May 2022. arXiv:2205.12255.
[20] Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative Agents: Interactive Simulacra of Human Behavior, August 2023. arXiv:2304.03442.
[21] Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. Gorilla: Large Language Model Connected with Massive APIs, May 2023. arXiv:2305.15334.
[22] Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, and Maosong Sun. Communicative Agents for Software Development, August 2023. arXiv:2307.07924.
[23] Vipula Rawte, Amit Sheth, and Amitava Das. A Survey of Hallucination in Large Foundation Models, September 2023. arXiv:2309.05922.
[24] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language Models Can Teach Themselves to Use Tools, February 2023. arXiv:2302.04761.
[25] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face, May 2023. arXiv:2303.17580.
[26] Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language Agents with Verbal Reinforcement Learning, June 2023. arXiv:2303.11366.
[27] Yifan Song, Weimin Xiong, Dawei Zhu, Cheng Li, Ke Wang, Ye Tian, and Sujian Li. RestGPT: Connecting Large Language Models with Real-World Applications via RESTful APIs, June 2023. arXiv:2306.06624.
[28] Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085, 2022.
[29] Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, and Subbarao Kambhampati. On the Planning Abilities of Large Language Models: A Critical Investigation, May 2023. arXiv:2305.15771.
[30] Hanchen Wang, Tianfan Fu, Yuanqi Du, Wenhao Gao, Kexin Huang, Ziming Liu, Payal Chandak, Shengchao Liu, Peter Van Katwyk, Andreea Deac, et al. Scientific discovery in the age of artificial intelligence. Nature, 620(7972):47-60, 2023.
[31] Qingyun Wang, Doug Downey, Heng Ji, and Tom Hope. Learning to generate novel scientific directions with contextualized literature-based discovery. arXiv preprint arXiv:2305.14259, 2023.
[32] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-Instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560, 2022.
[33] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, January 2023. arXiv:2201.11903.
[34] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837, 2022.
[35] Chaoyi Wu, Weixiong Lin, Xiaoman Zhang, Ya Zhang, Yanfeng Wang, and Weidi Xie. PMC-LLaMA: Towards building open-source language models for medicine, 2023.
[36] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, and Tao Gui. The Rise and Potential of Large Language Model Based Agents: A Survey, September 2023. arXiv:2309.07864.
[37] Hui Yang, Sifu Yue, and Yunzhong He. Auto-GPT for Online Decision Making: Benchmarks and Additional Opinions, June 2023. arXiv:2306.02224.
[38] Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, and Erik Cambria. Large language models for automated open-domain scientific hypotheses discovery. arXiv preprint arXiv:2309.02726, 2023.
[39] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of Thoughts: Deliberate Problem Solving with Large Language Models, May 2023. arXiv:2305.10601.
[40] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing Reasoning and Acting in Language Models, March 2023. arXiv:2210.03629.
[41] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, and Shuming Shi. Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models, September 2023. arXiv:2309.01219.
[42] Ruiqi Zhong, Peter Zhang, Steve Li, Jinwoo Ahn, Dan Klein, and Jacob Steinhardt. Goal driven discovery of distributional differences via language descriptions. arXiv preprint arXiv:2302.14233, 2023.
[43] Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, Yu Qiao, Zhaoxiang Zhang, and Jifeng Dai.
Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory, June 2023. arXiv:2305.17144.
[44] Xuekai Zhu, Biqing Qi, Kaiyan Zhang, Xingwei Long, and Bowen Zhou. PaD: Program-aided distillation specializes large models in reasoning. arXiv preprint arXiv:2305.13888, 2023.

A Related Works

A.1 Data-Driven Scientific Discovery

Data-driven knowledge discovery research with LLMs is relatively limited, with the current focus primarily on dataset construction and task-driven design. In this context, [42] proposed a dataset for investigating the transition from goals to discoveries; however, the discoveries within this dataset are not recent. [31] introduced a method for automatically collecting and constructing publication data, along with a hypothesis generation approach for the natural language processing (NLP) domain. However, this method requires prior human knowledge and explicit context, and it is not an automated process. It is also worth noting that their data was constructed from pre-2021 literature in the ACL collection, implying that the information may already exist in open-source models like ChatGPT and LLaMA. Furthermore, [2] focused on integrating computational tools in the field of chemistry, primarily analyzing the capabilities of LLMs in using integrated tools but neglecting zero-shot generalization for chemical reactions. [1] delved into the abilities of LLMs in planning and conducting experiments but did not consider proposing new hypotheses. [38] introduced a new task for open-domain hypothesis induction and created a dataset comprising 50 articles from social science journals; they also developed a multi-module system for exploring feedback mechanisms. However, all of the above works lack strict guarantees on the visibility of test data to models, thereby limiting exploration of the zero-shot generalization capability of LLMs in learning from existing knowledge to propose new hypotheses. Unlike existing works, we design datasets based on publication dates, which readily ensures strict independence between the test data and the LLMs.

A.2 LLM-driven Autonomous Agents

Large language models demonstrate exceptional capabilities in tasks such as question answering, program coding, and instruction following. However, they still confront significant challenges, including factual hallucination [41, 23], outdated knowledge [3], and interaction with the real world. To address these challenges, recent research has explored enhancing LLMs with tools such as search engines [18, 19], calculators [24], code interpreters [44], RESTful APIs [27, 21], and others. The integration of LLMs with tool use, also known as LLM-driven autonomous agents (LAAs), has attracted substantial public attention. These agents are equipped with reasoning [33, 39], planning [25, 29], decision-making [37, 14, 5], and long-term memory capabilities [43, 12], and they are built upon the foundation of LLMs. LAAs can autonomously plan sub-goals for complex tasks, execute actions, obtain feedback from the environment, and adjust their behavior to adapt [40, 36, 26]. LAAs have demonstrated significant potential in addressing complex real-world tasks, including software development [22, 11], drama creation [17], course design [6], chemistry experiments [2], and more.
Furthermore, multi-agent collaboration plays a significant role in LAA applications, allowing agents to collaborate and interact to solve problems through various role-playing scenarios [20, 7, 9, 15]. To the best of our knowledge, there is still a dearth of exploration regarding the use of agents, particularly multi-agent systems, for scientific discovery. In this paper, our objective is to make a preliminary effort to enhance the hypothesis proposing capability of LLMs by harnessing tools and multiple agents, along with conducting an analysis of influencing factors.

B Implementation Details

In this section, we provide further implementation details of our experiments, including information about the constructed dataset and the open-source models.

B.1 Details of Dataset

Distribution of Training and Test Sets. We present the publication dates and topic distributions of the various datasets for comparison, as illustrated in Figure 5, where we use Nomic Atlas (https://github.com/nomic-ai/nomic) to visualize the topic distribution of abstracts in both the training and test datasets.

Figure 5: Distribution of the background and hypothesis pairs (BHP) dataset: The left panel presents the publication distribution by year for the training and seen test datasets, indicating a steady increase year by year until January 2023. The center panel depicts the publication distribution by day for the unseen test dataset, which was sampled from August 2023 and emphasizes the latter part of the month. The right panel displays the distribution of keywords in abstracts from the training, seen test, and unseen test datasets, represented by blue, yellow, and red, respectively.

B.2 Details of Models

We present the meta-information of the open-source models used in our experiments in Table 4, gathering data on their pre-training and supervised fine-tuning corpora and their release dates to ensure the non-visibility of the unseen test data.

Table 4: To further ensure the non-visibility of the test data, we provide an overview of the related literature corpora within the training sets of the various LLMs, accompanied by their respective publication dates. Entries marked with (*) denote data generated by people talking to ChatGPT; our date marking is consistent with ChatGPT.

Category   Model                   SFT Data (Y/M)                  Base Model  Released
API-based  gpt-3.5-turbo (0-shot)  Unknown                         GPT-3       2022/12
           gpt-3.5-turbo (5-shot)  Unknown                         GPT-3       2022/12
           gpt-4*                  Unknown                         GPT-4       2023/06
General    Vicuna-33b-v1.3         ShareGPT (Unknown)              Llama-1     2023/06
           Llama-2-7b-chat         Unknown                         Llama-2     2023/07
           Llama-2-13b-chat        Unknown                         Llama-2     2023/07
           Llama-2-70b-chat        Unknown                         Llama-2     2023/07
           WizardLM-13B-V1.2       Alpaca and ShareGPT (2023/06)   Llama-2     2023/07
           WizardLM-70B-V1.0       Alpaca and ShareGPT (2023/06)   Llama-2     2023/08
           openchat-v3.2-super     ShareGPT4 Dataset (2023/06)     Llama-2     2023/09
Medicine   MedAlpaca-13B           Mixture (2023/03)               Llama-1*    2023/03
           ChatDoctor*             Mixture (2023/04)               Llama-1*    2023/04
           PMC-LLaMA-13B           Mixture (2023/04)               Llama-2*    2023/08*

C Additional Results

We include additional results from the human evaluations in Table 5, primarily focusing on correlation scores between word overlap metrics and manual evaluations. Note that we continue to use the same samples used in the human evaluation to compute BLEU and ROUGE-L for a fair comparison. We calculate the Pearson and Spearman coefficients between each automatic metric and the average human score.
These results reveal that word overlap metrics, such as BLEU and ROUGE-L, exhibit notably lower correlation with manual scores. While BLEU and ROUGE-L may correlate strongly with the relevance metric, they are weak at providing a comprehensive evaluation of the generations. Conversely, evaluations conducted by ChatGPT demonstrate higher correlation with human evaluations, as illustrated in Table 2. Nonetheless, there remains a significant need to explore advanced metrics, particularly automated ones, in the context of scientific discovery.

Table 5: Correlations between automatic metrics and human evaluations. We annotate the Pearson and Spearman scores after each metric value, denoting them r and ρ.

Category   Model                       BLEU (r/ρ)          ROUGE-L (r/ρ)        ChatGPT Avg (r/ρ)  Human Avg (r/ρ)
API-based  gpt-3.5-turbo(0-shot)       16.59 (0.03/0.01)   29.87 (-0.04/-0.05)  1.90 (0.87/0.78)   2.02 (1.00/1.00)
           gpt-3.5-turbo(5-shot)*      14.99 (-0.09/0.12)  27.51 (-0.33/-0.35)  1.96 (0.80/0.78)   2.15 (1.00/1.00)
General    Llama-2-70b-chat(0-shot)     9.64 (-0.21/-0.20) 22.17 (-0.31/-0.28)  2.04 (0.89/0.84)   1.96 (1.00/1.00)
           Llama-2-70b-chat(5-shot)     9.42 (-0.58/-0.65) 20.59 (-0.47/-0.42)  2.20 (0.96/0.90)   2.33 (1.00/1.00)
           Llama-2-70b-chat(5-shot)*    9.60 (-0.16/-0.10) 19.99 (-0.15/-0.17)  2.01 (0.97/0.94)   2.08 (1.00/1.00)
           WizardLM-70B-V1.0(0-shot)   11.42 (0.21/0.36)   24.11 (0.29/0.49)    1.91 (0.90/0.75)   1.81 (1.00/1.00)
           WizardLM-70B-V1.0(5-shot)    9.86 (-0.28/-0.37) 23.52 (-0.17/-0.24)  2.01 (0.85/0.89)   2.02 (1.00/1.00)
Medicine   PMC-LLaMA-13B(0-shot)        8.19 (0.32/0.39)   21.85 (0.18/0.27)    1.41 (0.73/0.73)   1.88 (1.00/1.00)
           PMC-LLaMA-13B(5-shot)*       5.52 (0.06/-0.01)  13.64 (0.26/0.23)    1.97 (0.95/0.94)   1.92 (1.00/1.00)
SFT        WizardLM-13B-V1.2           21.48 (-0.00/0.00)  27.83 (0.17/0.27)    1.79 (0.83/0.85)   1.77 (1.00/1.00)

D Case Study

In this section, we present several generated hypotheses from various models and provide examples of the step-by-step evaluation process using ChatGPT.

D.1 Generated Hypothesis

We compare the generated hypotheses of the different LLMs selected for human evaluation. The selected medical article was published in August 2023 [4] and proposed the power-law index as an early marker of myocardial infarction. As shown in Table 6, some responses, like that of gpt-3.5-turbo (0-shot), propose this hypothesis from scratch, based only on the background. This indicates that LLMs have great potential for proposing highly novel and verifiable hypotheses and could play a significant role in scientific discovery. Furthermore, we showcase the hypotheses proposed in a multi-agent collaboration involving three roles, Analyst, Scientist, and Critic, working together to generate hypotheses across six rounds, as shown in Table 7.

Table 6: This table illustrates the hypothesis proposals from different LLMs.

INSTRUCTED LLMS CASE

Background: (1) In the early stage of MI, α_short and α_long can track and distinguish MI samples from healthy control samples with an AUC > 0.7. (2) In the mid-stage and late-stage of MI, E_2, E_3, and E_total exhibit excellent discriminating performance with an AUC > 0.6. (3) All cardiac tissues display a universal two-stage power-law rheology at different time scales, independent of their disease status.
Golden Hypothesis: (1) The sequential order of MI-related changes in cardiac tissues starts from the cytoplasmic level and progresses to the cellular and tissue levels. (2) Pathological changes at subcellular, cellular, and tissue levels are accompanied by mechanical alterations. (3) The proposed mechanical markers have diagnostic potential for assessing the progression of MI and studying other cardiovascular diseases.

gpt-3.5-turbo (0-shot): (1) The combination of α_short and α_long can be used as a reliable biomarker for early detection and differentiation of myocardial infarction (MI) from healthy controls. (2) E_2, E_3, and E_total can serve as effective biomarkers for distinguishing between mid-stage and late-stage MI patients with high accuracy. (3) The universal two-stage power-law rheology observed in cardiac tissues may play a crucial role in maintaining tissue integrity and function, regardless of the presence or absence of disease.

gpt-3.5-turbo (5-shot): (1) The levels of α_short and α_long may serve as potential biomarkers for early detection and diagnosis of myocardial infarction (MI). (2) E_2, E_3, and E_total may have utility as biomarkers for assessing the progression and severity of MI in the mid and late stages. (3) The universal two-stage power-law rheology observed in cardiac tissues suggests a fundamental biomechanical principle underlying tissue behavior, regardless of disease status.

Llama-2-70b-chat (0-shot): (1) E_2, E_3, and E_total can also track and distinguish MI samples from healthy control samples in the early stage of MI, with an AUC > 0.7, similar to α_short and α_long. (2) The universal two-stage power-law rheology of cardiac tissues is related to the alteration of the extracellular matrix and the degradation of collagen in the early stage of MI, which can be detected by E_2, E_3, and E_total. (3) The discriminating performance of E_2, E_3, and E_total in the mid-stage and late-stage of MI is affected by the changes in the levels of collagen and proteoglycans in the extracellular matrix, which can be evaluated by measuring the mechanical properties of cardiac tissues using a uniaxial tensiometer.

Llama-2-70b-chat (5-shot): (1) The discriminating performance of α_short and α_long in the early stage of MI is attributed to their ability to capture the subtle changes in cardiac tissue mechanics caused by the onset of fibrosis. (2) E_2, E_3, and E_total exhibit excellent discriminating performance in the mid-stage and late-stage of MI due to their sensitivity to changes in collagen content and cross-linking, which are hallmarks of fibrosis progression. (3) The universal two-stage power-law rheology observed in all cardiac tissues, independent of disease status, suggests a common mechanism of mechanical regulation that can be exploited for early disease detection and monitoring.

Llama-2-70b-chat (5-shot)*: (1) The α_short and α_long parameters can detect changes in myocardial stiffness in the early stages of myocardial infarction (MI). (2) E_2, E_3, and E_total can differentiate between healthy controls and MI samples in the mid and late stages. (3) The universal two-stage power-law rheology in cardiac tissues may be useful in developing novel therapeutic approaches for MI.

WizardLM-70B-V1.0 (0-shot): (1) α_short and α_long can track and distinguish MI samples from healthy control samples in all stages of MI with an AUC > 0.7. (2) E_2, E_3, and E_total exhibit excellent discriminating performance in the early stage of MI with an AUC > 0.6.
(3) The two-stage power-law rheology observed in all cardiac tissues is related to the progression of MI.

WizardLM-70B-V1.0 (5-shot): (1) The discriminating performance of α_short and α_long in the early stage of MI may be attributed to their sensitivity to early molecular changes in the myocardium. (2) The discriminating performance of E_2, E_3, and E_total in the mid-stage and late-stage of MI may be due to their ability to reflect the progression of myocardial damage and remodeling. (3) The universal two-stage power-law rheology observed in all cardiac tissues may be a fundamental property of cardiac tissue biomechanics, potentially useful for developing diagnostic and therapeutic strategies for MI.

PMC-LLaMA-13B (0-shot): (1) α_short and α_long are effective biomarkers for distinguishing MI samples from healthy control samples in both the early stage and the late stage of MI. (2) E_2 and E_3 (E_total) are likely to serve as effective biomarkers for discriminating mid-stage or late-stage MI samples from healthy control samples. (3) Cardiac tissues display a universal two-stage power-law rheology that is independent of their disease status.

PMC-LLaMA-13B (5-shot): (1) Further comprehensive validation of the proposed framework is needed to elucidate the pathological changes in MI patients. (2) Consideration of multiple markers is important. (3) The influence of various external factors should be incorporated for accurate interpretation of the mechanical markers.

WizardLM-13B-V1.2 (SFT): (1) The combination of the α_short and α_long features can provide a more robust identification of MI samples. (2) The E_2, E_3, and E_total features can serve as independent predictors of MI. (3) The universal two-stage power-law rheology observed in all cardiac tissues may have diagnostic implications.

Table 7: This table illustrates a case of multi-agent collaboration on hypothesis proposing.

MULTI-AGENT CASE

Background: (1) The past 25 years have seen significant development in the field of digital electrophysiology, including advancements in telemonitoring of cardiac implantable electronic devices (CIEDs) and the use of wearable devices for cardiac rhythm monitoring. (2) Telemonitoring of CIEDs has been shown to reduce time to detection of clinically actionable events and improve patient outcomes. (3) Wearable devices, such as smartwatches, rings, and fitness trackers, have become more accurate and reliable in monitoring cardiac rhythm using photoplethysmography.

Golden Hypothesis: (1) The implementation of telemonitoring and the use of wearable devices for cardiac rhythm monitoring will continue to increase in the future. (2) Artificial intelligence can be utilized to better identify actionable alerts and improve patient outcomes in telemonitoring of CIEDs. (3) Smartwatches may have the potential to replace CIEDs in certain cases.

Generated Hypothesis: (1) The advancements in telemonitoring of cardiac implantable electronic devices (CIEDs) and the use of wearable devices for cardiac rhythm monitoring have led to improved patient outcomes, such as reduced hospitalizations, improved quality of life, and decreased mortality rates. (2) These advancements achieve these outcomes by enabling early detection of abnormal rhythms, facilitating timely interventions, and providing valuable data for personalized treatment plans.
(3) This hypothesis is supported by studies that have demonstrated the effectiveness of telemonitoring in reducing time to detection of clinically actionable events and improving patient outcomes. The refined hypothesis captures the essence of the research background and the collaborative insights, providing a logical and scientifically grounded statement. It can now be further evaluated and tested in future research.

Dialogue process of multi-agent collaboration.

(Round 1) Analyst: Based on the provided research background, I have identified the following keywords and topics:
Keywords: digital electrophysiology; telemonitoring; cardiac implantable electronic devices (CIEDs); wearable devices; cardiac rhythm monitoring; photoplethysmography.
Topics: 1. Advancements in digital electrophysiology and telemonitoring of CIEDs. 2. Benefits of telemonitoring in improving patient outcomes. 3. Accuracy and reliability of wearable devices for cardiac rhythm monitoring using photoplethysmography.
Preliminary insights: The field of digital electrophysiology has seen significant development in the past 25 years, particularly in the areas of telemonitoring of CIEDs and the use of wearable devices for cardiac rhythm monitoring. Telemonitoring of CIEDs has been shown to have positive impacts, such as reducing the time to detect clinically actionable events and improving patient outcomes. Wearable devices, including smartwatches, rings, and fitness trackers, have become more accurate and reliable in monitoring cardiac rhythm using photoplethysmography.
Based on these insights, it can be hypothesized that the advancements in telemonitoring and wearable devices have led to improved cardiac rhythm monitoring and patient outcomes. However, further analysis and collaboration with the Scientist and Critic are needed to refine and validate this hypothesis.

(Round 2) Scientist: Based on the insights provided by the Analyst and the research background, I propose the following hypothesis:
Hypothesis: The advancements in telemonitoring of cardiac implantable electronic devices (CIEDs) and the use of wearable devices for cardiac rhythm monitoring have led to improved patient outcomes and more accurate detection of clinically actionable events.
This hypothesis suggests that the development and implementation of telemonitoring technologies for CIEDs, along with the increasing accuracy and reliability of wearable devices using photoplethysmography, have positively impacted patient outcomes by reducing the time to detect clinically actionable events. By continuously monitoring cardiac rhythm, these advancements have the potential to provide timely interventions and improve overall patient care.
The hypothesis is grounded in the existing literature, which highlights the benefits of telemonitoring in improving patient outcomes and the increasing accuracy of wearable devices for cardiac rhythm monitoring. However, further collaboration with the Critic is necessary to evaluate the novelty, coherence, and scientific validity of the hypothesis.

(Round 3) Critic: The hypothesis proposed by the Scientist is well-aligned with the research background and the insights provided by the Analyst. It effectively captures the essence of the advancements in telemonitoring of CIEDs and the use of wearable devices for cardiac rhythm monitoring. The hypothesis also highlights the potential positive impact on patient outcomes and the detection of clinically actionable events.
However, there are a few areas that could be further refined or expanded upon in the hypothesis: 1. **Specific outcomes**: The hypothesis mentions ""improved patient outcomes,"" but it would be beneficial to specify the specific outcomes that are expected to improve. For example, are we referring to reduced hospitalizations, improved quality of life, or decreased mortality rates? 2. **Mechanisms**: The hypothesis briefly mentions the potential for timely interventions and improved patient care, but it would be helpful to elaborate on the specific mechanisms through which the advancements in telemonitoring and wearable devices are expected to achieve these outcomes. Are there specific interventions or treatments that can be administered based on the data collected from these devices? 3. **Scientific validity**: While the hypothesis is grounded in the existing literature, it would be valuable to provide some references or studies that support the claims made in the hypothesis. This would enhance the scientific validity of the hypothesis and demonstrate that it is not just speculative but based on empirical evidence. To refine the hypothesis, I would recommend the Scientist to consider incorporating these sugges- tions and providing more specific details and references to support the claims made. This will strengthen the coherence and scientific validity of the hypothesis. Analyst, based on the feedback provided, please re-evaluate the research background and provide any additional insights or interpretations that could further refine the hypothesis. (Round 4) Analyst: Based on the feedback from the Critic, I have re-evaluated the research background and identified additional insights: 1. **Specific outcomes**: The research background mentions that telemonitoring of CIEDs has been shown to reduce time to detection of clinically actionable events and improve patient outcomes. These outcomes could include early detection of arrhythmias, reduced hospitalizations due to cardiac events, improved quality of life, and decreased mortality rates. It would be beneficial to specify one or more of these specific outcomes in the hypothesis. 2. **Mechanisms**: The advancements in telemonitoring and wearable devices can potentially lead to improved patient outcomes through several mechanisms. For example, continuous mon- itoring of cardiac rhythm can enable early detection of abnormal rhythms, allowing for timely interventions such as medication adjustments or electrical cardioversion. Additionally, the data collected from these devices can provide valuable insights for personalized treatment plans and remote patient management. Including these specific mechanisms in the hypothesis would enhance its coherence. 3. **Scientific validity**: To strengthen the scientific validity of the hypothesis, it would be valuable to reference specific studies or research that support the claims made. For example, citing studies that have demonstrated the effectiveness of telemonitoring in reducing time to detection of clinically actionable events or improving patient outcomes would provide empirical evidence to support the hypothesis. Incorporating these additional insights and suggestions, the refined hypothesis could be: 19 Hypothesis: The advancements in telemonitoring of cardiac implantable electronic devices (CIEDs) and the use of wearable devices for cardiac rhythm monitoring have led to improved patient outcomes, such as reduced hospitalizations, improved quality of life, and decreased mortality rates. 
These advancements achieve these outcomes by enabling early detection of abnormal rhythms, facilitating timely interventions, and providing valuable data for personalized treatment plans. This hypothesis is supported by studies that have demonstrated the effectiveness of telemonitoring in reducing time to detection of clinically actionable events and improving patient outcomes. This refined hypothesis takes into account the specific outcomes, mechanisms, and scientific validity, as suggested by the Critic. It provides a more comprehensive and well-supported statement that captures the essence of the research background and the collaborative insights. (Round 5) Scientist: Based on the additional insights and suggestions provided by the Analyst and the Critic, I propose the following refined hypothesis: Hypothesis: The advancements in telemonitoring of cardiac implantable electronic devices (CIEDs) and the use of wearable devices for cardiac rhythm monitoring have led to improved patient outcomes, such as reduced hospitalizations, improved quality of life, and decreased mortality rates. These advancements achieve these outcomes by enabling early detection of abnormal rhythms, facilitating timely interventions, and providing valuable data for personalized treatment plans. This hypothesis is supported by studies that have demonstrated the effectiveness of telemonitoring in reducing time to detection of clinically actionable events and improving patient outcomes. This refined hypothesis incorporates the specific outcomes mentioned in the research background, such as reduced hospitalizations, improved quality of life, and decreased mortality rates. It also expands on the mechanisms through which these advancements are expected to achieve these outcomes, including early detection of abnormal rhythms, timely interventions, and personalized treatment plans. Additionally, the hypothesis acknowledges the importance of scientific validity by referencing studies that support the claims made. The refined hypothesis is both innovative and scientifically grounded, as it builds upon the existing literature and incorporates the collaborative insights provided by the Analyst and the Critic. It provides a logical and well-supported statement that can be further evaluated and tested. (Round 6) Critic: The refined hypothesis proposed by the Scientist is well-structured and addresses the feedback provided by the Critic. It incorporates specific outcomes, mechanisms, and scientific validity, making it a comprehensive and scientifically robust statement. Final Answer: (1) The advancements in telemonitoring of cardiac implantable electronic devices (CIEDs) and the use of wearable devices for cardiac rhythm monitoring have led to improved patient outcomes, such as reduced hospitalizations, improved quality of life, and decreased mortality rates. (2) These advancements achieve these outcomes by enabling early detection of abnormal rhythms, facilitating timely interventions, and providing valuable data for personalized treatment plans. (3) This hypothesis is supported by studies that have demonstrated the effectiveness of tele- monitoring in reducing time to detection of clinically actionable events and improving patient outcomes. The refined hypothesis captures the essence of the research background and the collaborative insights, providing a logical and scientifically grounded statement. It can now be further evaluated and tested in future research. 
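The round-based protocol above can be condensed into a small driver loop. The sketch below is an editorial illustration, not code from the study: the chat helper, the role prompts, and the stopping rule are assumptions that mirror the dialogue shown in Table 7 (the Engineer role is omitted for brevity).

    ROLES = ["Analyst", "Scientist", "Critic"]  # Engineer omitted for brevity

    ROLE_PROMPTS = {
        "Analyst": ("You are the Analyst. Distill the research background into "
                    "pivotal keywords, or refine your insights based on feedback."),
        "Scientist": "You are the Scientist. Craft a hypothesis from the insights.",
        "Critic": ("You are the Critic. Evaluate the hypothesis for novelty, "
                   "coherence, and scientific validity. When it meets the desired "
                   "standards, approve it starting with 'Final Answer:'."),
    }

    def collaborate(background, chat, max_rounds=6):
        """Run role-taking rounds until the Critic emits a final answer.

        `chat(system_prompt, shared_history)` is a hypothetical helper that
        sends both strings to any chat-style LLM and returns its reply.
        """
        history = ["Research Background: " + background]
        for round_no in range(max_rounds):
            role = ROLES[round_no % len(ROLES)]
            reply = chat(ROLE_PROMPTS[role], "\n\n".join(history))
            history.append("(Round %d) %s: %s" % (round_no + 1, role, reply))
            if role == "Critic" and "Final Answer:" in reply:
                return reply.split("Final Answer:", 1)[1].strip()
        return history[-1]  # fall back to the last contribution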
D.2 Evaluation Examples

We provide evaluation results conducted by ChatGPT in Table 8. ChatGPT offers a step-by-step explanation for each metric, ultimately providing independent scores in integer format.

Table 8: This table illustrates a case for hypothesis evaluation using ChatGPT.

CASE OF CHATGPT AS ANNOTATOR

Background:
(1) The past 25 years have seen significant development in the field of digital electrophysiology, including advancements in telemonitoring of cardiac implantable electronic devices (CIEDs) and the use of wearable devices for cardiac rhythm monitoring.
(2) Telemonitoring of CIEDs has been shown to reduce time to detection of clinically actionable events and improve patient outcomes.
(3) Wearable devices, such as smartwatches, rings, and fitness trackers, have become more accurate and reliable in monitoring cardiac rhythm using photoplethysmography.

Golden Hypothesis:
(1) The implementation of telemonitoring and the use of wearable devices for cardiac rhythm monitoring will continue to increase in the future.
(2) Artificial intelligence can be utilized to better identify actionable alerts and improve patient outcomes in telemonitoring of CIEDs.
(3) Smartwatches may have the potential to replace CIEDs in certain cases.

Generated Hypothesis:
(1) The advancements in telemonitoring of cardiac implantable electronic devices (CIEDs) and the use of wearable devices for cardiac rhythm monitoring have led to improved patient outcomes, such as reduced hospitalizations, improved quality of life, and decreased mortality rates.
(2) These advancements achieve these outcomes by enabling early detection of abnormal rhythms, facilitating timely interventions, and providing valuable data for personalized treatment plans.
(3) This hypothesis is supported by studies that have demonstrated the effectiveness of telemonitoring in reducing time to detection of clinically actionable events and improving patient outcomes. The refined hypothesis captures the essence of the research background and the collaborative insights, providing a logical and scientifically grounded statement. It can now be further evaluated and tested in future research.

Evaluation Results

Novelty:
Explanation: The generated scientific hypothesis builds upon the background information by stating that the advancements in telemonitoring of CIEDs and the use of wearable devices for cardiac rhythm monitoring have led to improved patient outcomes. It further suggests that these advancements achieve these outcomes by enabling early detection of abnormal rhythms, facilitating timely interventions, and providing valuable data for personalized treatment plans. This hypothesis goes beyond the background by proposing specific mechanisms through which the advancements in telemonitoring and wearable devices contribute to improved patient outcomes. It also mentions the supporting evidence from studies that have demonstrated the effectiveness of telemonitoring in reducing time to detection of clinically actionable events and improving patient outcomes. Overall, the hypothesis provides new insights and expands upon the existing knowledge in the field.
Score: 2

Relevance:
Explanation: The generated scientific hypothesis is strongly related to the given background. The background provides information about the advancements in telemonitoring of cardiac implantable electronic devices (CIEDs) and the use of wearable devices for cardiac rhythm monitoring.
The hypothesis builds upon this information by stating that these advancements have led to improved patient outcomes, such as reduced hospitalizations, improved quality of life, and decreased mortality rates. It further explains that these outcomes are achieved by enabling early detection of abnormal rhythms, facilitating timely interventions, and providing valuable data for personalized treatment plans. The hypothesis is supported by the background information, which mentions that telemonitoring of CIEDs has been shown to reduce time to detection of clinically actionable events and improve patient outcomes. Overall, the hypothesis is directly related to the background and provides a logical and scientifically grounded statement. Score: 3 Significance: Explanation: The generated scientific hypothesis builds upon the background information provided and presents a logical and scientifically grounded statement. It suggests that the advancements in telemonitoring of CIEDs and the use of wearable devices for cardiac rhythm monitoring have led to improved patient outcomes, such as reduced hospitalizations, improved quality of life, and decreased mortality rates. The hypothesis further suggests that these advancements achieve these outcomes by enabling early detection of abnormal rhythms, facilitating timely interventions, and providing valuable data for personalized treatment plans. This hypothesis is supported by the background information that mentions the effectiveness of telemonitoring in reducing time to detection of clinically actionable events and improving patient outcomes. Overall, the hypothesis provides significant insights beyond the background and presents a testable statement for future research. Score: 3 Verifiability: Explanation: The generated scientific hypothesis is highly verifiable based on the given back- ground. The background provides evidence of significant advancements in telemonitoring of cardiac implantable electronic devices (CIEDs) and the use of wearable devices for cardiac rhythm monitoring. It also states that telemonitoring of CIEDs has been shown to reduce time to detection of clinically actionable events and improve patient outcomes. Additionally, it mentions that wearable devices have become more accurate and reliable in monitoring cardiac rhythm. The hypothesis builds upon this background by suggesting that these advancements have led to improved patient outcomes, such as reduced hospitalizations, improved quality of life, and decreased mortality rates. It further proposes that these outcomes are achieved by enabling early detection of abnormal rhythms, facilitating timely interventions, and providing valuable data for personalized treatment plans. The hypothesis is supported by studies that have demonstrated the effectiveness of telemonitoring in reducing time to detection of clinically actionable events and improving patient outcomes. Therefore, the hypothesis is highly verifiable and has a strong likelihood of being confirmed in future research. Score: 3 E Prompts Design In this section, we offer prompts for prompt LLMs for hypothesis generation and for ChatGPT in the evaluation process. 22 E.1 Zero- and Few-shot Prompts We provide prompts for hypothesis generation under both zero-shot and few-shot settings, which are detailed in Table 9 and Table 10 respectively. Specially, the latter includes two methods to obtain in-context examples: random sampling and similarity retrieval. Table 9: Prompt for zero-shot hypothesis generation. 
ZERO-SHOT INSTRUCTION FOR HYPOTHESIS GENERATION. You are a researcher. You can give novel hypothesis based on your exist knowledge and the given background. Based on the known existing knowledge, generate new conjectures in the following format: (1) xxx (2) xxx (3) xxx Be sure to use English answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not have too much repetitive information, numerical values using the original numbers, be sure to strictly follow the format, the corresponding content output to xxx. Note: Please respond directly to the multiple hypotheses without adding any extra sentences. Now give hypothesis based on the following background: {user_input} E.2 Prompts for Multi-agent Collaboration We present prompts for each role in multi-agent collaboration in Table 11, and prompts for environ- ment settings in Table 12. E.3 Prompts for ChatGPT Evaluation The instruction formats for prompting ChatGPT for evaluation on novelty, relevance, significance, and verifiability are displayed in Table 13, Table 14, Table 15, and Table 16, respectively. 23 Table 10: Manually constructed context examples of background-hypothesis pairs sampling from literatures before January 2023. FEW-SHOT EXAMPLES FOR HYPOTHESIS GENERATION. You are a renowned biomedical researcher. You can give novel hypothesis for the background based on your exist knowledge. Please follow the given examples and give the hypothesis in the SINGLE TURN. Background: (1) Neonatal intensive care is associated with long-term health problems in children such as cerebral palsy, mental retardation, deafness, blindness, learning disabilities, and behavioral problems. (2) Mothers of preterm infants experience more severe psychological distress compared to mothers of healthy full- term infants, but the impact of caregiving on parents of children discharged from NICUs is not well-researched. (3) Parents of NICU children show no difference in psychosocial health compared to parents of healthy full-term children. Hypothesis: (1) The mental health of parents of NICU children may improve over time due to adaptation and relief from initial fear and anxiety. (2) Child characteristics, such as health status, behavior problems, and birth-related risk factors, may influence parental psychosocial health. (3) Certain factors, such as caregiver strain, family function, and demographic variables, may predict parental psychosocial health. Background: (1) Recruitment of tumor supporting stromal cells and tissue remodeling in the tumor microenvironment support cancer cell proliferation, invasion, metastasis, and drug resistance. (2) Mesenchymal stem cells (MSC) are recruited by cancer cells into the tumor site and play a role in modulating tumor progression. (3) Intratumoral heterogeneity exists in solid tumors, with cancer stem cells (CSCs) and clonal evolution contributing to the complexity of cancer. Hypothesis: (1) Transcriptional regulators are responsible for tumor-supporting stromal reprogramming, specifically in MSC in the tumor stroma. (2) Intercellular communication between cancer cells and recruited MSCs is mediated by cell-to-cell contact, paracrine interactions, and microvesicles. (3) Epithelial cancer cell plasticity is regulated by tumor stroma interaction signals, enabling non-CSCs to convert into CSCs. ... Background: {input} Hypothesis: 24 Table 11: Prompts for role design in multi-agent collaboration on hypothesis proposing task. 
PROMPTS FOR ROLE DESIGN IN MULTI-AGENT COLLABORATION Analyst: You are the Analyst. Depending on the phase of the iteration, your role may slightly differ: - **Initial Phase**: Analyze the provided research background to distill its core components into pivotal keywords or topics. This will set the stage for the Engineer’s search efforts. - **Feedback Phase**: Based on feedback from the Critic, you might need to re-analyze the research background or provide additional insights to refine the search direction. In either case, ensure clarity and relevance in your analysis. Conclude by listing the identified keywords or topics or by providing revised insights. Engineer: You are the Engineer. Your task revolves around searching based on the received keywords or insights, and this can involve multiple iterations: - Plan your search strategies by crafting logical keyword combinations. - Conduct systematic searches for each combination, meticulously gathering data and results. - Refine your searches iteratively based on initial findings and any new insights from the Analyst. Your output should be comprehensive and organized. For each keyword combination: - **Title of Source**: Provide the title of the paper, article, or material you’ve found. - **Abstract/Summary**: A brief summary or the abstract of the source. - **Key Findings**: Highlight pivotal points or findings from the source that are relevant to the research background. - **Implications**: If any, mention the implications or significance of the findings. - **Relevant Quotes/Excerpts**: Extract direct quotes or sections that are particularly insightful. Group your findings into individual "clues" based on themes or topics that emerge. This structure will provide the Scientist with detailed and organized data, enabling them to craft a robust hypothesis. Conclude by presenting the structured "clues" for each keyword combination. Scientist: You are the Scientist. Your task is to craft a hypothesis based on the Engineer’s findings and the initial research background: - Derive a potential hypothesis that bridges the existing literature with new insights. - Ensure the hypothesis is both innovative and scientifically grounded. Clearly state the proposed hypothesis, preparing it for evaluation by the Critic. Critic: You are the Critic, responsible for evaluating the collaborative endeavor. Scrutinize the Scientist’s hypothesis in light of the ‘Research Background‘. Gauge its novelty, coherence, and scientific validity. Should the hypothesis necessitate refinement: - Clearly articulate feedback, specifying areas needing improvement. - Instruct the Analyst to either re-evaluate the ‘Research Background‘ or offer new insights to reshape the Engineer’s subsequent search iteration. When the hypothesis aligns with expectations and meets the desired standards, present and approve it using the structured format: Final Answer: (1) [First Point or Aspect of the Hypothesis] (2) [Second Point or Aspect of the Hypothesis] (3) [Third Point or Aspect of the Hypothesis] ... 25 Table 12: Prompts for environment setting in multi-agent collaboration. PROMPT FOR ENVIRONMENT SETTING IN MULTI-AGENT COLLABORATION. Topic Prompt for All Agents: You are part of a collaborative multi-agent system designed to propose a hypothesis based on a given research background. Each of you has a specific role: - **Analyst**: Analyzes the research background, distills its essence, and provides pivotal keywords or topics for further exploration. 
- **Engineer**: Uses the keywords to plan and conduct systematic searches, meticulously gathering and organizing findings into detailed and structured "clues". - **Scientist**: Crafts a potential hypothesis based on the organized findings and the original research back- ground. - **Critic**: Evaluates the hypothesis for its novelty, coherence, and scientific validity, providing feedback for refinement if necessary. Your collaboration is iterative. Based on feedback from the Critic, the process can loop back to the Analyst for refined insights, leading to new searches by the Engineer, and a refined hypothesis by the Scientist. Stay focused on your individual roles, collaborate effectively, and aim to derive a well-informed, novel hypothesis based on the research background provided. Research Background: background Objective: Using the research background and collaborative insights, the goal is to construct the most logical and scientifi- cally robust hypothesis. Let’s collaborate effectively to achieve this. Table 13: Prompts for ChatGPT evaluation on novelty metric. PROMPT FOR CHATGPT EVALUATION ON NOVELTY METRIC. You are an expert in biomedicine. Evaluate the novelty of the generated scientific hypothesis and the given background. The score range should be 0 to 3. 0 means there’s no novelty, which indicates that the hypothesis is a paraphrase of the background. 1 means there’s slight novelty. 2 means there’s moderate novelty. 3 means the hypothesis has strong novelty, which gives new insights beyond the background. Output is an integer. Please provide a step-by-step explanation supporting your score. At the end of your response, clearly state the score in the format ’Score: [value]’, where [value] can be 1, 2, or 3. Background: {background} Generated scientific hypothesis: {hypothesis} Table 14: Prompts for ChatGPT evaluation on relevance metric. PROMPT FOR CHATGPT EVALUATION ON RELEVANCE METRIC. You are an expert in biomedicine. Evaluate the relevance of the generated scientific hypothesis and the given background. The score range should be 0 to 3. 0 means there’s no relevance. 1 means there’s slight relevance. 2 means there’s moderate relevance. 3 means they are strongly related. Output is an integer. Please provide a step-by-step explanation supporting your score. At the end of your response, clearly state the score in the format ’Score: [value]’, where [value] can be 1, 2, or 3. Background: {background} Generated scientific hypothesis: {hypothesis} 26 Table 15: Prompts for ChatGPT evaluation on significance metric. PROMPT FOR CHATGPT EVALUATION ON SIGNIFICANCE METRIC. You are an expert in biomedicine. Evaluate the significance of the generated scientific hypothesis and the given background. The score range should be 0 to 3. 0 means there’s no significance, which indicates that the hypothesis is just a common knowledge. 1 means there’s slight significance. 2 means there’s moderate significance. 3 means the hypothesis has strong significance, which gives significant insights beyond the background. Output is an integer. Please provide a step-by-step explanation supporting your score. At the end of your response, clearly state the score in the format ’Score: [value]’, where [value] can be 1, 2, or 3. Background: {background} Generated scientific hypothesis: {hypothesis} Table 16: Prompts for ChatGPT evaluation on verifiability metric. PROMPT FOR CHATGPT EVALUATION ON VERIFIABILITY METRIC. You are an expert in biomedicine. 
Evaluate the verifiability of the generated scientific hypothesis and the given background. The score range should be 0 to 3. 0 means there's no verifiability, which indicates that the hypothesis is not possible to be verified in future work. 1 means there's slight verifiability. 2 means there's moderate verifiability. 3 means the hypothesis has strong verifiability, which means the hypothesis is very likely to be verified in future work. Output is an integer. Please provide a step-by-step explanation supporting your score. At the end of your response, clearly state the score in the format 'Score: [value]', where [value] can be 1, 2, or 3.
Background: {background}
Generated scientific hypothesis: {hypothesis}
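To illustrate how such an evaluation prompt can be executed and its integer score recovered, the following sketch (an editorial addition, not part of the study's released code) uses the OpenAI Python SDK; the model name and the shortened template are assumptions.

    import re
    from openai import OpenAI

    EVAL_PROMPT = (
        "You are an expert in biomedicine.\n"
        "Evaluate the verifiability of the generated scientific hypothesis and "
        "the given background. The score range should be 0 to 3. Output is an "
        "integer. Please provide a step-by-step explanation supporting your "
        "score. At the end of your response, clearly state the score in the "
        "format 'Score: [value]'.\n"
        "Background: {background}\n"
        "Generated scientific hypothesis: {hypothesis}"
    )

    def evaluate(background, hypothesis):
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed annotator model
            temperature=0,          # deterministic scoring
            messages=[{"role": "user", "content": EVAL_PROMPT.format(
                background=background, hypothesis=hypothesis)}],
        )
        text = response.choices[0].message.content
        match = re.search(r"Score:\s*\[?(\d)\]?", text)  # tolerates "Score: [2]"
        if match is None:
            raise ValueError("no score found in model output")
        return int(match.group(1))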
ai_researcher
5
LLM-SR_Scientific_Equation_Discovery_via_Programming_with_Large_Language_Models.pdf
Large Language Models as Software Components: A Taxonomy for LLM-Integrated Applications

Irene Weber
Kempten University of Applied Sciences, Germany
[email protected]

Abstract

Large Language Models (LLMs) have become widely adopted recently. Research explores their use both as autonomous agents and as tools for software engineering. LLM-integrated applications, on the other hand, are software systems that leverage an LLM to perform tasks that would otherwise be impossible or require significant coding effort. While LLM-integrated application engineering is emerging as a new discipline, its terminology, concepts and methods need to be established. This study provides a taxonomy for LLM-integrated applications, offering a framework for analyzing and describing these systems. It also demonstrates various ways to utilize LLMs in applications, as well as options for implementing such integrations.

Following established methods, we analyze a sample of recent LLM-integrated applications to identify relevant dimensions. We evaluate the taxonomy by applying it to additional cases. This review shows that applications integrate LLMs in numerous ways for various purposes. Frequently, they comprise multiple LLM integrations, which we term "LLM components". To gain a clear understanding of an application's architecture, we examine each LLM component separately. We identify thirteen dimensions along which to characterize an LLM component, including the LLM skills leveraged, the format of the output, and more. LLM-integrated applications are described as combinations of their LLM components. We suggest a concise representation using feature vectors for visualization.

The taxonomy is effective for describing LLM-integrated applications. It can contribute to theory building in the nascent field of LLM-integrated application engineering and aid in developing such systems. Researchers and practitioners explore numerous creative ways to leverage LLMs in applications. Though challenges persist, integrating LLMs may revolutionize the way software systems are built.

Keywords: large language model, LLM-integrated, taxonomy, copilot, architecture, AI agent, LLM component

1. Introduction

Large Language Models (LLMs) have significantly impacted various sectors of economy and society [47]. Due to their proficiency in text understanding, creative work, communication, knowledge work, and code writing, they have been adopted in numerous fields, such as medicine, law, marketing, education, human resources, etc. Public discussions often focus on the ethical aspects and societal consequences of these systems [36, 39]. Meanwhile, research investigates Artificial General Intelligences and autonomous AI agents that can use services, data sources, and other tools, and collaborate to solve complex tasks [11, 62, 57, 21]. In addition, LLMs offer many opportunities to enhance software systems. They enable natural language interaction [59], automate complex tasks [19], and provide supportive collaboration, as seen with recent LLM-based assistant products often branded as "copilots"¹.

This paper addresses the potential of LLMs for software development by integrating their capabilities as components into software systems.
This contrasts with current software engineering research, which views LLMs as tools for software development rather than as software components [14, 22], and with the considerable body of research examining LLMs as autonomous agents within multiagent systems [21].

Software systems that invoke an LLM and process its output are referred to as "LLM-integrated applications", "LLM-integrated systems", "LLM-based applications", etc. [32, 13, 57]. LLMs are versatile, multipurpose tools capable of providing functionalities that would otherwise be unfeasible or require substantial development efforts [15, 24]. By significantly expediting system development, they have the potential to revolutionize not only the way users interact with technology, but also the fundamental processes of software development.

LLM-integrated applications engineering is emerging as a research field. E.g., [10] proposes LLM Systems Engineering (LLM-SE) as a novel discipline, and [44, 8, 7] discuss experiences and challenges that developers of such systems encounter in practice.

This study develops a taxonomy that provides a structured framework for categorizing and analyzing LLM-integrated applications across various domains. To develop and evaluate the taxonomy, we collected a sample of LLM-integrated applications, concentrating on technical and industrial domains. These applications showcase a broad range of opportunities to leverage LLMs, often integrating LLMs in multiple ways for distinct purposes. In developing the taxonomy, we found that examining each of these integrations, termed "LLM components", separately is crucial for a clear understanding of an application's architecture.

The taxonomy adopts an original architectural perspective, focusing on how the application interacts with the LLM while abstracting from the specifics of application domains. For researchers, the taxonomy contributes to shape a common understanding and terminology, thus aiding theory building in this emerging domain [29, 50, 18]. For practitioners, the taxonomy provides inspiration for potential uses of LLMs in applications, presents design options, and helps identify challenges and approaches to address them.

Objectives. In this study, a taxonomy is understood as a set of dimensions divided into characteristics. The objective is to identify dimensions that are useful for categorizing the integration of LLMs in applications from an architectural perspective. To be most effective, the taxonomy should be easy to understand and apply, yet distinctive enough to uncover the essential aspects. Additionally, we aim to develop a visual representation tailored to the taxonomy's intended purposes.

Overview. The following section 2 provides background on LLMs and introduces relevant concepts. Section 3 presents an overview of related work. The study design adheres to a Design Science Research approach [46]. We apply established methods for taxonomy design [42, 48] as described in Section 4. This section also presents the sample of LLM-integrated applications used for this study. The developed taxonomy is presented, demonstrated and formally evaluated in section 5. In section 6, we discuss its usability and usefulness. Section 7 summarizes the contributions, addresses limitations, and concludes.

2. Large Language Models

2.1. Background
State-of-the-art LLMs such as GPT-3.5, GPT-4, Llama, PALM2, etc., are artificial neural networks consisting of neurons, i.e., very simple processing units, that are organized in layers and connected by weighted links. Training a neural network means adapting these weights such that the neural network shows a certain desired behavior. Specifically, an LLM is trained to predict the likelihoods of pieces of text, termed tokens, to occur as continuations of a given text presented as input to the LLM. This input is referred to as prompt. The prompt combined with the produced output constitutes the context of an LLM. It may comprise more than 100k tokens in state-of-the-art LLMs². Still, its length is limited and determines the maximum size of prompts and outputs that an LLM is capable of processing and generating at a time.

Training of an LLM optimizes its parameters such that its computed likelihoods align with real text examples. The training data is a vast body of text snippets extracted, processed, and curated from sources such as Wikipedia, Github code repositories, common websites, books, or news archives. An LLM trained on massive examples is termed a foundation model or pre-trained model. During training, an LLM not only learns to produce correct language but also absorbs and stores information and factual knowledge. However, it is well known that LLMs frequently pick up biases, leading to ethical problems. They may also produce factually incorrect outputs that sound plausible and convincing, termed hallucinations.

Recent findings show that LLMs can be applied to a wide range of tasks by appropriately formulating prompts. Different prompt patterns succeed in different tasks. Basic approaches rely on instructing the LLM to solve a task described or explained in the prompt. In few-shot prompting (also known as few-shot learning), the prompt is augmented with example input-output pairs illustrating how to solve the task, e.g., the requested output format. The number of examples can vary. Prompting with one example is called one-shot prompting, while prompting without any examples is called zero-shot prompting. One-shot and few-shot prompting fall under the broader category of in-context learning. Prompt patterns such as chain-of-thought and thinking-aloud aim to elicit advanced reasoning capabilities from LLMs.

As effective prompts are crucial for unlocking the diverse capabilities of an LLM, the discipline of prompt engineering is evolving, focusing on the systematic design and management of prompts [66, 9, 53, 31].

¹ E.g., https://docs.github.com/en/copilot, https://copilot.cloud.microsoft/en-us/copilot-excel, https://www.salesforce.com/einsteincopilot
² https://platform.openai.com/docs/models

2.2. Definitions

Invoking an LLM results in an input-processing-output sequence: Upon receiving a prompt, the LLM processes it and generates an output. We refer to an individual sequence of input-processing-output performed by the LLM as LLM invocation, and define an LLM-integrated application as a system in which the software generates the prompt for the LLM and processes its output. The concept of an application is broad, encompassing service-oriented architectures and systems with components loosely coupled via API calls.

Given an LLM's versatility, an application can utilize it for different tasks, each demanding a specific approach to create the prompt and handle the result.
This paper defines a particular software component that accomplishes this as an LLM-based software component or, simply, LLM component. An LLM-integrated application can comprise several LLM components. The study develops a taxonomy for LLM components. LLM-integrated applications are described as combinations of their LLM components.

3. Related Work

With the recent progress in generative AI and LLMs, the interest in these techniques has increased, and numerous surveys have been published, providing an extensive overview of technical aspects of LLMs [72], reviewing LLMs as tools for software engineering [22], and discussing the technical challenges of applying LLMs across various fields [25]. Further studies address the regulatory and ethical aspects of Generative AI and ChatGPT, with a particular focus on AI-human collaboration [41], and Augmented Language Models (ALMs), which are LLMs that enhance their capabilities by querying tools such as APIs, databases, and web search engines [38].

Taxonomies related to LLMs include a taxonomy for prompts designed to solve complex tasks [49] and a taxonomy of methods for cost-effectively invoking a remote LLM [60]. A comparative analysis of studies on applications of ChatGPT is provided by [27], whereas LLMs are compared based on their application domains and the tasks they solve in [20]. Most closely related to the taxonomy developed here is a taxonomy for LLM-powered multiagent architectures [21] which focuses on autonomous agents with less technical detail. Taxonomies of applications of AI in enterprises [48] and applications of generative AI, including but not limited to LLMs [52], are developed using methods similar to those in our study.

Several taxonomies in the field of conversational agents and task-oriented dialog (TOD) systems address system architecture [1, 40, 12, 3]. However, they omit detailed coverage of the integration of generative language models.

4. Methods

We constructed the taxonomy following established guidelines [42, 48, 29], drawing from a sample of LLM-integrated applications. These applications are detailed in section 4.1.

4.1. Development

Taxonomy. We derived an initial taxonomy from the standard architecture of conversational assistants described in [3], guided by the idea that conversational assistants are essentially "chatbots with tools", i.e., language-operated user interfaces that interact with external systems. This approach proved unsuccessful. The second version was based on the classical three-tier software architecture, and then extended over several development cycles. By repeatedly applying the evolving taxonomy to the example instances, we identified dimensions and characteristics using an "empirical-to-conceptual" approach. When new dimensions emerged, additional characteristics were derived in a "conceptual-to-empirical" manner. After five major refinement cycles, the set of dimensions and characteristics solidified. In the subsequent evaluation phase, we applied the taxonomy to a new set of example instances that were not considered while constructing the taxonomy. As the dimensions and characteristics remained stable, the taxonomy was considered complete. In the final phase, we refined the wording and visual format of the taxonomy.

Visualization. Developing a taxonomy involves creating a representation that effectively supports its intended purpose [29]. Taxonomies can be represented in various formats, with morphological boxes [54, 55] or radar charts [21] being well-established approaches. We evaluated morphological boxes, because they effectively position categorized instances within the design space. However, we found that they make it difficult to perceive a group of categorized instances as a whole since they occupy a large display area. This drawback is significant for our purposes, as LLM-integrated applications often comprise multiple LLM components. Therefore, we developed a more condensed visualization of the taxonomy based on feature vectors.

Example instances. We searched for instances of LLM-integrated applications for taxonomy development that should meet the following criteria:
• The application aims for real-world use rather than focusing on research only (such as testbeds for experiments or proofs-of-concept). It demonstrates efforts towards practical usability and addresses challenges encountered in real-world scenarios.
• The application's architecture, particularly its LLM components, is described in sufficient detail for analysis.
• The sample of instances covers a diverse range of architectures.
• The example instances are situated within industrial or technical domains, as we aim to focus on LLM-integrated applications beyond well-known fields like law, medicine, marketing, human resources, and education.

The search revealed a predominance of theoretical research on LLM-integrated applications while papers focusing on practically applied systems were scarce. Searching non-scientific websites uncovered commercially advertised AI-powered applications, but their internal workings were typically undisclosed, and reliable evaluations were lacking. Furthermore, the heterogeneous terminology and concepts in this emerging field make a comprehensive formal literature search unfeasible. Instead, by repeatedly searching Google Scholar and non-scientific websites using terms "LLM-integrated applications", "LLM-powered applications", "LLM-enhanced system", "LLM" and "tools", along with similar variants, we selected six suitable instances. Some of them integrate LLMs in multiple ways, totaling eleven distinct LLM components.

For a thorough evaluation, we selected new instances using relaxed criteria, including those intended for research. Additionally, we included a real-world example lacking explicit documentation to broaden the diversity of our sample and assess the taxonomy's coverage. Within the five selected instances, we identified ten LLM components.

4.2. Sample of LLM-integrated applications

Table 1 gives an overview of the sample. Names of applications and LLM components are uniformly written as one CamelCase word and typeset in small caps, deviating from the format chosen by the respective authors.

Table 1: Example instances selected for development (top 6) and evaluation (bottom 5)

Application       | References | LLM components
Honeycomb         | [7, 8]     | QueryAssistant
LowCode           | [5],[35]   | Planning, Executing
MyCrunchGpt       | [28]       | DesignAssistant, SettingsEditor, DomainExpert
MatrixProduction  | [69]       | Manager, Operator
WorkplaceRobot    | [37]       | TaskPlanning
AutoDroid         | [64]       | TaskExecutor, MemoryGenerator
ProgPrompt        | [51]       | ActionPlanning, ScenarioFeedback
FactoryAssistants | [26]       | QuestionAnswering
SgpTod            | [71]       | DstPrompter, PolicyPrompter
TruckPlatoon      | [70]       | Reporting
ExcelCopilot      | [16, 44]   | ActionExecutor, Advisor, IntentDetector, Explainer

LowCode. LowCode is a web-based application consisting of a prompt-definition section and a dialogue section. The prompt-definition section supports the design of prompts for complex tasks, such as composing extensive essays, writing resumes for job applications or acting as a hotel service chatbot [5]. In the dialogue section, users converse with an LLM to complete the complex task based on the defined prompt.

LowCode comprises two LLM components termed Planning and Executing. Planning operates in the prompt-definition section, where a user roughly describes a complex task, and Planning designs a workflow for solving it. The prompt-definition section offers a low-code development environment where the LLM-generated workflow is visualized as a graphical flowchart, allowing a user to edit and adjust the logic of the flow and the contents of its steps. For instance, in essay-writing scenarios, this involves inserting additional sections, rearranging sections, and refining the contents of sections. Once approved by the user, LowCode translates the modified workflow back into natural language and incorporates it into a prompt for Executing. In the dialogue section, users converse in interactive, multi-turn dialogues with Executing. As defined in the prompt, it acts as an assistant for tasks such as writing an essay or resume, or as a hotel service chatbot.

While the idea of the LLM planning a workflow might suggest using the LLM for application control, LowCode Planning actually serves as a prompt generator that supports developing prompts for complex tasks.
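As an illustration of this prompt-generator pattern, the sketch below is an assumption-laden reconstruction, not LowCode's published implementation: Planning drafts a numbered workflow, and the (possibly user-edited) workflow is folded back into a system prompt for Executing. The complete helper is a hypothetical LLM call.

    PLANNING_TEMPLATE = (
        "Design a step-by-step workflow for the following task. "
        "Answer with one numbered step per line.\n\nTask: {task}"
    )

    def plan_workflow(task, complete):
        """Planning: let the LLM draft a workflow as a list of steps."""
        raw = complete(PLANNING_TEMPLATE.format(task=task))
        # keep non-empty lines; naively strip a leading "1."-style number
        return [line.split(".", 1)[-1].strip()
                for line in raw.splitlines() if line.strip()]

    def executing_prompt(task, steps):
        """Translate the (possibly user-edited) workflow back into a prompt."""
        numbered = "\n".join("%d. %s" % (i + 1, s) for i, s in enumerate(steps))
        return ("You assist the user with the following task: %s\n"
                "Follow this workflow:\n%s" % (task, numbered))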
We categorize three LLM components solving tasks within MyCrunchGpt: a DesignAssistant guiding users through workflows and requesting pa- rameters for function and API calls; a SettingsEd- itor updating a JSON file with settings for a back- end software tool; and a DomainExpert which helps evaluating results by comparing them to related re- sults, e.g., existing airfoil designs, which it derives from its trained knowledge. MatrixProduction. MatrixProduction em- ploys an LLM for controlling a matrix production system [69]. While in a classical line production setup, workstations are arranged linearly and the manufacturing steps follow a fixed sequence, matrix production is oriented towards greater flexibility. transport vehicles Autonomous carry materials and intermediate products to workstations, termed automation modules, each offering a spectrum of manufacturing skills that it can contribute to the production process. Compared to line production, matrix production is highly adaptable and can manufacture a variety of personalized products with full automation. This requires intelligent production management to (a) create workplans that orchestrate and schedule the automation modules’ skills, and (b) program the involved automation modules such that they execute the required processing steps. MatrixProduction incorporates two LLM compo- nents: Manager creates workplans as sequences of skills (a), while Operator generates programs for the involved automation modules (b). MatrixProduction prompts Manager and Op- erator to provide textual explanations in addition to the required sequences of skills or automation module programs. The LLM output is processed by a parser before being used to control the physi- cal systems. Manager relies on built-in production- specific knowledge of the LLM such as “a hole is pro- duced by drilling”. Noteworthy in this approach is its tight integra- tion into the system landscape of Industry 4.0. The few-shot Manager and Operator prompts are generated automatically using Asset Adminis- tration Shells, which are standardized, technology- 6 independent data repositories storing digital twins of manufacturing assets for use in Industry 4.0 [2]. WorkplaceRobot. An experimental robot system is enhanced with LLM-based task planning in [37]. The robot operates in a workplace environment fea- turing a desk and several objects. It has previously been trained to execute basic operations expressed in natural language such as “open the drawer” or “take the pink object and place it in the drawer”. LLM-based task planning enables the robot to per- form more complex orders like “tidy up the work area and turn off all the lights”. To this end, an LLM is prompted to generate a sequence of basic operations that accomplish the complex order. Although the robot expects operations phrased in language, the LLM is prompted with a natural Python coding task. For instance, the basic opera- tion “turn on the green light” corresponds to a Python command push_button(’green’). The prompt for the LLM includes several examples each consisting of a description of an environment state, a complex order formatted as a comment, and a sequence of Python robot commands that accomplish the com- plex order. When invoking the LLM to generate the Python program for a new order, the prompt is aug- mented with a description of the environment’s cur- rent state and the new order as a comment. The Python code produced by the LLM is trans- lated back to a sequence of basic operations in nat- ural language. 
When the robot executes these oper- ations, there is no feedback about successful comple- tion. Rather, the system assumes that all basic op- erations require a fixed number of timesteps to com- plete. AutoDroid. The goal of mobile task automation is hands-free user interaction for smartphones through voice commands. AutoDroid is a voice control sys- tem for smartphones that can automatically execute complex orders such as “remind me to do laundry on May 11th” or “delete the last photo I took” [64, 65]. as “scroll down, then press button x” in the calen- dar app. AutoDroid employs an LLM component TaskExecutor to plan these sequences of opera- tions. The challenge is that the next operation to ex- ecute depends on the current state of the Android app which continuously changes as the app is operated. AutoDroid solves this by invoking the TaskEx- ecutor repeatedly after each app operation with the prompt comprising the updated state of the Graph- ical User Interface (GUI) along with the user’s com- plex order. Before executing irrevocable operations, such as per- manently deleting data or calling a contact, Auto- Droid prompts the user to confirm or adjust the op- eration. TaskExecutor is instructed to include a “confirmation needed” hint in its output for such op- erations. The prompt for TaskExecutor comprises an ex- tract from a knowledge base which is built automati- cally in an offline learning phase as follows: In a first step, a “UI Automator” (which is not an LLM com- ponent) automatically and randomly operates the GUI elements of an Android app to generate a UI Transition Graph (UTG). The UTG has GUI states as nodes and the possible transitions between GUI states as edges. As next steps, AutoDroid invokes two LLM components referred to as MemoryGen- erators to analyze the UTG. The first MemoryGenerator is prompted repeat- edly for each GUI state in the UTG. Its task is to explain the functionality of the GUI elements. Be- sides instructions and examples of the table format desired as output, its prompt includes an HTML rep- resentation of the GUI state, the GUI actions preced- ing this state, and the GUI element operated next. Its output consists of tuples explaining the function- ality of a GUI element by naming the derived func- tionality (e.g., “delete all the events in the calendar app”) and the GUI states and GUI element actions in- volved. Similarly, the second MemoryGenerator is prompted to output a table listing GUI states and explanations of their functions. These tables consti- tute AutoDroid’s knowledge base. Such complex orders are fulfilled by performing se- quences of basic operations in an Android app, such ProgPrompt. ProgPrompt [51] is an approach to to LLM-based robot task planning similar 7 Its robot is controlled by WorkplaceRobot. Python code and works in a real and a simulated household environment. ProgPrompt comprises two LLM components. Ac- tionPlanning generates Python scripts for tasks such as “microwave salmon” using basic opera- tions like grab(’salmon’), open(’microwave’), and putin(’salmon’, ’microwave’), notably with- out considering the current state of the environment. To establish a feedback loop with the environment, ActionPlanning adds assert statements. These statements verify the preconditions of basic opera- tions and trigger remedial actions when preconditions are not met. 
For instance, a script for “microwave salmon” comprises the following code fragment: if assert(’microwave’ is ’opened’) else: open(’microwave’) putin(’salmon’, ’microwave’) When operating in the simulated environment, ProgPrompt can verify an assert statement through its second LLM component, Scenario- Feedback. Prompted with the current state of the environment and the assert statement, Scenario- Feedback evaluates it and outputs True or False. FactoryAssistants. FactoryAssistants advise workers on troubleshooting production line issues in two manufacturing domains: detergent production and textile production [26]. The assistants leverage domain knowledge from FAQs and documented prob- lem cases to answer user queries. The required do- main knowledge is provided as a part of the prompt. SgpTod. SgpTod employs an LLM to implement a chatbot, specifically, a task-oriented dialogue (TOD) system [71]. TOD systems are also known as conver- sational assistants. In contrast to open-domain dia- logue (ODD) systems, which engage users in goalless conversations, they are designed for assisting users in specific tasks. In general, TOD systems require the following components [3]: Natural Language Understanding (NLU), analyzing the user’s input to classify intents and extract entities; Dialogue Management (DM) for deciding on a system action that is appropriate in a given dialogue state (e.g., ask for more informa- tion or invoke a hotel booking service); and Natu- ral Language Generation (NLG) for producing a re- sponse that the TOD system can present to the user. Intent classification, also known as intent detection, matches free-text user input to one of several tasks a TOD system can perform (e.g., book a hotel). Entity extraction isolates situational values, called entities, from the user input (e.g., the town and the date of the hotel booking). The TOD system may require several dialogue turns to elicit all necessary entities from the user. In TOD research, the system’s in- ternal representation of the user’s intentions and the entity values is commonly referred to as its “belief state”. For example, in the restaurant search domain, the belief state may include attribute-value pairs like cuisine:Indian and pricerange:medium. SgpTod is a multi-domain TOD system, concur- rently handling multiple task domains found in stan- dard TOD evaluation datasets, such as recommend- ing restaurants or finding taxis. Similar to other ex- perimental TOD systems [23], SgpTod accesses a database that stores information from the task do- mains, such as available hotels and restaurants. SgpTod comprises two LLM components, called DstPrompter and PolicyPrompter, that are both invoked in every dialogue turn between SgpTod and the user. The DstPrompter handles the NLU aspect, analyzing the user’s input and populating the system’s belief state. It outputs is an SQL query suited to extract the database entries that match the current belief state. Upon retrieving the database en- tries, SgpTod invokes its PolicyPrompter which covers both DM and NLG. Prompted with the dia- logue history and the database entries retrieved, it produces a two-part output: a natural language re- sponse for NLG and a system action for DM. TruckPlatoon. The concept of truck platooning means that trucks travel closely together for bet- ter fuel efficiency and traffic flow. TruckPla- toon comprises an algorithmic control loop which autonomously maintains a consistent distance be- tween trucks. 
It invokes an LLM to generate natural-language reports on the platoon's performance and stability from measurements tracked by the control algorithm, providing easily understandable information for engineers involved in monitoring and optimizing the truck platooning system.

ExcelCopilot. ExcelCopilot is an example of a recent trend where software companies integrate LLM-based assistants, often termed "copilots", into their products [44]. These copilots not only provide textual guidance but also perform actions within the software environment, constituting a distinctive type of LLM-integrated application. We chose ExcelCopilot as an example for evaluating our taxonomy. Since its implementation is undisclosed, we infer its architecture from indirect sources, including a screencast and a report on insights and experiences from copilot developers [16, 44]. This inferred architecture may deviate from the actual implementation.

ExcelCopilot is accessible in a task bar alongside the Excel worksheet. It features buttons with context-dependent suggestions of actions and a text box for users to type in commands in natural language. ExcelCopilot only works with data tables, so its initial suggestion is to convert the active worksheet's data into a data table. Copilot functions activate when a data table or part of it is selected. It then presents buttons for four top-level tasks: "add formula columns", "highlight", "sort and filter", and "analyze". The "analyze" button triggers the copilot to display more buttons, e.g., one that generates a pivot chart from the selected data. ExcelCopilot can also add a formula column to the data table and explain the formula in plain language.

When a user inputs a free-text command, ExcelCopilot may communicate its inability to fulfill it. This constantly occurs with commands requiring multiple steps, indicating that ExcelCopilot lacks a planning LLM component as seen in, for example, MatrixProduction. This observation, along with its mention in [44], suggests that ExcelCopilot employs an intent detection-skill routing architecture. This architecture includes an LLM component that maps free-text user commands to potential intents and then delegates to other LLM components tasked with generating actions to fulfill those intents. Accordingly, ExcelCopilot comprises several types of LLM components:

• Several distinct ActionExecutors generate code for specific application actions, such as creating a pivot table, designing a worksheet formula, inserting a diagram, and so on.
• An Advisor suggests meaningful next actions. Its outputs serve to derive button captions and prompts for ActionExecutors.
• When a user inputs a free-text command, the IntentDetector is invoked to determine and trigger a suitable ActionExecutor. The IntentDetector communicates its actions to users and informs them when it cannot devise a suitable action.
• The Explainer generates natural language explanations of formulae designed by ExcelCopilot. It is unclear whether, under the hood, the ActionExecutor is generating both the formula and the explanation, or if two separate LLM components are being invoked. We assume the latter, i.e., that a separate Explainer LLM component exists.

While users interact repeatedly with ExcelCopilot, each interaction adheres to a single-turn pattern, with the user providing a command and ExcelCopilot executing it [44].

5. A Taxonomy for LLM Components and LLM-Integrated Applications

When developing the taxonomy, it emerged that analyzing an LLM-integrated application should begin with identifying and describing its distinct LLM components. Analyzing each LLM component separately helps capture details and provides a clear understanding of how the application utilizes LLM capabilities. The LLM-integrated application can then be described as a combination of the LLM components it employs.
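This component view can be made concrete with a minimal sketch (an editorial illustration; the interface and names are assumptions, not from the paper): an LLM component couples a prompt builder with an output parser around a generic LLM call.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class LLMComponent:
        name: str
        build_prompt: Callable[[dict], str]    # application context -> prompt
        parse_output: Callable[[str], object]  # raw LLM text -> usable result

        def invoke(self, llm, context):
            """One LLM invocation: build the prompt, call the LLM, parse."""
            return self.parse_output(llm(self.build_prompt(context)))

    # An application combines several such components, e.g. a simple router:
    intent_detector = LLMComponent(
        name="IntentDetector",
        build_prompt=lambda ctx: "Map this command to one intent: " + ctx["command"],
        parse_output=str.strip,
    )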
5. A Taxonomy for LLM Components and LLM-Integrated Applications

When developing the taxonomy, it emerged that analyzing an LLM-integrated application should begin with identifying and describing its distinct LLM components. Analyzing each LLM component separately helps capture details and provides a clear understanding of how the application utilizes LLM capabilities. The LLM-integrated application can then be described as a combination of the LLM components it employs.

Table 2: Dimensions and characteristics of the taxonomy. Codes of characteristics are printed in uppercase. “Meta” means “metadimension”. “MuEx” means “mutual exclusiveness”.

    Meta        Dimension     Characteristics                                   MuEx
    Invocation  Interaction   App, Command, Dialog                              enforced
    Invocation  Frequency     Single, Iterative                                 yes
    Function    Logic         cAlculate, Control                                yes
    Function    UI            none, Input, Output, Both                         yes
    Function    Data          none, Read, Write, Both                           yes
    Prompt      Instruction   none, User, LLM, Program                          enforced
    Prompt      State         none, User, LLM, Program                          enforced
    Prompt      Task          none, User, LLM, Program                          yes
    Prompt      Check         none, User, LLM, Program                          enforced
    Skills      Skills        reWrite, Create, conVerse, Inform, Reason, Plan   no
    Output      Format        FreeText, Item, Code, Structure                   no
    Output      Revision      none, User, LLM, Program                          enforced
    Output      Consumer      User, LLM, Program, Engine                        enforced

5.1. Overview and demonstration

The taxonomy identifies 13 dimensions for LLM components, grouped into five metadimensions as shown in table 2. It comprises both dimensions with genuinely mutually exclusive characteristics and those with non-exclusive characteristics. For dimensions related to the technical integration of LLMs within applications, mutual exclusiveness is enforced. Given the open nature of software architecture, the integration of LLMs allows for significant diversity. In practice, LLM components may show multiple characteristics within these dimensions. Nonetheless, the taxonomy requires categorizing each component with a predominant characteristic, enforcing a necessary level of abstraction to effectively organize and structure the domain.

We applied the taxonomy to categorize each of the example instances described in section 4.2. The results are depicted in figure 1. The dimensions and their characteristics are detailed and illustrated with examples in section 5.2.

The taxonomy visualizes an LLM component by a feature vector comprising binary as well as multi-valued features. Non-mutually exclusive dimensions are represented by a set of binary features. The remaining dimensions are encoded as n-valued features where n denotes the number of characteristics. For compactness, we use one-letter codes of the characteristics as feature values in the visualizations. In table 2, these codes are printed in upper case in the respective characteristic's name.

A feature vector representing an LLM component is visualized in one line. For dimensions with non-mutually exclusive characteristics, all possible codes are listed, with the applicable ones marked. The remaining dimensions are represented by the code of the applicable characteristic, with the characteristic none shown as an empty cell. We shade feature values with different tones to support visual perception. LLM components within the same application are grouped together, visualizing an LLM-integrating application in a tabular format.
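This encoding maps naturally onto a small data structure. The sketch below is ours, not part of the original tooling; the field values shown approximate the TruckPlatoon categorization as described in the text (App interaction, Output UI, reWrite skill, FreeText output for a human user), and the remaining values are assumptions for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class LLMComponent:
        """Feature vector for one LLM component (one-letter codes per table 2)."""
        interaction: str                              # A(pp), C(ommand), or D(ialog)
        frequency: str                                # S(ingle) or I(terative)
        logic: str                                    # A = cAlculate, C = Control
        ui: str = ""                                  # "", I, O, or B
        data: str = ""                                # "", R, W, or B
        prompt: dict = field(default_factory=dict)    # part -> origin, e.g. {"State": "P"}
        skills: set = field(default_factory=set)      # non-exclusive subset of {W,C,V,I,R,P}
        out_format: set = field(default_factory=set)  # non-exclusive subset of {F,I,C,S}
        revision: str = ""                            # "", U, L, or P
        consumer: str = "U"                           # U, L, P, or E

    # Approximate categorization of the TruckPlatoon report generator:
    truck_platoon = LLMComponent(
        interaction="A", frequency="S", logic="A", ui="O",
        prompt={"Instruction": "P", "State": "P", "Task": "P"},
        skills={"W"}, out_format={"F"}, revision="", consumer="U",
    )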
5.2. Dimensions and characteristics

5.2.1. Invocation dimensions

Two Invocation dimensions address the way the LLM is invoked within the application.

Interaction describes how the user interacts with the LLM with three characteristics:

App: Users never converse with the LLM directly in natural language, rather the application invokes the LLM automatically. E.g., users do not interact directly with ExcelCopilot ActionExecutor or with MatrixProduction Operator.

Command: Users input single natural language commands. E.g., users interact with AutoDroid TaskExecutor through single natural language commands.

Dialog: Users engage in multi-turn dialogues with the LLM component to achieve a use goal. E.g., users repeatedly prompt LowCode Executing or MyCrunchGpt DesignAssistant in multi-turn dialogues to obtain an essay or an airfoil design, respectively.

[Figure 1: Categorized example instances, shown as one feature vector per LLM component: Honeycomb QueryAssistant; LowCode Planning and Executing; MyCrunchGpt DesignAssistant, SettingsEditor, and DomainExpert; MatrixProduction Manager and Operator; WorkplaceRobot; AutoDroid Executor and MemoryGenerators; ProgPrompt ActionPlanning and ScenarioFeedback; FactoryAssistant; SgpTod DstPrompter and PolicyPrompter; TruckPlatoon; ExcelCopilot ActionExecutors, Advisor, IntentDetector, and Explainer. See table 2 for a legend. ∗, 2: multiple LLM components.]

Frequency addresses how often the application invokes a specific LLM component to fulfill a goal:

Single: A single invocation of an LLM component is sufficient to produce the result. E.g., in MyCrunchGpt, the application internally invokes distinct LLM components once for each user input by injecting varying prompt instructions.

Iterative: The LLM component is invoked repeatedly to produce the result. E.g., AutoDroid TaskExecutor is invoked multiple times to fulfill a command with an updated environment description in the State prompt; LowCode Executing is repeatedly prompted by the user to achieve the use goal while the application updates the dialogue history.
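The Iterative pattern, repeated invocation with a refreshed State part, amounts to a simple loop. The function names, the env interface, and the completion marker below are hypothetical; the sketch mirrors the AutoDroid-style loop described above, not its actual code.

    def call_llm(prompt: str) -> str:
        """Placeholder for an LLM invocation."""
        raise NotImplementedError

    def fulfill_command(command: str, env, max_steps: int = 20) -> None:
        """Iteratively invoke one LLM component until the command is fulfilled."""
        instruction = "Propose the next action to progress on the task."  # static part
        for _ in range(max_steps):
            state = env.describe()          # State part, refreshed every iteration
            action = call_llm(f"{instruction}\nState: {state}\nTask: {command}")
            if action.strip() == "DONE":    # hypothetical completion marker
                return
            env.execute(action)             # apply the proposed action to the environment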
5.2.2. Function dimensions

The Function dimensions are derived from the classical three-tier software architecture model which segregates an application into three distinct layers: presentation, logic and data [17]. The presentation layer implements the UI. On the input side, it allows users to enter data and commands that control the application. On the output side, it presents information and provides feedback on the execution of commands. The logic layer holds the code that directly realizes the core objectives and processes of an application, such as processing data, performing calculations, and making decisions. The data layer of an application manages the reading and writing of data from and to persistent data storage. Due to its versatility, an LLM component can simultaneously implement functionality for all three layers. The taxonomy addresses this with three Function dimensions.

UI indicates whether an LLM component contributes significantly to the user interface of an application, avoiding the need to implement graphical UI controls or display elements:

none: No UI functionality is realized by the LLM. E.g., in ExcelCopilot, the LLM does not replace any UI elements.

Input: Input UI is (partially) implemented by the LLM. E.g., in MatrixProduction Manager, users input their order in natural language, obviating a product configuration GUI.

Output: Output UI is (partially) implemented by the LLM. E.g., in TruckPlatoon, the output generated by the LLM component can replace a data cockpit with gauges and other visuals displaying numerical data.

Both: Input and output UI are (partially) implemented by the LLM. E.g., in MyCrunchGpt, the DesignAssistant provides a convenient conversational interface for parameterization of APIs and tools and feedback on missing values, which otherwise might require a complex GUI.

Logic indicates whether the LLM component determines the control flow of the application. It discerns two characteristics:

cAlculate: The output does not significantly impact the control flow of the application, i.e., the output is processed like data. E.g., MyCrunchGpt SettingsEditor modifies a JSON file, replacing a programmed function; MyCrunchGpt DesignAssistant asks the user for parameters, but the sequence of calling APIs and tools follows a predefined workflow; the workflow computed by LowCode Planning is displayed without influencing the application's control flow.

Control: The output of the LLM is used for controlling the application. E.g., the plans generated by MatrixProduction Manager serve to schedule and activate production modules; the actions proposed by AutoDroid TaskExecutor are actually executed and determine how the control flow of the app proceeds.

Since an LLM invocation always computes a result, cAlculate is interpreted as “calculate only”, making cAlculate and Control mutually exclusive.

Data addresses whether the LLM contributes to reading or writing persistent data:

none: The LLM does not contribute to reading or writing persistent data. This characteristic applies to most sample instances.

Read: The LLM is applied for reading from a persistent data store. E.g., SgpTod DstPrompter generates SQL queries which the application executes; Honeycomb QueryAssistant devises analytical database queries.

Write and Both: No LLM component among the samples generates database queries for creating or updating persistent data.
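The distinction between cAlculate and Control can be made explicit in code. In this minimal sketch (ours; the branch labels, the production-module stubs, and the call_llm stub are hypothetical), the first function treats the LLM output as data, while the second lets it decide which code path runs:

    def call_llm(prompt: str) -> str:
        """Placeholder for an LLM invocation."""
        raise NotImplementedError

    def start_drilling_module(): print("drilling")   # hypothetical module triggers
    def start_milling_module(): print("milling")
    def halt_line(): print("halt")

    def summarize_measurements(measurements: list) -> str:
        # cAlculate: the output is displayed as data; control flow is unaffected.
        return call_llm(f"Summarize these readings in plain language: {measurements}")

    def route_order(order_text: str) -> None:
        # Control: the output selects which application code runs next.
        next_step = call_llm(
            f"Output exactly one of DRILL, MILL, or STOP for this order: {order_text}"
        ).strip()
        if next_step == "DRILL":
            start_drilling_module()
        elif next_step == "MILL":
            start_milling_module()
        else:
            halt_line()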
5.2.3. Prompt-related dimensions

Integrating an LLM into an application poses specific requirements for prompts, such as the need for prompts to reliably elicit output in the requested form [68]. While a broad range of prompt patterns have been identified and investigated [66], there is still a lack of research on successful prompt patterns specifically for LLM-integrated applications, on which this taxonomy could build. Developing prompt taxonomies is a challenging research endeavor in itself [49] and is beyond the scope of this research. Therefore, the taxonomy does not define a dimension with specific prompt patterns as characteristics, but rather focuses on how the application generates the prompt for an LLM component from a technical perspective.

Prompts generally consist of several parts with distinct purposes, generated by different mechanisms. Although many authors explore the concepts, a common terminology has yet to be established. This is illustrated in table 3, showing terms from an ad-hoc selection of recent papers addressing prompt generation in applications. In the table, italics indicate that the authors refrain from introducing an abstract term and instead use a domain-specific description. The term “examples” indicates a one-shot or few-shot prompt pattern. The terms that are adopted for the taxonomy are underlined.

The taxonomy distinguishes three prompt parts referred to as Prompt Instruction, Prompt State, and Prompt Task. These parts can occur in any order, potentially interleaved, and some parts may be absent.

• Instruction is the part of a prompt that outlines how to solve the task. Defined during LLM component development, it remains static throughout an application's lifespan.

• State is the situation-dependent part of the prompt that is created dynamically every time the LLM is invoked. The taxonomy opts for the term State instead of “context” in order to avoid confusion with the “LLM context” as explained in section 2. The State may include the current dialogue history, an extract of a knowledge base needed specifically for the current LLM invocation, or a state or scene description, etc.

• Task is the part of the prompt conveying the task to solve in a specific invocation.

Prompt Instruction, State and Task describe the origins of the prompt parts by uniform characteristics:

none: The prompt part is not present. E.g., ProgPrompt ActionPlanning has no State prompt, nor does LowCode Planning (except the dialogue history when planning a subprocess). Instruction and Task prompt parts are present in all sample instances.

User: The user phrases the prompt part. E.g., the Task for ExcelCopilot IntentDetector or for LowCode Planning is phrased by the user. There are no sample instances where the user provides the Instruction or State prompt parts.

LLM: The prompt part is generated by an LLM. E.g., LowCode Planning generates the State for LowCode Executing, and ExcelCopilot IntentDetector generates the Task for ExcelCopilot ActionExecutors.

Program: Application code generates the prompt part. E.g., AutoDroid programmatically generates the State and the Task parts for its MemoryGenerators in the knowledge base building phase.

The Prompt Instruction dimension is always generated by Program. While a user and possibly an LLM have defined this prompt part during application development, this falls outside the scope of this taxonomy. Therefore, the Prompt Instruction dimension is not discriminating and categorizes all cases as Program. It is retained in the taxonomy for completeness and better understandability.
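The three prompt parts translate directly into a small data structure. The sketch below is ours, with hypothetical field names; it simply encodes that the Instruction is fixed at development time, while State and Task are supplied per invocation by a user, an LLM, or program code:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class PromptTemplate:
        instruction: str  # static; defined during LLM component development

        def build(self, state: Optional[str], task: str) -> str:
            """Assemble the prompt for one invocation from its parts."""
            parts = [self.instruction]
            if state is not None:            # the State part may be absent
                parts.append(f"State:\n{state}")
            parts.append(f"Task:\n{task}")
            return "\n\n".join(parts)

    planner = PromptTemplate(instruction="Decompose the goal into numbered steps.")
    # Here the State originates from program code and the Task is phrased by the user.
    prompt = planner.build(state="Known devices: oven, fridge", task="Prepare dinner")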
Prompt Check describes whether the application employs a review mechanism to control and modify the prompt before invoking the LLM. The same characteristics as for the prompt parts are applicable:

none: The prompt is used without check.

User: The user checks and revises the prompt.

LLM: Another LLM component checks or revises the prompt.

Program: The application comprises code to check or revise the prompt. E.g., AutoDroid removes personal data, such as names, to ensure privacy before invoking the TaskExecutor; Honeycomb QueryAssistant incorporates a coded mechanism against prompt injection attacks.

Table 3: Terms used for prompt parts. Expressions specific to a domain are printed in italics; “examples” indicates a one-shot or few-shot prompt pattern. Terms adopted for the taxonomy are underlined.

    Source  Instruction                                                  State                                  Task
    [72]    task description + examples                                  -                                      test instance
    [34]    instruction prompt                                           -                                      data prompt
    [32]    predefined prompt                                            -                                      user prompt
    [45]    prompt template + examples                                   DB schema                              user input
    [45]    examples                                                     SQL query result                       question
    [37]    prompt context, i.e., examples                               environment state, scene description   input task commands
    [5]     education prompt                                             dialogue history                       user input
    [5]     education prompt                                             dialogue history + provided workflow   task prompt (circumscribed)
    [69]    role and goal + instruction + examples                       context                                current task
    [26]    predefined system instruction + domain-specific information  query results from knowledge graph     the user's request

Most example instances omit prompt checks. There are no examples where a Check is performed by a User or an LLM.
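A programmed Prompt Check can be as simple as a filter that rewrites or rejects the prompt before the invocation. The sketch below is ours, a crude name-redaction and injection guard loosely inspired by the AutoDroid and Honeycomb examples; it is not their actual mechanism, and the blocklist patterns are hypothetical.

    import re

    BLOCKLIST = ("ignore previous instructions", "disregard the rules")  # hypothetical patterns

    def check_prompt(prompt: str, known_names: list) -> str:
        """Programmed Prompt Check: redact personal data, reject likely injections."""
        for name in known_names:  # redact personal data before the invocation
            prompt = re.sub(re.escape(name), "[REDACTED]", prompt, flags=re.IGNORECASE)
        lowered = prompt.lower()
        if any(pattern in lowered for pattern in BLOCKLIST):
            raise ValueError("Prompt rejected: possible injection attempt")
        return prompt

    safe_prompt = check_prompt("Summarize Alice's tickets", known_names=["Alice"])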
5.2.4. Skills dimensions

The Skills dimension captures the types of LLM capabilities that an application utilizes. It is designed as a dimension with six non-mutually exclusive characteristics.

Skills is decomposed into six specific capabilities:

reWrite: The LLM edits or transforms data or text, such as rephrasing, summarizing, reformatting, correcting, or replacing values. E.g., MyCrunchGpt SettingsEditor replaces values in JSON files; TruckPlatoon converts measurements into textual explanations.

Create: The LLM generates novel output. E.g., LowCode Executing generates substantial bodies of text for tasks like essay writing.

conVerse: The application relies on the LLM's capability to engage in purposeful dialogues with humans. E.g., MyCrunchGpt DesignAssistant asks users for missing parameters; SgpTod PolicyPrompter decides how to react to user inputs and formulates chatbot responses.

Inform: The application depends on knowledge that the LLM has acquired during its training, unlike applications that provide all necessary information within the prompt. E.g., MyCrunchGpt DomainExpert provides expert knowledge on airfoil designs; MatrixProduction relies on built-in knowledge of production processes, such as “a hole is produced by drilling”; LowCode Executing uses its learned knowledge for tasks like essay writing.

Reason: The LLM draws conclusions or makes logical inferences. E.g., FormulaExplainer in ExcelCopilot explains the effects of Excel functions in formulas; AutoDroid MemoryGenerators explain the effects of GUI elements in Android apps.

Plan: The LLM designs a detailed method or course of action to achieve a specific goal. E.g., AutoDroid TaskExecutor and WorkplaceRobot TaskPlanning devise action plans to achieve goals.

The Plan and Reason characteristics are interrelated, as planning also requires reasoning. The intended handling of these characteristics is to categorize an LLM component as Plan only and understand Plan as implicitly subsuming Reason.

The effectiveness of LLMs as components of software applications relies on their commonsense knowledge and their ability to correctly interpret and handle a broad variety of text inputs, including instructions, examples, and code. It is reasonable to assume that a fundamental capability, which might be termed Understand, is leveraged by every LLM component. As it is not distinctive, the taxonomy does not list it explicitly in the Skills dimension.

Applying this taxonomy dimension requires users to determine which skills are most relevant and worth highlighting in an LLM component. Given the versatility of LLMs, reducing the focus to a few predominant skills is necessary to make categorizations distinctive and expressive.

5.2.5. Output-related dimensions

Output Format characterizes the format of the LLM's output. As an output may consist of several parts in diverse formats, this dimension is designed as non-mutually exclusive, same as the Skills dimension. It distinguishes four characteristics that are distinctive and well discernible:

FreeText: unstructured natural language text output. E.g., TruckPlatoon and MyCrunchGpt DomainExpert generate text output in natural language; MatrixProduction Manager and MatrixProduction Operator produce FreeText explanations complementing output in custom formats to be parsed by the application.

Item: a single text item from a predefined set of items, such as a class in a classification task. E.g., ProgPrompt ScenarioFeedback outputs either True or False.

Code: source code or other highly formalized output that the LLM has learned during its training, such as a programming language, XML, or JSON. E.g., AutoDroid TaskExecutor produces code to steer an Android app; MyCrunchGpt SettingsEditor outputs JSON.

Structure: structured, formalized output adhering to a custom format. E.g., LowCode Planning outputs text in a format that can be displayed as a flow chart; MatrixProduction Manager and Operator produce output in custom formats combined with FreeText explanations.

Output Revision indicates whether the application checks or revises the LLM-generated output before utilization. The characteristics and their interpretations mirror those in the Prompt Check dimension:

none: There is no revision of the LLM output.

User: The user revises the LLM output. E.g., the user improves the plan generated by LowCode Planning.

LLM: A further LLM component checks or revises the output of the LLM component under consideration.

Program: Programmed code checks or revises the LLM output. E.g., Honeycomb QueryAssistant corrects the query produced by the LLM before executing it [7].

There are no instances in the sample set where another LLM revises or checks the output of the LLM. Most sample applications do not check or revise the LLM's output, though several of them parse and transform it. The purpose of the Output Revision dimension is to indicate whether the application includes control or correction mechanisms, rather than just parsing.
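When the Output Format is Code or Structure, a programmed Output Revision step typically validates, and where possible repairs, the output before it is used. A minimal sketch (ours; the required field, the fence-stripping rule, and the default are hypothetical, not Honeycomb's actual correction logic):

    import json

    def revise_output(raw: str) -> dict:
        """Programmed Output Revision: validate and repair LLM output before use."""
        raw = raw.strip()
        if raw.startswith("```"):                  # strip a Markdown code fence, if any
            raw = raw.strip("`").removeprefix("json").strip()
        data = json.loads(raw)                     # raises ValueError if not valid JSON
        if "query" not in data:
            raise ValueError("Missing required field: query")
        data.setdefault("limit", 100)              # enforce a safe default
        return data

    checked = revise_output('{"query": "SELECT * FROM spans"}')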
Output Consumer addresses the way of utilizing the LLM output:

User signifies that the LLM output is presented to a human user. E.g., the text output of TruckPlatoon is intended for humans, as is the output of MyCrunchGpt DomainExpert.

LLM indicates that the output serves as a prompt part in a further LLM invocation. E.g., the knowledge base entries generated by an AutoDroid MemoryGenerator become part of the prompt for AutoDroid TaskExecutor; the plan output by LowCode Planning serves as a part of the prompt for LowCode Executing.

Program describes instances where the LLM output is consumed and processed further by a software component of the application. E.g., the output of MatrixProduction Manager is handled by software systems (including a Manufacturing Execution System) which use it to compute prompts for other LLM components.

Engine covers scenarios where the LLM output is intended for execution on a runtime engine. E.g., the SQL query generated by SgpTod DstPrompter is processed by a SQL interpreter; a part of the output of MatrixProduction Operator is executed by automation modules.

Although applications may parse and transform the LLM output before use, the Output Consumer dimension is meant to identify the ultimate consumer, such as an execution engine, rather than an intermediary parser or transformation code. When applications divide the LLM output into parts for different consumers, users applying the taxonomy need to determine which consumer is most relevant, since this dimension is designed to be mutually exclusive.

5.3. Evaluation

Figure 2 displays the number of occurrences of characteristics within the example instances. It must be noted, however, that these do not reflect actual frequencies, as similar LLM components within the same application are aggregated together, indicated by the symbols ∗ and 2 in figure 1. Furthermore, ExcelCopilot likely includes occurrences of Prompt Check and Output Revision which are not counted due to insufficient system documentation.

We evaluate the taxonomy against commonly accepted quality criteria: comprehensiveness, robustness, conciseness, mutual exclusiveness, explanatory power, and extensibility [58, 42]. The taxonomy encompasses all example instances, including those that were not considered during its development. This demonstrates comprehensiveness. As figure 1 shows, all example instances have unique categorizations, supporting the taxonomy's robustness. This not only indicates that the dimensions and characteristics are distinctive for the domain, but also highlights the wide variety possible in this field. Conciseness demands that the taxonomy use the minimum number of dimensions and characteristics. The taxonomy gains conciseness by identifying relatively few and abstract characteristics within each dimension. However, it does not adhere to the related subcriterion that each characteristic must be present in at least one investigated instance [54]. Unoccupied characteristics are retained for dimensions whose characteristics were derived conceptually, specifically for the Prompt dimensions, the Output Revision dimension, and the Data Function dimension, enhancing the taxonomy's ability to illustrate design options and inspire novel uses for LLM integrations in applications. Some dimensions are constructed in parallel, sharing common sets of characteristics. While this affects conciseness, it makes the taxonomy easier to understand and apply. As is often seen in taxonomy development [54], we deliberately waived the requirement for mutual exclusiveness for some dimensions, specifically the Output Format and Skills dimensions. In the context of this taxonomy, these can equivalently be understood as a set of four and six binary dimensions respectively, each divided into characteristics “yes” and “no”. However, framing them as a single dimension with non-mutually exclusive characteristics seems more intuitive.

Metadimensions structure the taxonomy, and most of the characteristics are illustrated through examples. These measures are recognized for enhancing the explanatory power of a taxonomy [58]. The taxonomy's flat structure allows for the easy addition of dimensions and characteristics, indicating that its extensibility is good.
Potential extensions and further aspects of the taxonomy, including its usefulness and ease of use, are discussed in section 6.

We visualize the taxonomy (or, strictly speaking, categorized instances) in a compact form using feature vectors with characteristics abbreviated to single-letter codes. This approach has a drawback, as it requires referencing a legend. Additionally, non-applicable characteristics in mutually exclusive dimensions are not visible, which means the design space is not completely shown. However, the compactness of the representation allows LLM components within a common application to be grouped closely, so that an LLM-integrated application can be perceived as a unit without appearing convoluted. This is a significant advantage for our purposes.

6. Discussion

The discussion first focuses on the taxonomy's applicability and ease of use before considering its overall usefulness.

[Figure 2: Occurrences of characteristics in the sample set of LLM-integrated applications. (Bar counts per characteristic; not reproduced here.)]

6.1. Applicability and ease of use

The taxonomy was effectively applied to LLM-integrated applications based on research papers, source code, blog posts, recorded software demonstrations, and developer experiences. The analysis of LowCode revealed it to be a prompt definition tool combined with an LLM-based chatbot, which deviates from the strict definition of an LLM-integrated application. Still, the taxonomy provided an effective categorization and led to a clear understanding of the system's architecture.

Obviously, the ease of categorization depends on the clarity and comprehensiveness of the available information, which varies across analyzed systems. Analyzing applications of LLMs in novel and uncommon domains can be challenging. While these papers present inspiring and innovative ideas for LLM integration, such as MyCrunchGpt and TruckPlatoon, they may prioritize explaining the application area and struggle to detail the technical aspects of the LLM integration. A taxonomy for LLM-integrated applications can guide and facilitate the writing process and lead to more standardized and comparable descriptions.

Applying the taxonomy is often more straightforward for research-focused systems. Omitting the complexities required for real-world applications, such as prompt checks and output revisions, their architectures are simpler and easier to describe. A taxonomy can point out such omissions.

A fundamental challenge in applying the taxonomy arises from the inherent versatility of LLMs, which allows developers to define LLM components serving multiple purposes. This is exemplified by SgpTod PolicyPrompter, where the prompt is designed to produce a structure with two distinct outcomes (a class label and a chatbot response), and similarly by MatrixProduction, as detailed in section 4.2. Drawing an analogy to “function overloading” in classical programming, such LLM components can be termed “overloaded LLM components”.
A taxonomy can handle overloaded LLM components in several ways: (1) define more dimensions as non-mutually exclusive, (2) label overloaded LLM components as “overloaded” without a more detailed categorization, or (3) categorize them by their predominant purpose or output. While the first approach allows for the most precise categorization, it complicates the taxonomy. Moreover, it will likely result in nearly all characteristics being marked for some LLM components, which is ultimately not helpful. The second approach simplifies categorization but sacrifices much detail. Our taxonomy adopts the third approach, enforcing simplification and abstraction in descriptions of overloaded LLM components while retaining essential detail. The taxonomy can easily be extended to include approach (2) as an additional binary dimension.

6.2. Usefulness

The search for instances of LLM-integrated applications uncovered activities across various domains. Substantial research involving LLM integrations, often driven by theoretical interests, is notable in robot task planning [37, 51, 61, 33, 63] and in the TOD field [23, 71, 4, 6, 56]. Research exploring LLM potentials from a more practical perspective can be found in novel domains, such as industrial production [69, 26] and other technical areas [28, 70]. Furthermore, developers of commercial LLM-based applications are beginning to communicate their efforts and challenges [44, 7]. The taxonomy has been applied to example instances from these and additional areas. This demonstrates its potential as a common, unified framework for describing LLM-integrated applications, facilitating the comparison and sharing of development knowledge between researchers and practitioners across various domains.

When applying the taxonomy to the example instances, it proved to be effective and useful as an analytical lens. Descriptions of LLM-integrated applications commonly explain background information and details of the application domain in addition to its LLM integration. When used as an analytical lens, the taxonomy quickly directs the analysis towards the aspects of LLM integration, abstracting from the specificities of the domain.

The taxonomy describes how LLM capabilities can be leveraged in software systems, offers inspiration for LLM-based functions, and outlines options for their implementation as follows. The Skills dimension outlines the range of capabilities an LLM can contribute to an application through a concise set of characteristics, while the Function dimension suggests potential uses, further supported by the Interaction dimension. The Output Format dimension indicates options for encoding the output of an LLM in formats beyond plain text, making it processable by software. The Output Consumer dimension illustrates the diverse ways to utilize or act upon LLM output. Thus, the taxonomy, as intended, spans a design space for LLM integrations.

The sampled LLM-integrated applications showcase the creativity of researchers and developers in applying and exploiting the potentials of LLMs, ranging from straightforward solutions (e.g., TruckPlatoon) to highly sophisticated and technically complex ones (e.g., AutoDroid). When using the taxonomy to inspire innovative uses of LLMs, we recommend supplementing it with descriptions of example applications to enhance its illustrativeness. The characteristics of the Skills dimension are derived pragmatically from the investigated example instances.
While they do not claim to be exhaustive or deeply rooted in LLM theory or cognitive science, they add relevant details to the categorizations and illustrate design options and potentials for using LLMs as software components.

It emerged as a key insight of this research that, rather than analyzing an LLM-integrated application as a whole, analysis should start with the identification and description of its distinct LLM components. This is essential for gaining a clear understanding of how the application utilizes the capabilities of LLMs. The LLM-integrated application then manifests as a combination of its LLM components. As shown in figure 1, the visualization effectively displays both the quantity and the variety of LLM components in an LLM-integrated application.

LLM components interact through prompt chaining, where one LLM component's output feeds into another's input [67]. When an LLM-integrated application involves such an interaction, the taxonomy represents it as an LLM characteristic within a Prompt dimension. The taxonomy can capture the variance in these interactions. For instance, in AutoDroid TaskExecutor and LowCode Executing, the LLM characteristic appears in the Prompt State dimension, because their prompt components (knowledge base excerpts and prompt definition, respectively) are generated by other LLM components in a preparatory stage. In contrast, the LLM characteristic appears in the Prompt Task dimension for MatrixProduction Operator, because its prompt part is generated individually by the MatrixProduction Manager almost immediately before use.

Taxonomy dimensions that cover entire LLM-integrated applications may be useful. Given their complexity, these dimensions should be designed based on a broader range of examples, which will only become available as more LLM-integrated applications are developed and their architectures disclosed in the future. Extensions to the taxonomy could also include dimensions for describing the structure of prompts in more detail, as well as dimensions addressing characteristics of the language models used.

Table 4: LLM usage in the sample instances. “Evals” indicates evaluations of various LLMs.

    Application        Used or best LLM  Evals  Comments
    Honeycomb          GPT-3.5
    LowCode            GPT-3.5-turbo
    MyCrunchGpt        GPT-3.5           yes    GPT-4 far too slow
    MatrixProduction   text-davinci-003         then awaiting the publication of GPT-4
    WorkplaceRobot     GPT-3
    AutoDroid          GPT-4             yes    GPT-4 best for tasks requiring many steps
    ProgPrompt         GPT-3                    CODEX better, but access limits prohibitive
    FactoryAssistants  GPT-3.5
    SgpTod             GPT-3.5           yes    GPT-3.5 best more often than others combined
    TruckPlatoon       GPT-3.5-turbo
    ExcelCopilot       N/A                      combined LLMs in Copilot for Microsoft 365 [43]

7. Conclusion

This paper investigates the use of LLMs as software components. Its perspective differs from current software engineering research, which investigates LLMs as tools for software development [14, 22], and from research examining LLMs as autonomous agents [11, 62, 57, 21]. This paper defines the concept of an LLM component as a software component that realizes its functionality by invoking an LLM. While LLM components implicitly appear in various works, termed, for example, “prompters”, “prompted LLM”, “prompt module”, or “module” [30, 71, 6, 7], to our knowledge, this concept has not yet been formalized or systematically investigated.
The main contribution of this study is a taxonomy for the analysis and description of LLM components, extending to LLM-integrated applications by characterizing them as combinations of LLM components. In addition to the dimensions and characteristics of the taxonomy, the study contributes a taxonomy visualization based on feature vectors, which is more compact than established visualizations such as morphological boxes [55] or radar charts. It represents an LLM-integrated application as one visual entity in a tabular format, with its LLM components displayed as rows.

The taxonomy was constructed using established methods, based on a set of example instances, and evaluated with a new set of example instances. The combined samples exhibit broad variation along the identified dimensions. For some instances, information was not available, necessitating speculative interpretation. However, since the sample is used for identifying options rather than for quantitative analysis, this issue and the representativeness of the sample are not primary concerns. The evaluation was conducted by the developer of the taxonomy, consistent with recent related work [21, 52, 48]. Using a new sample for evaluation strengthens the validity of the results.

A further significant contribution of the paper is a systematic overview of a sample of LLM-integrated applications across various industrial and technical domains, illustrating a spectrum of conceptual ideas and implementation options.

As the examples show, LLM components can replace traditionally coded functions in software systems and enable novel use cases. However, practical challenges persist. Developers report that new software engineering methods are required, e.g., for managing prompts as software assets and for testing and monitoring applications. For instance, the costs of LLM invocations prohibit the extensive automated testing that is standard in software development practice [44, 7]. Challenges also arise from the inherent indeterminism and uncontrollability of LLMs. Small variations in prompts can lead to differences in outputs, while automated output processing in LLM-integrated applications requires the output to adhere to a specified format.

Furthermore, the deployment mode of LLMs, whether local (on the same hardware as the application) or remote, managed privately or offered as Language-Models-as-a-Service (LMaaS), has an impact on performance and usability. Table 4 gives an overview of the LLMs used in our sample of applications. Where papers report evaluations of multiple LLMs, the table displays the chosen or best-performing LLM. Although not representative, the table provides some insights. LMaaS dominates, likely due to its convenience, but more importantly, due to the superior performance of the provided LLMs.

Concerns regarding LMaaS include privacy, as sensitive data might be transmitted to the LLM through the prompt [64], and service quality, i.e., reliability, availability, and costs. Costs typically depend on the quantity of processed tokens. This quantity also affects latency, which denotes the processing time of an LLM invocation. A further important factor for latency is the size of the LLM, with larger models being slower [7].
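As a back-of-the-envelope illustration of token-based costs and latency (our sketch; the prices and throughput figures are hypothetical placeholders, not those of any specific LMaaS):

    def invocation_cost(prompt_tokens: int, completion_tokens: int,
                        usd_per_1k_in: float = 0.5, usd_per_1k_out: float = 1.5) -> float:
        """Cost of one LLM invocation under token-based pricing (hypothetical rates)."""
        return prompt_tokens / 1000 * usd_per_1k_in + completion_tokens / 1000 * usd_per_1k_out

    def rough_latency(completion_tokens: int, tokens_per_second: float = 40.0) -> float:
        """Generation time in seconds, assuming a fixed decoding throughput."""
        return completion_tokens / tokens_per_second

    # A test suite of 1,000 automated invocations quickly adds up:
    total = 1000 * invocation_cost(prompt_tokens=2000, completion_tokens=300)

Under these assumed rates, such a test run already costs around 1,450 USD, which illustrates why extensive automated testing is reported as prohibitive.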
When building LLM-based applications for real-world use, the reliability and availability of an LMaaS are crucial. Availability depends not only on the technical stability of the service, but also on factors such as increased latency during high usage periods or usage restrictions imposed by the provider of an LMaaS, as reported for ProgPrompt [51]. Beyond technical aspects, the reliability of an LMaaS also encompasses its behavior. For instance, providers might modify a model to enhance its security, potentially impacting applications that rely on it.

Despite practical challenges, integrating LLMs into systems has the potential to alter the way software is constructed and the types of systems that can be realized. Prompts are central to the functioning of LLM components, which pose specific requirements such as strict format adherence. Therefore, an important direction for future research will be prompt engineering specifically tailored for LLM-integrated applications. In future work, the taxonomy will be extended to distinguish finer-grained parts of prompts, allowing a more detailed description and comparison of prompts and related experimental results. Initial studies share results on the format-following behavior of LLMs [68] as a subtopic of instruction-following [73], derived with synthetic benchmark data. It is necessary to complement their results with experiments using data and tasks from real application development projects because, in the early stages of this field, synthetic benchmarks may fail to cover relevant aspects within the wide range of possible options. Another crucial research direction involves exploring how LLM characteristics correspond to specific tasks, such as determining the optimal LLM size for intent detection tasks. The taxonomy developed in this study can systematize such experiments and their outcomes. Additionally, it provides a structured framework for delineating design choices in LLM components, making it a valuable addition to future training materials.

Acknowledgements

Special thanks to Antonia Weber and Constantin Weber for proofreading and providing insightful and constructive comments.

References

[1] Eleni Adamopoulou and Lefteris Moussiades. An Overview of Chatbot Technology. In Artificial Intelligence Applications and Innovations, IFIP Advances in Information and Communication Technology, pages 373–383, Cham, 2020. Springer. doi:10.1007/978-3-030-49186-4_31.

[2] Sebastian Bader, Erich Barnstedt, Heinz Bedenbender, Bernd Berres, Meik Billmann, and Marko Ristin. Details of the Asset Administration Shell - Part 1: The exchange of information between partners in the value chain of Industrie 4.0 (Version 3.0 RC02). Working Paper, Berlin: Federal Ministry for Economic Affairs and Climate Action (BMWK), 2022. doi:10.21256/zhaw-27075.

[3] Marcos Baez, Florian Daniel, Fabio Casati, and Boualem Benatallah. Chatbot integration in few patterns. IEEE Internet Computing, pages 1–1, 2020. doi:10.1109/MIC.2020.3024605.

[4] Tom Bocklisch, Thomas Werkmeister, Daksh Varshneya, and Alan Nichol. Task-Oriented Dialogue with In-Context Learning. (arXiv:2402.12234), February 2024. doi:10.48550/arXiv.2402.12234.

[5] Yuzhe Cai, Shaoguang Mao, Wenshan Wu, Zehua Wang, Yaobo Liang, Tao Ge, Chenfei Wu, Wang You, Ting Song, Yan Xia, Jonathan Tien, and Nan Duan. Low-code LLM: Visual Programming over LLMs. (arXiv:2304.08103), April 2023. doi:10.48550/arXiv.2304.08103.
[6] Lang Cao. DiagGPT: An LLM-based Chatbot with Automatic Topic Management for Task-Oriented Dialogue. (arXiv:2308.08043), August 2023. doi:10.48550/arXiv.2308.08043.

[7] Phillip Carter. All the Hard Stuff Nobody Talks About When Building Products with LLMs. Honeycomb, May 2023. https://www.honeycomb.io/blog/hard-stuff-nobody-talks-about-llm.

[8] Phillip Carter. So We Shipped an AI Product. Did It Work? Honeycomb, October 2023. https://www.honeycomb.io/blog/we-shipped-ai-product.

[9] Banghao Chen, Zhaofeng Zhang, Nicolas Langrené, and Shengxin Zhu. Unleashing the potential of prompt engineering in Large Language Models: A comprehensive review. (arXiv:2310.14735), October 2023. doi:10.48550/arXiv.2310.14735.

[10] Wang Chen, Yan-yi Liu, Tie-zheng Guo, Da-peng Li, Tao He, Li Zhi, Qing-wen Yang, Hui-han Wang, and Ying-you Wen. Systems engineering issues for industry applications of large language model. Applied Soft Computing, 151:111165, January 2024. doi:10.1016/j.asoc.2023.111165.

[11] Yuheng Cheng, Ceyao Zhang, Zhengwen Zhang, Xiangrui Meng, Sirui Hong, Wenhao Li, Zihao Wang, Zekai Wang, Feng Yin, Junhua Zhao, and Xiuqiang He. Exploring Large Language Model based Intelligent Agents: Definitions, Methods, and Prospects. (arXiv:2401.03428), January 2024. doi:10.48550/arXiv.2401.03428.

[12] Silvia Colabianchi, Andrea Tedeschi, and Francesco Costantino. Human-technology integration with industrial conversational agents: A conceptual architecture and a taxonomy for manufacturing. Journal of Industrial Information Integration, 35:100510, October 2023. doi:10.1016/j.jii.2023.100510.

[13] Jonathan Evertz, Merlin Chlosta, Lea Schönherr, and Thorsten Eisenhofer. Whispers in the Machine: Confidentiality in LLM-integrated Systems. (arXiv:2402.06922), February 2024. doi:10.48550/arXiv.2402.06922.

[14] Angela Fan, Beliz Gokkaya, Mark Harman, Mitya Lyubarskiy, Shubho Sengupta, Shin Yoo, and Jie M. Zhang. Large Language Models for Software Engineering: Survey and Open Problems. (arXiv:2310.03533), November 2023. doi:10.48550/arXiv.2310.03533.

[15] Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, and Qing Li. Recommender Systems in the Era of Large Language Models (LLMs). (arXiv:2307.02046), August 2023. doi:10.48550/arXiv.2307.02046.

[16] David Fortin. Microsoft Copilot in Excel: What It Can and Can't Do. YouTube, January 2024. https://www.youtube.com/watch?v=-fsu9IXMZvo.

[17] Martin Fowler. Patterns of Enterprise Application Architecture. 2002. ISBN 978-0-321-12742-6.

[18] Shirley Gregor. The nature of theory in information systems. MIS Quarterly, pages 611–642, 2006. doi:10.2307/25148742.

[19] Yanchu Guan, Dong Wang, Zhixuan Chu, Shiyu Wang, Feiyue Ni, Ruihua Song, Longfei Li, Jinjie Gu, and Chenyi Zhuang. Intelligent Virtual Assistants with LLM-based Process Automation. (arXiv:2312.06677), December 2023. doi:10.48550/arXiv.2312.06677.

[20] Muhammad Usman Hadi, Qasem Al Tashi, Rizwan Qureshi, Abbas Shah, Amgad Muneer, Muhammad Irfan, Anas Zafar, Muhammad Bilal Shaikh, Naveed Akhtar, Jia Wu, and Seyedali Mirjalili. Large Language Models: A Comprehensive Survey of its Applications, Challenges, Limitations, and Future Prospects, September 2023. doi:10.36227/techrxiv.23589741.v3.

[21] Thorsten Händler. A Taxonomy for Autonomous LLM-Powered Multi-Agent Architectures. In Proceedings of the 15th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, pages 85–98, Rome, Italy, 2023. SCITEPRESS - Science and Technology Publications. doi:10.5220/0012239100003598.
[22] Xinyi Hou, Yanjie Zhao, Yue Liu, Zhou Yang, Kailong Wang, Li Li, Xiapu Luo, David Lo, John Grundy, and Haoyu Wang. Large Language Models for Software Engineering: A Systematic Literature Review. (arXiv:2308.10620), September 2023. doi:10.48550/arXiv.2308.10620.

[23] Vojtěch Hudeček and Ondrej Dusek. Are Large Language Models All You Need for Task-Oriented Dialogue? In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 216–228, Prague, Czechia, September 2023. Association for Computational Linguistics. doi:10.18653/v1/2023.sigdial-1.21.

[24] Kevin Maik Jablonka et al. 14 examples of how LLMs can transform materials science and chemistry: A reflection on a large language model hackathon. Digital Discovery, 2(5):1233–1250, 2023. doi:10.1039/D3DD00113J.

[25] Jean Kaddour, Joshua Harris, Maximilian Mozes, Herbie Bradley, Roberta Raileanu, and Robert McHardy. Challenges and Applications of Large Language Models, July 2023. doi:10.48550/arXiv.2307.10169.

[26] Samuel Kernan Freire, Mina Foosherian, Chaofan Wang, and Evangelos Niforatos. Harnessing Large Language Models for Cognitive Assistants in Factories. In Proceedings of the 5th International Conference on Conversational User Interfaces, CUI '23, pages 1–6, New York, NY, USA, July 2023. Association for Computing Machinery. doi:10.1145/3571884.3604313.

[27] Anis Koubaa, Wadii Boulila, Lahouari Ghouti, Ayyub Alzahem, and Shahid Latif. Exploring ChatGPT Capabilities and Limitations: A Survey. IEEE Access, 11:118698–118721, 2023. doi:10.1109/ACCESS.2023.3326474.

[28] Varun Kumar, Leonard Gleyzer, Adar Kahana, Khemraj Shukla, and George Em Karniadakis. MyCrunchGPT: A LLM Assisted Framework for Scientific Machine Learning. Journal of Machine Learning for Modeling and Computing, 4(4), 2023. doi:10.1615/JMachLearnModelComput.2023049518.

[29] Dennis Kundisch, Jan Muntermann, Anna Maria Oberländer, Daniel Rau, Maximilian Röglinger, Thorsten Schoormann, and Daniel Szopinski. An Update for Taxonomy Designers. Business & Information Systems Engineering, 64(4):421–439, August 2022. doi:10.1007/s12599-021-00723-x.

[30] Gibbeum Lee, Volker Hartmann, Jongho Park, Dimitris Papailiopoulos, and Kangwook Lee. Prompted LLMs as chatbot modules for long open-domain conversation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4536–4554, Toronto, Canada, July 2023. Association for Computational Linguistics. doi:10.18653/v1/2023.findings-acl.277.
[31] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys, 55(9):195:1–195:35, January 2023. doi:10.1145/3560815.

[32] Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, and Yang Liu. Prompt Injection attack against LLM-integrated Applications, June 2023. doi:10.48550/arXiv.2306.05499.

[33] Yuchen Liu, Luigi Palmieri, Sebastian Koch, Ilche Georgievski, and Marco Aiello. DELTA: Decomposed Efficient Long-Term Robot Task Planning using Large Language Models. (arXiv:2404.03275), April 2024. doi:10.48550/arXiv.2404.03275.

[34] Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, and Neil Zhenqiang Gong. Prompt Injection Attacks and Defenses in LLM-Integrated Applications. (arXiv:2310.12815), October 2023. doi:10.48550/arXiv.2310.12815.

[35] Shaoguang Mao, Qiufeng Yin, Yuzhe Cai, and Dan Qiao. LowCodeLLM. https://github.com/chenfei-wu/TaskMatrix/tree/main/LowCodeLLM, May 2023.

[36] Scott McLean, Gemma J. M. Read, Jason Thompson, Chris Baber, Neville A. Stanton, and Paul M. Salmon. The risks associated with Artificial General Intelligence: A systematic review. Journal of Experimental & Theoretical Artificial Intelligence, 35(5):649–663, July 2023. doi:10.1080/0952813X.2021.1964003.

[37] Oier Mees, Jessica Borja-Diaz, and Wolfram Burgard. Grounding Language with Visual Affordances over Unstructured Data. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11576–11582, London, United Kingdom, May 2023. IEEE. doi:10.1109/ICRA48891.2023.10160396.

[38] Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. Augmented Language Models: A Survey, February 2023. doi:10.48550/arXiv.2302.07842.

[39] Melanie Mitchell. Debates on the nature of artificial general intelligence. Science, 383(6689):eado7069, March 2024. doi:10.1126/science.ado7069.

[40] Quim Motger, Xavier Franch, and Jordi Marco. Software-Based Dialogue Systems: Survey, Taxonomy, and Challenges. ACM Computing Surveys, 55(5):91:1–91:42, December 2022. doi:10.1145/3527450.

[41] Fiona Fui-Hoon Nah, Ruilin Zheng, Jingyuan Cai, Keng Siau, and Langtao Chen. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3):277–304, July 2023. doi:10.1080/15228053.2023.2233814.

[42] Robert C. Nickerson, Upkar Varshney, and Jan Muntermann. A method for taxonomy development and its application in information systems. European Journal of Information Systems, 22(3):336–359, May 2013. doi:10.1057/ejis.2012.26.

[43] Camille Pack, Cern McAtee, Samantha Robertson, Dan Brown, Aditi Srivastava, and Kweku Ako-Adjei. Microsoft Copilot for Microsoft 365 overview. https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-overview, March 2024.

[44] Chris Parnin, Gustavo Soares, Rahul Pandita, Sumit Gulwani, Jessica Rich, and Austin Z. Henley. Building Your Own Product Copilot: Challenges, Opportunities, and Needs. (arXiv:2312.14231), December 2023. doi:10.48550/arXiv.2312.14231.
[45] Rodrigo Pedro, Daniel Castro, Paulo Carreira, and Nuno Santos. From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application? (arXiv:2308.01990), August 2023. doi:10.48550/arXiv.2308.01990.

[46] Ken Peffers, Tuure Tuunanen, Marcus A. Rothenberger, and Samir Chatterjee. A Design Science Research Methodology for Information Systems Research. Journal of Management Information Systems, 24(3):45–77, December 2007. doi:10.2753/MIS0742-1222240302.

[47] Mohaimenul Azam Khan Raiaan, Md. Saddam Hossain Mukta, Kaniz Fatema, Nur Mohammad Fahad, Sadman Sakib, Most Marufatul Jannat Mim, Jubaer Ahmad, Mohammed Eunus Ali, and Sami Azam. A Review on Large Language Models: Architectures, Applications, Taxonomies, Open Issues and Challenges. IEEE Access, 12:26839–26874, 2024. doi:10.1109/ACCESS.2024.3365742.

[48] Jack Daniel Rittelmeyer and Kurt Sandkuhl. Morphological Box for AI Solutions: Evaluation and Refinement with a Taxonomy Development Method. In Perspectives in Business Informatics Research, Lecture Notes in Business Information Processing, pages 145–157, Cham, 2023. Springer Nature Switzerland. doi:10.1007/978-3-031-43126-5_11.

[49] Shubhra Kanti Karmaker Santu and Dongji Feng. TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks. (arXiv:2305.11430), October 2023. doi:10.48550/arXiv.2305.11430.

[50] Thorsten Schoormann, Frederik Möller, and Daniel Szopinski. Exploring Purposes of Using Taxonomies. In Proceedings of the International Conference on Wirtschaftsinformatik (WI), Nuernberg, Germany, February 2022.

[51] Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. ProgPrompt: Generating Situated Robot Task Plans using Large Language Models. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11523–11530, London, United Kingdom, May 2023. IEEE. doi:10.1109/ICRA48891.2023.10161317.

[52] Gero Strobel, Leonardo Banh, Frederik Möller, and Thorsten Schoormann. Exploring Generative Artificial Intelligence: A Taxonomy and Types. In Proceedings of the 57th Hawaii International Conference on System Sciences, Honolulu, Hawaii, January 2024. https://hdl.handle.net/10125/106930.

[53] Hendrik Strobelt, Albert Webson, Victor Sanh, Benjamin Hoover, Johanna Beyer, Hanspeter Pfister, and Alexander M. Rush. Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation With Large Language Models. IEEE Transactions on Visualization and Computer Graphics, pages 1–11, 2022. doi:10.1109/TVCG.2022.3209479.

[54] Daniel Szopinski, Thorsten Schoormann, and Dennis Kundisch. Criteria as a Prelude for Guiding Taxonomy Evaluation. In Proceedings of the 53rd Hawaii International Conference on System Sciences, 2020. https://hdl.handle.net/10125/64364.

[55] Daniel Szopinski, Thorsten Schoormann, and Dennis Kundisch. Visualize different: Towards researching the fit between taxonomy visualizations and taxonomy tasks. In Tagungsband der 15. Internationalen Tagung Wirtschaftsinformatik (WI 2020), Potsdam, 2020. doi:10.30844/wi_2020_k9-szopinski.

[56] Manisha Thakkar and Nitin Pise. Unified Approach for Scalable Task-Oriented Dialogue System. International Journal of Advanced Computer Science and Applications, 15(4), 2024. doi:10.14569/IJACSA.2024.01504108.
[57] Oguzhan Topsakal and Tahir Cetin Akinci. Creating Large Language Model Applications Utilizing Langchain: A Primer on Developing LLM Apps Fast. In International Conference on Applied Engineering and Natural Sciences, volume 1, pages 1050–1056, 2023.

[58] Michael Unterkalmsteiner and Waleed Adbeen. A compendium and evaluation of taxonomy quality attributes. Expert Systems, 40(1):e13098, 2023. doi:10.1111/exsy.13098.

[59] Bryan Wang, Gang Li, and Yang Li. Enabling Conversational Interaction with Mobile UI using Large Language Models. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, pages 1–17, New York, NY, USA, April 2023. Association for Computing Machinery. doi:10.1145/3544548.3580895.

[60] Can Wang, Bolin Zhang, Dianbo Sui, Zhiying Tu, Xiaoyu Liu, and Jiabao Kang. A Survey on Effective Invocation Methods of Massive LLM Services. (arXiv:2402.03408), February 2024. doi:10.48550/arXiv.2402.03408.

[61] Jun Wang, Guocheng He, and Yiannis Kantaros. Safe Task Planning for Language-Instructed Multi-Robot Systems using Conformal Prediction. (arXiv:2402.15368), February 2024. doi:10.48550/arXiv.2402.15368.

[62] Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, and Jirong Wen. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345, March 2024. doi:10.1007/s11704-024-40231-1.

[63] Shu Wang, Muzhi Han, Ziyuan Jiao, Zeyu Zhang, Ying Nian Wu, Song-Chun Zhu, and Hangxin Liu. LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning. (arXiv:2403.11552), March 2024. doi:10.48550/arXiv.2403.11552.

[64] Hao Wen, Yuanchun Li, Guohong Liu, Shanhui Zhao, Tao Yu, Toby Jia-Jun Li, Shiqi Jiang, Yunhao Liu, Yaqin Zhang, and Yunxin Liu. Empowering LLM to use Smartphone for Intelligent Task Automation. (arXiv:2308.15272), September 2023. doi:10.48550/arXiv.2308.15272.

[65] Hao Wen, Yuanchun Li, and Sean KiteFlyKid. MobileLLM/AutoDroid. Mobile LLM, January 2024. https://github.com/MobileLLM/AutoDroid.

[66] Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C. Schmidt. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. (arXiv:2302.11382), February 2023. doi:10.48550/arXiv.2302.11382.

[67] Tongshuang Wu, Michael Terry, and Carrie Jun Cai. AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22, pages 1–22, New York, NY, USA, April 2022. Association for Computing Machinery. doi:10.1145/3491102.3517582.
de Curtò, Gemma Roig, and Carlos T. Calafate. LLM Adaptive PID Control for B5G Truck Platooning Sys- tems. Sensors, 23(13):5899, January 2023. doi:10.3390/s23135899. [71] Xiaoying Zhang, Baolin Peng, Kun Li, Jingyan SGP-TOD: Build- Zhou, and Helen Meng. ing Task Bots Effortlessly via Schema-Guided LLM Prompting. (arXiv:2305.09067), May 2023. doi:10.48550/arXiv.2305.09067. [72] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. A Survey of Large Lan- guage Models. (arXiv:2303.18223), May 2023. doi:10.48550/arXiv.2303.18223. [73] Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, 26
ai_researcher
9
Acceleron_A_Tool_to_Accelerate_Research_Ideation.pdf
UCRHEP-T405
February 2006

arXiv:hep-ph/0602116v3 6 Mar 2006

Connecting Dark Energy to Neutrinos with an Observable Higgs Triplet

Ernest Ma
Physics Department, University of California, Riverside, California 92521, USA

Utpal Sarkar
Physical Research Laboratory, Ahmedabad 380009, India

Abstract

To connect the scalar field (acceleron) responsible for dark energy to neutrinos, the usual strategy is to add unnaturally light neutral singlet fermions (right-handed neutrinos) to the Standard Model. A better choice is actually a Higgs triplet, through the coupling of the acceleron to the trilinear Higgs triplet-doublet-doublet interaction. This hypothesis predicts an easily observable doubly-charged Higgs boson at the forthcoming Large Hadron Collider (LHC).

The existence of dark energy [1] may be attributed to a scalar field called the acceleron (or quintessence) [2] whose equation of motion involves a term of negative pressure, allowing the present Universe to expand at an accelerated rate. The acceleron may also form a condensate and couple to matter in such a way that the observed neutrino masses are dynamical quantities. This is the scenario of mass varying neutrinos [3], motivated by the proximity of the effective mass scale of dark energy to that of neutrinos, which may have some interesting consequences [4, 5]. To make the connection, the usual strategy is to introduce 3 right-handed neutrinos N_i, i.e. 3 neutral fermion singlets under the electroweak SU(2)_L × U(1)_Y gauge group. However, contrary to the cherished expectation that m_{N_i} should be very large (thereby triggering the canonical seesaw mechanism [6] and yielding naturally small Majorana neutrino masses m_{ν_i}), they have to be very small, i.e. of order eV, to be compatible with dark energy. In view of this problem, alternative mechanisms for the origin of m_{ν_i} should be explored [7].

In the Standard Model, naturally small Majorana neutrino masses come from the unique dimension-five operator [8]

L_eff = (f_ij/Λ)(ν_i φ⁰ − l_i φ⁺)(ν_j φ⁰ − l_j φ⁺) + H.c.,   (1)

which can be realized at tree level in exactly 3 ways [9], one of which is of course the canonical seesaw mechanism with 3 right-handed neutrinos. Another way is to add a Higgs triplet [10]

Δ = ( ξ⁺/√2    ξ⁺⁺   )
    ( ξ⁰      −ξ⁺/√2 )   (2)

with trilinear couplings to both the lepton doublets (ν_i, l_i) and the Higgs doublet Φ = (φ⁺, φ⁰), i.e.

L_int = f_ij [ν_i ν_j ξ⁰ + (1/√2)(ν_i l_j + l_i ν_j) ξ⁺ + l_i l_j ξ⁺⁺] + μ Φ†ΔΦ̃ + H.c.,   (3)

where Φ̃ = (φ̄⁰, −φ⁻). As a result [11],

(M_ν)_ij = 2 f_ij μ ⟨φ⁰⟩² / m²_{ξ⁰}.   (4)

If μ = μ(A), i.e. a function of the acceleron field A, then this is in fact a natural realization of mass varying neutrinos with m_ξ of order the electroweak scale. In all previous proposals of neutrino mass with a Higgs triplet, there is no compelling reason for m_ξ to be this low. One possible exception [12] is the case of large extra space dimensions, where m_ξ should be below whatever the cutoff energy scale is, but that is only a phenomenological lower bound. On the other hand, if dark energy is indeed connected to neutrinos through the Higgs triplet, then at least ξ⁺⁺ will be unambiguously observable at the forthcoming Large Hadron Collider (LHC).

Consider the most general Higgs potential consisting of Φ and Δ, i.e.

V = m²(Φ†Φ) + M²(Tr Δ†Δ) + (1/2)λ₁(Φ†Φ)² + (1/2)λ₂(Tr Δ†Δ)² + (1/2)λ₃(Tr Δ†ΔΔ†Δ) + λ₄(Φ†Φ)(Tr Δ†Δ) + λ₅(Φ†Δ†ΔΦ) + μ(Φ̃†Δ†Φ + Φ†ΔΦ̃).   (5)
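As a quick consistency check of the minimization conditions quoted below (this worked step is our own addition and not part of the original text), one may substitute the vacuum expectation values ⟨φ⁰⟩ = v and ⟨ξ⁰⟩ = u directly into Eq. (5). Only the λ₅ term vanishes at the vacuum, leaving, up to the sign convention chosen for u,

V(v, u) = m²v² + M²u² + (1/2)λ₁v⁴ + (1/2)(λ₂ + λ₃)u⁴ + λ₄v²u² − 2μuv²,

so that

∂V/∂v = 2v[m² + λ₁v² + λ₄u² − 2μu] = 0,
∂V/∂u = 2u[M² + (λ₂ + λ₃)u² + λ₄v²] − 2μv² = 0,

which, after dividing out the overall factors of 2, are exactly Eqs. (6) and (7) below.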
Let ⟨φ⁰⟩ = v and ⟨ξ⁰⟩ = u; then

v[m² + λ₁v² + λ₄u² − 2μu] = 0,   (6)
u[M² + (λ₂ + λ₃)u² + λ₄v²] − μv² = 0.   (7)

For |μ|, |m| ≪ |M|, m² < 0, λ₁M² − λ₄m² > 0, we have the unique solution

v² ≃ −m²/λ₁,   (8)
u ≃ μv²/(M² + λ₄v²).   (9)

The Higgs triplet masses are then

m²_{ξ⁺⁺} ≃ M² + (λ₄ + λ₅)v²,   (10)
m²_{ξ⁺} ≃ M² + (λ₄ + λ₅/2)v²,   (11)
m²_{ξ⁰} ≃ M² + λ₄v².   (12)

Once produced, the decay of ξ⁺⁺ into two charged leptons is an unmistakable signature with negligible background. Its decay branching fractions also map out |f_ij|, i.e. the entire neutrino mass matrix up to an overall scale [12].

In a model of neutrino dark energy (νDE), the neutrino mass m_ν is a dynamical quantity. It is assumed to be a function of a scalar field A (the acceleron) with a canonically normalized kinetic term and ∂m_ν/∂A ≠ 0. In the nonrelativistic limit, m_ν depends on the total density n_ν of the thermal background of neutrinos and antineutrinos, and the energy or effective potential of the system is given by

V = m_ν n_ν + V₀(m_ν).   (13)

The thermal background and the scalar potential V₀(m_ν) act in opposite directions, and at any instant of time the minimum of the effective potential is given by

V′(m_ν) = n_ν + V₀′(m_ν) = 0.   (14)

We assume the curvature scale of V to be much larger than the Hubble expansion rate, so that the adiabatic approximation is valid. In other words, the solution of Eq. (14) for m_ν is assumed to be valid instantaneously. For an adiabatic expansion of the Universe, the density of matter varies with the scale factor as

ρ ∝ R^(−3(1+ω)),   (15)

where ω is a time-independent parameter, which enters in the following simple equation of state:

p(t) = ω ρ(t).   (16)

In a νDE model, it was shown that ω satisfies the equation

ω + 1 = −m_ν V₀′(m_ν)/V = Ω_ν/(Ω_ν + Ω_DE),   (17)

where Ω_DE = ρ_DE/ρ_c is the contribution of V₀(m_ν) to the energy density and Ω_ν = n_ν/ρ_c is the neutrino energy density. Since the observed value [1] ω = −0.98 ± 0.12 is close to −1 at the present time, Ω_ν should be much smaller than Ω_DE. These considerations restrict the possibilities for the form of the potential. For small dω/dn_ν, the variable mass of the neutrino is proportional to the neutrino density to the power ω: m_ν ∝ n_ν^ω. The above general considerations are valid, independent of the details of the particular model of neutrino mass. However, most phenomenological implications are specific to such details, with a few general features which are common to all models [4].

In the present scenario, for the effective neutrino mass to vary, we have to associate the acceleron field A with the trilinear coupling of Δ with Φ, so that the effective neutrino mass becomes dependent on the field A. This simply means that we set μ = μ(A) in the scalar potential of Eq. (5). As for the self-interactions of A, we may assume for example the following potential:

V₀ = Λ⁴ log(1 + μ̄/|μ(A)|).   (18)

Using Eq. (4), the effective low-energy Lagrangian is then given by

−L_eff = (f_ij |μ(A)| ⟨φ⁰⟩²/m²_{ξ⁰}) ν_i ν_j + H.c. + Λ⁴ log(1 + μ̄/|μ(A)|),   (19)

and Eq. (13) is of the form

V(x) = ax + b log(1 + c/x),   (20)

where x = m_ν ∝ |μ(A)| and a, b, c are all positive. For 4b/ac ≪ 1, x_min ≃ b/a, so

m_ν ∝ n_ν⁻¹,   (21)

as desired. As a condition of naturalness, it has been argued that the mass of the scalar field should not be larger than the order of 1 eV and Λ ∼ 10⁻³ eV.
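The limit quoted in Eq. (21) can be verified directly (again a worked step of our own, not in the original text). Minimizing Eq. (20),

V′(x) = a − bc/[x(x + c)] = 0  ⟹  x² + cx − bc/a = 0  ⟹  x_min = (c/2)[√(1 + 4b/ac) − 1],

so for 4b/ac ≪ 1 one finds x_min ≃ b/a. Since a = n_ν by comparison with Eq. (13), this gives m_ν = x_min ∝ n_ν⁻¹, which is Eq. (21).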
In the canonical realization of mass varying neutrinos using right-handed neutrinos N, this would imply small NN Majorana masses as well as tiny νN Dirac masses, which are clearly rather unnatural. Here, the requirement is simply that m_{ξ⁰} be of order ⟨φ⁰⟩, which is a much more reasonable condition.

Thus the mass of ξ⁰ is predicted to be in the range of 80−500 GeV. The lower limit is the present experimental bound from the direct search for the triplet Higgs scalar, while the upper limit comes from the requirement that it should not be too large compared to the electroweak breaking scale, otherwise it would be difficult to explain neutrino masses much below 1 eV. The form of μ(A) was discussed in the original paper [3] to be μ(A) ∼ λA or μ(A) ∼ μ e^(A²/f²). We shall not go into the details of this discussion on the dynamics of this model, although some of the generic problems of mass varying neutrinos are common to the present model as well [14].

Depending on the form of μ(A), global lepton number may be broken spontaneously in such a model of νDE, thereby creating a massless Goldstone boson, i.e. the Majoron. However, as shown below, its coupling to ordinary matter is highly suppressed, hence its existence is acceptable phenomenologically. If we take the case μ(A) ∼ λA (where A is complex), we can express the field A as

A = (1/√2)(ρ + √2 z) e^(iϕ),

where z is the vacuum expectation value or condensate of A. Similarly,

φ⁰ = (1/√2)(H + √2 v) e^(iθ),   ξ⁰ = (1/√2)(ζ + √2 u) e^(iη),   (22)

with v and u as the vacuum expectation values of φ⁰ and ξ⁰ respectively. The longitudinal component of the Z boson (G⁰), the physical Majoron (J⁰) and the massive combination (Ω⁰) of (zϕ, uη, vθ) are given by

G⁰ = (v²θ + 2u²η)/√(v² + 4u²),
J⁰ = [(v² + 4u²)z²ϕ − 2u²v²θ + v²u²η] / √(z²(v² + 4u²)² + u²v⁴ + 4v²u⁴),
Ω⁰ = (ϕ − η + 2θ)/√(z⁻² + u⁻² + 4v⁻²),   (23)

respectively. The heavy Ω⁰ is almost degenerate in mass with ζ. They are essentially the reincarnations of ξ⁰. The massless J⁰ is potentially a problem phenomenologically, but its couplings to all leptons are strongly suppressed by (u/v)², and can safely be neglected in all present experiments.

Since the triplet Higgs scalars cannot be much heavier than the usual Higgs doublet, they should be observable at the LHC as well as the proposed future International Linear Collider (ILC). The phenomenology of such triplet Higgs scalars has been discussed in [12]. The same-sign dileptons will be the dominant decay modes of the ξ⁺⁺. Complementary measurements of |f_ij| at the ILC by the process e⁻e⁻(μ⁻μ⁻) → l⁻_i l⁻_j would allow us to study the structure of the neutrino mass matrix in detail. Of course, these features are generic to any model with a Higgs triplet as the origin of Majorana neutrino masses. The difference here is that it is also accompanied by the unusual predictions of mass varying neutrinos in neutrino oscillations [4, 15].

In conclusion, we have pointed out in this paper that if the neutrino mass m_ν is dynamical and related to dark energy through the acceleron A, then the most natural mechanism for generating m_ν is that of the Higgs triplet, rather than the canonically assumed right-handed neutrino. The mass scale of the triplet Higgs scalars is predicted to be close to that of electroweak symmetry breaking, hence it has an excellent chance of being observed at the LHC and ILC. Aspects of this model relating to cosmology and neutrino oscillations are similar to other existing models of dark energy. This work was supported in part by the U.S.
Department of Energy under Grant No. DE-FG03-94ER40837. EM thanks the Physical Research Laboratory, Ahmedabad, India for hospitality during a recent visit. We thank Bipin Desai for an important comment.

References

[1] A. Riess et al., Astron. J. 116, 1009 (1998); S. Perlmutter et al., Astrophys. J. 517, 565 (1999); D. Spergel et al., Astrophys. J. Suppl. 148, 175 (2003).
[2] C. Wetterich, Nucl. Phys. B302, 668 (1988); P. J. E. Peebles and B. Ratra, Astrophys. J. 325, L17 (1988).
[3] R. Fardon, A. E. Nelson, and N. Weiner, JCAP 0410, 005 (2004); see also P. Q. Hung, hep-ph/0010126; P. Gu, X. Wang, and X. Zhang, Phys. Rev. D68, 087301 (2003).
[4] D. B. Kaplan, A. E. Nelson, and N. Weiner, Phys. Rev. Lett. 93, 091801 (2004); G. Dvali, Nature 432, 567 (2004).
[5] X.-J. Bi, P. Gu, X. Wang, and X. Zhang, Phys. Rev. D69, 113007 (2004); P.-H. Gu and X.-J. Bi, Phys. Rev. D70, 063511 (2004).
[6] M. Gell-Mann, P. Ramond, and R. Slansky, in Supergravity, edited by P. van Nieuwenhuizen and D. Z. Freedman (North-Holland, Amsterdam, 1979), p. 315; T. Yanagida, in Proceedings of the Workshop on the Unified Theory and the Baryon Number in the Universe, edited by O. Sawada and A. Sugamoto (KEK Report No. 79-18, Tsukuba, Japan, 1979), p. 95; R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, 912 (1980).
[7] R. Barbieri, L. J. Hall, S. J. Oliver, and A. Strumia, Phys. Lett. B625, 189 (2005); R. Takahashi and M. Tanimoto, Phys. Lett. B633, 675 (2006); R. Fardon, A. E. Nelson, and N. Weiner, hep-ph/0507235.
[8] S. Weinberg, Phys. Rev. Lett. 43, 1566 (1979).
[9] E. Ma, Phys. Rev. Lett. 81, 1171 (1998).
[10] G. Lazarides, Q. Shafi, and C. Wetterich, Nucl. Phys. B181, 287 (1981); J. Schechter and J. W. F. Valle, Phys. Rev. D22, 2227 (1980).
[11] E. Ma and U. Sarkar, Phys. Rev. Lett. 80, 5716 (1998).
[12] E. Ma, M. Raidal, and U. Sarkar, Phys. Rev. Lett. 85, 3769 (2000); Nucl. Phys. B615, 313 (2001).
[13] D. N. Spergel et al., Astrophys. J. Suppl. 148, 175 (2003).
[14] R. D. Peccei, Phys. Rev. D71, 023527 (2005); N. Afshordi, M. Zaldarriaga, and K. Kohri, Phys. Rev. D72, 065024 (2005); X.-J. Bi, B. Feng, H. Li, and X. Zhang, Phys. Rev. D72, 123523 (2005); A. W. Brookfield, C. van de Bruck, D. F. Mota, and D. Tocchini-Valentini, Phys. Rev. Lett. 96, 061301 (2006), astro-ph/0512367; H. Li, B. Feng, J.-Q. Xia, and X. Zhang, astro-ph/0509272.
[15] V. Barger, P. Huber, and D. Marfatia, Phys. Rev. Lett. 95, 211802 (2005); M. Cirelli, M. C. Gonzalez-Garcia, and C. Pena-Garay, Nucl. Phys. B719, 219 (2005).
ai_researcher
1
Characterising_Successful_Fashion_Blogs_and_Their_Evaluation_Metrics.pdf
Identifying Influential Bloggers: Time Does Matter

Leonidas Akritidis, Dimitrios Katsaros, Panayiotis Bozanis
Department of Computer & Communication Engineering
University of Thessaly
Volos, Greece
{leoakr, dkatsar, pbozanis}@inf.uth.gr

arXiv:0905.2416v1 [cs.IR] 14 May 2009

Abstract—Blogs have recently become one of the most favored services on the Web. Many users maintain a blog and write posts to express their opinion, experience and knowledge about a product, an event and every subject of general or specific interest. More users visit blogs to read these posts and comment on them. This “participatory journalism” of blogs has such an impact upon the masses that Keller and Berry argued that through blogging “one American in ten tells the other nine how to vote, where to eat and what to buy” [9]. Therefore, a significant issue is how to identify such influential bloggers. This problem is very new and the relevant literature lacks sophisticated solutions, but most importantly these solutions have not taken into account temporal aspects for identifying influential bloggers, even though time is the most critical aspect of the Blogosphere. This article investigates the issue of identifying influential bloggers by proposing two easily computed blogger ranking methods, which incorporate temporal aspects of the blogging activity. Each method is based on a specific metric to score the blogger’s posts. The first metric, termed MEIBI, takes into consideration the number of the blog post’s inlinks and its comments, along with the publication date of the post. The second metric, MEIBIX, is used to score a blog post according to the number and age of the blog post’s inlinks and its comments. These methods are evaluated against the state-of-the-art influential blogger identification method utilizing data collected from a real-world community blog site. The obtained results attest that the new methods are able to better identify significant temporal patterns in the blogging behaviour.

Keywords-Blogosphere; influential bloggers; ranking

I. INTRODUCTION

During the last years, we have witnessed a massive transition in the applications and services hosted on the Web. The obsolete static Web sites have been replaced by numerous novel, interactive services whose common feature is their dynamic content. The social and participatory characteristics that were included in these services led to the generation of virtual communities, where users share their ideas, knowledge, experience, opinions and even media content. Examples include blogs, forums, wikis, media sharing, bookmark sharing and many others, which are collectively known as the Web 2.0.

Blogs are locations on the Web where individuals (the bloggers) express opinions or experiences about a subject. Such entries are called blog posts and may contain text, images, embedded videos or sounds and hyperlinks to other blog posts and Web pages. On the other hand, the readers are provided with the ability to submit their own comments in order to express their agreement or disagreement with the ideas or opinions contained in the blog post. The comments are usually placed below the post, displayed in reverse chronological order. The virtual universe that contains all blogs is known as the Blogosphere and accommodates two types of blogs [1]: a) individual blogs, maintained and updated by one blogger (the blog owner), and b) community blogs, or multi-authored blogs, where several bloggers may start discussions about a product or event.
Since in the former type of blogs only the owner can start a new line of posts, the present article focuses only on community blogs.

In a physical community, people typically consult others about a variety of issues, such as which restaurant to choose, which medication to buy, which place to visit or which movie to watch. Similarly, the Blogosphere is a virtual world where bloggers buy, travel and make decisions after they listen to the opinions, knowledge, suggestions and experience of other bloggers. Hence, they are influenced by others in their decision making, and these others are defined in [9] as the influentials. The identification of the influentials is of significant importance, because they are usually connected in large virtual communities and thus they can play a special role in many ways. For instance, commercial companies can turn their interest to gaining the respect of the influentials so that they become their “unofficial spokesmen”, instead of spending huge amounts of money and time to advertise their products to thousands of other potential customers. It can also lead to the development of innovative business opportunities (related to commercial transactions and travelling), can assist in finding significant blog posts [3], [7], and can even be used to influence other people’s voting behavior.

The issue of identifying influential bloggers is very recent, and although it seems similar to problems like the identification of influential blog sites [4] and the identification of authoritative Web pages [11], the techniques proposed for these problems cannot be applied to the identification of influential bloggers. The problem of identifying the influential bloggers has been introduced in [2], and the literature lacks other sophisticated solutions. That initial model, mentioned here as the influence flow method, explicitly discriminated the influential from the active (i.e., productive) bloggers, and considered features specific to the Blogosphere, like the blog post’s size, the number of comments, and the incoming and outgoing links. Nevertheless, this model fails to incorporate temporal aspects which are crucial to the Blogosphere, and does not take into account productivity as another factor which affects influence.

Motivated by these observations, this article proposes a new way of identifying influential bloggers in community blogs, by considering both the temporal and productivity aspects of the blogging behavior, along with the inter-linkage among the blog posts. The proposed methods are evaluated against the aforementioned initial model (which is the only competitor so far) using data from a real-world blog site.

The rest of the paper is organized as follows: In Section II we briefly present the relevant work, describing in more detail the only method which is closely relevant to the problem considered here. Section III introduces the proposed algorithms for the identification of influential bloggers; in Section IV we conduct experiments with a dataset obtained from a real-world blog community and finally, we conclude the paper in Section V.

II. RELEVANT WORK

The recent explosion of the Blogosphere has attracted a surge of research on issues related to Blogosphere modeling, mining, trust/reputation, spam blog recognition, and many others [1]; these issues though are not directly relevant to the present work.
The specific problem of identifying the influential bloggers in a blog site draws analogies from the problems of identifying influential blog sites and identifying authoritative Web pages (Web ranking). The identification of influential blog sites [4] and the related study of the spread of influence among blog sites [5], [6], [8], [12] are orthogonal to the problem considered here, since we are interested in identifying influential bloggers in a single blog site, which might or might not be an influential blog site. Similarly, the eigenvector-based methods for identifying authoritative Web pages [11], like PageRank and HITS, “are not useful to our problem, since blog sites in Blogosphere are very sparsely linked” [10]. Finally, it is obvious that the works which propose methodologies for discovering and analyzing blog communities [13], [15] cannot be exploited/tailored to our problem.

The only work directly relevant to our problem is that reported in [2], which introduced the problem. To solve it, the authors proposed an intuitive model for evaluating the blog posts. This model is based on four parameters: Recognition (proportional to the incoming links), Activity Generation (proportional to the number of comments), Novelty (inversely proportional to the outgoing links) and Eloquence (inversely proportional to the post’s length). These parameters are used to generate an influence graph in which the influence flows among the nodes. Each node of this graph represents a single blog post characterized by the four aforementioned properties. An influence score is calculated for each post; the post with maximum influence score is used as the blogger’s representative post. The influence score I(p) of a blog post p that is being referenced by ι posts and cites θ external posts is determined by the following equation:

I(p) = w(λ) (w_com γ_p + w_in Σ_{m=1}^{|ι|} I_p(m) − w_out Σ_{n=1}^{|θ|} I_p(n)),   (1)

where w(λ) is a weight function depending on the length λ of a post and w_com denotes a weight that can be used to regulate the contribution of the number of comments (γ_p). Finally, w_in and w_out are the weights that can be used to adjust the contribution of incoming and outgoing influence respectively. The calculation of this influence score is recursive (positive reinforcement from incoming links and negative reinforcement from outgoing links), similar to the PageRank definition. This score is the ιIndex metric, which can later be used to identify the most influential bloggers.

Isolating a single post to decide whether a blogger is influential or not is an oversimplistic approach, and so it would be if gross metrics, like the average, the median and so on, had been used. A blogger may have published only a handful of influential posts and numerous others of low quality, whereas another blogger may have published several tens of influential posts only, whose scores though are lower than the score of the most influential post of the former blogger. Therefore, the productivity of bloggers is a significant issue that has been overlooked by this preliminary model.

Another drawback of this preliminary model is that its output depends highly on user-defined weights. Changing the values of the above weights can lead to different rankings. Hence, its outcome is not objective, as by tuning the appropriate weights the model identifies influential bloggers with different characteristics.
In other words, this model cannot provide a satisfactory answer to the question “who is the most influential blogger?”; but it can give answers to questions of the type “who is the most influential blogger according to the number of comments that his/her posts received?”.

But most importantly, this model (and also the naive models which are based on the k most active bloggers) ignores one of the most important factors in the Blogosphere: Time. As already known [1], the Blogosphere expands at very high rates, as new bloggers enter the communities and some others leave it. Hence, an effective model that identifies influential bloggers should take into consideration the date that a post was submitted and the dates that the referencing posts were published, in order to be able to identify the now-influential bloggers. Additionally, with such requirements it is mandatory to have fast methods (even online methods) for the discovery of the influentials, which precludes the use of demanding and unstable recursive definitions, like that used by the influence-flow method proposed in [2].

III. NEW METRICS FOR EVALUATING THE IMPACT OF BLOG POSTS

In this section we present new methods to assign influence scores to the blog posts of a blogger. These scores will be used later to identify the influentials. At first, we argue about what the desirable properties of these scores should be, and then we provide the formulae for their calculation.

A. Factors measuring a blogger’s influence

Beyond any doubt, the number of incoming links to a blog post is strong evidence of its influence. Similarly, the number of comments made to a post is another strong indication that this blog post has received significant attention from the community. The case of outlinks is more subtle. In Web ranking algorithms like PageRank and HITS, the links are used only as a recognition of (or to convey) authority. The influence-flow method of [2] assigns two semantics to a link: it is the means to convey authority, and also the means to reduce the novelty. This mechanism results in two significant problems: a) it misinterprets the intention of the link creators, and b) it causes stability and convergence problems to the algorithm for the influence score calculation. It is characteristic that the authors admit ([2, page 215]) that the presence of outlinks in novel posts is quite common and is used “to support the post’s explanations”. Therefore, we argue that the outlinks are not relevant to the post’s novelty, and all links should have a single semantic, that of implying endorsement (influence).

The temporal dimension is of crucial importance for identifying the influentials. The time is related to the age of a blog post and also to the age of the incoming links to that post. An influential is recognized as such if s/he has written influential posts recently or if his/her posts have had an impact recently. In the former case, the time involves the age of the post (e.g., in days since the current day) and in the latter case, the time involves the age (e.g., in days since the current day) of the incoming links to the post.

There is another observation evident from the analysis presented in [2]: a lot of the influential bloggers were also active (i.e., productive) bloggers (see Table 1 and Tables 3–5 of [2]). Although productivity and influence do not coincide, there is a quite strong correlation among them. Therefore, productivity should somehow be taken into account when seeking influential bloggers.
B. The novel influence scores

Based on the requirements described in the previous subsection, we develop formulae to estimate the influence of a blog post. We summarize some useful notation in Table I.

Symbol      Meaning
BP(j)       the set of blog posts of blogger j
bp_j(i)     i-th blog post of blogger j
C_j(i)      the set of comments to post i of blogger j
R_j(i)      set of posts referring (having a link) to the i-th post of blogger j
ΔTP_j(i)    time interval (in days) between the current time and the date that the j-th blogger's post i was submitted
ΔTP(x)      time interval (in days) between the current time and the date that post x was submitted
Table I: NOTATION.

As already mentioned, the map of the Blogosphere changes rapidly, in a manner that a blogger who would currently be considered as influential is not guaranteed to remain influential in the future. New bloggers enter the community and thousands of posts are submitted every day. In Section IV it is demonstrated that a blogger may submit up to hundreds (or even thousands) of posts yearly. In this dynamic environment, the date that a blogger’s post was submitted is crucial, since a blog post becomes “old” very quickly. An issue being discussed in a blog post at the present time that is now of major importance may be totally outdated after two months. To account for this, we assign a score S^m_j(i) to the i-th post of the j-th blogger as follows:

S^m_j(i) = γ (|C_j(i)| + 1) (ΔTP_j(i) + 1)^(−δ) |R_j(i)|.   (2)

The parameter γ is not absolutely necessary, but it is used to grant the quantities S^m_j(i) a value large enough to be meaningful. Similarly, the parameter δ does not affect the relative score values in a crucial way, but it is used to allow for fast decaying of older posts. Both parameters do not need complicated tuning, since they are not absolutely necessary; in our experiments, γ and δ are assigned values equal to 4 and 1, respectively. Since a post may receive no comments at all, we add one to the factor that counts the number of comments, to prevent null scores.

Using the definition of the scores S^m_j(i), we introduce a new metric, MEIBI (Metric for Evaluating and Identifying a Blogger’s Influence), for identifying influential bloggers. The definition of MEIBI follows:

Definition 1. A blogger j has MEIBI index equal to m, if m of his/her BP(j) posts get a score S^m_j(i) ≥ m each, and the rest BP(j) − m posts get a score of S^m_j(i) ≤ m.

This definition awards both the influence and the productivity of a blogger. Moreover, a blogger will be influential if s/he has posted several influential posts recently. But an old post may still be influential. How could we deduce this? Only if we examine the age of the incoming links to this post. If a post is not cited anymore, it is an indication that it negotiates outdated topics or proposes outdated solutions. On the other hand, if an old post continues to be linked to presently, then this is an indication that it contains influential material.
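To make Definition 1 concrete, the following minimal sketch (our own illustration, not code from the paper; the data structures and names are hypothetical) computes the scores of Eq. (2) and the resulting MEIBI index, using γ = 4 and δ = 1 as in the experiments of Section IV. It also includes the score of the MEIBIX variant introduced next, which replaces the post-age factor with a per-inlink age weight:

from dataclasses import dataclass

@dataclass
class Post:
    n_comments: int      # |C_j(i)|
    age_days: int        # ΔTP_j(i)
    inlink_ages: list    # ages (in days) of the posts in R_j(i)

def meibi_score(post, gamma=4, delta=1):
    # Eq. (2): S^m_j(i) = γ(|C_j(i)|+1)(ΔTP_j(i)+1)^(-δ)|R_j(i)|
    return gamma * (post.n_comments + 1) * (post.age_days + 1) ** (-delta) * len(post.inlink_ages)

def meibix_score(post, gamma=4, delta=1):
    # Eq. (3) below: age-weight each incoming link instead of the post itself
    return gamma * (post.n_comments + 1) * sum((a + 1) ** (-delta) for a in post.inlink_ages)

def blogger_index(scores):
    # Definitions 1 and 2: the largest m such that m posts score at least m each
    scores = sorted(scores, reverse=True)
    m = 0
    while m < len(scores) and scores[m] >= m + 1:
        m += 1
    return m

def meibi(posts):
    return blogger_index([meibi_score(p) for p in posts])

The MEIBIX index of Definition 2 below is obtained by applying blogger_index to the meibix_score values in exactly the same way.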
Based on the ideas developed for the MEIBI metric, we work in an analogous fashion. Instead of assigning to a blogger’s old posts smaller scores depending on their age, we can assign to each incoming link of a blogger’s post a smaller weight depending on the link’s age. This idea is quantified into the following equation:

S^x_j(i) = γ (|C_j(i)| + 1) Σ_{∀x∈R_j(i)} (ΔTP(x) + 1)^(−δ).   (3)

Based on Eq. (3), the definition of the MEIBIX (MEIBI eXtended) metric is formulated as follows:

Definition 2. A blogger j has MEIBIX index equal to x, if x of his/her BP(j) posts get a score S^x_j(i) ≥ x each, and the rest BP(j) − x posts get a score of S^x_j(i) ≤ x.

The introduction of MEIBI and MEIBIX provides a straightforward policy for evaluating the influence of both blog posts and bloggers. No user-defined weights need to be set before these metrics provide results, whereas the most sound features of the Blogosphere are considered. Moreover, the calculation of the metrics can be performed in an online fashion, since they do not involve complex computation and they do not present stability problems like those encountered when using eigenvector-based influence scores. Note that the developed metrics are similar in spirit to the h-index and its variations (see [14]) that recently became popular in the scientometrics literature, but the challenges in the Blogosphere are completely different: there are comments associated with each blog post, the time granularity is finer, the author of a post is a single person, the resulting graph might contain cycles, and many more. There is also the possibility of taking into account the time that each comment was written, but such an extension does not contribute significantly to the strength of the model, since the time-varying interest in the post is captured by the time-weighting scheme of the incoming links; moreover, it introduces the problem of having to handle two time scales, i.e., days for the links and the posts themselves, and hours or minutes for the comments. In the sequel, we will evaluate the effectiveness of the proposed metrics on a real-world dataset, comparing them with their only competitor [2].

IV. EXPERIMENTAL EVALUATION

The evaluation of the methods proposed here, and in general of many others developed in the context of information retrieval, is tricky, because there is no ground truth to compare against; things are more challenging in this case, since there is only one alternative [2] to contrast with. Nevertheless, we firmly believe that our evaluation is useful and solid as long as the proposed methods reveal some latent facts that are not captured by the competitor and by some straightforward methods, which result in different rankings for the final influential bloggers. In the sequel of this section, we first describe the real data we collected for the experiments, and then present the actual experiments and the obtained results.

A. Data characteristics

Millions of blog sites exist. The Technorati blog search engine (http://technorati.com) claims to have indexed more than 115 million blogs. Since it is impossible to crawl the entire Blogosphere to obtain a complete dataset, it is essential to detect an active blog community that provides blogger identification, date and time of posting, number of comments and outlinks. The Unofficial Apple Weblog (TUAW, http://www.tuaw.com) is a community that meets all these requirements; the same source of data was used also in [2]. Although we use data from only one blog, the proposed methods can be applied to every blog community having characteristics similar to those of TUAW. We crawled TUAW (during the first week of December 2008) and collected approximately 160 thousand pages, from which we extracted 17831 blog posts authored by 51 unique bloggers. This accounts for approximately 350 posts per blogger on average. Moreover, the posts received a total of 269449 comments (15 comments per post on average); only 1761 posts (a ratio of 10%) were left uncommented. To obtain the incoming links to each blog post, we used the Technorati API (http://technorati.com/developers/api/cosmos.html).
Apart from the number of the incoming links, we also retrieved the date that the referring post was submitted and its author’s name. This information is necessary for the calculation of the MEIBI and MEIBIX metrics. From the total of 17831 blog posts, only 4586 of them had incoming links. Table II depicts the time distribution of both the blog posts and the incoming links.

Year    Posts   Posts with inlinks   Inlinks
2008    3676    3653                 53204
2007    4497    662                  259
2006    4354    186                  18
2005    4307    77                   1
2004    997     8                    0
Total   17831   4586                 53575
Table II: TIME DISTRIBUTION OF POSTS AND INLINKS.

It is interesting to note that 80% of the total posts which have received at least one incoming link (3653 posts out of the total 4586) were submitted within the year 2008. Consequently, either TUAW was not so popular before 2008 and the bloggers were unaware of the information published there, or the posts submitted before 2008 were of medium or low quality, so that only a few other bloggers referred to them. Hence, time-aware influence metrics which measure the time difference in days are indeed necessary to differentiate between influential bloggers.

We also investigate the temporal distribution of the incoming links to a blog post, measuring the intermediate time between the date a post was submitted and the date it received each of the incoming links. The results are depicted in Table III. Almost half of the total inlinks were received (published) the same day that the post was submitted. Only 2.3% of all inlinks are dated one or more years after the publication of the post. These results prove the necessity of time-aware metrics for the identification of the influentials; since the posts are influential for a few days, it is not particularly useful to identify influentials over the whole lifetime of the blog site, but it is more substantial to identify the now-influential bloggers of the blog site.
We divide the experimentation into three parts: in the first part, we compare the influential bloggers indicated by the proposed methods, to the bloggers found by the plain methods. We use the entire dataset as a baseline experiment, examine whether temporal considerations are worthy examining; in the second part, we compare the influential bloggers indicated by the proposed methods, with those found by the influence-flow method using the posts published in November 2008, to prove that even for small time intervals the rankings are different; finally, we examine the temporal evolution of the influential bloggers identified by the proposed methods during the year 2008, to examine whether the most influential bloggers lose their lead in influence and strengthen even more the necessity for temporal considerations. 1) The new methods vs. the plain ones: Table IV includes the ten most influential bloggers based solely on their activity (i.e., productivity) measured by the number of posts Bloggers S. McNulty D. Caolo D. Chartier E. Sadun C.K. Sample III 1 2 3 4 5 6 M. Lu L. Duncan 7 8 C. Bohon 9 M. Rose 10 M. Schramm N 3037 2242 1835 1560 1057 1043 954 793 793 648 First 06/01/2005 07/06/2005 26/08/2005 09/11/2006 01/03/2005 13/12/2006 19/09/2004 24/02/2004 29/11/2006 07/06/2007 Last 31/07/2008 04/12/2008 30/08/2007 26/09/2008 05/06/2006 04/12/2008 23/01/2007 04/12/2008 05/12/2008 04/12/2008 Table IV BLOGGERS RANKING BASED ON THE NUMBER OF POSTS SUBMITTED (ACTIVE BLOGGERS). Table V presents a ranking of the ten most influential bloggers when the h-index [14] metric is used; recall that this metric examines the number of posts of each blogger and the number of incoming links to each posts, awarding both productivity and influence. The third column of Table V displays the value of the h-index metric for each blogger and the next two columns show the total number of posts he/she has submitted in TUAW and how many of them have been cited by other posts respectively. Finally, the last column illustrates the total number of incoming links that all the posts of a blogger have received. R. Palmer Bloggers E. Sadun C. Bohon h 31 1 2 29 3 M. Schramm 25 25 4 24 5 M. Rose 23 6 D. Caolo 23 7 M. Lu 23 8 22 9 22 10 S. McNulty B. Terpstra C. Warren Posts Cited 489 1560 676 793 339 648 354 354 364 793 459 2242 397 1043 334 3037 223 226 112 133 Inlinks 5759 9439 4322 4809 4222 4907 4282 3212 3013 1605 Table V BLOGGERS RANKING BASED ON THE H-INDEX. Comparing Table V to Table IV, some significant dif- ferences derive. These differences justify that productivity and influence do not coincide. The most active blogger, S. McNulty is ranked 8th when the ranking is done in decreasing h-index order. According to the h-index metric, the most influential blogger is E. Sadun who has 31 articles that has at least 31 incoming links each. E. Sadun is the fourth most active blogger in TUAW, though she has posted nothing in the last 2.5 months. Although she has been inactive recently, she is still the most influential according to the h-index metric. This proves that the h-index can indicate the most influential blogger, but cannot identify bloggers who are both influential and active. In the sequel, we apply the two proposed metrics MEIBI and MEIBIX in our dataset. The ranking of the bloggers according to the MEIBI metric is displayed in Table VI. m Bloggers 49 C. Bohon 1 46 R. Palmer 2 36 S. Sande 3 34 4 E. Sadun 5 M. Rose 30 6 M. Schramm 30 28 7 27 8 25 9 M. Lu 17 10 B. Terpstra C. Warren D. 
Caolo Cj 14745 9916 7246 32432 13499 12838 4857 27985 17966 3770 Table VI BLOGGERS RANKING BASED ON THE MEIBI INDEX. The data displayed in Table VI indicate that the blogger whose posts were the most influential recently, is C. Bohon. This is partially explained by the fact that 676 out of the total 793 posts, have received 9439 references; it is the highest number of incoming links among the other bloggers. Furthermore, all posts have been commented 14745 times. On the other hand, E. Sadun, the most influential blogger according to the h-index metric, falls in the fourth position; considering the fact that she has remained relatively inactive this is a satisfactory result. R. in the past 2.5 months, Palmer and S. Sande occupy the second and third position respectively. All top-three bloggers have submitted posts within December 2008. This is an indication that the MEIBI index not only identifies the most influential bloggers, but also the most active. It is a metric that suits very well to our case, as Blogosphere changes rapidly and our metric manages to keep track of these changes by handling the ages of the posts and the comments that they receive. Table VII presents the most influential bloggers according to the MEIBIX index. One may detect several similarities between Table VI and Table VII. The most active blogger of TUAW, S. McNulty, is not among the top-10 influential bloggers when the ranking is performed according to either MEIBI or MEIBIX. This indicates that although S. McNulty is undoubtedly an active blogger, he has not submitted influential posts recently. Table V though, reveals that the blogger in question, is the 8th most influential when the ranking is determined by the plain h-index metric. Bloggers C. Bohon R. Palmer S. Sande E. Sadun C. Warren x 48 1 47 2 37 3 33 4 30 5 6 M. Rose 29 7 M. Schramm 27 26 8 M. Lu 25 9 15 10 D. Caolo B. Terpstra Table VII BLOGGERS RANKING BASED ON THE MEIBIX INDEX. Finally, we computed the correlation of the rankings pro- duced by h-index, MEIBI and MEIBIX by using the Spear- man’s rho metric. The results (Table VIII) indicate that MEIBI and MEIBIX produce similar rankings, but both of them diverge from the h-index ordering significantly. ρ Methods 0.478788 h-index – MEIBI h-index – MEIBIX 0.321212 0.951515 MEIBI – MEIBIX Table VIII CORELLATION OF RANKINGS i.e., 2) The new methods vs. the influence-flow method: For the comparison of the proposed metrics against the basic competitor, influence-flow method [2], we select a subset of the real data in order to be fairer. It was obvious by the experimentation of the previous paragraphs, that the inactivity has a dramatic effect upon the final ranking. The real question concerning the usefulness of the proposed methods is whether in a small period of time, say a month, these methods would provide different rankings than those of the influence-flow method. Thus, we selected to work upon the blog posts of November 2008 only. For comparison purposes, we present in Table IX the top-10 of active (most productive) bloggers during November 2008 as this ranking is provided by the TUAW site itself. the most In Table IX we present influential bloggers for November 2008 as they are provided by the influence- flow method and the MEIBI and MEIBIX metrics. Neither MEIBI nor MEIBIX generate rankings that agree with the TUAW ranking of bloggers. TUAW concerns R. Palmer as more influential than S. Sande. On the other hand, MEIBI concerns R. Palmer and S. Sande to be equally influential. 
The former has authored more posts which received more Bloggers C. Bohon R. Palmer S. Sande N Inlinks 47 1 42 2 3 34 4 M. Schramm 29 20 D. Caolo 5 19 6 M. Rose 15 7 8 8 8 9 M. Lu 5 10 V. Agreda 508 339 354 203 163 138 103 80 71 30 B. Terpstra C. Warren Cj 556 491 177 166 178 154 87 331 248 42 Blogger C. Bohon R. Palmer 1 2 3 M. Lu C. Warren 4 D. Caolo 5 C. Ullrich 6 7 S. Sande 8 M. Rose 9 10 V. Agreda Jason Clarke Blogger C. Bohon R. Palmer S. Sande D. Caolo m 26 1 20 2 20 3 4 17 5 M. Schramm 16 13 6 M. Rose 8 7 M. Lu 7 B. Terpstra 8 7 9 C. Warren 4 10 V. Agreda Blogger C. Bohon S. Sande R. Palmer D. Caolo x 27 1 20 2 19 3 4 18 5 M. Schramm 16 13 6 M. Rose 8 7 M. Lu 7 B. Terpstra 8 7 9 C. Warren 4 10 V. Agreda Table IX BLOGGERS RANKING ACCORDING TO: TUAW (LEFT). INFLUENCE-FLOW MODEL (CENTER). MEIBI AND MEIBIX (RIGHT). comments, whereas the latter’s posts although fewer, have been referenced more times by other posts. The ranking produced by MEIBIX positions S. Sande into the second place, higher than R. Palmer. We could state that MEIBIX is more sensitive to the number of incoming references than MEIBI. Comparing the rankings produced by the proposed meth- ods with the ranking according to the influence-flow model, we can state that this model assigns to C. Bohon the first position of the list. The model concerns R. Palmer as the second most influential blogger for the period of November of 2008 and agrees with TUAW. Despite S. Sande has published more articles that received more incoming links, M. Lu’s posts have attracted more comments. Hence, we conclude that M. Lu is primarily influential inside the TUAW community, whereas S. Sande has published influential posts that stimulated other bloggers to refer to them. D. Caolo has authored less posts than S. Sande. Although his articles attracted both less comments and inlinks, the influence-flow model assigns him a higher rank than S. Sande. Obviously, the model’s determination of influential bloggers, by taking into consideration only the best post and discarding all others, leads to erroneous rankings. The Spearman’s rho metric was used to compute the correlation of the rankings of Table IX. The results illus- trated in Table X, reveal that MEIBI and MEIBIX produce rankings that diverge significantly from the one generated by the influence-flow model. ρ Methods 0.284848 TUAW – influence-flow model 0.948485 TUAW – MEIBI 0.939394 TUAW – MEIBIX influence-flow model – MEIBI 0.418182 influence-flow model – MEIBIX 0.357576 0.987879 MEIBI – MEIBIX Table X CORELLATION OF RANKINGS 3) Temporal evolution of the rankings produced by MEIBI and MEIBIX: Finally, it is interesting to examine how the rankings generated by the proposed metrics vary over time. Figures 1 and 2 depict the top-10 influence rankings of the bloggers in the past 11 months (from January 2008 to November 2008), when MEIBI and MEIBIX are applied respectively. The columns in Figures 1 and 2 represent the progression of time, whereas the rows contain the bloggers, ordered according to the time they were recognized as influential. Therefore, the (i, j)-th cell stores the rank of the ith blogger in the jth time window. The dash symbol signifies that the particular blogger was not among the top- 10 of that period. Figure 1. MEIBI. Influential bloggers’ blogging behavior over 2008, according to MEIBI and MEIBIX produce similar rankings; MEIBIX is more affected by the number of incoming links, whereas MEIBI assigns better scores to the posts that attracted more comments. 
Studying the blogger rankings fluctuation over time, com- poses a valuable tool for distinguishing bloggers that have been influential for a very long or very short time. The former can be considered as more influential, as compared to the latter which are proved more trustworthy. Certainly, many other categories of bloggers can be derived from the retrospection of their activity through time and many poten- tial applications can be developed using these categories. Figure 2. MEIBIX. Influential bloggers’ blogging behavior over 2008, according to V. CONCLUSIONS The Blogosphere has recently become one of the most favored services on the Web. Many users maintain a blog and write posts to express their opinion, experience and knowledge about a product, an event, and several others comment upon these opinions. This “participatory journal- ism” of blogs has such an impact upon the masses that Keller and Berry [9] argued that through blogging “one American in tens tells the other nine how to vote, where to eat and what to buy”. Therefore, a significant issue is how to identify such influential bloggers, because commercial companies can turn the influentials to become their “unof- ficial spokesmen”, innovative business opportunities related to commercial transactions and traveling can be developed capitalizing upon the influentials, and so on. This article investigated the problem of identifying in- fluential bloggers in a blog site and proposed two new methods that provide rankings of the influentials. The main motivation for the introduction of these methods is that the closely relevant, competing methods have not taken into account temporal aspects of the problem, which we argue are the most important ones when dealing with spaces like the Blogosphere, which is highly volatile and doubles in size every six months. termed MEIBI, The first proposed metric, takes into consideration the number of the blog post’s inlinks and its comments, along with the publication date of the post. The second metric, MEIBIX, is used to score a blog post according to the number and age of the blog post’s inlinks and its comments. The metrics can be computed very fast because they do not involve complex recursive definitions of influence, and in addition they do not use tunable parameters which are difficult to set. Therefore, they can be used in an online fashion for the identification of the now-influential bloggers. These methods were evaluated against the state-of-the- art influential blogger identification method, namely that reported in [2], utilizing data collected from a real-world community blog site. The obtained results attested that the new methods are able to better identify significant temporal patterns in the blogging behaviour, and reveal some latent facts about the blogging activity. REFERENCES [1] N. Agarwal and H. Liu. Blogosphere: Research issues, tools and applications. ACM SIGKDD Explorations, 10(1):18–31, 2008. [2] N. Agarwal, H. Liu, L. Tang, and P. S. Yu. Identifying the influential bloggers in a community. In Proceedings of ACM WSDM Conf., pages 207–218, 2008. [3] J. L. Elsas, J. Arguello, J. Callan, and J. G. Carbonell. Retrieval and feedback models for blog feed search. In Proceedings of ACM SIGIR Conf., pages 347–354, 2008. [4] K. E. Gill. How can we measure the influence of the Blogosphere? In Proceedings of WWE Workshop, 2004. [5] D. Gruhl, R. Guha, R. Kumar, J. Novak, and A. Tomkins. The predictive power of online chatter. In Proceedings of ACM KDD Conf., pages 78–87, 2005. [6] D. 
Gruhl, D. Liben-Nowell, R. Guha, and A. Tomkins. Information diffusion through Blogosphere. ACM SIGKDD Explorations, 6(2):43–52, 2004. [7] B. He, C. Macdonald, and I. Ounis. Ranking opinionated blog In Proceedings of ACM SIGIR posts using OpinionFinder. Conf., pages 727–728, 2008. [8] A. Java, P. Kolari, T. Finin, and T. Oates. Modeling the spread In Proceedings of ACM of influence on the Blogosphere. WWW Conf., 2006. [9] E. Keller and J. Berry. One American in ten tells the other nine how to vote, where to eat and, what to buy. They are The Influentials. The Free Press, 2003. [10] A. Kritikopoulos, M. Sideri, and I. Varlamis. BlogRank: Ranking Weblogs based on connectivity and similarity fea- tures. In Proceedings of AAA-IDEA Workshop, 2006. [11] A. Langville and C. Meyer. The Google’s PageRank and Beyond: The Science of Search Engine Rankings. Princeton University Press, 2006. [12] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. van- Briesen, and N. Glance. Cost-effective outbreak detection in networks. In Proceedings of ACM KDD Conf., 2007. [13] Y.-R. Lin, H. Sundaram, Y. Chi, Y. Tatemura, and B. Tseng. Discovery of blog communities based on mutual awareness. In Proceedings of WWE Workshop, 2006. [14] Wikipedia. The Hirsch h-index, Jan. 2009. Available from http://en.wikipedia.org/wiki/H-index. [15] Y. Zhou and J. Davis. Community discovery and analysis In Proceedings of ACM WWW Conf., pages in Blogspace. 1017–1018, 2006.
ai_researcher
1
Rare_disease-based_scientific_annotation_knowledge_graph.pdf
arXiv:2402.06341v2 [cs.CL] 4 Jul 2024

RareBench: Can LLMs Serve as Rare Diseases Specialists?

Xuanzhong Chen∗
Department of Computer Science and Technology & Institute of Artificial Intelligence & BNRist, Tsinghua University, Beijing, China
[email protected]

Xiaohao Mao∗
Department of Computer Science and Technology & Institute of Artificial Intelligence & BNRist, Tsinghua University, Beijing, China
[email protected]

Qihan Guo∗
Department of Computer Science and Technology & Institute of Artificial Intelligence & BNRist, Tsinghua University, Beijing, China
[email protected]

Lun Wang
Department of Internal Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
[email protected]

Shuyang Zhang†
Department of Cardiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
[email protected]

Ting Chen†
Department of Computer Science and Technology & Institute of Artificial Intelligence & BNRist, Tsinghua University, Beijing, China
[email protected]

ABSTRACT

Generalist Large Language Models (LLMs), such as GPT-4, have shown considerable promise in various domains, including medical diagnosis. Rare diseases, affecting approximately 300 million people worldwide, often have unsatisfactory clinical diagnosis rates, primarily due to a lack of experienced physicians and the complexity of differentiating among many rare diseases. In this context, recent news such as “ChatGPT correctly diagnosed a 4-year-old’s rare disease after 17 doctors failed” underscores LLMs’ potential, yet underexplored, role in clinically diagnosing rare diseases. To bridge this research gap, we introduce RareBench, a pioneering benchmark designed to systematically evaluate the capabilities of LLMs on 4 critical dimensions within the realm of rare diseases. Meanwhile, we have compiled the largest open-source dataset on rare disease patients, establishing a benchmark for future studies in this domain. To facilitate differential diagnosis of rare diseases, we develop a dynamic few-shot prompt methodology, leveraging a comprehensive rare disease knowledge graph synthesized from multiple knowledge bases, significantly enhancing LLMs’ diagnostic performance. Moreover, we present an exhaustive comparative study of GPT-4’s diagnostic capabilities against those of specialist physicians. Our experimental findings underscore the promising potential of integrating LLMs into the clinical diagnostic process for rare diseases. This paves the way for exciting possibilities in future advancements in this field.

∗Co-first authors. †Corresponding authors.

This work is licensed under a Creative Commons Attribution International 4.0 License.
KDD ’24, August 25–29, 2024, Barcelona, Spain
© 2024 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0490-1/24/08
https://doi.org/10.1145/3637528.3671576

CCS CONCEPTS
• Applied computing → Health informatics.

KEYWORDS
benchmark for LLMs; rare disease diagnosis; evaluation

ACM Reference Format:
Xuanzhong Chen, Xiaohao Mao, Qihan Guo, Lun Wang, Shuyang Zhang, and Ting Chen. 2024. RareBench: Can LLMs Serve as Rare Diseases Specialists?. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’24), August 25–29, 2024, Barcelona, Spain. ACM, New York, NY, USA, 12 pages.
https://doi.org/10.1145/3637528.3671576

1 INTRODUCTION

Large Language Models (LLMs) like ChatGPT [39] have obtained widespread attention for their exceptional human-like language understanding and generation capabilities. As a result, applying LLMs in medicine is emerging as a promising research direction in artificial intelligence and clinical medicine. Several studies have been conducted to explore how LLMs can assist doctors in various medical and clinical tasks, including medical diagnosis [24, 56], clinical report generation [48, 59], and medical education [23]. However, there is currently a lack of research investigating the capabilities and limitations of LLMs in the context of rare diseases.

Rare diseases collectively refer to a broad category of diseases, typically defined by their low prevalence in the population. Over 7,000 types of rare diseases are currently recognized [12], with approximately 80% being genetic in origin. Patients with rare diseases often face a high probability of misdiagnosis or underdiagnosis [33], and the average time before receiving a confirmative diagnosis extends over several years [8]. The difficulty in diagnosis is largely attributed to the lack of prior exposure to rare diseases among physicians, hindering the accurate recognition of rare diseases and their associated phenotypes. The phenotypes are typically symptoms, signs, or other disease-related information observed in rare disease patients that are used for disease diagnosis. However, there is significant phenotypic overlap among different rare diseases, as well as between rare diseases and common diseases, which further increases the difficulty of disease identification and diagnosis.

Clinical diagnosis of rare diseases typically involves two primary steps. Initially, physicians collect clinical information from patients, including epidemiological information, symptoms, signs, past medical history, and family history, to formulate an initial diagnosis. Next, specialized tests such as laboratory tests or imaging examinations will be conducted to further facilitate diagnosis and differential diagnosis. Additionally, due to the frequent involvement of numerous organs and systems in rare diseases, consulting specialists from different fields during the diagnostic process can help achieve a more comprehensive insight and final diagnosis.

There have been many prior works to improve the diagnosis of rare diseases, including standardizing disease phenotype terminology into a hierarchical structure in the Human Phenotype Ontology (HPO) [20–22] and building knowledge bases annotating rare diseases with phenotypes, such as the Online Mendelian Inheritance in Man (OMIM) [13], Orphanet [2], and the Compendium of China's Rare Diseases (CCRD) [15]. These efforts result in a clear and structured representation of rare diseases: a disease or a patient can be represented by a set of associated phenotypes. From a machine learning perspective, computational methods can be developed to classify or rank diseases based on a patient's phenotype information. These computational methods can be classified into two main categories. The first category treats the diagnosis of rare diseases as a ranking problem. In these approaches, a patient or a disease can be represented as phenotype vectors. Diseases are subsequently ranked by computing their semantic similarities with the patient.
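To make this first category concrete, the following minimal Python sketch ranks candidate diseases by plain cosine similarity between binary phenotype vectors. It is our own toy simplification, not the code of any cited system (those use ontology-aware semantic similarity), and the HPO/OMIM-style identifiers in the example are purely illustrative.

```python
import numpy as np

def rank_diseases(patient_phenotypes, disease_annotations, all_phenotypes):
    """Rank diseases by cosine similarity between binary phenotype vectors.

    A toy version of the ranking approach described above; real systems
    such as PhenoMizer use ontology-aware semantic similarity rather than
    plain set overlap.
    """
    index = {p: i for i, p in enumerate(sorted(all_phenotypes))}

    def to_vector(phenotypes):
        v = np.zeros(len(index))
        for p in phenotypes:
            v[index[p]] = 1.0
        return v

    q = to_vector(patient_phenotypes)
    scores = {}
    for disease, phenotypes in disease_annotations.items():
        d = to_vector(phenotypes)
        denom = np.linalg.norm(q) * np.linalg.norm(d)
        scores[disease] = float(q @ d / denom) if denom else 0.0
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical toy data with HPO/OMIM-style identifiers.
annotations = {
    "OMIM:105400": {"HP:0007354", "HP:0002380"},
    "OMIM:164900": {"HP:0002380", "HP:0000726"},
}
vocabulary = {"HP:0007354", "HP:0002380", "HP:0000726"}
print(rank_diseases({"HP:0007354"}, annotations, vocabulary))
```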
Methods falling into this category include PhenoMizer [21], RDAD [16], RDD [45], and LIRICAL [46]. However, these methods are constrained by their underlying assumptions, the lack of good-quality cases for training and testing, and the incomplete phenotypic information on many rare diseases in knowledge bases, often leading to relatively poorer diagnostic performance. The second category treats the diagnosis of rare diseases as an extreme multi-class classification task. Due to the scarcity of real-world data and the vast number of rare diseases to classify, this becomes a typical few-shot classification problem. Additionally, because of a lack of large-scale public rare disease patient datasets, most computational methods were only tested on simulated rare disease patient cases or small disease datasets with few diseases involved. Therefore, the clinical diagnostic capability of these methods remains unclear.

The prerequisite for diagnosing rare diseases using computational methods is to extract standardized and essential phenotypes/symptoms from electronic health records (EHRs) of clinical cases. To map clinical texts into standardized phenotypes, various natural language processing (NLP) methods have been developed, including EHR-Phenolyzer [49], ClinPhen [6], PhenoTagger [32], and PhenoBERT [10]. In a recent study, PhenoBCBERT and PhenoGPT [58] models were introduced to identify clinical phenotypes in clinical texts from pediatric patients. However, there are substantial variations in clinical texts in how physicians may record patient phenotypes, incorporating distinct details and terminology. Moreover, the current count of human phenotypes surpasses 17,000 [20]. All these factors present a significant challenge in precisely mapping or deducing phenotypes from clinical texts.
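As a toy illustration of why this mapping step is hard, the sketch below standardizes a free-text clinical entity to its nearest HPO term name by character n-gram TF-IDF similarity. This is our own simplification, not one of the cited methods, and the five-term vocabulary stands in for the full ontology of more than 17,000 terms.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative slice of the HPO term list (the real ontology is huge).
hpo_terms = ["Cough", "Chest pain", "Shortness of breath",
             "Facial edema", "Fever"]

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
term_matrix = vectorizer.fit_transform(hpo_terms)

def standardize(entity: str, top_n: int = 3):
    """Map a free-text clinical entity to its closest HPO term names."""
    sims = cosine_similarity(vectorizer.transform([entity]), term_matrix)[0]
    ranked = sims.argsort()[::-1][:top_n]
    return [(hpo_terms[i], round(float(sims[i]), 3)) for i in ranked]

print(standardize("short of breath after exertion"))
```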
Figure 1: RareBench's overview of evaluation tasks.

Our work leverages LLMs to conduct comprehensive evaluations in the challenging field of rare diseases. Figure 1 displays the overview of evaluation dimensions for the four tasks of RareBench, and more detailed definitions and descriptions are provided in Section 3. The main contributions are: 1) Dataset and Benchmarking: We develop a diverse, multi-center, and specifically tailored dataset for rare diseases. Alongside this, we introduce RareBench, a comprehensive benchmarking framework for evaluating LLMs' capabilities in real-world complex clinical scenarios like phenotype extraction and differential diagnosis. 2) Advanced Knowledge Integration Prompt: We integrate rich knowledge sources to create an exhaustive knowledge graph for rare diseases. Utilizing a disease-phenotype graph and the hierarchical structure of the phenotype graph, we devise a novel random walk algorithm capitalizing on phenotype Information Content (IC) values to implement dynamic few-shot prompting strategies. This advancement significantly boosts the performance of LLMs excluding GPT-4 in differential diagnosis, even surpassing GPT-4. 3) Human-versus-LLMs Comparative Studies: We demonstrate GPT-4 on par with senior doctors across five distinct specialties in the differential diagnosis of rare diseases through comparative analysis. The experiments show that GPT-4's diagnostic capabilities in rare diseases are now commensurate with those of experienced specialist physicians.

2 RELATED WORK

Medical Benchmarks for LLMs. Prominent medical question-and-answer data benchmarks, such as MedQA [18], PubMedQA [19], MedMCQA [41], MultiMedQA [48], and CMExam [30], primarily derive from medical examinations, typically featuring multiple-choice formats. MedBench [3] introduces a large-scale Chinese benchmark for evaluating LLMs in clinical knowledge and diagnostics. Additionally, DDXPlus [9] provides a detailed medical diagnostic dataset covering symptoms and diagnoses. Our RareBench extends this landscape by focusing on complex clinical scenarios specific to rare diseases.

LLMs' Medical Capability. The evolution of General Medical Artificial Intelligence (GMAI) is reshaping healthcare by automating learning processes and incorporating domain knowledge to reduce the clinical workload [36]. GPT-4 [1], a notable example in this field, has demonstrated exceptional skills in medical questions [37] and rivals or surpasses state-of-the-art models in tasks such as radiology text interpretation [31]. MedPrompt [38] further enhances this capability through specialized prompting techniques, enabling foundational models like GPT-4 to outperform dedicated healthcare models. Besides GPT-4, models like AMIE [54] show superior performance in specific medical tasks, even exceeding the diagnostic abilities of general primary care physicians in some cases. These LLMs not only assist in differential diagnosis but also engage in clinical reasoning, thus improving diagnostic accuracy [24, 34]. Moreover, fine-tuned LLMs can efficiently extract valuable data from clinical notes, significantly boosting patient care quality [55]. Despite these advancements, challenges in accuracy, interpretability, and safety persist in the medical application of LLMs, underscoring the need for continuous refinement [27, 51]. Notably, the potential of LLMs in rare disease contexts is yet to be fully explored, and our research aims to fill this gap.

Diagnosis of Rare Disease. The initial step in clinical diagnosis involves extracting standardized phenotypes from a patient's electronic health record (EHR). To translate clinical texts into standardized Human Phenotype Ontology (HPO) terms, various natural language processing (NLP) methods [6, 10, 29, 32, 49, 58] have been developed. For rare disease diagnosis, current computational methods comprise many statistical or machine learning-based methods [16, 21, 28, 42, 45, 46, 49, 61, 63].

3 COMPOSITION OF RAREBENCH

This section presents 4 critical tasks of the RareBench framework with the most extensive collection of rare disease patient datasets currently accessible. These tasks include 1) phenotype extraction from EHRs, 2) screening for specific rare diseases, 3) comparative analysis of common and rare diseases, and 4) differential diagnosis among universal rare diseases.

Figure 2: RareBench is the first benchmark to evaluate LLMs as rare disease specialists on 4 distinct tasks.

Figure 2 demonstrates the process of employing LLMs as rare disease specialists to complete the four tasks in RareBench using an EHR or a set of phenotypes of an ALS (Amyotrophic Lateral Sclerosis) patient from PUMCH. We also describe our prompting techniques for effectively deploying LLMs as rare disease specialists. Due to the page limit, please refer to the Appendix for dataset details and prompt examples.

3.1 Tasks of the RareBench Framework

3.1.1 Task 1: Phenotype Extraction from Electronic Health Records.
Task 1 involves deriving phenotypes from EHRs for diagnosis. We design 3 sub-tasks: a) Phenotype Extraction: Extracting phenotypes from EHRs precisely. b) General Entity Extraction: Extracting general entities from EHRs. c) General Entity Standardization: Standardizing general entities into phenotypes. General entity extraction and standardization can be considered a two-step decomposition of phenotype extraction, allowing for a more detailed evaluation of LLMs' capabilities of phenotype extraction.

3.1.2 Task 2: Screening for Specific Rare Diseases. Task 2 aims to evaluate the capability of LLMs in identifying risk factors or symptoms associated with specific rare diseases. It utilizes patients' medical histories and auxiliary examinations to discover potential rare diseases and facilitate further diagnosis. For this study, three rare diseases are selected: ALS (Amyotrophic Lateral Sclerosis), PNH (Paroxysmal Nocturnal Hemoglobinuria), and MSA (Multiple System Atrophy).

3.1.3 Task 3: Comparison Analysis of Common and Rare Diseases. Task 3 aims to validate whether LLMs can differentiate between patients with common and rare diseases that exhibit similar phenotypes/symptoms. From the PUMCH dataset, we select 527 electronic health records containing 60 cases with 13 rare diseases and 467 cases with 64 common diseases. The task of LLMs is to predict the top ten most likely diseases from the mentioned pool of 77 diseases based on a patient's electronic health record.

3.1.4 Task 4: Differential Diagnosis among Universal Rare Diseases. Task 4 is centered on a systematic differential diagnosis across the full spectrum of known rare diseases to identify the most probable rare disease. In this process, specialist physicians consider a range of possible rare diseases and methodically narrow down the potential diagnoses through a process of elimination or additional diagnostic tests. After gathering adequate evidence, they determine the most likely diagnosis. Unlike task 3, this task does not limit the range of rare diseases. Instead, it leverages LLMs to provide the top ten most likely diagnoses based solely on the patient's phenotypes/symptoms. Task 4 involves 2,185 patient cases encompassing 421 rare diseases, collected from both public and PUMCH datasets. Its primary objective is to evaluate LLMs' capability within the complexities of real-world clinical scenarios. As the most pivotal and forward-thinking task of RareBench, it plays a crucial role in assessing the potential of LLMs in handling intricate medical data.

Table 1: Rare disease datasets of RareBench's four tasks.

Dataset   Type            #Diseases   #Cases   Source
Task 1    EHRs            34          87       PUMCH
Task 2    EHRs            3           33       PUMCH
Task 3    EHRs            77          527      PUMCH
Task 4    EHRs/symptoms   421         2,185    Public & PUMCH

3.2 Rare Disease Patients' Dataset

This study categorizes datasets into two main groups: publicly available datasets and the Peking Union Medical College Hospital (PUMCH) datasets. All of these data are utilized to perform one of the four tasks, as illustrated in Table 1. The publicly available datasets provide phenotypes/symptoms alongside confirmed rare diagnoses in OMIM/Orphanet disease codes. Consequently, they can only be employed to assess the performance of differential diagnosis among universal rare diseases (Task 4). In total, public datasets include 1,122 cases spanning over 362 rare diseases. Further descriptions can be found in Appendix A.1.
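A case in these datasets can therefore be thought of as a small record of standardized codes. The sketch below shows one possible in-memory representation, with field names of our own invention (not the released schema), together with the minimum-phenotype quality filter mentioned in the next paragraph.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RareDiseaseCase:
    case_id: str
    phenotypes: frozenset   # HPO term IDs, e.g. frozenset({"HP:0003324", ...})
    diagnosis: str          # confirmed OMIM/Orphanet code, e.g. "ORPHA:803"
    source: str             # "PUMCH" or a public dataset name

def passes_quality_filter(case: RareDiseaseCase) -> bool:
    # Mirrors the filtering criterion described in the text: cases with
    # fewer than three phenotypes are removed.
    return len(case.phenotypes) >= 3
```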
Electronic health records from PUMCH serve as valuable resources for both phenotype extraction and the three diagnostic tasks. The PUMCH dataset comprises a total of 1,650 cases, consisting of 1,183 rare disease cases and 467 common disease cases. Among them, we select specific datasets for each of the four tasks. Specifically, we first choose 87 EHRs involving 34 rare diseases annotated with physician-identified phenotypes to perform task 1. Next, we select 33 cases with complete auxiliary examination and medical history information to execute task 2. Then, from the remaining medical records (excluding those for tasks 1 and 2), we select 60 cases with rare diseases and 467 cases with common diseases exhibiting phenotypes similar to those of the corresponding rare diseases for task 3. Finally, for task 4, we test all the remaining rare disease records (excluding those for tasks 1 and 2), along with all the aforementioned public cases, totaling 2,185 cases. It is important to note that the patients' personally identifiable information has been removed completely. Additionally, doctors from PUMCH monitored all cases before uploading text information, ensuring the absence of any potential personal information leaks. Moreover, we apply reasonable filtering criteria to identify and remove cases of low quality that may be caused by recording errors or missing information, such as those with uncertain or imprecise diagnoses and those lacking sufficient relevant information, i.e., fewer than three phenotypes.

3.3 Framework Setup

3.3.1 Evaluated Models. We select eleven models in our evaluation framework, including API-based and open-source LLMs, as detailed in Table 2. Specifically, we choose 5 API-based models, which exhibit superior performance as a result of substantial investment and advanced development. On the other hand, due to limitations in computational resources, our selection of open-source models is confined to 3 general LLMs and 3 medical LLMs, each with a model size of fewer than 10 billion parameters.

Table 2: Eleven LLMs evaluated as rare disease specialists.

Model                 #Size   Form          Version
GPT-4 [1]             N/A     API           1106-preview
GPT-3.5-Turbo [39]    N/A     API           1106
Gemini Pro [50]       N/A     API           -
GLM4 [7, 60]          N/A     API           -
GLM3-Turbo [7, 60]    N/A     API           -
Mistral-7B [17]       7B      Open Source   instruct-v0.1
Llama2-7B [53]        7B      Open Source   chat
ChatGLM3-6B [7, 60]   6B      Open Source   -
BioMistral-7B [25]    7B      Open Source   -
HuatuoGPT2-7B [4]     7B      Open Source   -
MedAlpaca-7B [14]     7B      Open Source   -

3.3.2 Basic Prompt Design. For the evaluation of the 11 LLMs, we primarily utilize the most fundamental zero-shot prompt. We assign the role of a rare diseases specialist to the LLMs by incorporating "You are a specialist in the field of rare diseases." as the system prompt/initial statement. Additional details on the configuration of LLMs' hyper-parameters are available in Appendix A.2.

3.3.3 More Prompting Strategies Exploration on GPT-4. We further explore diverse prompting strategies with GPT-4, including Chain-of-Thought (CoT) [57] and random few-shot methods. In the CoT settings, the zero-shot prompt is supplemented with "Let us think step by step." to foster a sequential thought process, a technique validated in various general tasks. For random few-shot, we provide LLMs with m random complete input-output examples as prompts, where the choice of m depends on the specific task.
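The following sketch shows how such prompts might be assembled for a chat-style LLM. The system prompt and the CoT suffix are quoted from the paper; the task wording, the function signature, and the message layout are our own assumptions.

```python
SYSTEM_PROMPT = "You are a specialist in the field of rare diseases."

def build_messages(patient_text: str, exemplars=None, cot: bool = False):
    """Assemble a chat-style prompt following the strategies described above.

    `exemplars` is an optional list of (input, output) pairs used for
    m-shot prompting; with cot=True the zero-shot CoT suffix is appended.
    """
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for ex_in, ex_out in exemplars or []:
        messages.append({"role": "user", "content": ex_in})
        messages.append({"role": "assistant", "content": ex_out})
    task = ("Predict the ten most likely diagnoses for this patient:\n"
            + patient_text)
    if cot:
        task += "\nLet us think step by step."
    messages.append({"role": "user", "content": task})
    return messages

# Example: zero-shot CoT prompt for a hypothetical phenotype list.
msgs = build_messages("Muscle weakness; fasciculations; dysarthria.", cot=True)
```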
3.4 Knowledge Integration Dynamic Few-shot

Beyond the prompts above, we construct a rare disease domain knowledge graph by integrating multiple public knowledge bases. This serves as the foundation for our implemented Information Content (IC) based random walk algorithm. The phenotype embeddings generated via this algorithm have been pivotal in developing a dynamic few-shot prompt, depicted in Figure 3. This innovative approach markedly improves the capabilities of LLMs in differential diagnoses among universal rare diseases (Task 4).

Table 3: Key statistics of the integrated rare disease knowledge graph encompassing phenotype (P) and rare disease (RD) nodes and two types of edges (P-P and P-RD). The asterisk ("∗") indicates the consolidation of rare diseases from various knowledge bases into 9,260 unique entities.

Type (Src.)             Num.
Phenotype (P) (HPO)     17,232
Rare Disease (RD) (∗)   9,260
P-P (HPO)               21,505
P-RD (OMIM)             54,169
P-RD (Orphanet)         98,031
P-RD (CCRD)             4,223

3.4.1 Rare Disease Knowledge Graph Construction. Previous methods [16, 21, 42, 45] for the differential diagnosis of rare diseases primarily rely on similarity calculations between diseases using phenotypes in knowledge bases. It is feasible to construct a knowledge graph wherein both rare diseases and phenotypes are represented as nodes, connected by their interrelations as edges. There are two types of edges. The first is phenotype-phenotype edges obtained from the HPO hierarchy, which organizes phenotypes into a directed acyclic graph where each edge connects a more specific phenotype (child) to a broadly defined phenotype (parent). The other is the disease-phenotype information, for which we integrate data from 4 rare-disease-related knowledge bases: the Human Phenotype Ontology (HPO) [20–22], Online Mendelian Inheritance in Man (OMIM) [13], Orphanet [2], and the Compendium of China's Rare Diseases (CCRD) [15]. This integration notably enhances the annotation of associations between rare diseases and phenotypes. The statistical information of the integrated knowledge graph is presented in Table 3, with detailed descriptions of each knowledge base available in Appendix A.4.

3.4.2 Random Walk Based on Information Content. The concept of Information Content (IC) [5] is similar to Inverse Document Frequency (IDF) utilized in natural language processing. IC is employed as an index of a phenotype's specificity to a particular disease. Notably, a phenotype's proximity to the root node in the HPO hierarchy (a broad phenotype), or a higher frequency of its association with multiple diseases (a common phenotype), results in a lower IC, reflecting lesser significance. On the other hand, a phenotype that is highly specific to a rare disease has a high IC. The essence of IC lies in its inverse correlation with the prevalence of a phenotype: the more common a phenotype, the lower its IC. Let T be the complete set of phenotype terms. For a given term t ∈ T, the computation of IC(t) is formulated as follows:

IC(t) = −log( n(t) / N ),

where n(t) represents the count of diseases annotated with the HPO phenotype t or its descendant phenotypes, and N signifies the total number of annotated diseases. In our integrated rare disease knowledge graph, the value of N equals 9,260.
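A direct way to realize this definition is to propagate disease annotations down the HPO hierarchy and take the negative log frequency. The sketch below does this with networkx under assumed data structures (a parent-to-child DAG and a disease-to-phenotype dictionary); it is a minimal illustration, not the authors' released code.

```python
import math
import networkx as nx

def information_content(hpo_dag: nx.DiGraph, disease_annotations: dict):
    """Compute IC(t) = -log(n(t)/N) for every phenotype term.

    `hpo_dag` holds parent->child edges of the HPO hierarchy; n(t) counts
    diseases annotated with t or any of its descendants; N is the number
    of annotated diseases (9,260 in the paper's integrated graph).
    """
    # Invert annotations: phenotype -> set of diseases annotated with it.
    annotated_by = {}
    for disease, phenotypes in disease_annotations.items():
        for t in phenotypes:
            annotated_by.setdefault(t, set()).add(disease)

    N = len(disease_annotations)
    ic = {}
    for t in hpo_dag.nodes:
        diseases = set(annotated_by.get(t, set()))
        # Descendants of t are its more specific phenotypes under the
        # parent->child edge convention used here.
        for child in nx.descendants(hpo_dag, t):
            diseases |= annotated_by.get(child, set())
        n_t = len(diseases)
        # Terms annotating no disease get infinite IC in this sketch;
        # a real pipeline would handle them separately.
        ic[t] = -math.log(n_t / N) if n_t else float("inf")
    return ic
```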
Node2vec [11], a typical shallow embedding method, is calculated by simulating random walks [43] across a graph through a flexible and parameterized walk strategy. These unsupervised walks generate sequences of nodes, which are employed to create node embeddings using the methodologies developed for Word2vec [35]. In Node2vec, the search bias parameter α controls the search strategy in generating random walk sequences of nodes, effectively modulating the preference for breadth-first search (BFS) or depth-first search (DFS) strategies in exploring neighboring nodes.

Figure 3: The workflow of the dynamic few-shot strategy includes an integrated rare disease knowledge graph from 4 knowledge bases and an IC value-based random walk algorithm for phenotype and disease embedding.

However, a direct application of Node2vec to our phenotype-disease knowledge graph in the rare disease domain falls short of adequately capturing each phenotype's distinct influence on the differential diagnosis of diseases. We innovatively integrated IC values into the Node2vec framework to address this limitation, formulating an IC value-based random walk algorithm. This enhancement is designed to enrich the interactions between phenotypes and rare diseases. Under this new scheme, when a random walk progresses to a phenotype node t1 and the subsequent node is another phenotype t2, the walk search bias from t1 to t2 is determined as α = IC(t2). Conversely, if the following node is a rare disease d1, the search bias from t1 to d1 is set by α = IC(t1). This modification ensures that phenotype nodes with higher IC values receive increased focus during the random walk, amplifying associations with the related rare disease nodes.

3.4.3 Dynamic Few-shot Prompting Strategy. Employing the IC value-based random walk algorithm, we developed a function f : T → R^d to project phenotype terms into a d-dimensional vector space, where d = 256. These embeddings are then utilized to represent patients with rare diseases. A patient p with a rare disease d is characterized by a set of phenotype terms, expressed as p = {t_x1, t_x2, ..., t_xn}. The embedding of the patient p is computed as follows:

Embedding(p) = ( Σ_{i=1..n} IC(t_xi) · f(t_xi) ) / ( Σ_{i=1..n} IC(t_xi) ).
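Under the same assumptions as the previous sketch, the two walk-bias rules and the IC-weighted patient embedding can be written compactly, and a small retrieval helper shows how the top-m most similar diagnosed cases could then be selected as few-shot exemplars. All names here are illustrative.

```python
import numpy as np

def walk_bias(current, nxt, ic, is_disease):
    """Search bias α for one step of the IC-based random walk described
    above: α = IC(next) when stepping to another phenotype node, and
    α = IC(current) when stepping from a phenotype to a disease node."""
    return ic[current] if is_disease(nxt) else ic[nxt]

def patient_embedding(phenotypes, f, ic):
    """IC-weighted average of phenotype embeddings, following
    Embedding(p) = Σ IC(t)·f(t) / Σ IC(t). `f` maps each term to a
    d-dimensional vector (d = 256 in the paper)."""
    weights = np.array([ic[t] for t in phenotypes])
    vecs = np.stack([f[t] for t in phenotypes])
    return (weights[:, None] * vecs).sum(axis=0) / weights.sum()

def dynamic_few_shot(query_emb, bank, m=3):
    """Pick the m bank cases whose embeddings are most cosine-similar
    to the query patient; these become the dynamic few-shot exemplars."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(bank, key=lambda case: cos(query_emb, case["emb"]),
                  reverse=True)[:m]
```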
In MedPrompt [38], the dynamic few-shot method selects training examples that are most similar to the specific input. However, it relies on a general-purpose text embedding model such as text-embedding-ada-002 [40], which can only measure the relatedness of phenotype text, without considering deeper associations among phenotypes. Therefore, we utilize the phenotype embeddings generated from our IC value-based random walk algorithm to represent rare disease patients. We then select the top m most similar examples from the rare disease patient datasets. Data with the highest cosine similarity serve as prompts. From the LLMs' perspective, such a retrieval augmented generation (RAG) [26] strategy enables LLMs to be more effectively tailored to the rare disease domain by conforming more closely to differential diagnosis tasks and minimizing hallucinations through enriched knowledge. From a clinical perspective, the diagnostic process of specialist physicians relies on medical knowledge and past experiences of previously diagnosed patients. Consequently, a curated set of relevant examples grants LLMs a distilled version of "clinical experience". It should be noted that there are many other embedding models available that can be used for choosing examples for LLMs' dynamic few-shot prompts, such as MedPrompt [38] and Auto-CoT [62]. However, our IC-value-based random walk strategy is a simple, easy-to-implement method that captures the critical concept of differential diagnosis. Its effectiveness will be evaluated against other state-of-the-art few-shot methods.

Table 4: Comprehensive results of RareBench's 4 tasks. In task 1, "PTE", "GEE", and "GES" respectively stand for phenotype extraction, general entity extraction, and general entity standardization. Bold numbers indicate the best results, while underlined numbers signify the second-best results.

Model           Task 1 F1 (%)        Task 2 Recall (%)    Task 3 (Common & RDs)       Task 4 (Universal RDs)
                PTE / GEE / GES      ALS / PNH / MSA      hit@1/hit@3/hit@10/MR (↓)   hit@1/hit@3/hit@10/MR (↓)
GPT-4 0-shot    24.5 / 64.9 / 38.7   62.5 / 61.1 / 57.1   46.1 / 59.6 / 72.1 / 3.0    32.3 / 45.4 / 58.9 / 5.0
GPT-4 3-shot    26.0 / 61.9 / 42.0   -    / -    / -      -    / -    / -    / -      30.4 / 43.8 / 57.9 / 5.0
GPT-4 CoT       25.1 / 65.9 / 39.7   62.5 / 66.7 / 57.1   47.4 / 62.0 / 75.0 / 2.0    33.2 / 46.4 / 59.5 / 4.0
GPT-3.5-Turbo   17.2 / 48.4 / 32.3   37.5 / 44.4 / 42.9   33.2 / 45.4 / 58.1 / 5.0    21.1 / 34.2 / 48.2 / >10
Gemini Pro      10.1 / 50.8 / 34.3   25.0 / 33.3 / 28.6   24.3 / 32.6 / 44.8 / >10    14.6 / 22.7 / 33.0 / >10
GLM4            15.9 / 56.8 / 24.0   50.0 / 50.0 / 42.9   31.3 / 42.5 / 57.1 / 8.0    19.1 / 30.4 / 45.5 / >10
GLM3-Turbo      12.9 / 53.5 / 31.4   12.5 / 22.2 / 14.3   25.8 / 37.8 / 51.2 / 10.0   12.4 / 21.0 / 33.2 / >10
Mistral-7B      3.3  / 26.4 / 8.8    0.0  / 11.1 / 14.3   13.7 / 19.2 / 29.0 / >10    7.2  / 12.5 / 18.9 / >10
Llama2-7B       0.0  / 0.0  / 0.0    0.0  / 5.6  / 14.3   -    / -    / -    / -      7.4  / 11.0 / 14.8 / >10
ChatGLM3-6B     10.3 / 48.3 / 14.6   12.5 / 5.6  / 0.0    9.7  / 16.1 / 22.8 / >10    5.0  / 7.2  / 10.7 / >10
BioMistral-7B   0.4  / 14.0 / 5.2    12.5 / 5.6  / 14.3   16.7 / 19.5 / 24.9 / >10    6.5  / 8.5  / 12.3 / >10
HuatuoGPT2-7B   3.7  / 17.8 / 6.6    12.5 / 11.1 / 28.6   18.6 / 30.4 / 40.8 / >10    11.4 / 17.8 / 28.1 / >10
MedAlpaca-7B    0.0  / 0.0  / 0.0    0.0  / 0.0  / 14.3   -    / -    / -    / -      8.4  / 14.3 / 19.4 / >10

4 EVALUATION RESULTS OF RAREBENCH

This section presents the comprehensive results of RareBench in Table 4, including the evaluation of GPT-4 across three different settings (zero-shot, few-shot, and Chain of Thought (CoT)) alongside the zero-shot performance of the other 10 LLMs.

4.1 Task 1: Phenotype Extraction from EHR

4.1.1 Metric. Task 1 is evaluated using precision, recall, and F1-score. Accuracy requires exact matches with the reference for phenotype extraction and general entity standardization. For general entity extraction, predictions are correct if they convey the same meaning as the reference.

4.1.2 Results. Although GPT-4 achieves the best performance among all LLMs, the results show that all LLMs perform poorly in phenotype extraction. The general entity extraction and standardization results show that LLMs perform well in entity extraction but have weak capabilities in standardizing general entities into phenotypes.
This suggests that the main reason for the poor performance of LLMs in phenotype extraction is their weak standardization ability.

Beyond Chain-of-Thought (CoT) and random few-shot, our experimentation includes two additional methods: a) Word-to-phenotype matching, where each word in the output is aligned with the semantically nearest word in the HPO phenotype list. The semantic distance is measured by vectorizing words using OpenAI's text-embedding-ada-002 [40] model and then assessing cosine similarity. b) Expanded semantic matching, where we associate each output with the semantically closest n words in the HPO phenotype list, then integrate these matches as a reference output range before re-querying GPT-4. In our tests, n is set at 20. The F1-scores of these two experiments are 0.306 and 0.322, respectively.

4.2 Task 2: Screening for Specific Rare Diseases

4.2.1 Metric. Screening for three specific rare diseases (ALS, PNH, MSA) is measured using recall.

4.2.2 Results. For this task, GPT-4 again achieves the best performance on all three diseases, with recalls exceeding 0.55 in both the zero-shot and Chain of Thought (CoT) settings. Meanwhile, we did not conduct few-shot learning on GPT-4 due to the limited number of similar cases, as we were concerned that it might influence or bias the subsequent diagnostic results. Other API-based LLMs have recalls of less than or equal to 0.50 on all three diseases. Open-source LLMs have top-1 recalls of less than 0.30 on all three diseases. The use of the CoT approach by GPT-4 results in a slight performance improvement. Overall, LLMs demonstrate the potential for screening three rare diseases using patient information, such as medical history, auxiliary examinations, and laboratory tests.

4.3 Task 3: Comparison Analysis of Common & Rare Diseases

4.3.1 Metric. Task 3 employs two key metrics: top-k recall (hit@k, where k=1, 3, 10) and median rank (MR). Top-k recall evaluates the diagnostic accuracy, deeming a diagnosis correct if the actual disease appears within the top-k predictions. Median rank represents the median position of accurate diagnoses within predictions across all cases.
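A small sketch of these two metrics, assuming each case contributes the rank of its true diagnosis (and, by our own convention, a rank of len(ranking)+1 when the true disease is absent from the prediction list, which is then reported as ">10"):

```python
import statistics

def evaluate(rankings, truths, ks=(1, 3, 10)):
    """Compute top-k recall (hit@k) and median rank (MR) over all cases.

    `rankings[i]` is the model's ordered disease list for case i and
    `truths[i]` the confirmed diagnosis.
    """
    ranks = []
    for ranking, truth in zip(rankings, truths):
        rank = ranking.index(truth) + 1 if truth in ranking else len(ranking) + 1
        ranks.append(rank)
    hit_at = {k: sum(r <= k for r in ranks) / len(ranks) for k in ks}
    mr = statistics.median(ranks)
    return hit_at, (mr if mr <= 10 else ">10")

# Toy example: one case whose true diagnosis is ranked second.
hits, mr = evaluate([["ALS", "MSA", "PNH"]], ["MSA"])
print(hits, mr)   # hit@1 = 0.0, hit@3 = 1.0, hit@10 = 1.0, MR = 2
```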
4.3.2 Results. In this task, GPT-4 achieves a top-1 recall of 0.461 under 0-shot settings (with top-3 and top-10 recalls being 0.596 and 0.721, respectively) and a median rank of 3.0. Under the Chain of Thought (CoT) setting, GPT-4 achieves the best performance on all metrics; the CoT approach contributes a moderate improvement to GPT-4's performance. Additionally, we did not perform few-shot learning on GPT-4 due to the limited number of cases, as we were concerned that it might influence or bias the subsequent diagnostic results. Furthermore, GPT-3.5-Turbo achieves a top-1 recall of 0.332, a top-3 recall of 0.454, a top-10 recall of 0.581, and a median rank of 5.0, the second-best performance after GPT-4. The third-best performance is exhibited by GLM4, with a top-1 recall of 0.313, a top-3 recall of 0.425, a top-10 recall of 0.571, and a median rank of 8.0. The other two API-based models have top-1 recalls less than 0.26, with median ranks greater than 10.0. All the open-source models yield top-1 recalls less than 0.20, and all have median ranks greater than 10.0. Additionally, we did not present the results of Llama2-7B because we found its performance on lengthy Chinese EHR texts too poor to output normal results.

4.4 Task 4: Differential Diagnosis among Universal Rare Diseases

4.4.1 Metric. Task 4 employs the same metrics as Task 3.

4.4.2 Results. We yield the following notable findings in our extensive dataset comprising 2,185 patients with 421 distinct rare diseases. For differential diagnosis of universal rare diseases, GPT-4 achieves a top-1 recall of 0.323 under 0-shot settings (with top-3 and top-10 recalls being 0.454 and 0.589, respectively) and a median rank of 5.0. Interestingly, GPT-4's performance slightly decreases under random 3-shot, while adopting the Chain of Thought (CoT) approach leads to a performance improvement with a median rank of 4.0. In contrast, the top-1 recalls for the other 4 API-based LLMs are around 0.2, with all their median ranks exceeding 10.0; the top-1 recalls for all 6 LLMs with fewer than 10 billion parameters are less than 0.12. Notably, HuatuoGPT2-7B outperforms models like Mistral-7B and Llama2-7B but falls short against API-based LLMs like GPT-4. Conversely, BioMistral-7B's performance drops after training on general biomedical data, indicating that crafting or refining LLMs for the rare disease sector could be a fruitful future endeavor. In conclusion, GPT-4 demonstrated promising differential diagnosis results in a dataset of 2,185 rare disease patients, encompassing 421 different rare diseases. For a comparison of GPT-4 with human physicians, see Section 5.2.

5 ANALYSIS AND DISCUSSION

This section provides a detailed explanation of the performance enhancement for LLMs in Task 4 (differential diagnosis among universal rare diseases) using the dynamic few-shot prompt method based on the knowledge graph. Additionally, it includes a comparison and thorough analysis and discussion of the diagnostic capabilities in rare diseases between doctors from PUMCH and LLMs, using a high-quality subset of the PUMCH dataset.

5.1 Knowledge Integration Dynamic Few-shot

Figure 4: Performance of six LLMs in rare disease differential diagnosis under zero-shot, random 3-shot, and dynamic 3-shot prompts (using GPT-4 zero-shot as a baseline).

In our research, the IC value-based random walk algorithm we developed is employed to produce embeddings for Human Phenotype Ontology (HPO) phenotype nodes within our comprehensively integrated rare disease knowledge graph. These embeddings are subsequently utilized in dynamic few-shot settings. Our experimental evaluation, involving 2,185 rare disease patients and conducted on six LLMs, including GLM4 [7, 60], is benchmarked against the zero-shot performance of GPT-4. The results are illustrated in Figure 4, with key findings summarized below:

• A holistic analysis reveals that dynamic 3-shot settings significantly enhance the performance of the six LLMs compared to their zero-shot capabilities. On average, there is a notable 108% increase in top-1 recall and substantial improvements of 80% and 58% in top-3 and top-10 recalls, respectively.
• With dynamic 3-shot, GLM4 slightly outperforms GPT-4's 0-shot level in top-1 recall. In addition, models with fewer than 10 billion parameters, like Llama2-7B, achieve performance on par with the 0-shot level of more advanced models, such as Gemini Pro and GLM4.
• Our ablation study demonstrates an average enhancement of 23% in top-1 recall across the six LLMs under random 3-shot settings, with corresponding increases of 20% and 21% in top-3 and top-10 recalls, respectively.
This highlights the effectiveness of our knowledge graph-based dynamic few-shot approach in boosting the capabilities of LLMs, particularly those far behind GPT-4, in the context of rare disease differential diagnosis.

We also compared our approach against MedPrompt [38] and Auto-CoT [62] using 75 high-quality patient cases from PUMCH (employed in our Human-versus-LLMs experiment below) across 6 LLMs. Detailed results are shown in Table 5. Our dynamic few-shot shows superior or comparable performance, notably exceeding GPT-4 and specialist physicians in some cases. However, Auto-CoT underperforms with smaller LLMs like Llama2-7B, highlighting the importance of rationale quality in diagnostics.

Table 5: Comparison of various prompts' performance across 6 LLMs on 75 high-quality patient cases from PUMCH. For each of the six LLMs (Gemini Pro, GLM4, GLM3-Turbo, Mistral-7B, Llama2-7B, and ChatGLM3-6B), hit@1, hit@3, hit@10, and MR are reported under five prompts: zero-shot, random 3-shot, MedPrompt (3-shot), Auto-CoT (3 clusters), and dynamic 3-shot (ours).

5.2 Human versus LLMs Experiment on a Subset of Task 4

5.2.1 Selection of Test Cases and Clinical Specialists. We selected a subset from the PUMCH dataset to compare the diagnostic performance of physicians with various LLMs. The selection criteria included a wide range of diseases across human organ systems, and multiple cases were included for each disease to assess the difficulty level in diagnosis. Ultimately, this test set comprised 75 cases spanning 16 diseases across 5 hospital departments: Cardiology, Hematology, Nephrology, Neurology, and Pediatrics. To ensure the integrity of the diagnostic process, any information within the EHRs that could potentially provide clues about the underlying diseases was removed.

A total of 50 physicians were recruited from 23 Class-A tertiary hospitals in China, ten for each of the 5 departments. To minimize diagnostic result errors, we assigned each case to be diagnosed by 4 physicians. In the diagnostic process, clinical specialists rely initially on their knowledge and experience to diagnose cases. For situations where the diagnosis is uncertain, clinical specialists can use external assistance such as books and web tools.

5.2.2 Results. We initially investigated two input approaches in our comparative experiments. One involved feature-based input, where the patient's personal phenotype information was provided to inquire about LLMs' diagnostic results.
The other approach involved entering the patient's EHR text information with all personally identifiable information removed. We compared the diagnostic outcomes of the two input forms on GPT-4 using the test set of the aforementioned 75 PUMCH cases; in contrast, specialist physicians rely solely on EHR text for diagnosis, as shown in Table 6. When utilizing phenotype input, GPT-4 achieves a top-1 recall of 0.520, a top-3 recall of 0.747, a top-10 recall of 0.827, and a median rank of 1.0. Conversely, EHR text input with GPT-4 results in a top-1 recall of 0.453, a top-3 recall of 0.693, a top-10 recall of 0.800, and a median rank of 2.0. Using extracted phenotypes significantly reduces the number of input tokens for LLMs and marginally outperforms the direct use of medical record text in differential diagnosis.

Table 6: The differential diagnosis performance of GPT-4 with two different input forms and 50 physicians with and without assistance on 75 PUMCH cases.

                            hit@1 (%)   hit@3 (%)   hit@10 (%)   MR (↓)
GPT-4 (Phenotypes)          52.0        74.7        82.7         1.0
GPT-4 (EHR text)            45.3        69.3        80.0         2.0
Physicians w/o assistance   40.7        46.8        48.1         -
Physicians w/ assistance    44.7        51.1        52.4         -

This enhancement stems from the extracted phenotypes summarizing crucial information about the patient's symptoms while eliminating irrelevant details, rendering this input format more economical and efficient for LLMs. In contrast, physicians' diagnostic performance improved when aided by external assistance: the top-1 recall increases from 0.407 to 0.447, the top-3 recall rises from 0.468 to 0.511, and the top-10 recall increases from 0.481 to 0.524.

Figure 5: Comparison of top-1 and top-10 recalls in rare disease differential diagnosis across 5 departments between specialist physicians with/without assistance and three LLMs.

We also provide a more detailed presentation of GPT-4's performance across diseases related to different medical departments. As shown in Figure 5, GPT-4 achieved the best diagnostic results across all 5 departments, surpassing the level of specialist physicians. In contrast, the diagnostic performance of the other two LLMs was inferior to that of specialist physicians. Moreover, there were significant differences in diagnostic results among different departments. For example, all methods achieved high recall rates in pediatrics, possibly due to the comprehensive information available in pediatric cases. Conversely, all methods performed poorly in the cases from the Department of Cardiology, which was speculated to
This suggests a future direc- tion that involves utilizing the large multimodal models (LMMs) to incorporate more comprehensive clinical information for diagnosis. 5.3 Case Study In the evaluation of phenotype extraction using GPT-4, we observe both positive and negative cases that highlight its capabilities and limitations. 5.3.1 Positive cases. • High Precision in Entity Extraction: GPT-4 adeptly avoids extracting text unrelated to the disease, such as hospital or medication details, demonstrating its discriminative capabil- ities. • Recognition of Negated Phenotypes: GPT-4 accurately identifies instances where a phenotype is negated, such as correctly omitting "fever" from statements like "no fever." This indicates that GPT-4’s entity extraction transcends sim- ple keyword matching, reflecting a sophisticated compre- hension of medical record texts. 5.3.2 Negative Cases. • Challenges in Identifying Entities with Descriptions: GPT-4 occasionally struggles to capture entities that include descriptive details comprehensively. For example, it reduces "Shortness of breath after walking up three flights of stairs" to merely "shortness of breath," omitting the crucial context that specifies the phenotype. • Inconsistencies in Identifying Laboratory Test Results: GPT-4 exhibits limited accuracy in detecting abnormal labo- ratory test results. For instance, it failed to recognize several abnormal lab results from a patient’s medical record, includ- ing "𝐾 3.26𝑚𝑚𝑜𝑙/𝐿 ↓," resulting in inaccurate extraction of related phenotypes. 6 CONCLUSION This paper delivers RareBench, an innovative benchmark frame- work aimed at systematically assessing Large Language Models’ (LLMs) capability in rare disease diagnosis. RareBench combines public datasets with the comprehensive datasets from PUMCH to create the most extensive collection of rare disease patient data available. Leveraging an Information-Content-based random walk algorithm and a rare disease knowledge graph, we introduce a dynamic few-shot prompt, significantly enhancing LLMs’ diagnos- tic capabilities. Notably, our comparative analysis with specialist physicians demonstrates GPT-4’s superior accuracy in some specific rare disease diagnoses. We expect RareBench to catalyze further ad- vancements and applications of LLMs in tackling the complexities of clinical diagnosis, especially for rare diseases. 7 LIMITATIONS AND POTENTIAL BIASES Transparency and reliability in decision-making and diagnosis are paramount in medical practice. When utilizing large language mod- els (LLMs) in healthcare, several considerations arise, particularly regarding interpretability, reliability, and the risk of perpetuating existing biases within the models. Firstly, the inner workings of LLMs are often highly complex, non-transparent, and lacking inter- pretability, which may cause clinicians and patients to hesitate in trusting the model’s decisions. Secondly, AI applications in health- care must demonstrate high reliability and accuracy. However, due to the limitations of training data, most AI models, including LLMs, may exhibit significant prediction errors and biases, including the diagnosis of rare diseases. Additionally, AI models must justify their ability to generalize to new environments; otherwise, they may result in suboptimal performance in actual clinical settings. LLMs may unintentionally adopt biases in their training datasets, subsequently influencing predictions. These biases are frequently derived from imbalances within medical data. 
This issue is particularly acute in the context of rare diseases, where data imbalances are more pronounced due to the scarcity and uneven distribution of cases. Therefore, ongoing monitoring and refinement of the model's training processes are essential to continually identify and correct biases that may emerge as the model evolves.

8 ETHICAL CONSIDERATION

Deploying LLMs in clinical settings involves several ethical considerations, including the safety, privacy, and equity of patients. Patient data is extremely sensitive. Therefore, using LLMs in such settings necessitates stringent adherence to health information privacy regulations. Although RareBench demonstrates some promising results for LLMs in diagnosing certain rare diseases, it is essential to emphasize that LLMs should currently serve only as supplementary tools due to issues such as hallucinations. For specific diagnostic decisions, please fully heed the guidance of professional medical practitioners. This study was approved by the Ethics Committees at Peking Union Medical College Hospital, Peking Union Medical College, and the Chinese Academy of Medical Sciences (approval number S-K2051). It is worth noting that all cases were monitored by doctors from Peking Union Medical College Hospital prior to uploading text information, ensuring the absence of any potential personal information leaks. Xuanzhong Chen, Xiaohao Mao, Qihan Guo, and Ting Chen affirm that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as originally planned (and, if relevant, registered) have been explained.

ACKNOWLEDGMENTS

This work is supported by the National Key R&D Program of China (2021YFF1201303, 2022YFC2703103), Guoqiang Institute of Tsinghua University, and Beijing National Research Center for Information Science and Technology (BNRist). The funders had no roles in study design, data collection and analysis, the decision to publish, and manuscript preparation.

REFERENCES
[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023).
[2] Ségolène Aymé. 2003. Orphanet, an information site on rare diseases. Soins; la revue de référence infirmière 672 (2003), 46–47.
[3] Yan Cai, Linlin Wang, Ye Wang, Gerard de Melo, Ya Zhang, Yanfeng Wang, and Liang He. 2023. MedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models. arXiv:2312.12806 [cs.CL]
[4] Junying Chen, Xidong Wang, Anningzhe Gao, Feng Jiang, Shunian Chen, Hongbo Zhang, Dingjie Song, Wenya Xie, Chuyi Kong, Jianquan Li, Xiang Wan, Haizhou Li, and Benyou Wang. 2023. HuatuoGPT-II, One-stage Training for Medical Adaption of LLMs. arXiv:2311.09774 [cs.CL]
[5] Thomas M Cover. 1999. Elements of information theory. John Wiley & Sons.
[6] Cole A Deisseroth, Johannes Birgmeier, Ethan E Bodle, Jennefer N Kohler, Dena R Matalon, Yelena Nazarenko, Casie A Genetti, Catherine A Brownstein, Klaus Schmitz-Abe, Kelly Schoch, et al. 2019. ClinPhen extracts and prioritizes patient phenotypes directly from medical records to expedite genetic disease diagnosis. Genetics in Medicine 21, 7 (2019), 1585–1593.
[7] Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang.
2022. GLM: General Language Model Pretraining with Autoregressive Blank Infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 320–335.
[8] William RH Evans. 2023. Dare to think rare. Diagnostic delay and rare diseases. (2023).
[9] Arsene Fansi Tchango, Rishab Goel, Zhi Wen, Julien Martel, and Joumana Ghosn. 2022. DDXPlus: A new dataset for automatic medical diagnosis. Advances in Neural Information Processing Systems 35 (2022), 31306–31318.
[10] Yuhao Feng, Lei Qi, and Weidong Tian. 2022. PhenoBERT: a combined deep learning method for automated recognition of human phenotype ontology. IEEE/ACM Transactions on Computational Biology and Bioinformatics 20, 2 (2022), 1269–1277.
[11] Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 855–864.
[12] Melissa Haendel, Nicole Vasilevsky, Deepak Unni, Cristian Bologa, Nomi Harris, Heidi Rehm, Ada Hamosh, Gareth Baynam, Tudor Groza, Julie McMurry, et al. 2020. How many rare diseases are there? Nature Reviews Drug Discovery 19, 2 (2020), 77–78.
[13] Ada Hamosh, Alan F Scott, Joanna S Amberger, Carol A Bocchini, and Victor A McKusick. 2005. Online Mendelian Inheritance in Man (OMIM), a knowledgebase of human genes and genetic disorders. Nucleic Acids Research 33, suppl_1 (2005), D514–D517.
[14] Tianyu Han, Lisa C Adams, Jens-Michalis Papaioannou, Paul Grundmann, Tom Oberhauser, Alexander Löser, Daniel Truhn, and Keno K Bressem. 2023. MedAlpaca: An Open-Source Collection of Medical Conversational AI Models and Training Data. arXiv preprint arXiv:2304.08247 (2023).
[15] Jiangjiang He, Mi Tang, Xueyan Zhang, Duo Chen, Qi Kang, Yan Yang, Jiahao Hu, Chunlin Jin, and Peipei Song. 2019. Incidence and prevalence of 121 rare diseases in China: Current status and challenges. Intractable & Rare Diseases Research 8, 2 (2019), 89–97.
[16] Jinmeng Jia, Ruiyuan Wang, Zhongxin An, Yongli Guo, Xi Ni, and Tieliu Shi. 2018. RDAD: a machine learning system to support phenotype-based rare disease diagnosis. Frontiers in Genetics 9 (2018), 587.
[17] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7B. arXiv preprint arXiv:2310.06825 (2023).
[18] Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2021. What disease does this patient have? A large-scale open domain question answering dataset from medical exams. Applied Sciences 11, 14 (2021), 6421.
[19] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. PubMedQA: A Dataset for Biomedical Research Question Answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2567–2577.
[20] Sebastian Köhler, Michael Gargano, Nicolas Matentzoglu, Leigh C Carmody, David Lewis-Smith, Nicole A Vasilevsky, Daniel Danis, Ganna Balagura, Gareth Baynam, Amy M Brower, et al. 2021. The human phenotype ontology in 2021. Nucleic Acids Research 49, D1 (2021), D1207–D1217.
[21] Sebastian Köhler, Marcel H Schulz, Peter Krawitz, Sebastian Bauer, Sandra Dölken, Claus E Ott, Christine Mundlos, Denise Horn, Stefan Mundlos, and Peter N Robinson. 2009.
Clinical diagnostics in human genetics with semantic similarity searches in ontologies. The American Journal of Human Genetics 85, 4 (2009), 457–464.
[22] Sebastian Köhler, Nicole A Vasilevsky, Mark Engelstad, Erin Foster, Julie McMurry, Ségolène Aymé, Gareth Baynam, Susan M Bello, Cornelius F Boerkoel, Kym M Boycott, et al. 2017. The human phenotype ontology in 2017. Nucleic Acids Research 45, D1 (2017), D865–D876.
[23] Tiffany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, et al. 2023. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health 2, 2 (2023), e0000198.
[24] Taeyoon Kwon, Kai Tzu-iunn Ong, Dongjin Kang, Seungjun Moon, Jeong Ryong Lee, Dosik Hwang, Yongsik Sim, Beomseok Sohn, Dongha Lee, and Jinyoung Yeo. 2023. Large Language Models are Clinical Reasoners: Reasoning-Aware Diagnosis Framework with Prompt-Generated Rationales. arXiv preprint arXiv:2312.07399 (2023).
[25] Yanis Labrak, Adrien Bazoge, Emmanuel Morin, Pierre-Antoine Gourraud, Mickael Rouvier, and Richard Dufour. 2024. BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains. arXiv:2402.10373 [cs.CL]
[26] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems 33 (2020), 9459–9474.
[27] Hanzhou Li, John T Moon, Saptarshi Purkayastha, Leo Anthony Celi, Hari Trivedi, and Judy W Gichoya. 2023. Ethics of large language models in medicine and medical research. The Lancet Digital Health 5, 6 (2023), e333–e335.
[28] Qigang Li, Keyan Zhao, Carlos D Bustamante, Xin Ma, and Wing H Wong. 2019. Xrare: a machine learning method jointly modeling phenotypes and genetic evidence for rare disease diagnosis. Genetics in Medicine 21, 9 (2019), 2126–2134.
[29] Cong Liu, Fabricio Sampaio Peres Kury, Ziran Li, Casey Ta, Kai Wang, and Chunhua Weng. 2019. Doc2Hpo: a web application for efficient and accurate HPO concept curation. Nucleic Acids Research 47, W1 (2019), W566–W570.
[30] Junling Liu, Peilin Zhou, Yining Hua, Dading Chong, Zhongyu Tian, Andrew Liu, Helin Wang, Chenyu You, Zhenhua Guo, Lei Zhu, et al. 2023. Benchmarking Large Language Models on CMExam: A Comprehensive Chinese Medical Exam Dataset. arXiv preprint arXiv:2306.03030 (2023).
[31] Qianchu Liu, Stephanie Hyland, Shruthi Bannur, Kenza Bouzid, Daniel Castro, Maria Wetscherek, Robert Tinn, Harshita Sharma, Fernando Pérez-García, Anton Schwaighofer, et al. 2023. Exploring the Boundaries of GPT-4 in Radiology. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 14414–14445.
[32] Ling Luo, Shankai Yan, Po-Ting Lai, Daniel Veltri, Andrew Oler, Sandhya Xirasagar, Rajarshi Ghosh, Morgan Similuk, Peter N Robinson, and Zhiyong Lu. 2021. PhenoTagger: a hybrid method for phenotype concept recognition using human phenotype ontology. Bioinformatics 37, 13 (2021), 1884–1890.
[33] Shruti Marwaha, Joshua W Knowles, and Euan A Ashley. 2022. A guide for the diagnosis of rare and undiagnosed disease: beyond the exome. Genome Medicine 14, 1 (2022), 1–22.
[34] Daniel McDuff, Mike Schaekermann, Tao Tu, Anil Palepu, Amy Wang, Jake Garrison, Karan Singhal, Yash Sharma, Shekoofeh Azizi, Kavita Kulkarni, et al. 2023.
Towards accurate differential diagnosis with large language models. arXiv preprint arXiv:2312.00164 (2023).
[35] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems 26 (2013).
[36] Michael Moor, Oishi Banerjee, Zahra Shakeri Hossein Abad, Harlan M Krumholz, Jure Leskovec, Eric J Topol, and Pranav Rajpurkar. 2023. Foundation models for generalist medical artificial intelligence. Nature 616, 7956 (2023), 259–265.
[37] Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric Horvitz. 2023. Capabilities of GPT-4 on medical challenge problems. arXiv preprint arXiv:2303.13375 (2023).
[38] Harsha Nori, Yin Tat Lee, Sheng Zhang, Dean Carignan, Richard Edgar, Nicolo Fusi, Nicholas King, Jonathan Larson, Yuanzhi Li, Weishung Liu, et al. 2023. Can generalist foundation models outcompete special-purpose tuning? Case study in medicine. arXiv preprint arXiv:2311.16452 (2023).
[39] OpenAI. 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt
[40] OpenAI. 2022. New and improved embedding model. https://openai.com/blog/new-and-improved-embedding-model
[41] Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. 2022. MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning. PMLR, 248–260.
[42] Jiajie Peng, Hansheng Xue, Yukai Shao, Xuequn Shang, Yadong Wang, and Jin Chen. 2016. Measuring phenotype semantic similarity using human phenotype ontology. In 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 763–766.
[43] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 701–710.
[44] Anthony A Philippakis, Danielle R Azzariti, Sergi Beltran, Anthony J Brookes, Catherine A Brownstein, Michael Brudno, Han G Brunner, Orion J Buske, Knox Carey, Cassie Doll, et al. 2015. The Matchmaker Exchange: a platform for rare disease gene discovery. Human Mutation 36, 10 (2015), 915–921.
[45] Marc Pinol, Rui Alves, Ivan Teixido, Jordi Mateo, Francesc Solsona, and Ester Vilaprinyó. 2017. Rare disease discovery: An optimized disease ranking system. IEEE Transactions on Industrial Informatics 13, 3 (2017), 1184–1192.
[46] Peter N Robinson, Vida Ravanmehr, Julius OB Jacobsen, Daniel Danis, Xingmin Aaron Zhang, Leigh C Carmody, Michael A Gargano, Courtney L Thaxton, Guy Karlebach, Justin Reese, et al. 2020. Interpretable clinical genomics with a likelihood ratio paradigm. The American Journal of Human Genetics 107, 3 (2020), 403–417.
[47] Simon Ronicke, Martin C Hirsch, Ewelina Türk, Katharina Larionov, Daphne Tientcheu, and Annette D Wagner. 2019. Can a decision support system accelerate rare disease diagnosis? Evaluating the potential impact of Ada DX in a retrospective study. Orphanet Journal of Rare Diseases 14 (2019), 1–12.
[48] Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2023. Large language models encode clinical knowledge. Nature 620, 7972 (2023), 172–180.
[49] Jung Hoon Son, Gangcai Xie, Chi Yuan, Lyudmila Ena, Ziran Li, Andrew Gold- stein, Lulin Huang, Liwei Wang, Feichen Shen, Hongfang Liu, et al. 2018. Deep phenotyping on electronic health records facilitates genetic diagnosis by clinical exomes. The American Journal of Human Genetics 103, 1 (2018), 58–73. [50] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805 (2023). [51] Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. 2023. Large language models in medicine. Nature medicine 29, 8 (2023), 1930–1940. [52] Thoralf Töpel, Dagmar Scheible, Friedrich Trefz, and Ralf Hofestädt. 2010. RAMEDIS: a comprehensive information system for variations and corresponding phenotypes of rare metabolic diseases. Human mutation 31, 1 (2010), E1081– E1088. [53] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yas- mine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhos- ale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023). [54] Tao Tu, Anil Palepu, Mike Schaekermann, Khaled Saab, Jan Freyberg, Ryutaro Tanno, Amy Wang, Brenna Li, Mohamed Amin, Nenad Tomasev, et al. 2024. Towards Conversational Diagnostic AI. arXiv preprint arXiv:2401.05654 (2024). [55] Akhil Vaid, Isotta Landi, Girish Nadkarni, and Ismail Nabeel. 2023. Using fine- tuned large language models to parse clinical notes in musculoskeletal pain disorders. The Lancet Digital Health 5, 12 (2023), e855–e858. [56] Sheng Wang, Zihao Zhao, Xi Ouyang, Qian Wang, and Dinggang Shen. 2023. Chatcad: Interactive computer-aided diagnosis on medical image using large language models. arXiv preprint arXiv:2302.07257 (2023). [57] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35 (2022), 24824–24837. [58] Jingye Yang, Cong Liu, Wendy Deng, Da Wu, Chunhua Weng, Yunyun Zhou, and Kai Wang. 2023. Enhancing phenotype recognition in clinical notes using large language models: PhenoBCBERT and PhenoGPT. Patterns (2023). [59] Xi Yang, Aokun Chen, Nima PourNejatian, Hoo Chang Shin, Kaleb E Smith, Christopher Parisien, Colin Compas, Cheryl Martin, Anthony B Costa, Mona G Flores, et al. 2022. A large language model for electronic health records. NPJ Digital Medicine 5, 1 (2022), 194. [60] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414 (2022). [61] Weiqi Zhai, Xiaodi Huang, Nan Shen, and Shanfeng Zhu. 2023. Phen2Disease: a phenotype-driven model for disease and gene prioritization by bidirectional maximum matching semantic similarities. Briefings in Bioinformatics (2023), bbad172. [62] Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2022. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493 (2022). [63] Mengge Zhao, James M Havrilla, Li Fang, Ying Chen, Jacqueline Peng, Cong Liu, Chao Wu, Mahdi Sarmady, Pablo Botas, Julián Isla, et al. 2020. Phen2Gene: rapid phenotype-driven gene prioritization for rare diseases. 
NAR genomics and Bioinformatics 2, 2 (2020), lqaa032. A APPENDIX A.1 Dataset Details Four publicly available datasets are used in this study: MME[44], LIRICAL[46], HMS[47], and RAMEDIS[52]. These datasets are sourced from published articles or open-access datasets. Quality control measures were implemented to filter out cases with insufficient information. Following screening procedures, the MME1 dataset comprises 40 cases across 17 diseases, the LIRICAL2 dataset en- compasses 370 cases spanning 252 diseases, the HMS dataset en- compasses 88 cases covering 39 diseases, and the RAMEDIS dataset consists of 624 cases spanning 63 diseases. A.2 Framework Settings The implementation details of the models listed in Table 2 are as follows: Both GPT-4 and GPT-3.5 are accessed through OpenAI’s API 3. The models used are "gpt-4-1106-preview" and "gpt-3.5-turbo- 1106" respectively. For the two models, the seed parameter is set to 42, while all other parameters are left at their default settings. Both GLM4 and GLM3 are accessed through Zhipu AI’s API 4. The models used are "glm-4" and "glm-3-turbo" respectively. In the model parameter settings, temperature is set to 0.15 and top_p to 0.7; all other parameters are maintained at their default values. Gemini is accessed through DeepMind’s API 5. The model used is "gemini-pro", and the parameters use default settings. A.3 Prompt Examples A.3.1 Task 1: Phenotype Extraction from Electronic Health Records. Sub-task 1: Phenotype Extraction [Task] Extract disease phenotypes from the following medical record text. [Requirement] (1) Output format: One phenotype per line, the format is [index]. [English name], such as "1. Cough", "2. Pain". (2) Answer according to the standard English terminology in the HPO database (https://hpo.jax.org/app/). Do not use colloquialisms, and try to be as concise as possible. It is not a restatement of the original words, but a refinement of the phenotype. (3) Extract all phenotypes appearing in the text without omitting any. output as much as possible. (4) For symptoms that are denied in the text, such as "no chest pain," and "no cough," do not extract the corresponding phenotypes. [Medical Record Text] The patient developed shortness of breath accompanied by chest tightness and facial edema about 3 months ago after physical exertion, with the eyelid edema being the most severe. The shortness of breath was most severe after physical activity and improved after rest. He was treated at a local hospital, and on December 1, 2013, the cardiac function showed by color echocardiography: tricuspid valve disease. Then he went to our hospital. Ultrasound showed myocardial disease, right atrium and right ventricle enlargement, moderate to severe tricuspid valve insufficiency, severe reduction in left and right ventricular systolic function, aortic valve degeneration, and a small amount of pericardial effusion. The patient was admitted to our hospital for outpatient consideration of tricuspid valve disease and is now admitted to our department for further diagnosis and treatment. Since the onset of the disease, the patient’s energy, diet, and sleep have been acceptable, and his bowel movements have been normal. There is no significant change in weight from before. No dry mouth, dry eyes, mouth ulcers, joint swelling and pain, rash, etc. 
Sub-task 2: General Entity Extraction 1https://github.com/ga4gh/mme-apis/tree/master/testing 2https:// zenodo.org/record/3905420 3https://platform.openai.com/docs/introduction 4https://open.bigmodel.cn/ 5https://deepmind.google/technologies/gemini/#introduction KDD ’24, August 25–29, 2024, Barcelona, Spain Xuanzhong Chen et al. [Task] For the medical record text provided below, mark the text that represents the disease entity. [Requirement] (1) Output format: One disease entity per line. The format is [Index]. [Original text]. For example, "1. Fever", "2. Body temperature 39℃". (2) Output in the order in which it appears in the text; do not omit anything, and output as much as possible. (3) For symptoms that are denied in the text, such as "no chest pain," and "no cough," do not extract the corresponding entities. [Medical Record Text] · · · Sub-task3: General Entity Standardization [Task] Standardize the following medical entities to Human Phenotype Ontology pheno- types. [Requirement] (1) Answer according to the standard English terminology in the HPO database (https://hpo.jax.org/app/). Do not be colloquial, and try to be as formal, standardized, and concise as possible. (2) Output format: Each line contains one medical entity and its corresponding HPO pheno- type. The format is [index]. [Entity name]: [English name of phenotype]. For example, "1. Body temperature 39℃: Fever", "2. Excessive urine output: Polyuria". [Entity list] Chest tightness ST segment elevation Activity tolerance gradually decreases ventricular fibrillation Aspen syndrome sore throat J-point elevation greater than 0.2mv and saddle-like elevation Shortness of breath pneumonia cough A.3.2 Task 2: Screening for Specific Rare Diseases. Zero-shot prompt As an expert in the field of rare diseases, you are tasked with diagnosing a real clinical case. Please carefully review the patient’s basic medical history, specialized examinations, and auxiliary tests to determine whether the patient may be suffering from [SCREENING DISEASE]. CoT prompt [Zero-shot prompt], Let us think step by step. A.3.3 Task 3: Comparison Analysis of Common and Rare Diseases. Zero-shot prompt Please, as a rare disease specialist, answer the following questions. [EHRs] is the patient’s admitted record, including chief complaint, present medical history, etc. [77 CANDIDATE DISEASES] are the types of diseases that the patient may suffer from. The top 10 diagnosed diseases are selected from highest to lowest probability. CoT prompt [Zero-shot prompt], Let us think step by step. A.3.4 Task 4: Differential Diagnosis among Universal Rare Diseases. Here, we present the specific configurations for zero-shot, Chain of Thought (CoT), and (dynamic) few-shot settings. System prompt You are a specialist in the field of rare diseases. You will be provided and asked about a complicated clinical case; read it carefully and then provide a diverse and comprehensive differential diagnosis. Zero-shot prompt This rare disease patient suffers from symptoms: [PA- TIENT_PHENOTYPE]. Enumerate the top 10 most likely diagnoses. Be precise, listing one diagnosis per line, and try to cover many unique possibilities (at least 10). The top 10 diagnoses are: CoT prompt [Zero-shot prompt]. Let us think step by step. (Dynamic) Few-shot prompt Let me give you [K] examples first: The [i] patient has a rare disease [EXAMPLE_DIAGNOSIS], and his/her [PHENOTYPE or EHR] is as follows: [EXAMPLE_PHENOTYPE or EXAMPLE_EHR]. Next is the patient case you need to diagnose: [Zero-shot prompt]. 
A.4 Knowledge Integration Dynamic Few-shot A.4.1 Knowledge Bases Details. The Human Phenotype Ontology (HPO) 6 provides a standardized vocabulary for phenotypic abnor- malities encountered in human disease. These phenotype terms form a directed acyclic graph (DAG) and are connected by directed "IS_A" edges (denoting subclass relationships). The files used in this study are "hp.obo" and "phenotype.hpoa", with the version being hp/releases/2023-06-06. Orphanet 7, funded by the French Ministry of Health, is a non- profit online resource and information platform focusing on rare diseases. It offers information on over 6,000 rare diseases, written by medical experts and researchers and subjected to rigorous quality review. This information includes etiology, symptoms, diagnostic methods, treatment options, and prognosis. Our work primarily utilizes the annotation information on rare diseases and genes from this platform. Online Mendelian Inheritance in Man (OMIM) 8 is a compre- hensive database of genes and genetic diseases, collecting and or- ganizing extensive information about human genetic disorders. It includes descriptions of over 20,000 genetic diseases, covering genetic etiology, clinical manifestation, inheritance patterns, molec- ular mechanisms, and related literature. The National Rare Disease Registry System of China (NRDRS) 9, overseen by Peking Union Medical College Hospital, is a national- level online registry platform for rare diseases, aimed at establish- ing unified rare disease registration standards and norms. The first list of the Compendium of China’s Rare Disease (CCRD) compiles detailed information on 144 rare diseases. The annotations of rela- tionships between diseases and phenotypes are manually extracted. The version used in our study is 2019-11. A.4.2 Comparison Analysis of Node2vec and IC value-based ran- dom walk. In our integrated rare disease knowledge graph, we implement an IC value-based random walk algorithm to obtain embeddings for phenotype and disease nodes within the graph. While Node2Vec utilizes two parameters, 𝑝 and 𝑞, to control the walking probability, we dynamically adjust this based on the IC values of different phenotypes. The settings for other parameters are as follows: ’embedding_dim’: 256, ’walk_length’: 45, ’context_size’: 35, ’walks_per_node’: 40, ’num_negative_samples’: 1, ’sparse’: True, ’loader_batch_size’: 256, ’loader_shuffle’: True, ’loader_num_workers’: 4, ’learning_rate’: 0.01, ’epoch_nums’: 36 6https://hpo.jax.org/app/ 7https://www.orpha.net/consor/cgi-bin/index.php 8https://www.omim.org/ 9https://www.nrdrs.org.cn/xhrareweb/homeIndex
ai_researcher
2
One_by_One_Continual_Coordinating_with_Humans_via_Hyper-Teammate_Identification.pdf
7 1 0 2 b e F 3 2 ] A C . h t a m [ 1 v 1 0 2 7 0 . 2 0 7 1 : v i X r a BOUNDEDNESS OF SINGULAR INTEGRALS ON THE FLAG HARDY SPACES ON HEISENBERG GROUP GUORONG HU AND JI LI Abstract. We prove that the classical one-parameter convolution singular integrals on the Heisenberg group are bounded on multiparameter flag Hardy spaces, which satisfy ‘intermediate’ dilation between the one-parameter anisotropic dilation and the product dilation on C n × R implicitly. 1. Introduction and statement of main results The purpose of this note is to show that the classical one-parameter convolution singular integrals on the Heisenberg group are bounded on multiparameter flag Hardy spaces. Recall that the Heisenberg Hn is the Lie group with underlying manifold Cn × R = {[z, t] : z ∈ Cn, t ∈ R} and multiplication law [z, t] ◦ [z′, t′] = [z1, · · · , zn, t] ◦ [z′ 1, · · · , z′ n, t′] := z1 + z′ 1, · · · , zn + z′ n, t + t′ + 2Im n zj ¯zj . (cid:1)i The identity of Hn is the origin and the inverse is given by [z, t]−1 = [−z, −t]. Hereafter we agree to identify Cn with R2n and to use the following notation to denote the points of Cn × R ≡ R2n+1: g = [z, t] ≡ [x, y, t] = [x1, · · · , xn, y1, · · · , yn, t] with z = [z1, · · · , zn], zj = xj + iyj and xj, yj, t ∈ R for j = 1, . . . , n. Then, the composition law ◦ can be explicitly written as h (cid:0) Xj=1 g ◦ g′ = [x, y, t] ◦ [x′, y′, t′] = [x + x′, y + y′, t + t′ + 2hy, x′i − 2hx, y′i], where h·, ·i denotes the usual inner product in Rn Consider the dilations δr : Hn → Hn, δr(g) = δr([z, t]) = [rz, r2t]. A trivial computation shows that δr is an automorphism of Hn for every r > 0. Define a “norm” function ρ on Hn by ρ(g) = ρ([z, t]) := max{|z|, |t|1/2}. It is easy to see that ρ(g−1) = ρ(−g) = ρ(g), ρ(δr(g)) = rρ(g), ρ(g) = 0 if and only if g = 0, and ρ(g ◦ g′) ≤ γ(ρ(g) + ρ(g′)), where γ > 1 is a constant. The Haar measure on Hn is known to just coincide with the Lebesgue measure on R2n+1. For any measurable set E ⊂ Hn, we denote by |E| its (Harr) measure. The vector fields T := ∂ ∂t , Xj := ∂ ∂xj − 2yj ∂ ∂t , Yj := ∂ ∂yj + 2xj ∂ ∂t , j = 1, · · · , n, form a natural basis for the Lie algebra of left-invariant vector fields on Hn. For convenience Xj, j = 1, · · · , 2n + 1, we set Xn+j := Yj for j = 1, 2, · · · , n, and set X2n+1 := T . Denote by the right-invariant vector field which coincides with Xj at the origin. Let N be the set of Date: September 23, 2018. 2010 Mathematics Subject Classification. 42B30, 43A80, 42B25, 42B20. Key words and phrases. Discrete Littlewood–Paley analysis, Heisenberg group, flag Hardy spaces, singular e integrals. 1 2 GUORONG HU AND JI LI all non-negative integers. For any multi-index I = (i1, · · · , i2n+1) ∈ N2n+1, we set X I := X i1 2n+1 . It is well known that ([9]) 2 · · · X i2n+1 1 X i2 2 · · · X i1 1 X I := 2n+1 and X I (f1 ∗ f2) = f1 ∗ (X I f2), e e X i2n+1 X i2 X I (f1 ∗ f2) = ( e e X I f1) ∗ f2, (X I f1) ∗ f2 = f1 ∗ ( X I f2), e and e X I ˜f = (−1)|I| where ˜f is given by ˜f (g) := f (g−1). We further set |I| := i1 + · · · + i2n+1 and e X I f , ge d(I) := i1 + · · · + i2n + 2i2n+1. e X I . X I , while d(I) is said Then |I| is said to be the order of the differential operators X I and to be the homogeneous degree of X I and Definition 1.1 ([17]). A function φ is called a normalized bump function on Hn if φ is supported in the unit ball {g = [z, t] ∈ Hn : ρ(g) ≤ 1} and (1.1) uniformly for all multi-indices I ∈ N2n+1 with |I| ≤ N , for some fixed positive integer N . Remark 1.2. 
The condition (1.1) is equivalent (module a constant) to the following one: |X I φ(g)| ≤ 1 for all multi-indices I with |I| ≤ N . Indeed, this follows from the following the homogeneous property of the “norm” ρ and the fact that z,tφ(z, t)| ≤ 1 (1.2) |∂I e X I f (g) = PIJ (g)(∂J z,tf )(g) X|J|≤|I|, d(J)≥d(I) (∂I z,tf )(g) = QIJ (g)(X J f )(g), X|J|≤|I|, d(J)≥d(I) where PIJ , QIJ are polynomials of homogeneous degree d(J) − d(I) (see [9]) . We assume that K is a distribution on Hn that agrees with a function K(g), g = [z, t] 6= [0, 0], and satisfies the following regularity conditions: (1.3) |K(g)| ≤ Cρ(g)−2n−2, |∇zK(g)| ≤ Cρ(g)−2n−3, | ∂ ∂t K(g)| ≤ Cρ(g)−2n−4, and the cancellation condition |K(φr)| ≤ C (1.4) for all normalized bump function φ and for all r > 0, where φr(g) = φ(δr(g)). It is well known that the classical one-parameter convolution singular integral T defined by T (f ) = f ∗ K is bounded on Lp, 1 < p < ∞, and on the classical Hardy spaces on the Heisenberg group H p(Hn) for p ∈ (p0, 1]. See [9] and [17] for more details and proofs. M¨uller, Ricci and Stein ([13], [14]) proved that Marcinkiewicz multipliers are Lp bounded for 1 < p < ∞ on the Heisenberg group Hn. This is surprising since these multipliers are invariant under a two parameter group of dilations on Cn × R, while there is no two parameter group of automorphic dilations on Hn. Moreover, they show that Marcinkiewicz multiplier can be characterized by convolution operator with the form f ∗K where, however, K is a flag kernel. At the endpoint estimates, it is natural to expect that Hardy space and BMO bounds are available. However, the lack of automorphic dilations underlies the failure of such multipliers to be in general bounded on the classical Hardy space H 1 and also precludes a pure product Hardy space theory on the Heisenberg group. This was the original motivation in [11] (see also [12]) to develop a theory of flag Hardy spaces H p f lag on the Heisenberg group, 0 < p ≤ 1, that is in a sense ‘intermediate’ between the classical BOUNDEDNESS OF SINGULAR INTEGRALS ON Hn 3 Hardy spaces H p(Hn) and the product Hardy spaces H p product(Cn × R) (A. Chang and R. Fefferman ([1], [2], [6], [7], [8]). They show that singular integrals with flag kernels, which include the aforementioned Marcinkiewicz multipliers, are bounded on H p f lag, as well as from H p f lag to Lp, for 0 < p ≤ 1. Moreover, they construct a singular integral with a flag kernel on the Heisenberg group, which is not bounded on the classical Hardy spaces H 1(Hn). Since, as pointed out in [11, 12], the flag Hardy space H p f lag(Hn) is contained in the classical Hardy space H p(Hn), this counterexample implies that H 1 f lag(Hn) $ H 1(Hn). A natural question aries: Is it possible that the classical one-parameter singular integrals on the Heisenberg group are bounded on flag Hardy spaces H p f lag(Hn)? Note that the classical singular integrals on the Heisenberg group satisfy the one-parameter anisotropic dilation as mentioned above. However, the flag Hardy spaces do not satisfy such a dilation, but satisfy ‘intermediate’ dilation between the one-parameter anisotropic dila- tion and the product dilation on Cn × R implicitly. We would like to point out that Nagel, Ricci and Stein [15] introduced a class of singular integrals with flag kernels on the Eu- clidian space. 
They also pointed that singular integrals with flag kernels on the Euclidian space belong to product singular integrals, see Remark 2.1.7 and Theorem 2.1.11 in [15], where the characterizations in terms of the corresponding multipliers between the flag and product singular integrals are given. See also [16] for singular integrals with flag kernels on homogeneous groups. Recently, in [18] it was proved that the classical Calderon-Zygmund convolution operators on the Euclidean space are bounded on the product Hardy spaces. In this note we address this deficiency by showing that the classical one-parameter con- volution singular integrals on Hn are bounded for flag Hardy spaces on Hn. Before stating the main results in this note, we begin with recalling the Calder´on’s re- f lag(Hn). c (Hn) and all arbitrarily large moments vanish and such that the following producing formula, Littlewood–Paley square function and the flag Hardy space H p Let ψ(1) ∈ C ∞ Calder´on reproducing formula holds: ∞ ∨ (ψ(1) s ) ∗ ψ(1) s ∗ f ds s , f = 0 Z f ∈ L2 (Hn) , where ∗ is Heisenberg convolution, (ψ(1))∨ (ζ) = ψ(1) (ζ −1) and ψ(1) s for s > 0. See Corollary 1 of [10] for the existence of the function ψ(1). (z, u) = s−2n−2ψ(1) z s , u s2 (cid:0) (cid:1) Let ψ(2) ∈ S (R) satisfying ∞ 0 Z for all η ∈ R\{0}. Assume along with the following moment conditions d ψ(2)(tη)|2 dt | t = 1 zαuβψ(1)(z, u)dzdu = 0, |α| + 2β ≤ M, ZHn ZR vγψ(2)(v)dv = 0, γ ≥ 0. Here the positive integer M may be taken arbitrarily large. Thus, we have ∞ ∞ (1.5) f (z, u) = ψs,t ∨ ∗ ψs,t ∗ f (z, u) ds s dt t , 0 Z 0 Z (cid:0) (cid:1) ψs,t(ζ) = ψs,t(ζ −1) for every ζ ∈ Hn, and the series converges in the where f ∈ L2(Hn), L2(Hn) norm. Following [14], a Littlewood–Paley component function ψ is defined on Hn ≃ e 4 GUORONG HU AND JI LI Cn × R by the partial convolution ∗2 in the second variable only: ψ(z, u) = ψ(1) ∗2 ψ(2)(z, u) = ψ(1)(z, u − v)ψ(2)(v)dv, (z, u) ∈ Cn × R, and the function ψs,t(z, u) is given by ZR ψs,t(z, u) = ψ(1) s ∗2 ψ(2) t (z, u) = s (z, u − v)ψ(2) ψ(1) t (v)dv. ZR We now set Q = ψ(1) ψ′ R = ψj,k = ψ(1) ψ′ if Q ∈ Q (j) , j ∗2 ψ(2) j k if R ∈ R (j, k) , where Q ∈ Q (j) are cubes and R ∈ R (j, k) with k < j are rectangles, and Q ≡ Q (j) , [j∈Z and the collection of all strictly vertical dyadic rectangles as Rvert ≡ R (j, k) . [j>k The wavelet Calder´on reproducing formula is then given by the following (Theorem 3 in [11]) (1.6) f (z, u) = fQ ΨQ(z, u) + fR ΨR(z, u), f ∈ MM ′+δ f lag (Hn), where XQ∈Q XR∈Rvert fQ ≡ cα |Q| ψj,k ∗ f (zQ, uQ) , fR ≡ cα |R| ψj,k ∗ f (zR, uR) , for Q ∈ Q (j) and k ≥ j, for R ∈ R (j, k) and k < j, the functions ΨQ and ΨR are in MM ′+δ f lag (Hn) . and Lp (Hn) and the Banach space MM ′+δ MM +δ MM +δ ψ′ R ΨR f lag (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (Hn). f lag (Hn) satisfying ΨQ f lag (Hn) f lag (Hn), and the convergence of the series holds in both (cid:13) (cid:13) f lag (Hn) . MM +δ MM +δ ψ′ Q (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) Based on the above reproducing formula, the wavelet Littlewood–Paley square function is defined by Sf lag(f )(z, u) :=   XQ∈Q (cid:12) (cid:12) ψ′ Q ∗ f (zQ, uQ) 2 χQ (z, u) + ψ′ R ∗ f (zR, uR) 2 χR (z, u) (cid:12) (cid:12) XR∈Rvert (cid:12) (cid:12) (cid:12) (cid:12) where (zQ, uQ) is any fixed point in the cube Q; and (zR, uR) is any fixed point in the rectangle R.   We now recall the precise definition of the flag Hardy spaces. Definition 1.3 ([11, 12]). Let 0 < p < ∞. 
Then for M sufficiently large depending on n and p we define the flag Hardy space H p f lag (Hn) on the Heisenberg group by and for f ∈ H p H p f lag (Hn) := f lag (Hn) we set f ∈ MM +δ f lag (Hn)′ : Sf lag(f ) ∈ Lp (Hn) o , n (1.7) kf kH p f lag := kSf lag(f )kp. 1 2 ,   BOUNDEDNESS OF SINGULAR INTEGRALS ON Hn 5 See [11, 12] for more details about structures of dyadic cubes and strictly vertical rectan- gles, test function space MM +δ f lag (Hn) and its dual MM +δ f lag (Hn)′ . The main results in this note are the following Theorem 1.4. Suppose that K is a distribution kernel on Hn satisfying the regularity condi- tions (1.3) and the cancelation condition (1.4). Then the operator T defined by T (f ) := f ∗K is bounded on H p f lag(Hn) for We remark that the lower bound 4n 4n 4n+1 < p ≤ 1. 4n+1 for p in Theorem 1.4 can be getting smaller if the regularity and cancellation conditions on K are required to be getting higher. We leave these details to the reader. As a consequence of Theorem 1.4 and the duality of H 1 f lag(Hn) with BM Of lag(Hn) as given in [11, 12], we obtain Corollary 1.5. Suppose that K is a distribution kernel on Hn as given in Theorem 1.4. Then the operator T defined by T (f ) := f ∗ K is bounded on BM Of lag(Hn). The main idea to show our results is to apply the discrete Calder´on reproducing formula, almost orthogonal estimates associated with the flag structure and the Fefferman–Stein vector valued maximal function. Notations: Throughout this paper, N will denote the set of all nonnegative integers. For any function f on Hn, we define ˜f (g) = f (g−1) and f ∨(g) = ˜f (g) = f (g−1), g ∈ Hn. If h is a fixed point on Hn, we define the function fh by fh(g) := f (h ◦ g), g ∈ Hn. Finally, if f is a function or distribution on Hn and r > 0, we set Drf (g) = r2n+2f (δr(g)). 2. Proof of Theorem 1.4 Note that it was proved in [11, 12] that L2(Hn) ∩ H p f lag(Hn). To show Theorem 1.4, by the Definition 1.3 of the flag Hardy space, it suffices to prove that there exists a constant C such that for every f ∈ L2(Hn) ∩ H p f lag(Hn) is dense in H p f lag(Hn), (2.1) and (2.2) ψ′ Q ∗ T (f ) (zQ, uQ) 2 χQ (z, u) (cid:13) (cid:26) XQ∈Q (cid:13) (cid:13) (cid:13) (cid:12) (cid:12) (cid:12) (cid:12) 1 2 (cid:27) ψ′ R ∗ T (f ) (zR, uR) 2 χR (z, u) p (cid:13) (cid:13) (cid:13) (cid:13) 1 2 ≤ Ckf kH p f lag(Hn) ≤ Ckf kH p f lag(Hn). p (cid:26) XR∈Rvert (cid:12) (cid:12) (cid:13) (cid:13) (cid:13) (cid:13) (cid:12) (cid:12) (cid:27) (cid:13) (cid:13) (cid:13) (cid:13) To achieve the estimates in (2.1) and (2.2), we need the almost orthogonality estimates and a new version of discrete Calder´on -type reproducing formula. We first give the almost orthogonality estimate as follows. Lemma 2.1. Suppose that ϕ, φ are functions on Hn satisfying that for all g ∈ Hn, ϕ(g)dg = 0, φ(g)dg = 0, ZHn |ϕ(g)|, |φ(g)| ≤ C |∇zϕ(g)|, |∇z φ(g)| ≤ C | ∂ ∂t ϕ(g)|, | ∂ ∂t φ(g)| ≤ C ZHn 1 (1 + ρ(g))2n+3 , 1 (1 + ρ(g))2n+4 , and 1 (1 + ρ(g))2n+5 . 6 GUORONG HU AND JI LI Then for any ε ∈ (0, 1), there is a constant C > 0 such that for all j, j′ ∈ Z, |ϕj ∗ φj′(g)| . 2−|j−j′|ε 2−(j∧j′) (2−(j∧j′) + ρ(g))2n+3 . where ϕj(g) := (D2j ϕ)(g) = 2j(2n+2)ϕ(δ2j (g)). The proof of Lemma 2.1 is routing and we omit the details of the proof. Lemma 2.2. Suppose K is a classical Calder´on–Zygmund kernel and ψ(1) is a smooth function on Hn with support in B(0, 1/100γb) (where γ > 1 is the constant in the quasi- triangle inequality for the “norm”) and b > 1 is the constant in the stratified mean value Hn ψ(1)(g)dg = 0. 
Then for any ε ∈ (0, 1), there is a constant C > 0 such theorem [9]), and that for any 0 < ε < 1 and all j, j′ ∈ Z, R (2.3) |ψ(1) j ∗ K ∗ ψ(1) j′ (g)| . 2−|j−j′|ε 2−(j∧j′) (2−(j∧j′) + ρ(g))2n+3 , where ψ(1) j (g) := (D2j ψ(1))(g) = 2(2n+2)j ψ(δ2j (g)). Proof. We first recall that there is a constant C independent of j such that (2.4) |(D2−j K) ∗ ψ(g)| ≤ C 1 (1 + ρ(g))2n+3 . See [17] for the detail of the proof. Note that we also have (2.5) |ψ(1) ∗ (D2−j K)(g)| . 1 (1 + ρ(g))2n+3 . Indeed, this follows from (2.4), the observation ψ(1) ∗ (D2−j K)(g) = (D2−j and the fact that ψ(1)(g−1), K satisfies the same size, smoothness, and cancellation conditions to K. K) ∗ g Now we can derive (2.3) from (2.4) and (2.5). To see this, we write e e j ∗ K ∗ ψ(1) ψ(1) j′ = (D2j ψ(1)) ∗ K ∗ (D2j′ ψ(1)) = D2j [ψ(1) ∗ (D2−j K)] ∗ (D2j′ ψ(1)) (D2j ψ(1)) ∗ D2j′ [(D2−j′ K) ∗ ψ(1)] ( if j ≥ j′, if j < j′. Thus by Lemma 2.1 we obtain |ψ(1) j ∗ K ∗ ψ(1) 2−(j−j′)ε 2−(j′−j)ε j′ (g)| .    = 2−|j−j′|ε (2−j′ 2−j′ + ρ(g))2n+3 2−j (2−j + ρ(g))2n+3 2−(j∧j′) (2−(j∧j′) + ρ(g))2n+3 if j ≥ j′ if j < j′ , for any ε ∈ (0, 1). The proof of Lemma 2.2 is concluded. (cid:3) The key estimate is the following Lemma 2.3. Let ψ(1) be as in Lemma 2.2 and let ψ(2) ∈ S(R) with Set ψ(1) [ψ(1) ψ(2)(u)udu = 0. k (u) := 2kψ(2)(2ku), and ψj,k(g) = ψj,k(z, u) := k (u)du. Then, for ε ∈ (0, 1), j (g) := 2j(2n+2)ψ(1)(δ2j (g)), ψ(2) j (z, t − u)ψ(2) ψ(1) j (z, ·) ∗R ψ(2) k ](t) = R R R |ψj,k ∗ K ∗ ψj′,k′(z, t)| R BOUNDEDNESS OF SINGULAR INTEGRALS ON Hn 7 2−(j∧j′)/2 (2−(j∧j′) + |z|)2n+ 1 2−(j ∧ j′)/2 (2−(j∧j′) + |z|)2n+ 1 2 2 2−(k∧k′)/4 (2−k∧k′ + |t|)1+ 1 4 2−(j∧j′)/2 (2−(j∧j′) + |t|)2+ 1 2 if 2(j ∧ j′) ≥ k ∧ k′, if 2(j ∧ j′) ≤ k ∧ k′. 2−|j−j′|ε2−|k−k′| 2−|j−j′|ε2−|k−k′| .    Proof. We write ψj,k ∗ K ∗ ψj′,k′ = (ψ(1) = (ψ(1) p k ) ∗Hn K ∗Hn (ψ(1) j′ ) ∗R (ψ(2) j ∗R ψ(2) j ∗Hn K ∗Hn ψ(1) j′ ∗R ψ(2) k′ ) k ∗R ψ(2) k′ ). By almost orthogonal estimate on R we have |(ψ(2) k ∗R ψ(2) k′ (t)| . 2−|k−k′| 2−(k∧k) (2−k∧k + |t|)2 . Combining this with (2.3), we obtain |ψj,k ∗ K ∗ ψj′,k′(z, t)| |(ψ(1) j ∗Hn K ∗Hn ψ(1) . ZR . 2−|j−j′|ε2−|k−k′| ∼ 2−|j−j′|ε2−|k−k′| ZR ZR k′ )(u)|du k ∗R ψ(2) j′ )(z, t − u)||(ψ(2) 2−(j∧j′) [2−(j∧j′) + (|z|2 + |t − u|)1/2]2n+3 2−(j∧j′) (2−2(j∧j′) + |z|2 + |t − u|)(n+1)+ 1 2 2−(k∧k′) (2−(k∧k′) + |u|)2 2−(k∧k′) (2−(k∧k′) + |u|)2 du du Case 1: If 2(j ∧ j) ≥ k ∧ k′ and |t| ≥ 2−(k∧k′), write 2−(j∧j′) (2−2(j∧j′) + |z|2 + |t − u|)(n+1)+ 1 2 2−(k∧k′) (2−(k∧k′) + |u|)2 du Z|u|≤ 1 2 |t|, or |u|≥2t + 1 2 |t|≤|u|≤2|t| Z = I + II. ZR = It is easy to see that |I| . . . Next, we estimate 2 2−(j∧j′) (2−2(j∧j′) + |z|2 + |t|)(n+1)+ 1 2−(j∧j′)/2 |t|1+ 1 2−(k∧k′)/4 2−(j∧j′)/2 (2−2(j∧j′) + |z|2)n+ 1 2−(j∧j′)/2 (2−(j∧j′) + |z|)2n+ 1 (2−k∧k′ 4 4 2 + |t|)1+ 1 4 . |II| . . . ZR 2−(k∧k′) (2−(k∧k′) + |t|)2 2−(k∧k′) (2−(k∧k′) + |t|)2 2−(j∧j′)/2 (2−(j∧j′) + |z|)2n+ 1 ZR 2 2−(j∧j′) (2−2(j∧j′) + |z|2 + |t − u|)(n+1)+ 1 2 du 2−(j∧j′)/2 (2−2(j∧j′) + |t − u|)1+ 1 4 du 2−(j∧j′)/2 (2−2(j∧j′) + |z|2)n+ 1 2−(k∧k′)/4 (2−(k∧k′) + |t|)1+ 1 . 4 4 8 GUORONG HU AND JI LI Case 2: If 2(j ∧ j′) ≥ k ∧ k′ and |t| ≤ 2−(k∧k′), then 2−(k∧k′) (2−(k∧k′) + |u|)2 du 2−(j∧j′) (2−2(j∧j′) + |z|2 + |t − u|)(n+1)+ 1 2−(j∧j′) (2−2(j∧j′) + |z|2 + |t − u|)(n+1)+ 1 2−(j∧j′)/2 (2−(j∧j′) + |z|)2n+ 1 ZR 2−(k∧k′)/4 (2−(k∧k′) + |t|)1+ 1 1 2−(k∧k′) . 2 4 2 2 ZR . . du Case 3: We now consider the case 2(j ∧ j′) ≤ k ∧ k′ and |t| ≤ 2−2(j∧j′). Then 2−(j∧j′) (2−2(j∧j′) + |z|2 + |t − u|)(n+1)+ 1 2 2−(k∧k′) (2−(k∧k′) + |u|)2 du 2−(j∧j′) (2−2(j∧j′) + |z|2)(n+1)+ 1 2 ZR . . 
2−(j∧j′)/2 (2−2(j∧j′) + |z|2)n+ 1 2−(j∧j′)/2 (2−(j∧j′) + |z|)2n+ 1 Case 4: If 2(j ∧ j′) ≤ k ∧ k′ and |t| ≥ 2−2(j∧j′), write 2−(j∧j′)/2 (2−2(j∧j′) + |t|)1+ 1 2−(j∧j′)/2 (2−(j∧j′) + |t|)2+ 1 ∼ 4 4 2 2 p . 2−(j∧j′) (2−2(j∧j′) + |z|2 + |t − u|)(n+1)+ 1 2 2−(k∧k′) (2−(k∧k′) + |u|)2 du Z|u|≤ 1 2 |t|, or |u|≥2t + 1 2 |t|≤|u|≤2|t| Z = I + II. ZR = It is easy to see that |I| . 2 2−(j∧j′) (2−2(j∧j′) + |z|2 + |t|)(n+1)+ 1 2−(j∧j′)/2 (2−2(j∧j′) + |z|2)n+ 1 2−(j∧j′)/2 (2−(j∧j′) + |z|)2n+ 1 ∼ . 2 4 2−(j∧j′)/2 (2−2(j∧j′) + |t|)1+ 1 2−(j∧j′)/2 4 (2−(j∧j′) + |t|)2+ 1 2 . . ZR |II| . To estimate II, we have 2−(k∧k′) (2−(k∧k′) + |t|)2 2−(k∧k′) (2−(k∧k′) + |t|)2 2−(j∧j′)/2 (2−2(j∧j′) + |z|2)n+ 1 2−(j∧j′)/2 (2−(j∧j′) + |z|)2n+ 1 ZR . ∼ 2 4 This finishes the proof. p 2−(j∧j′) (2−2(j∧j′) + |z|2 + |t − u|)(n+1)+ 1 2 du 2−(j∧j′)/2 (2−2(j∧j′) + |z|2)n+ 1 2−2(j∧j′) (2−2(j∧j′) + |t|)2 2−(j∧j′)/2 4 (2−(j∧j′) + |t|)2+ 1 2 p 2−(j∧j′)/2 (2−2(j∧j′) + |t − u|)1+ 1 4 du . (cid:3) BOUNDEDNESS OF SINGULAR INTEGRALS ON Hn 9 Now we prove the following new version of discrete Calderon’s reproducing formula. Theorem 2.4. Suppose 0 < p ≤ 1. For any given f ∈ L2(Hn) ∩ H p h ∈ L2(Hn) ∩ H p f lag(Hn) such that, for a sufficiently large integer N ∈ N, f lag(Hn), there exists (2.6) f (z, u) = |R| ψj,k((z, u) ◦ (zI , uJ )−1)(ψj,k ∗ h)(zI , uJ ), Xj,k∈Z XR=I×J, ℓ(I)=2−j−N , ℓ(J)=2−j−N +2−k−N e where the series converges in L2(Hn) and zI , uJ are any fixed points in I, J, respectively. Moreover, (2.7) kf kH p f lag(Hn) ≈ khkH p f lag(Hn), kf kL2(Hn) ≈ khk2. Proof. Following [11](see also [12]) and beginning with the Calder´on reproducing formula in (1.5) that holds for f ∈ L2(Hn) and converges in L2(Hn), for any given α > 0, we discretize (1.5) as follows: ∞ ∞ f (z, u) = 0 Z Z ψs,t ∗Hn ψs,t ∗Hn f (z, u) 0 ds s dt t 2−αj e 2−α(j+1) 2−2αk 2−2α(k+1) Z ψs,t ∗ ψs,t ∗ f (z, u) dt t ds s ψj,k ∗ ψj,k ∗ f (z, u) + cα e ψj,k ∗ ψj,k ∗ f (z, u) e 2−αj 2−2αk Xj>k e = Xj,k∈Z Z = cα Xj≤k + 2−α(j+1) 2−2α(k+1) Xj,k∈Z Z Z n e α f (z, u) + T (2) α f (z, u) + Rαf (z, u) , =: T (1) o e ψs,t ∗ ψs,t − ψj,k ∗ ψj,k ∗ f (z, u) dt t ds s where ψj,k = ψ2−αj ,2−2αk , 2−αj 2−2αk cα = dt t α f (z, u) and T (2) We further discretize the terms T (1) 2−2α(k+1) 2−α(j+1) ds s = ln 2−αj 2−α(j+1) ln Z Z the one-parameter structure of the Heisenberg group for T (1) product structure for T (2) α . More precisely, 2−2αk 2−2α(k+1) = 2 (α ln 2)2 . α f (z, u) in different ways, exploiting α , and exploiting the implicit T (1) α f (z, u) = fQψQ (z, u) + R(1) α,N f (z, u) , T (2) α f (z, u) = Xj≤k XQ∈Q(j) Xj>k XR∈R(j,k) fRψR (z, u) + R(2) α,N f (z, u) , fQ ≡ cα |Q| ψj,k ∗ f (zQ, uQ) , fR ≡ cα |R| ψj,k ∗ f (zR, uR) , for Q ∈ Q (j) and k ≥ j, for R ∈ R (j, k) and k < j, where ψQ (z, u) = ψR (z, u) = 1 |Q| 1 |R| ψj,k (z, u) ◦ z′, u′ −1 dz′du′, for Q ∈ Q (j) and k ≥ j, ZQ ZR (cid:16) (cid:16) e ψj,k e (z, u) ◦ (cid:0) (cid:1) z′, u′ (cid:17) −1 (cid:0) (cid:17) (cid:1) dz′du′, for R ∈ R (j, k) and k < j. 10 and R(1) α,N f (z, u) = cα × R(2) α,N f (z, u) = cα × GUORONG HU AND JI LI ψj,k (z, u) ◦ z′, u′ −1 Xj≤k XQ∈Q(j) ZQ z′, u′ ψj,k ∗ f e (cid:16) (cid:1) − ψj,k ∗ f (zQ, uQ) (cid:0) (cid:17) dz′du′, (cid:2) (cid:0) (cid:1) ψj,k Xj>k XR∈R(j,k) ZR z′, u′ ψj,k ∗ f (z, u) ◦ z′, u′ −1 (cid:3) (cid:16) (cid:1) e − ψj,k ∗ f (zR, uR) (cid:0) (cid:17) dz′du′. Altogether we have (cid:2) (cid:0) (cid:1) (cid:3) (2.8) f (z, u) = fQψQ (z, u) + fRψR (z, u) Xj∈Z XQ∈Q(j) + Rαf (z, u) + R(1) Xj>k XR∈R(j,k) α,N f (z, u) + R(2) α,N f (z, u) . 
Recall that we denote by Q ≡ Q (j) the collection of all dyadic cubes, and by Rvert ≡ j>k R (j, k) the collection of all strictly vertical dyadic rectangles. Finally, we can rewrite j∈Z o n S the right-hand side of the equality (2.8) as S (2.9) f (z, u) = fQψQ (z, u) + fRψR (z, u) + (cid:18) XQ∈Q =: TN (f ) + RN (f ), XR∈Rvert (cid:19) n where the series converge in the norm of L2(Hn). It was proved in [11, 12] that Rα + R(1) α,N + R(2) α,N (f ) (z, u) o Lp(Hn) + (cid:13) (cid:13) R(1) MM ′ (cid:13) (Hn) . (cid:13) α,N f kRαf kLp(Hn) + R(1) α,N f for all f ∈ Lp (Hn) , 1 < p < ∞, (cid:13) (cid:13) (cid:13) (cid:13) R(2) α,N f Lp(Hn) ≤ C2−N kf kLp(Hn) (cid:13) (cid:13) kRαf k + MM ′ +δ f lag (Hn) for all f ∈ MM ′+δ (cid:13) (cid:13) f lag +δ f lag (Hn) + R(2) α,N f (cid:13) (cid:13) +δ f lag (Hn) MM ′ (cid:13) (cid:13) ≤ C2−N kf k MM ′ +δ f lag (Hn) Thus, we have Rα + R(1) α,N + R(2) α,N (f ) (cid:13) (cid:13) Next we claim that (cid:13) Rα + R(1) (2.10) n α,N + R(2) α,N o (f ) L2(Hn) (cid:13) (cid:13) (cid:13) ≤ C2−N kf kL2(Hn). H p f lag(Hn) ≤ C2−N kf kH p f lag(Hn). n (cid:13) (cid:13) (cid:13) Indeed, the above claim follows from the following general result: (cid:13) (cid:13) (cid:13) Proposition 2.5. If T is a bounded operator on L2(Hn) and molecular space MM ′+δ then T is bounded on H p o f lag. Moreover, f lag (Hn), kT (f )kH p f lag ≤ C kT k2,2 + kT k MM ′ f lag ,MM ′ +δ f lag +δ kf kH p f lag , (cid:16) where we denote kT k2,2 for the operator norm of T on L2(Hn) and kT k operator norm on the molecular space MM ′+δ f lag . (cid:17) MM ′ f lag ,MM ′ +δ f lag for the +δ BOUNDEDNESS OF SINGULAR INTEGRALS ON Hn 11 Proposition 2.5 follows from the discrete Caldero´n’s reproducing formula (1.6) (Theorem 3 in [11]) and the almost orthogonality estimates (Lemma 6 in [11]). We only give an outline of the proof. Suppose f ∈ L2(Hn) ∩ H p f lag(Hn). By (1.6), it follows that T (f ) (z, u) = fQT (ΨQ) (z, u) + fRT (ΨR) (z, u) . XQ∈Q XR∈Rvert Thus, kT f kp H p f lag = kSf lag(T f )kp p ≤ ≤ ψ′ Q ∗ T f (zQ, uQ) 2 χQ (z, u) 1 2 p (cid:13) (cid:26) XQ∈Q (cid:13) (cid:13) (cid:13) + (cid:12) (cid:12) (cid:12) (cid:12) R ∗ T f (zR, uR) ψ′ (cid:27) p (cid:13) (cid:13) (cid:13) (cid:13) 2 χR (z, u) 1 2 p (cid:13) (cid:26) XR∈Rvert (cid:12) (cid:13) (cid:12) (cid:13) (cid:13) ψ′ Q ∗ (cid:12) (cid:12) fQ′T (ΨQ′) (zQ, uQ) (cid:27) p (cid:13) (cid:13) (cid:13) (cid:13) χQ (z, u) 2 1 2 p (cid:13) (cid:26) XQ∈Q (cid:12) (cid:13) (cid:12) (cid:13) (cid:12) (cid:13) + XQ′∈Q ψ′ R ∗ (cid:12) (cid:12) (cid:12) fQ′T (ΨQ′) (zR, uR) 2 p (cid:27) (cid:13) (cid:13) (cid:13) (cid:13) χR (z, u) (cid:13) (cid:26) XR∈Rvert (cid:12) (cid:13) (cid:12) (cid:13) (cid:12) (cid:13) ψ′ + Q ∗ XQ′∈Q (cid:12) (cid:12) (cid:12) fR′T (ΨR′) (zQ, uQ) 2 (cid:27) χQ (z, u) p p p 1 2 (cid:13) (cid:13) (cid:13) 1 (cid:13) 2 (cid:26) XQ∈Q (cid:12) (cid:12) (cid:12) (cid:13) (cid:13) (cid:13) (cid:13) + XR′∈Rvert ψ′ R ∗ (cid:12) (cid:12) (cid:12) fR′T (ΨR′) (zR, uR) (cid:13) (cid:26) XR∈Rvert (cid:12) (cid:13) (cid:12) (cid:13) =: A1 + A2 + A3 + A4. (cid:12) (cid:13) XR′∈Rvert (cid:27) p (cid:13) (cid:13) (cid:13) (cid:13) χR (z, u) 2 1 2 (cid:27) (cid:12) (cid:12) (cid:12) p p (cid:13) (cid:13) (cid:13) (cid:13) To estimate the term A1, note that ΨQ′ (z, u) = 1 |Q′| ZQ′ ψj′,k′ (z, u) ◦ z′, u′ −1 dz′du′. 
(cid:16) e (cid:0) (cid:17) (cid:1) We have A1 = fQ′ 1 |Q′| ZQ′ XQ′∈Q (cid:26) XQ∈Q (cid:12) (cid:13) (cid:12) (cid:13) (cid:12) (cid:13) (cid:12) (cid:13) Since T is bounded on the molecular space MM ′+δ same conditions as ψj′,k′ does with an extra constant kT k ψ′ Q ∗ T ψj′,k′ (zQ, uQ) ◦ z′, u′ −1 dz′du′ 2 χQ (z, u) (cid:16) e (cid:0) (cid:1) (cid:17) (cid:12) (cid:12) (cid:12) (cid:12) f lag (Hn), we obtain that T f lag (Hn),MM ′ MM ′ +δ ψj′,k′ satisfies the . Thus, by +δ f lag (Hn) e 1 2 p p (cid:13) (cid:13) (cid:13) (cid:13) (cid:27) Lemma 6 in [12], we have e ψ′ Q ∗ T ψj′,k′ (zQ, uQ) ◦ z′, u′ −1 (cid:12) (cid:12) (cid:12) (cid:16) e (cid:0) (cid:1) (cid:17) (cid:12) (cid:12) (cid:12) 12 .    GUORONG HU AND JI LI kT k MM ′ f lag ,MM ′ +δ f lag 2−|j−j′|ε2−|k−k′| +δ kT k MM ′ f lag ,MM ′ +δ f lag 2−|j−j′|ε2−|k−k′| +δ 2−(j∧j′)/2 (2−(j∧j′) + |zQ − z′|)2n+ 1 2 2−(j∧j′)/2 (2−(j∧j′) + |zQ − z′|)2n+ 1 2 (2−k∧k′ 2−(k∧k′)/4 + |uQ − u′|)1+ 1 if 2(j ∧ j′) ≥ k ∧ k′; 4 2−(j∧j′)/2 (2−(j∧j′) + |uQ − u′|)2+ 1 if 2(j ∧ j′) ≤ k ∧ k′. p 2 Then following the same steps as in the proof of Plancherel–P´olya inequalities for the Hardy spaces H p f lag(Hn) (see Theorem 4 in [12]), we obtain that A1 ≤ C kT k2,2 + kT k MM ′ f lag ,MM ′ +δ f lag kf kp f lag(Hn). H p p +δ (cid:17) Similarly we can estimate the terms A2, A3 and A4. We leave the details to the reader. Now by Proposition 2.5 we obtain that the claim (2.10) holds, which implies that (cid:16) kRN (f )kH p f lag(Hn) ≤ C2−N kf kH p f lag(Hn). Thus, choosing N large enough implies that TN is invertible and T −1 H p f lag(Hn). Set h = T −1 α,N f . Then N is bounded on f (x, y) = Tα,N (T −1 α,N f ) = |R| ψj,k((x, y) ◦ (xI , yJ )−1)(ψj,k ∗ h)(xI , yJ ). Xj,k∈Z XR=I×J, ℓ(I)=2−j−N , ℓ(J)=2−j−N +2−k−N e We now return to Theorem 1.4. Proof of Theorem 1.4. We first verify (2.2). To this end, applying the discrete version of the reproducing formula (2.6) for f in the term ψ′ R ∗ T (f ) (zR, uR) given in (2.2) implies that (cid:3) ψ′ = ψ′ R ∗ T (f ) (zR, uR) R ∗ K ∗ (cid:18) Xj′,k′∈Z |R′| ψj′,k′((x, y) ◦ (xI ′, yJ ′)−1)(ψj′,k′ ∗ h)(xI ′, yJ ′) (zR, uR) XR′=I ′×J ′, ′ −N , ℓ(I ′)=2−j ′ −N +2−k ℓ(J ′)=2−j ′−N e (cid:19) = |R′|ψ′ R ∗ K ∗ ψj′,k′((zR, uR) ◦ (xI ′, yJ ′)−1)(ψj′,k′ ∗ h)(xI ′ , yJ ′). Xj′,k′∈Z XR′=I ′×J ′, ′−N , ℓ(I ′)=2−j ′−N +2−k ℓ(J ′)=2−j ′−N e ψj′,k′((zR, uR) ◦ (xI ′, yJ ′)−1) in the right-hand Then, by Lemma 2.3 to the term ψ′ R ∗ K ∗ side of the last equality above, we obtain that e |ψ′ R ∗ T (f ) (zR, uR) | 2−|j−j′|ε2−|k−k′| ≤ Xj′,k′∈Z XR′=I ′×J ′, ′ −N , ℓ(I ′)=2−j ′ −N +2−k ℓ(J)=2−j ′−N |R′| 2−(j∧j′)/2 (2−(j∧j′) + |zR − xI ′|)2n+ 1 2 BOUNDEDNESS OF SINGULAR INTEGRALS ON Hn 13 × (2−k∧k′ 2−(k∧k′)/4 + |uR − yJ ′|)1+ 1 4 |(ψj′,k′ ∗ h)(xI ′ , yJ ′)| if 2(j ∧ j′) ≥ k ∧ k′, and |ψ′ R ∗ T (f ) (zR, uR) | 2−|j−j′|ε2−|k−k′| ≤ Xj′,k′∈Z XR′=I ′×J ′, ′−N , ℓ(I ′)=2−j ′−N +2−k ℓ(J)=2−j ′−N |R′| 2−(j∧j′)/2 (2−(j∧j′) + |zR − xI ′|)2n+ 1 2 × 2−(j∧j′)/2 (2−(j∧j′) + |uR − yJ ′|)2+ 1 2 |(ψj′,k′ ∗ h)(xI ′ , yJ ′)| if 2(j ∧ j′) < k ∧ k′. Using Lemma 7 in [11, 12], for p |ψ′ R ∗ T (f ) (zR, uR) | 4n 4n+1 < r < p and any (z∗ R, u∗ R) ∈ R, we get that ≤ C 2−|j−j′|ε2−|k−k′|2( 1 r −1)N (2n+1)2[2n(j∧j′−j′)+(k∧k′−k′)](1− 1 r ) Xj′,k′∈Z × Ms (cid:20)(cid:18) XR′=I ′×J ′, ′−N , ℓ(I ′)=2−j ′−N +2−k ℓ(J)=2−j ′−N |(ψj′,k′ ∗ h)(xI ′, yJ ′)|χI ′χJ ′ r 1 r (cid:19) (cid:21)! 
(z∗ R, u∗ R) +C 2−|j−j′|ε2−|k−k′|2( 1 r −1)N (2n+1)2[2n(j∧j′−j′)+(j∧j′−j′∧k′)](1− 1 r ) Xj′,k′∈Z: 2(j∧j′)<k∧k′ × M |(ψj′,k′ ∗ h)(xI ′, yJ ′)|χI ′χJ ′ (cid:20)(cid:18) XR′=I ′×J ′, ′−N , ℓ(I ′)=2−j ′−N +2−k ℓ(J)=2−j ′−N r 1 r (cid:19) (cid:21)! (z∗ R, u∗ R) , where M is the Hardy-Littlewood maximal function and Ms is the strong maximal function on Hn, respectively. Applying H¨older’s inequality and Fefferman-Stein vector valued maximal inequality and summing over R ∈ Rvert yield ψ′ R ∗ T (f ) (zR, uR) 2 χR (z, u) 1 2 (cid:27) p (cid:13) (cid:13) (cid:13) (cid:13) (cid:12) (cid:12) 2−|j−j′|ε2−|k−k′|2[2n(j∧j′−j′)+(k∧k′−k′)](1− 1 r ) (cid:13) (cid:26) XR∈Rvert (cid:12) (cid:13) (cid:13) (cid:12) (cid:13) ≤ C (cid:26) XR∈Rvert (cid:12) (cid:13) (cid:12) (cid:13) (cid:12) (cid:13) (cid:12) (cid:13) (cid:12) Ms Xj′,k′∈Z (cid:20)(cid:18) XR′=I ′×J ′, ′−N , ℓ(I ′)=2−j ′ −N +2−k ℓ(J)=2−j ′−N |(ψj′,k′ ∗ h)(xI ′, yJ ′)|χI ′χJ ′ r 1 r (cid:19) (cid:21)! (z∗ R, u∗ R) 2 (cid:12) (cid:12) (cid:12) (cid:12) (cid:12) χR (z, u) 1 2 (cid:27) p (cid:13) (cid:13) (cid:13) (cid:13) 14 GUORONG HU AND JI LI ≤ C (cid:26) Xj′,k′∈Z (cid:13) (cid:13) (cid:13) (cid:13) XR′=I ′×J ′, ′−N , ℓ(I ′)=2−j ′−N +2−k ℓ(J)=2−j ′−N ≤ CkhkH p(Hn) ≤ Ckf kH p(Hn). |(ψj′,k′ ∗ h)(xI ′ , yJ ′)|2χI ′(·)χJ ′(·) 1 2 (cid:27) p (cid:13) (cid:13) (cid:13) (cid:13) The proof for (2.1) is similar and easier. The proof of Theorem 1.4 is concluded. (cid:3) Acknowledgement: J. Li is supported by ARC DP 160100153. References 1. S-Y. A. Chang and R. Fefferman, Some recent developments in Fourier analysis and H p theory on product domains, Bull. Amer. Math. Soc. 12 (1985), 1–43. 2. S-Y. A. Chang and R. Fefferman, The Calder´on-Zygmund decomposition on product domains, Amer. J. Math. 104 (1982), 455–468. 3. S-Y. A. Chang and R. Fefferman, A continuous version of duality of H 1 with BM O on the bidisc, 4. M. Christ, A T (b) theorem with remarks on analytic capacity and the Cauchy integral, Colloq. Math. Ann. of math. 112 (1980), 179–201. 61 (1990), 601–628. 5. C. Fefferman and E. Stein, H p spaces of several variables, Acta Math. 129 (1972) 137–193. 6. R. Fefferman, Multi-parameter Fourier analysis, Study 112, Beijing Lectures in Harmonic Analysis, Edited by E. M. Stein, 47–130. Annals of Mathematics Studies Princeton University Press. 7. R. Fefferman, Harmonic Analysis on product spaces, Ann. of Math. 126 (1987), 109–130. 8. R. Fefferman, Multiparameter Calder´on-Zygmund theory, Harmonic analysis and partial differential equations (Chicago, IL, 1996), 207-221, Chicago Lectures in Math., Univ. Chicago Press, Chicago, IL, 1999. 9. G.B. Folland and E.M. Stein, Hardy Spaces on Homogeneous Groups, Princetion University Press, Prince- ton, N.J., 1982. 10. D. Geller and A. Mayeli, Continuous wavelets and frames on stratified Lie groups, I, J. Fourier Anal. Appl. 12 (2006), 543–579. 11. Y. Han, G. Lu and E. Sawyer, Flag Hardy spaces and Marcinkiewicz multipliers on the Heisenberg group, Anal. and PDE, 7 (2014), 1465–1534. 12. Y. Han, G. Lu and E. Sawyer, Flag Hardy spaces and Marcinkiewicz multipliers on the Heisenberg group: expanded version, arXiv 1208.2484. 13. D. M¨uller, F. Ricci, and E. M. Stein, Marcinkiewicz multipliers and multi-parameter structure on Heisenberg(-type) groups, I, Invent. math. 119 (1995), 119–233. 14. D. M¨uller, F. Ricci, and E. M. Stein, Marcinkiewicz multipliers and multi-parameter structure on Heisenberg(-type) groups, II, Math. Z. 221 (1996), 267–291. 15. A. Nagel, F. Ricci, and E. M. 
Stein, Singular integrals with flag kernels and analysis on quadratic CR manifolds, J. Func. Anal. 181 (2001), 29–118. 16. A. Nagel, F. Ricci, E. M. Stein, and S. Wainger, Singular integrals with flag kernels on homogeneous groups: I, Revista Mat. Iberoam. 28 (2012) 631–722. 17. E. Stein, Harmonic Analysis Real-Variable Methods, Orthogonality, and Oscillatory Integrals, Princeton Mathematical Series, 43, Princeton University Press, Princeton, New Jersey, 1993. 18. C. Tan, Boundedness of classical Calder´on-Zygmund convolution operators on product Hardy space, Math. Res. Lett. 20 (2013), 591–599. Department of Mathematics, Jiangxi Normal University, Nanchang, Jiangxi 330022, China E-mail address: [email protected] Department of Mathematics, Macquarie University, NSW, 2109, Australia E-mail address: [email protected]
ai_researcher
3
Gemini_15_Unlocking_multimodal_understanding_across_millions_of_tokens_of_context.pdf
2 0 0 2 r a M 0 2 1 v 3 4 3 3 0 2 0 / h p - o r t s a : v i X r a Galaxies: The Third Dimension ASP Conference Series, Vol. **VOLUME**, 2002 M. Rosado, L. Binnette, L. Arias Integral Field Spectroscopy with the Gemini 8-m Telescopes Bryan W. Miller, James Turner Gemini Observatory, Casilla 603, La Serena, Chile Marianne Takamiya, Doug Simons Gemini Observatory, 670 N. A’ohoku Place, Hilo, HI, 97620, USA Isobel Hook UK Gemini Project Office, University of Oxford, UK Abstract. We give an overview of the current and future IFU capa- bilities on the Gemini 8-m telescopes. The telescopes are well-suited to integral field spectroscopy and both telescopes will have optical and near- infrared IFUs within the next few years. Commissioning for the GMOS IFU on Gemini North has begun recently and it is now available to the community. Future integral field instruments will take advantage of wide- field adaptive optics systems. 1. Introduction The high image quality and large collecting areas of the Gemini 8-m telescopes make them well-suited to integral field spectroscopy. On smaller telescopes signal-to-noise considerations have forced integral-field units (IFUs) to have ei- ther coarse spatial sampling (e.g. INTEGRAL, DensePak, SAURON) or fine sampling of the highest surface brightness targets (e.g. OASIS, TEIFU). More collecting area, a smaller diffraction limit, and new technology in today’s 8-10 meter telescopes means that higher signal-to-noise spectra of finer spatial struc- tures can be obtained. Therefore, the first generation of instrumentation on Gemini will include optical and near-infrared IFUs on both telescopes: GMOS and NIFS instruments on Gemini North, and GMOS and GNIRS on Gemini South (see Table 1). This paper summarizes the capabilities of the telescopes and these instruments. While they are sensitive from optical through mid-infrared wavelengths, the Gemini telescopes are optimized for observing in the near and mid-infrared. Special features of the telescopes include minimizing the mass above the primary mirror in order to lower the thermal background and improve airflow over the mirror, future low-emissivity mirror coatings, daytime climate control, and a tip/tilt chopping secondary. In addition, both telescopes will have facility adap- tive optics units that will be able to feed a near-diffraction-limited beam to any instrument. Therefore, Gemini will be able to deliver image quality in the near- 1 2 Miller et al. Figure 1. Predicted optimal image quality versus wavelength for HST and an 8-meter telescope with three levels of correction: no cor- rection, tip/tilt correction, and adaptive optics. The Gemini telescopes always use tip/tilt correction and will eventually have facility adaptive optics systems that can feed any instrument. Therefore, in the near-IR the image quality produced by Gemini is equal to or better than that produced by HST. infrared about a factor of two better than HST (Figure 1). The Gemini IFUs have been designed to work with the expected image quality from either tip/tilt or adaptive-optics corrected beams. The rest of this paper briefly describes the capabilities of each Gemini integral field unit in approximate order of when it will come into service. Table 1. Summary of Gemini Integral Field Instruments Instrument/ FOV Location GMOS GN and GS GNIRS GS NIFS GN 7′′ × 5′′+ 0.2′′ 3.′′5 × 5′′ 3.′′2 × 4.′′4 0.15′′ Sampling Wavelength R Range 0.4–1.1 µm 500–8000 3′′ × 3′′ 0.1′′ 0.9–2.5 µm ∼ 5000 1–5 µm 667,2000, 6000 IFS with Gemini 3 2. 
GMOS The Gemini Multi-Object Spectrograph (GMOS) instruments — one for each telescope — provide the primary optical imaging and spectroscopic capabilities at Gemini (Murowinski et al. 2002). Each instrument will have a lenslet/fiber- based IFU that sits in one of the three mask cassettes and it is deployed like any other GMOS slit mask. The IFUs are built by the University of Durham and the design of the first unit is described by Allington-Smith et al. (2002; also see Allington-Smith et al. in these proceedings). Commissioning for the first IFU was begun in September, 2001, and its performance has met or exceeded expectations. It was first offered to the community for use in the 2002A semester. As the design of the IFU is covered elsewhere, this paper will focus on the data format and data reduction procedure. The IFU has two sub-fields — one with 1000 lenslets (7′′ × 5′′), and one with 500 lenslets (3.′′5 × 5′′) — separated by 1 arcminute so that one field can be used for sky subtraction. The light entering the lenslets is redirected by fibers into two pseudo-slits of 750 fibers each. These are imaged on the detector like regular longslits. The current detector is an array of three butted 2048 × 4608 pixel EEV CCDs with an effective size of 6144 × 4608. The gaps between the CCDs are equivalent to 37 pixels in width. The dispersion axis is along the long axis of the mosaic. When both IFU slits are used two banks of 750 spectra appear side-by-side on the detector and blocking filters must be used to avoid spectral overlap. However, either of the slits can be closed, resulting in one-half the field-of-view but allowing twice the spectral coverage. A raw GMOS file is a multi-extension FITS (MEF) file with one extension for each amplifier used for readout (usually 3 or 6). All Gemini pipeline processing will be done within IRAF. The Gemini staff are writing packages of scripts for handling the data from facility instruments. Scripts for handling GMOS IFU data will be part of the GMOS package and will allow for the extraction, calibration, and analysis of the IFU spectra (Table 2). Most of these scripts are in draft form and the first release is planned for the middle of semester 2002A. Table 2. IFU related scripts in the GMOS IRAF package Task gfapsum gfdisplay gfextract gfmosaic gfquick gfreduce gfresponse gfskysub gftransform gscrrej gsreduce gswavelength Function sum spectra in a spatial region display datacubes using ldisplay extract spectra, apply fiber throughput correction merge datacubes quick image reconstruction for target acquisition apply reduction/calibration to object frames determine relative fiber responses subtract sky apply wavelength calibration remove cosmic rays bias subtraction determine wavelength calibration 4 Miller et al. Figure 2. The format of an example GMOS IFU extracted datacube. The Mask Definition File (MDF) is a binary table that contains infor- mation about each lenslet, such as relative position on the sky. There can be up to three extensions for each slit block extracted. Each image plane is an IRAF multispec file (one spectrum per line). The SCI ex- tension has the extracted spectra. The optional VAR and DQ planes hold the variances of the spectra and the data quality flags. The current extraction routine uses the IRAF task apall to find, trace, and extract the spectra. Most of the spectra are separated well enough that identifying the individual spectra is not a problem. 
However, the peaks of three low-throughput fibers cannot be distinguished from their neighbors, so they are lost. More sophisticated reduction techniques, such as deconvolution, may be able to recover these spectra. The current format of the extracted “datacube” is shown in Figure 2. The spectra are packed in 2D IRAF multispec images (one spectrum per image line) within a MEF file. The relative positions of the lenslets on the sky are contained in the binary table (MDF) extension. This format is similar in concept to the proposed Euro3D format. Tasks such as gfdisplay are used to visualize the 3D data. Additional analysis tools will be released as they are developed. 3. GNIRS The Gemini Near-Infrared Spectrograph (GNIRS) is a fully cryogenic instru- ment sensitive from 1 to 5 microns. The instrument is being built by NOAO in Tucson and the IFU module is being built by the University of Durham. Similar to the GMOS IFU, the GNIRS IFU is in a cassette that is inserted into the beam on a slit slide. Since the instrument is cooled the IFU is an image slicer containing 21 diamond-turned mirrors that reformat the focal plane into a bank of spectra (Figure 3). The width of a slicer mirror is 0.′′15 and the slitlets are 4.′′4 long, giving 609 independent spatial elements. With slices of this width the IFU is optimized for tip/tilt correction rather than adaptive optics. Spectral resolu- tions for the short camera are R = 667, R = 2000, and R = 6000, depending on the grating. Delivery to Gemini South is expected to be toward the end of 2002. IFS with Gemini 5 Figure 3. Schematic of the image-slicer concept used by GNIRS and NIFS (Allington-Smith, private communication). Specifications are for the GNIRS IFU. Cooled aluminum slicing mirrors divide the focal plane into a stack of slitlets. Reduction will be similar to infrared MOS slit spectroscopy except that full 2D spatial information is preserved. 6 Miller et al. 4. NIFS The Gemini Near-infrared Integral Field Spectrograph (NIFS), being built by the Australia National University, is designed to work behind the Altair adaptive optics system on Gemini North. To reduce cost and speed development the project is copying the the designs of the on-instrument wavefront sensor and the cryostat from the NIRI instrument already in use at Gemini North. The IFU is based on an image slicer similar to that used in GNIRS. The slicing mirrors have a projected width of 0.′′1 and the total field-of-view is 3′′ × 3′′. The 20482 Hawaii-II detector will give wavelength coverage from 0.9–2.5 µm with a spectral resolution of 5000. Delivery is expected in the middle of 2003. 5. Future Future integral field spectrographs are likely to take advantage of the multi- conjugate adaptive optics system being developed for Gemini South. In this system multiple deformable mirrors will produce a uniform, near diffraction- limited PSF over a 2 arcminute field. Several designs for a spectrograph with multiple, deployable IFUs are under consideration. 6. Summary Large collecting areas, good image quality (with or without adaptive optics), and infrared optimization make the Gemini telescopes well-suited for integral field spectroscopy. Therefore, Gemini will be offering both optical and near-IR IFU capability at both telescopes within the next two years. These instruments will be powerful tools for studies of galaxy dynamics, black holes, and ISM kinematics and abundances, to name a few possible projects. Acknowledgments. 
The Gemini Observatory is operated by the Associa- tion of Universities for Research in Astronomy, Inc., under a cooperative agree- ment with the NSF on behalf of the Gemini partnership: the National Sci- ence Foundation (United States), the Particle Physics and Astronomy Research Council (United Kingdom), the National Research Council (Canada), CONI- CYT (Chile), the Australian Research Council (Australia), CNPq (Brazil) and CONICET (Argentina). References Allington-Smith, J., Murray, G., Content, R., Dodsworth, G., Davies, R., Jor- gensen, I., Miller, B. W., Hook, I., Crampton, D., & Murowinski, R. 2002, in preparation Murowinski, R., et al. 2002, in preparation
ai_researcher
1
Application_of_idea_of_quality_control_circle_in_management_of_medical_consumables_in_operating_room.pdf
A Packetized Direct Load Control Mechanism for Demand Side Management* Bowen Zhang1 and John Baillieul2 3 1 0 2 n a J 3 2 ] Y S . s c [ 1 v 3 9 5 5 . 1 0 3 1 : v i X r a Abstract— Electricity peaks can be harmful to grid stability and result in additional generation costs to balance supply with demand. By developing a network of smart appliances together with a quasi-decentralized control protocol, direct load control (DLC) provides an opportunity to reduce peak consumption by directly controlling the on/off switch of the networked appliances. This paper proposes a packetized DLC (PDLC) solution that is illustrated by an application to air conditioning temperature control. Here the term packetized refers to a fixed time energy usage authorization. The consumers in each room choose their preferred set point, and then an operator of the local appliance pool will determine the comfort band around the set point. We use a thermal dynamic model to investigate the duty cycle of thermostatic appliances. Three theorems are proposed in this paper. The first two theorems evaluate the performance of the PDLC in both transient and steady state operation. The first theorem proves that the average room temperature would converge to the average room set point with fixed number of packets applied in each discrete interval. The second theorem proves that the PDLC solution guarantees to control the temperature of all the rooms within their individual comfort bands. The third theorem proposes an allocation method to link the results in theorem 1 and assumptions in theorem 2 such that the overall PDLC solution works. The direct result of the theorems is that we can reduce the consumption oscillation that occurs when no control is applied. Simulation is provided to verify theoretical results. I. INTRODUCTION It is well known that the day-to-night electricity usage is oscillatory, with a usage valley appearing through the night and a peak occurring during the day. At the same time, high- frequency (minute-to-minute and faster) oscillation results from randomly occurring aggregations of individual loads with short duty cycle [1]. The importance of reducing high- frequency peaks in usage is multi-fold. We can more easily maintain the stability of the grid with reduced amounts of generation reserves such that the grid frequency and voltage are stable. Generation cost can be reduced since we will not use generators with large marginal costs. Among all classes of electricity demand, thermostatic loads have been a major contributor to problems of high peak usage [2]. At the same time, thermostatic loads provide thermal capacity such that we can regulate their usage pattern as long as certain baseline thermal requirements are met. This paper presents *The authors gratefully acknowledge support of the U.S. National Science Foundation under EFRI Grant 1038230 1Corresponding author. Division of Systems Eng., Boston University, 15 St. Marys St., Brookline, MA 02446, email: [email protected] 2Dept. of Electrical and Computer Eng., Dept. of Mechanical Eng., and Division of Systems Eng., Boston University, 110 Cummington St., Boston, MA 02215, email: [email protected] an approach to carrying out such regulation by means of a novel information-based method for direct load control. Historically, thermostatic loads (air conditioners, electric space heating systems, water heaters, etc.) have been oper- ated in an uncoordinated fashion resulting in the power grid being exposed to costly random load fluctuations. 
Taking note of the past decades' development of networked control system technologies [3] and novel concepts enabled by smart appliances, such as the so-called Internet of Things [4], we shall study the control of a local network of loads wherein the objective of control is to suppress spikes and fluctuations in usage. The approach uses real-time data from individual devices and local temperature sensors communicating with a central operator who distributes quantized amounts of energy to service the load demands according to a protocol for direct load control (DLC) that we shall describe below.

Various approaches have been proposed to formulate the DLC problem with the objective of peak load management. The load curve has been studied using a state-queueing model where thermal set points are adjusted automatically as a function of electricity price or outside temperature in [5] and [6]. Dynamic programming has been applied in [7] to minimize the production cost in a unit commitment problem, and in [8] to minimize the disutility of consumers resulting from DLC disruption. Monte Carlo simulation has been applied in [2] to evaluate the effectiveness of a specific DLC approach that minimizes the discomfort of overall temperature deviation subject to constraints on transmission lines. Multi-server queueing theory has been used to calculate the mean waiting time of consumers when usage authorization is limited during peak hours in [1]. This system has been applied in a total of 449 residential units located in Seoul with good performance.

The objective of our approach is to monitor and control aggregate electricity use in order to avoid random spikes in demand that would otherwise occur. The mechanism that implements the approach is something that we call packetized direct load control (PDLC). The term packetized refers to the idea of time-packetized energy, where the central operator authorizes electricity usage of individual loads for a fixed amount of time \Delta t. After the elapse of time \Delta t, the central operator reschedules the authorization. For each building, the central operator is connected to the on/off switch of thermostatic loads (fan coils or room air conditioners). Users in the building are assumed to authorize the operator to control the on/off switch of their thermostatic smart appliances once they provide the operator their preferred temperature set point. The central operator, who receives thermal information on all the appliances at each decision instant, has the objective of maintaining all appliances within their comfort band by selectively turning these thermostatic loads on or off at discrete time instants. The PDLC provides flexibility in adjusting building consumption, since we are actually dealing with a discrete time decision-making problem in which the central operator schedules packets at the beginning of each interval. It will be shown that given a minimum critical level of energy capacity, it is possible to both eliminate demand peaks and guarantee a narrow comfort band around each consumer's preferred temperature setting. In a theoretical sense, it is further shown that the width of the comfort band can be made to approach zero by letting the packet length approach zero, although practically speaking the cycle time of an air conditioning unit cannot be made arbitrarily short. In the end, the PDLC solution is able to smooth the consumption oscillations, and this in turn enables buildings to consume smaller amounts of reserves dispatched from the ISO.
The remainder of the paper is organized as follows. Section II introduces the setup of the PDLC mechanism, followed by the investigation of a thermal model in Section III. Sections IV and V discuss the transient and steady state operation of the PDLC solution, respectively. Section VI discusses an allocation solution that links theorem 1 and theorem 2. A robustness analysis is given in Section VII. Section VIII provides simulation results. Section IX concludes the paper and proposes future work.

II. THE PDLC MECHANISM SETUP

This section describes the model in terms of which the PDLC mechanism is proposed. The following few points compose the background of the proposed approach.

(1) The PDLC controls the thermostatic loads in a building, such as air conditioners, refrigerators, and water heaters. The thermal dynamic models of these appliances do not differ much; the investigation of the thermal model of air conditioners in the next section can be extended to other thermostatic loads with minor changes.

(2) The PDLC is assumed to be an on/off control. All the appliances are assumed to run at rated power if a packet is authorized, or to consume nothing if packet authorization is denied. There is no intermediate operation choice.

(3) Different feeders are in charge of different types of loads, and they are all connected to the central operator who schedules electricity packets. The loads that have been grouped together in the same feeder consume energy at the same rates when they are operating. The overall consumption of the building is the sum of the consumption in each feeder controlled by the PDLC mechanism plus a certain portion of uncontrollable loads, including lighting and plug-in devices such as computers, televisions, and other small appliances. We assume that the consumption of the uncontrollable loads is independent of the thermostatic loads and the environment (temperature, humidity, etc.), and these uncontrollable loads are subtracted from the analysis of the PDLC framework.

(4) It is assumed that the target level of consumption in each feeder of thermostatic load is available beforehand, which is defined as the average consumption during peak time when no control is applied. The value of the proposed method rests on the evidence-based assumption that the consumption curve without control would oscillate around the target level, and that consumption peaks will frequently exceed the target level by a significant amount. The control objective of the PDLC is to make the consumption curve smoother around the target level with minimum oscillation.

III. AIR CONDITIONER THERMAL MODEL

A model of the thermal dynamics of an air conditioner is developed as follows. Ihara and Schweppe presented a dynamic model for the temperature of a house regulated by air conditioning, and this has been shown to capture the behaviour of air conditioner loads accurately [9]. The temperature dynamics in continuous time (CT) are given by

\frac{dT}{dt} = \frac{T_{out} - T - T_g u}{\tau},   (1)

where T_{out} is the outside temperature, T_g is the temperature gain of the air conditioner if it is on, \tau is the effective thermal time constant of the room, and u is binary valued, specifying the state of the thermostat. The unit of the parameters is Fahrenheit, as in the original paper. The temperature dynamic model in discrete time (DT) with interval \Delta t is given by

T_{k+1} = (1 - a) T_k + a T_{out} - b u_k,   (2)

where a = 1 - e^{-\Delta t / \tau}, b = a T_g, and u_k is the value of u during the k-th interval.
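To make the discrete-time model concrete, the following minimal Python sketch (not from the paper) iterates Equation (2); the function name is illustrative, and the numeric values reuse the simulation parameters given later in Section VIII.

```python
import numpy as np

def dt_step(T_k, u_k, T_out, T_g, tau, dt):
    """One step of the DT thermal model: T_{k+1} = (1-a)*T_k + a*T_out - b*u_k."""
    a = 1.0 - np.exp(-dt / tau)
    b = a * T_g
    return (1.0 - a) * T_k + a * T_out - b * u_k

# Example: a room at 75 F with the air conditioner on (u = 1).
T_next = dt_step(T_k=75.0, u_k=1, T_out=93.0, T_g=40.0, tau=20.0, dt=1.0)
```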
We first derive the duty cycle off-time t_{off} and on-time t_{on} based on the CT model for the case in which there is no PDLC and the air conditioner is operating in the traditional way under the control of its own thermostat. T_{max} and T_{min} are the comfort band boundaries. To get t_{off}, we set u = 0 in (1), which means that the air conditioner is turned off. Rearranging terms we have

\frac{dT}{dt} + \frac{T}{\tau} - \frac{T_{out}}{\tau} = 0,   (3)

whose general solution is given by

T(t) = C e^{-t/\tau} + T_{out}.   (4)

Since t_{off} is the time the temperature takes to rise from T_{min} to T_{max} under traditional thermostat control, we choose the initial condition T(0) = T_{min} to solve for t_{off}; see Fig. 1. Then C = T_{min} - T_{out}, and the overall solution for the temperature evolution is given by

T(t) = (T_{min} - T_{out}) e^{-t/\tau} + T_{out}.   (5)

The value of t_{off} satisfies T(t_{off}) = T_{max}. After calculation we have

t_{off} = \tau \ln \frac{T_{out} - T_{min}}{T_{out} - T_{max}}.   (6)

Similarly, we calculate t_{on} when u = 1:

t_{on} = \tau \ln \frac{T_{max} + T_g - T_{out}}{T_{min} + T_g - T_{out}}.   (7)

Fig. 1. Typical air conditioner duty cycle.

The traditional duty cycle dynamics characterized by t_{off} and t_{on} provide the baseline against which the PDLC protocol of the next section is evaluated. To evaluate the PDLC solution, we consider its transient and steady state operation. The next section will discuss its transient operation.

IV. TRANSIENT OPERATION OF THE PDLC

The motivation of the PDLC solution is to allow buildings to consume electricity at a level that minimizes oscillation close to a target. Denote the total number of consumers by N_c, the number of authorized packets by m, the set point in room i by T^i_{set}, and the room temperature in room i at time k by T^i_k. The transient process is defined as the duration before the average room temperature T^{ave}_k = \frac{1}{N_c}\sum_{i=1}^{N_c} T^i_k converges to the average room set point T^{ave}_{set} = \frac{1}{N_c}\sum_{i=1}^{N_c} T^i_{set}. The theorem below provides a solution that guarantees the convergence of the average room temperature under the assumption that m packets are being allocated to a pool of appliances during each packet interval.

Theorem 1. If the fixed number of packets m = N_c \frac{T_{out} - T^{ave}_{set}}{T_g} is used in each time interval \Delta t, then the average room temperature T^{ave}_k converges to the average room set point T^{ave}_{set}.

Proof: We use the DT model to derive the convergence of the average room temperature. According to (2), we can represent the number of authorized packets in terms of the DT model parameters a and b as follows:

m = N_c \frac{T_{out} - T^{ave}_{set}}{T_g} = N_c (T_{out} - T^{ave}_{set}) \frac{a}{b}.   (8)

In one packet length, the total temperature decrease T^{dec}_k caused by m packets is given by

T^{dec}_k = m b = N_c a (T_{out} - T^{ave}_{set}),   (9)

where the last equality follows from (8). Similarly, the total temperature increase T^{inc}_k, which is caused by the indoor/outdoor temperature difference, is given by

T^{inc}_k = \sum_{i=1}^{N_c} a (T_{out} - T^i_k) = N_c a T_{out} - a \sum_{i=1}^{N_c} T^i_k.   (10)

The total temperature change T^{ch}_k is given by

T^{ch}_k = T^{inc}_k - T^{dec}_k = a N_c (T^{ave}_{set} - T^{ave}_k).   (11)

T^{ave}_{k+1} can be expressed recursively as

T^{ave}_{k+1} = T^{ave}_k + \frac{1}{N_c} T^{ch}_k = T^{ave}_k + a (T^{ave}_{set} - T^{ave}_k).   (12)

We then have the difference between T^{ave}_{set} and the average room temperature at time k+1 given by

T^{ave}_{set} - T^{ave}_{k+1} = (1 - a)(T^{ave}_{set} - T^{ave}_k) = e^{-\Delta t/\tau} (T^{ave}_{set} - T^{ave}_k).   (13)

For any small deviation \varepsilon > 0 from T^{ave}_{set}, we will have

|T^{ave}_{set} - T^{ave}_k| = e^{-k \Delta t/\tau} |T^{ave}_{set} - T^{ave}_0| < \varepsilon   (14)

after k steps, with k satisfying

k > \frac{\tau}{\Delta t} \ln \frac{|T^{ave}_{set} - T^{ave}_0|}{\varepsilon}.   (15)

This means the average room temperature will converge to an arbitrarily small neighbourhood of T^{ave}_{set} after a finite number of steps. □
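For intuition, here is a small Python sketch (again not the authors' code) that evaluates the duty-cycle formulas (6)-(7) and iterates the average-temperature recursion (12); the numeric values reuse the Section VIII simulation parameters, and the initial temperature of 85 F is an illustrative choice.

```python
import numpy as np

def duty_cycle(T_min, T_max, T_out, T_g, tau):
    """Traditional thermostat off/on times, Eqs. (6) and (7)."""
    t_off = tau * np.log((T_out - T_min) / (T_out - T_max))
    t_on = tau * np.log((T_max + T_g - T_out) / (T_min + T_g - T_out))
    return t_off, t_on

def average_temperature(T0, T_set_avg, tau, dt, steps):
    """Iterate Eq. (12); the gap to T_set_avg contracts by exp(-dt/tau) per step (Eq. (13))."""
    a = 1.0 - np.exp(-dt / tau)
    T = [T0]
    for _ in range(steps):
        T.append(T[-1] + a * (T_set_avg - T[-1]))
    return np.array(T)

t_off, t_on = duty_cycle(72.0, 74.0, 93.0, 40.0, 20.0)
traj = average_temperature(T0=85.0, T_set_avg=73.0, tau=20.0, dt=1.0, steps=120)

# Eq. (15): number of steps until the deviation drops below eps
eps = 0.1
k_min = (20.0 / 1.0) * np.log(abs(73.0 - 85.0) / eps)
```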
We say that the system is in Steady State Thermal Equilibrium (SSTE) when the average room temperature is within a sufficiently small neighbourhood of T^{ave}_{set}. If the system is in SSTE at time k*, then the system will be in SSTE for k ≥ k* as long as we provide m = N_c \frac{T_{out} - T^{ave}_{set}}{T_g} packets at each interval.

According to (15), the convergence speed depends on T^{ave}_0 and \tau. If these two parameters do not provide quick convergence within a few steps (T^{ave}_0 being large in a warm load pick up process), we can adjust the number of packets as a function of the average temperature deviation T^{ave}_k - T^{ave}_{set} at time k. Let the modified number of packets be given by

m = N_c \frac{T_{out} - T^{ave}_{set}}{T_g} \left[ 1 + g (T^{ave}_k - T^{ave}_{set}) \right],   (16)

where g is a non-negative coefficient. In this case, for any \varepsilon > 0 we can similarly prove that after k' steps the deviation of the average room temperature from T^{ave}_{set} is smaller than \varepsilon, with k' satisfying

k' > \frac{\ln (|T^{ave}_{set} - T^{ave}_0| / \varepsilon)}{-\ln [1 - (1 - e^{-\Delta t/\tau}) G]},   (17)

where G = 1 + g (T_{out} - T^{ave}_{set}) can be understood as the convergence gain parameter. Comparing (17) with (15), we have k' < k for the same \Delta t since G > 1. The larger the value of G (or g), the quicker the convergence. If m in (16) is not an integer, we can choose the ceiling ⌈m⌉ as the number of packets scheduled. The proof remains valid under this choice.

Theorem 1 indicates that the average consumption is proportional to the total population N_c by the coefficient \frac{T_{out} - T^{ave}_{set}}{T_g}. The physical meaning of this coefficient is the thermostat mean status. Define

s_{on} = \frac{T_{out} - T^{ave}_{set}}{T_g}, \quad s_{off} = 1 - s_{on},   (18)

representing the mean on-status and off-status of the thermostat. These two variables will be used in the second theorem for the steady state analysis of the PDLC. Note that an essential implicit assumption is that \frac{T_{out} - T^{ave}_{set}}{T_g} < 1, i.e., there is enough cooling capacity to serve the consumer population.
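As a quick numeric check of Equation (18) under the Section VIII parameters (a sketch, not the authors' code):

```python
T_out, T_set_avg, T_g = 93.0, 73.0, 40.0
s_on = (T_out - T_set_avg) / T_g       # mean thermostat on-status, Eq. (18)
s_off = 1.0 - s_on
assert s_on < 1.0                      # enough cooling capacity to serve the pool
print(s_on, s_off)                     # 0.5 0.5: half the pool is on at any time
```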
V. STEADY STATE OPERATION OF THE PDLC

When no control is applied, each air conditioner will operate according to its own duty cycle as described in Sec. III. All the room temperatures are controlled around their respective set points, and the average room temperature is approximately equal to the average room set point, namely T^{ave}_k ≈ T^{ave}_{set}. From the first theorem, the system will evolve into SSTE within a few steps when the PDLC is applied. We say that the system is in steady state at time k if it is in SSTE and T^i_k ∈ (T^i_{min}, T^i_{max}) for all i.

When the PDLC solution is applied in steady state, consumers in each room have the freedom to choose the set point to be whatever they want. After the set point is given, the operator will choose the comfort band for consumers around their preferred setting. The comfort band may be large or small, depending on the outside temperature and the energy we have purchased a day ahead. It is a compromise in the PDLC that consumers allow the operator to calculate the comfort band in order to achieve smoother consumption. Denote the comfort band for room i around T^i_{set} by (T^i_{min}, T^i_{max}) = (T^i_{set} - \Delta_2, T^i_{set} + \Delta_1), with \Delta = \Delta_1 + \Delta_2 being a fixed value; namely, we provide a fixed-width comfort band for all the consumers. Define

T^i_{cr} = \frac{T^i_{max} - a T_{out}}{1 - a}   (19)

as the critical temperature point of room i. The physical meaning of T^i_{cr} is the following: if room i's temperature exceeds T^i_{cr} at time k, then it needs a packet at time k; otherwise its room temperature will exceed T^i_{max} at time k+1. The following two lemmas provide restrictions on how we choose \Delta_1 and \Delta_2. The first lemma provides a condition under which the temperature of room i will not exceed T^i_{max} for any i, and the second lemma provides a condition under which the temperature of room i will not go below T^i_{min} for any i.

Lemma 1. Assume the system is in SSTE, and T^i_{k*} ∈ (T^i_{min}, T^i_{max}) for all i at time k*. If we provide m packets, and \Delta and \Delta_2 have been chosen to satisfy

\frac{\Delta_2}{\Delta} < \frac{m+1}{N_c},   (20)

then there exists \delta > 0 such that T^i_{k*+1} < T^i_{max} for all i with any packet length \Delta t ∈ (0, \delta).

Proof: If T^r_{k*+1} ≥ T^r_{max} for some room r, then we have at least m+1 rooms with temperature beyond their critical point at time k*. Enumerate the m+1 (or more) consumers whose room temperature satisfies T^{i_j}_{k*} ≥ T^{i_j}_{cr} at time k*: S = {i_1, ..., i_{m+1}}. The remaining N_c - m - 1 (or fewer) rooms' temperatures are greater than T^i_{min}. The average room temperature lower bound at time k* is given by

T^{low}_{k*} = \frac{1}{N_c} \left[ \sum_{i_j \in S} T^{i_j}_{cr} + \sum_{i_j \notin S} T^{i_j}_{min} \right].   (21)

We have

T^{low}_{k*} - T^{ave}_{k*} = \frac{1}{N_c} \left[ \sum_{i_j \in S} T^{i_j}_{cr} + \sum_{i_j \notin S} T^{i_j}_{min} - \sum_{i=1}^{N_c} T^i_{set} \right]
= \frac{1}{N_c} \left[ \sum_{i_j \in S} \frac{T^{i_j}_{max} - a T_{out}}{1-a} - \sum_{i_j \in S} T^{i_j}_{set} - (N_c - m - 1)\Delta_2 \right]   (22)
\propto \left[ \sum_{i_j \in S} T^{i_j}_{max} - (m+1) T_{out} \right] - \left[ \sum_{i_j \in S} T^{i_j}_{min} + N_c \Delta_2 - (m+1) T_{out} \right] e^{-\Delta t/\tau}.

The first equality is derived from T^{ave}_{k*} = T^{ave}_{set}, namely at time k* in SSTE the average room temperature is equal to the average temperature set point. The second equality is derived from T^i_{min} = T^i_{set} - \Delta_2 and (19). The last proportionality is derived by plugging in a = 1 - e^{-\Delta t/\tau} from (2). If we choose \Delta and \Delta_2 to satisfy (20), then

\frac{(m+1) T_{out} - \sum_{i_j \in S} T^{i_j}_{max}}{(m+1) T_{out} - \sum_{i_j \in S} T^{i_j}_{min} - N_c \Delta_2} < 1.   (23)

Note that the above inequality is strict, so there exists \delta > 0 such that

\frac{(m+1) T_{out} - \sum_{i_j \in S} T^{i_j}_{max}}{(m+1) T_{out} - \sum_{i_j \in S} T^{i_j}_{min} - N_c \Delta_2} = e^{-\delta/\tau}.   (24)

Letting \Delta t = \delta in (22), we have T^{low}_{k*} = T^{ave}_{k*}. Since (22) is monotonically decreasing as a function of \Delta t, for packet length \Delta t ∈ (0, \delta) we will have T^{low}_{k*} - T^{ave}_{k*} > 0. Namely, the average room temperature lower bound is greater than the average room temperature, which is a contradiction. We must have T^i_{k*+1} < T^i_{max} for all i. □

Lemma 2. Assume the system is in SSTE, and T^i_{k*} ∈ (T^i_{min}, T^i_{max}) for all i at time k*. If we provide m packets, and \Delta and \Delta_1 have been chosen to satisfy

\frac{\Delta_1}{\Delta} < \frac{N_c - m + 1}{N_c},   (25)

then there exists \gamma > 0 such that T^i_{k*+1} > T^i_{min} for all i with packet length \Delta t ∈ (0, \gamma).

Proof: The proof is similar to lemma 1. We first assume T^r_{k*+1} ≤ T^r_{min}, then derive an average temperature upper bound T^{upp}_{k*} at time k* which is smaller than T^{ave}_{k*} to obtain a contradiction. We omit the details. □

Based on the above two lemmas, we provide the following theorem for the steady state operation of the PDLC.

Theorem 2. Assume that the system is in SSTE at time k*, and T^i_{k*} ∈ (T^i_{min}, T^i_{max}) for all i. If we provide m = s_{on} N_c packets over time and choose \Delta_1, \Delta_2 such that

\frac{\Delta_1}{\Delta} = \frac{N_c - m}{N_c} = s_{off}, \quad \frac{\Delta_2}{\Delta} = \frac{m}{N_c} = s_{on},   (26)

then T^i_k ∈ (T^i_{min}, T^i_{max}) for all i and k ≥ k* + 1 with packet length \Delta t ∈ (0, min{\delta, \gamma}).
Proof: Clearly (26) satisfies (20) and (25), and with packet length \Delta t ∈ (0, min{\delta, \gamma}) both lemma 1 and lemma 2 stand. We will have T^i_{k*+1} ∈ (T^i_{min}, T^i_{max}) for all i. Since we provide m = s_{on} N_c packets at time k*, the system is also in SSTE at time k* + 1. By mathematical induction we can prove that T^i_k ∈ (T^i_{min}, T^i_{max}) for all i and k ≥ k* + 1. □

Remark 1. As the comfort band \Delta → 0, we have \Delta_1 → 0, \Delta_2 → 0, and T^i_{min} ≈ T^i_{max} for all i. According to (24) we must have \Delta t → 0, which means we switch packets at increasingly high frequencies. In this case, individual room temperatures will stay at individual room set points after time k > k* once T^i_{k*} ≈ T^i_{set} at time k* for all i. This means that the width of the temperature band can be made to approach zero by letting the packet length approach zero. In actual implementation, there are practical limits on the minimum acceptable value of \Delta t, say 30 seconds or 1 minute, since the air conditioning unit cannot be switched on and off at an arbitrary frequency. Hence, convergence is to the comfort band and not to the actual set point.

Remark 2. From (26), \Delta_1 = s_{off} \Delta and \Delta_2 = s_{on} \Delta. When s_{on} > s_{off}, we have \Delta_2 > \Delta_1. This can be explained by the intuition that since we are providing packets to more than half of the consumers (s_{on} > 0.5), it is more likely to have consumers being over-cooled. Thus we set a larger value of \Delta_2 to avoid such an occurrence. Similarly, when s_{on} < s_{off}, we set a larger value of \Delta_1 to avoid consumers being over-warmed.

Remark 3. Based on the weather prediction, the building would purchase a certain number of packets a day ahead. In real time, the number of packets may not be enough if the predicted temperature is lower than what is actually realized. With the PDLC solution, the operator does not need to purchase additional energy from the real-time market when the price is high. The operator can make packets switch more frequently to guarantee temperature control. In such cases, the average room temperature will converge to another value within the comfort band.

Remark 4. The packet length above is a theoretical value to guarantee temperature control in steady state. In the proof we focus on the worst case, when initially at time k* the temperatures of many rooms are in the vicinity of their maximum or minimum comfort boundary. In practice, the initial temperatures will be distributed more evenly across the comfort band. In such cases, the practical packet length can be larger than the theoretical value.

Remark 5. In our model we assume that the operator has access to all the temperature information within the building, and that such information is continuous. In a companion technical report [11], we assume that the operator must act on more restricted information. In this model, the appliance pool operator does not have complete and continuous access to appliance information, but instead receives requests for electricity that appliances send based on their own sensor readings. The operator receives a packet request (withdrawal) from room i when its room temperature reaches T^i_{max} (T^i_{min}). The total number of available packets is limited and equal to the expected average consumption. Packet supply is modelled as a multi-server queuing system with fixed service time (packet length). In a stochastic simulation, at certain times consumers have to wait to be served, and at other times the total number of packets cannot be fully used; see Fig. 2. This indicates that continuous temperature information and control by an appliance pool operator results in a better control solution than binary information.

Fig. 2. Number of packets consumed and waiting consumers.
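The steady-state quantities of Theorem 2 and Equation (19) are straightforward to compute; the following sketch (using the Section VIII parameters, not the authors' code) evaluates the band split and the per-room critical temperature.

```python
import numpy as np

delta, s_on = 2.0, 0.5                               # band width T_max - T_min and duty fraction
delta1, delta2 = (1.0 - s_on) * delta, s_on * delta  # Eq. (26): Delta1 = s_off*Delta, Delta2 = s_on*Delta

def critical_temperature(T_max, T_out, tau, dt):
    """Eq. (19): rooms hotter than T_cr need a packet now, or they exceed T_max next step."""
    a = 1.0 - np.exp(-dt / tau)
    return (T_max - a * T_out) / (1.0 - a)

T_cr = critical_temperature(T_max=74.0, T_out=93.0, tau=20.0, dt=1.0)
```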
VI. FROM SSTE TO STEADY STATE

The final question is how we start from SSTE and find a packet allocation mechanism such that at time k* we can start at T^i_{k*} ∈ (T^i_{min}, T^i_{max}) for all i. According to the discrete time thermal dynamics,

T_{k+1} = T_k + a (T_{out} - T_k - u_k T_g)
= T_k + (1 - e^{-\Delta t/\tau})(T_{out} - T_k - u_k T_g)
= T_k + (1 - (1 - \Delta t/\tau + o(\Delta t)))(T_{out} - T_k - u_k T_g)
≈ T_k (1 - \Delta t/\tau) + \frac{\Delta t}{\tau}(T_{out} - u_k T_g),   (27)

where the third equality and fourth approximation are by Taylor series expansion for small packet length \Delta t. By a similar derivation we have

T_{k+2} ≈ T_{k+1}(1 - \Delta t/\tau) + \frac{\Delta t}{\tau}(T_{out} - u_{k+1} T_g)
= T_k (1 - 2\Delta t/\tau) + \frac{\Delta t}{\tau}(2 T_{out} - (u_k + u_{k+1}) T_g),   (28)

where the second equality is obtained by plugging in (27) and ignoring terms of o(\Delta t) for small \Delta t. For N intervals, we have

T_{k+N} = T_k (1 - N\Delta t/\tau) + \frac{\Delta t}{\tau}\left(N T_{out} - \sum_{i=0}^{N} u_{k+i} T_g\right).   (29)

Denote

n = \sum_{i=0}^{N} u_{k+i}   (30)

as the number of packets received within N periods; then the temperature at time k + N is given by

T_{k+N} = T_k (1 - N\Delta t/\tau) + \frac{\Delta t}{\tau}(N T_{out} - n T_g).   (31)

Having discussed the discrete time temperature evolution, we propose the following theorem to guarantee that, if we start from SSTE, then there exists a packet allocation solution that satisfies the assumptions of theorem 2.

Theorem 3. If the aggregate system is in SSTE at time k (per the conclusion of Theorem 1), let n_i denote the number of packets received by room i over the next N successive time intervals of length \Delta t. There exists a choice of packet allocation {n_1, n_2, ..., n_{N_c}} such that each room temperature is within the consumer's designated comfort band at time k + N, that is, T^i_{k+N} ∈ (T^i_{min}, T^i_{max}), with the total of allocated packets satisfying

\sum_{i=1}^{N_c} n_i = m N.   (32)

Proof: According to (31), after a total of n_i packets consumed over N successive periods starting at time k, the temperature in room i at time k + N is given by

T^i_{k+N} = T^i_k (1 - N\Delta t/\tau) + \frac{\Delta t}{\tau}(N T_{out} - n_i T_g).   (33)

The allowable choice of n_i such that T^i_{k+N} ∈ (T^i_{set} - \Delta_1, T^i_{set} + \Delta_2) is given by

\frac{(T^i_k - T^i_{set} + \Delta_1)\tau + N\Delta t (T_{out} - T^i_k)}{\Delta t\, T_g} > n_i > \frac{(T^i_k - T^i_{set} - \Delta_2)\tau + N\Delta t (T_{out} - T^i_k)}{\Delta t\, T_g}.   (34)

In order to have at least one integer n_i within the bounds above, we need to have

\frac{(T^i_k - T^i_{set} + \Delta_1)\tau + N\Delta t (T_{out} - T^i_k)}{\Delta t\, T_g} - \frac{(T^i_k - T^i_{set} - \Delta_2)\tau + N\Delta t (T_{out} - T^i_k)}{\Delta t\, T_g} ≥ 1,   (35)

which can be achieved with a packet length

\Delta t ≤ \frac{(\Delta_1 + \Delta_2)\tau}{T_g}.   (36)

We introduce the floor and ceiling operators ⌊·⌋, ⌈·⌉. Let

\alpha^i_k = \left\lfloor \frac{(T^i_k - T^i_{set} + \Delta_1)\tau + N\Delta t (T_{out} - T^i_k)}{\Delta t\, T_g} \right\rfloor, \quad \beta^i_k = \left\lceil \frac{(T^i_k - T^i_{set} - \Delta_2)\tau + N\Delta t (T_{out} - T^i_k)}{\Delta t\, T_g} \right\rceil;   (37)

then n_i can be chosen from the integers between \alpha^i_k and \beta^i_k. If the following inequality holds,

\sum_{i=1}^{N_c} \alpha^i_k ≥ m N ≥ \sum_{i=1}^{N_c} \beta^i_k,   (38)

then there exists a choice of packet allocation {n_1, n_2, ..., n_{N_c}} such that (34) holds and

\sum_{i=1}^{N_c} n_i = m N.   (39)

Note that

\sum_{i=1}^{N_c} \alpha^i_k ≥ \sum_{i=1}^{N_c} \left( \frac{(T^i_k - T^i_{set} + \Delta_1)\tau + N\Delta t (T_{out} - T^i_k)}{\Delta t\, T_g} - 1 \right)
= \frac{(\sum_{i=1}^{N_c} T^i_k - \sum_{i=1}^{N_c} T^i_{set} + N_c \Delta_1)\tau + N\Delta t (N_c T_{out} - \sum_{i=1}^{N_c} T^i_k)}{\Delta t\, T_g} - N_c
= \frac{N N_c (T_{out} - T^{ave}_{set})}{T_g} + N_c \left( \frac{\Delta_1 \tau}{\Delta t\, T_g} - 1 \right)
= m N + N_c \left( \frac{\Delta_1 \tau}{\Delta t\, T_g} - 1 \right) ≥ m N,   (40)

and this holds as long as we choose \Delta t such that \frac{\Delta_1 \tau}{\Delta t\, T_g} ≥ 1. In the derivation above, the third equality is obtained from the SSTE at time k satisfying

\sum_{i=1}^{N_c} T^i_k = \sum_{i=1}^{N_c} T^i_{set} = N_c T^{ave}_{set}.   (41)

With a similar derivation, a packet length \Delta t such that \frac{\Delta_2 \tau}{\Delta t\, T_g} ≥ 1 will guarantee the second inequality in (38). To summarize, a packet length satisfying

\Delta t ≤ min\{\Delta_1, \Delta_2\} \frac{\tau}{T_g}   (42)

will make (38) hold. This ends the proof of theorem 3. □

Remark. According to (42), the upper bound of the packet length is directly proportional to \tau and inversely proportional to T_g. The intuition is that a large value of \tau impedes, and a large T_g facilitates, the thermal transmission, which allows a larger and requires a smaller packet length respectively.

The remaining issue is to assign m packets in each period. Denote by a_{i,k} the binary variable representing the packet assignment at time k for room i. Up to time k + j, define

n_i(k+j) = n_i - \sum_{l=k}^{k+j} a_{i,l}   (43)

as the remaining number of packets needed by room i until time k + N. A simple allocation algorithm works as follows: at time k we allocate packets to the m rooms with largest n_i(k). Let a_{i,k} = 1 if a packet is allocated and 0 otherwise. Use (43) to update n_i(k+1) for all i. Repeating this allocation procedure until the end of interval k + N guarantees m allocations in each period. We first prove the following inequality for n_i(k+j):

0 ≤ n_i(k+j) ≤ N - j.   (44)

We prove this by induction. Note that for j = l = 0 it is apparently true. Also, at time k + l,

\sum_{i=1}^{N_c} n_i(k+l) = m N - \sum_{j=0}^{l} \sum_{i=1}^{N_c} a_{i,k+j} = m(N - l).   (45)

For j = l + 1, we prove by contradiction. If there exists a room i* such that n_{i*}(k+l) ≤ N - l and n_{i*}(k+l+1) > N - l - 1, then n_{i*}(k+l) = N - l. It also indicates that room i* does not get a packet and there are at least m rooms, indexed by i_j, j = 1, ..., m, other than i*, with n_{i_j}(k+l) = N - l, that do get packets. Then

\sum_{i=1}^{N_c} n_i(k+l) ≥ \sum_{j=1}^{m} n_{i_j}(k+l) + n_{i*}(k+l) = (m+1)(N-l),   (46)

which contradicts (45). So we will have n_i(k+l+1) ≤ N - l - 1 for j = l + 1 and all i. To show that n_i(k+l+1) ≥ 0 for all i, suppose that n_{i*}(k+l+1) < 0; this indicates that n_{i*}(k+l) = 0 and room i* gets a packet. Thus there are at most m - 1 rooms, indexed by i_j, j = 1, ..., m-1, with positive value n_{i_j}(k+l) > 0. Then

\sum_{i=1}^{N_c} n_i(k+l) = \sum_{j=1}^{m-1} n_{i_j}(k+l) + n_{i*}(k+l) ≤ (m-1)(N-l),   (47)

contradicting (45) again. So we will have n_i(k+l+1) ≥ 0 for j = l + 1 and all i. Using mathematical induction, (44) holds for all i = 1, ..., N_c and j = 0, ..., N. Then for j = N and all i, we will have

n_i(k+N) = 0.   (48)

Namely, all the rooms will have received exactly the packets they need, and T^i_{k+N} ∈ (T^i_{min}, T^i_{max}) for all i, with m packets allocated in each period. The intuition of this allocation is to provide packets to the m rooms that have the largest temperature deviation above their target, namely at time k + j the m rooms with largest n_i(k+j) receive packets, for j = 0, ..., N.

To summarize, theorem 1 guarantees that the system will evolve into SSTE, theorem 3 guarantees that starting from SSTE we have an allocation solution such that we can have T^i_{k*} within the comfort band of room i for all i, and theorem 2 guarantees temperature control after the allocation. The three theorems complete the overall PDLC mechanism.
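The largest-remaining-need-first allocation from the proof of Theorem 3 is easy to implement; below is a minimal Python sketch (the function name and example numbers are illustrative, not from the paper).

```python
import numpy as np

def allocate_packets(n_needed, m, N):
    """Largest-remaining-need-first allocation from the proof of Theorem 3.
    n_needed[i] is the total number of packets room i must receive over the
    next N intervals; m packets are handed out per interval. Returns a
    (N, num_rooms) 0/1 schedule a[j, i]."""
    n = np.array(n_needed, dtype=int).copy()
    schedule = np.zeros((N, len(n)), dtype=int)
    for j in range(N):
        # give this interval's m packets to the m rooms with largest n_i(k+j)
        winners = np.argsort(-n)[:m]
        schedule[j, winners] = 1
        n[winners] -= 1               # update n_i(k+j+1) via Eq. (43)
    return schedule

# Example: 4 rooms needing (3, 2, 2, 1) packets over N = 4 intervals, m = 2 per interval.
sched = allocate_packets([3, 2, 2, 1], m=2, N=4)
assert sched.sum() == 8 and (sched.sum(axis=1) == 2).all()  # Eq. (32) and m per period
```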
VII. ROBUSTNESS ANALYSIS OF THE PDLC

While Ihara and Schweppe's model [9] is deterministic, we have also considered temperature disturbances to get a thermal model that reflects uncertainty. Temperature disturbance in real life may come from the inaccuracy of sensors, the unpredictability of consumers, etc. The revised temperature dynamics are therefore given by

\frac{dT}{dt} = \frac{T_{out} - T - T_g u + \varepsilon(t)}{\tau},   (49)

where \varepsilon(t) is a bounded thermal stochastic disturbance uniformly distributed on [-\bar{\varepsilon}, \bar{\varepsilon}]. We investigate the transient and steady state operation of the PDLC solution under this model of disturbance to illustrate the robustness of the PDLC. The discrete version of the model becomes

T_{k+1} = (1 - a) T_k + a T_{out} - a T_g u_k + a \varepsilon_k.   (50)

Repeating the derivation in theorem 1, the average room temperature evolution from time k to k+1 is given by

T^{ave}_{k+1} = T^{ave}_k + a (T^{ave}_{set} - T^{ave}_k) + \frac{a}{N_c} \sum_{i=1}^{N_c} \varepsilon^i_k.   (51)

Note that the term \frac{a}{N_c} \sum_{i=1}^{N_c} \varepsilon^i_k is bounded between [-a\bar{\varepsilon}, a\bar{\varepsilon}]. When the packet length \Delta t is small, a approaches zero, and this makes the disturbance term approach zero. Then the average room temperature will still converge to T^{ave}_{set}. As for the steady state operation, we will have the same comfort band selection as in Theorem 2, namely \Delta_1 = s_{off}\Delta, \Delta_2 = s_{on}\Delta, with a difference in the boundary of the packet length selection. In the model with disturbances, we can similarly derive the contingent packet lengths \delta' and \gamma' as in lemmas 1 and 2. For example, the value of \delta' will satisfy

\frac{(m+1)(T_{out} + \bar{\varepsilon}) - \sum_{i_j \in S} T^{i_j}_{max}}{(m+1)(T_{out} + \bar{\varepsilon}) - \sum_{i_j \in S} T^{i_j}_{min} - N_c \Delta_2} = e^{-\delta'/\tau}.   (52)

Compared with (24), the only difference is that the term T_{out} in (24) is replaced by T_{out} + \bar{\varepsilon}. Hence the disturbance in (49) can be understood as the uncertainty introduced by the outside temperature. Also, the above \delta' is smaller than the \delta in lemma 1. This is no surprise, since the existence of uncertainty forces us to switch packets more frequently.

VIII. SIMULATION

A. Air Conditioner Temperature Control

We simulate the air conditioner temperature control process to verify the theoretical results. Environmental parameters are T_g = 40, T_{out} = 93, \tau = 20, N_c = 100, \bar{\varepsilon} = 10. The consumers' preferred set point is T^i_{set} = 73 for all i. After calculation we choose T_{max} = 74, T_{min} = 72, \Delta t = 1.

Fig. 3 shows the process of warm load pick up. Fig. 3(a) shows that the average room temperature converges to the set point when we apply the number of packets at time k as a function of T^{ave}_k - T^{ave}_{set}, which verifies theorem 1. Compared with Fig. 3(b), where no control is applied, the consumption oscillation under the PDLC solution is reduced by a large amount after the system evolves into SSTE. The oscillation magnitude in Fig. 3(b) continues to exist if we simulate for a longer time.

Fig. 3. Loads start outside the comfort zone: (a) warm load pickup process with the PDLC; (b) warm load pickup process without control.

Fig. 4 shows the steady state process, where all the rooms have their initial temperatures randomly distributed within their comfort bands. We see two main advantages of our PDLC solution. First, the maximum and minimum room temperatures are controlled within the comfort band in steady state, which cannot be achieved without control, since then the disturbance drives the temperature outside the comfort band. Second, the consumption process is smoother with the PDLC solution than in the stochastic uncontrolled case.

Fig. 4. Loads start in the comfort zone: (a) steady state operation with the PDLC; (b) steady state operation without control.
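A warm-load-pickup run of this kind is short to script. The sketch below uses the paper's parameters and the disturbed DT model (50) with Theorem 1's packet budget; dispatching packets to the warmest rooms is our own simple rule here, consistent with the allocation intuition but not the paper's exact scheduler.

```python
import numpy as np

rng = np.random.default_rng(0)
T_g, T_out, tau, N_c, eps_bar, dt = 40.0, 93.0, 20.0, 100, 10.0, 1.0
T_set, steps = 73.0, 120
a = 1.0 - np.exp(-dt / tau)

T = np.full(N_c, 85.0)                           # warm start: all rooms above the band
avg_history = []
for k in range(steps):
    m = int(np.ceil(N_c * (T_out - T_set) / T_g))  # Theorem 1's fixed packet budget
    u = np.zeros(N_c)
    u[np.argsort(-T)[:m]] = 1.0                  # serve the m warmest rooms
    eps = rng.uniform(-eps_bar, eps_bar, N_c)
    T = (1 - a) * T + a * T_out - a * T_g * u + a * eps   # Eq. (50)
    avg_history.append(T.mean())                 # should settle near T_set = 73
```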
B. Multiple Appliances Simulation

Consider the simulation of multiple appliances. The controllable thermostatic loads are air conditioners and refrigerators. We also add uncontrollable loads, such as lighting and plug-in devices. The thermal characteristics of the refrigerator are similar to those of the air conditioner. The refrigerator parameters are given by T_{set} = 35, T_g = 75, T_{out} = 73, \tau = 185, and we choose T_{max} = 38, T_{min} = 32. t_{on} and t_{off} are around 20 minutes according to (6) and (7), which is a typical refrigerator duty cycle [10]. We assume there are 60 refrigerators, each consuming around 600 watts of power. The air conditioners consume around 3 kW each. There is also an industrial chiller whose steady state consumption varies little, uniformly distributed between [135, 145] kW. Uncontrollable loads are uniformly distributed between [180, 200] kW.

Table I shows the comparison between the PDLC solution and the case when no control is applied. We find that the standard deviation of consumption under the PDLC solution is nearly half of that without control. Also, the maximum electric usage above the average is reduced by nearly 50%.

TABLE I: COMPARISON OF CONSUMPTION STATISTICS

             Mean     Std Dev   Maximum   Minimum
PDLC         725.86   8.18      744.09    709.92
No Control   724.11   15.06     761.43    687.23

IX. CONCLUSIONS AND FUTURE WORK

This paper proposes an innovative PDLC solution for demand side management. We have discussed a thermal dynamic model of typical thermostatic appliances and derived a mathematical expression for its duty cycle. Three theorems are proposed to illustrate the overall PDLC solution. The first theorem proves the convergence of the average room temperature to the average room set point. The second theorem provides a comfort band choice such that we can guarantee effective temperature control in steady state. The third theorem builds the bridge between the first two theorems. Simulation shows that the PDLC solution can provide comfortable temperature control with minimum consumption oscillation, and reduce consumption peaks at the same time. Future research will compare the performance of the PDLC as described here with comparable distribution control approaches using market-based signaling. Renewable energy sources will be included, and the dynamics of an appliance pool operator buying and selling resources under different communication protocols will be studied.

REFERENCES
[1] S. C. Lee, S. J. Kim, and S. H. Kim, "Demand Side Management With Air Conditioner Loads Based on the Queuing System Model", IEEE Trans. Power Syst., Vol. 26, No. 2, pp. 661-668, May 2011.
[2] B. Ramanathan and V. Vittal, "A Framework for Evaluation of Advanced Direct Load Control With Minimum Disruption", IEEE Trans. Power Syst., Vol. 23, No. 4, pp. 1681-1688, Nov. 2008.
[3] J. Baillieul and P. Antsaklis, "Special Issue on the Technology of Networked Real-Time Systems", Proceedings of the IEEE, 95:1, pp. 5-8, Jan. 2007.
[4] The Alexandra Institute, http://www.readwriteweb.com/archives/internet-of-things/.
[5] N. Lu and D. P. Chassin, "A State-Queueing Model of Thermostatically Controlled Appliances", IEEE Trans. Power Syst., Vol. 19, No. 3, pp. 1666-1673, Aug. 2004.
[6] N. Lu, D. P. Chassin, and S. E. Widergren, "Modeling Uncertainties in Aggregated Thermostatically Controlled Loads Using a State Queueing Model", IEEE Trans. Power Syst., Vol. 20, No. 2, pp. 725-733, May 2005.
[7] Y. Hsu and C. Su, "Dispatch of Direct Load Control Using Dynamic Programming", IEEE Trans. Power Syst., Vol. 6, No. 3, pp. 1056-1061, Aug. 1991.
[8] T. Lee, H. Wu, Y. Hsiao, P. Chao, F. Fang, and M. Cho, "Relaxed Dynamic Programming for Constrained Economic Direct Loads Control Scheduling", International Conference on Intelligent Systems Applications to Power Systems, Toki Messe, Niigata, pp. 1-6, 2007.
[9] S. Ihara and F. C. Schweppe, "Physically Based Modeling of Cold Load Pickup", IEEE Trans. Power App. Syst., Vol. PAS-100, pp. 4142-4150, Sep. 1981.
[10] J. Cavallo and J. Mapp, "Targeting Refrigerators for Repair or Replacement", Proceedings of 2000 ACEEE Summer Study on Energy Efficiency in Buildings, Wisconsin, Mar. 2000.
[11] B. Zhang and J. Baillieul, "A Novel Electric Packet Switching Framework Based on Queuing System Analysis", technical report, 2011.
ai_researcher
2
Scissorhands_Exploiting_the_Persistence_of_Importance_Hypothesis_for_LLM_KV_Cache_Compression_at_Test_Time.pdf
arXiv:2401.06187v3 [cs.LG] 17 Jul 2024

Scissorhands: Scrub Data Influence via Connection Sensitivity in Networks
Jing Wu and Mehrtash Harandi
Department of Electrical and Computer Systems Engineering, Monash University, Australia
{jing.wu1, mehrtash.harandi}@monash.edu

Abstract. Machine unlearning has become a pivotal task for erasing the influence of data from a trained model. It adheres to recent data regulation standards and enhances the privacy and security of machine learning applications. In this work, we present a new machine unlearning approach, Scissorhands. Initially, Scissorhands identifies the most pertinent parameters in the given model relative to the forgetting data via connection sensitivity. By reinitializing the most influential top-k percent of these parameters, a trimmed model for erasing the influence of the forgetting data is obtained. Subsequently, Scissorhands fine-tunes the trimmed model with a gradient projection-based approach, seeking parameters that preserve information on the remaining data while discarding information related to the forgetting data. Our experimental results, conducted across image classification and image generation tasks, demonstrate that Scissorhands showcases competitive performance when compared to existing methods. Source code is available at https://github.com/JingWu321/Scissorhands.

Keywords: Machine unlearning · Connection sensitivity · Diffusion model

Warning: This paper contains explicit sexual imagery that may be offensive.

1 Introduction

In this work, we aim to propose an effective machine unlearning method. Under data regulations like the European Union General Data Protection Regulation (GDPR) [53] and the California Consumer Privacy Act (CCPA) [22], all users are granted the right to be forgotten. In machine learning, these legal provisions empower data owners with the right not only to withdraw their data from trained models but also to ensure that their data's influence on these models is erased. The most direct approach to accomplish this objective is to retrain the model from scratch, excluding the data requested for deletion from the training process. Retraining from scratch is typically considered the gold standard in the field of machine unlearning [12, 52]. Yet, this poses a challenge, as numerous in-production models require prolonged training periods and substantial computing resources. While retraining is feasible, it is often impractical. Consequently, the proposal and development of efficient approximate unlearning methods [13, 23, 26, 50, 51] have become essential.

Currently, most approximate unlearning techniques achieve forgetting by adding normally distributed noise to the parameters [19-21] or by estimating the influence of a particular data point on the model's predictions [23, 37, 38, 48] based on the influence function method [10]. Jia et al. [27] demonstrate that model sparsity can help to unlearn and fuse model sparsity into the unlearning process. Fan et al. [12] recently raised concerns regarding the instability of approximate unlearning methods, and highlighted that current machine unlearning approaches, initially crafted for image classification tasks, fall short in effectively tackling the challenges of machine unlearning within the realm of image generation. The authors introduce a novel unlearning method, SalUn, that can effectively perform forgetting in both image classification and generation tasks.
Their unlearning mechanism includes finding salient weights and then fine-tuning these weights using forgetting data assigned random labels. However, studies like [59] show that a model can still learn the true data distribution even when trained on data with random labels. While SalUn has shown state-of-the-art performance, it could potentially memorize information about the forgetting data points, albeit associated with random labels, which can still be undesirable in sensitive applications. In § 4, results also show that SalUn tends to memorize knowledge about the forgetting data.

This work. We present an unlearning approach, Scissorhands, designed to erase data influence in classification models and to eliminate a particular concept from a text-to-image model. To achieve this, inspired by [27], our key insight is to first erase the critical influence of the forgetting data in the model and then relearn. Unlike previous methods that employ the forgetting data with noisy labels as part of relearning, we propose to use an efficient gradient projection method to relearn the critical features and patterns while ensuring the exclusion of influences associated with the forgetting data. Through a series of experiments and evaluations, including classification on the SVHN [39], CIFAR-10 and CIFAR-100 [30], and CelebAMask-HQ [31] datasets, as well as the open-source Stable Diffusion [44] text-to-image model, results demonstrate the viability and effectiveness of our technique in forgetting the influence of random samples, discrete classes, and sensitive content such as nudity.

2 Methodology

In this section, we propose Scissorhands, our unlearning framework that scrubs data from a model by eliminating their influence on model parameters. Throughout the paper, we denote scalars and vectors/matrices by lowercase and bold symbols, respectively (e.g., a, a, and A). We denote a d-dimensional vector of ones by 1_d, and use ⊙ to denote the Hadamard product.

Overview. Consider a model f with parameters θ ∈ R^d trained on a dataset D = {x_i, y_i}_{i=1}^N. Suppose a subset of D, denoted as D_f, is requested for deletion. The remaining data is defined as D_r := D \ D_f. Our objective is to develop an unlearning algorithm that generates a scrubbed model f_u, effectively removing the influence of D_f from the model f while maintaining its utility on D_r.

In Scissorhands, we initially identify critical parameters w.r.t. the forgetting data via connection sensitivity analysis [32, 33]. We then reinitialize the top-k percent of these key parameters, resulting in a trimmed model with parameters θ_t. This step aims to diminish the model's memory of the forgetting data D_f. Nevertheless, this process runs the risk of erasing valuable information from the dataset D_r that we aim to retain. To address this, the next phase of Scissorhands concentrates on restoring the performance of the trimmed model on the remaining data D_r, while ensuring that the influence of the forgetting data D_f remains excluded. We accomplish this balance and obtain the final scrubbed model with parameters θ* through a gradient projection-based approach.

2.1 Trimming

To effectively identify the salient parameters of a network with respect to D_f, one has to establish a good criterion for determining such salient connections. A commonly used criterion is the magnitude of the weights, with those exceeding a certain threshold deemed salient, as suggested in [25].
Our requirement is more nuanced, as we need a measure that specifically discerns saliency based on D_f. Therefore, we adopt a single-shot approach as proposed in [33], which we outline briefly to ensure clarity and completeness in our methodology. Consider

s_j(D) := \mathbb{E}_{x,y \sim D} \left[ \ell(θ; x, y) - \ell((1_d - e_j) ⊙ θ; x, y) \right],   (1)

which measures the influence of parameter j ∈ {1, ..., d} on the model in terms of the empirical risk for a given dataset D. Here, \ell : R^d × X × Y → R_+ is the loss of the model and e_j is the indicator vector of element j (i.e., a binary vector with its j-th component set to one). Note that computing s_j for each j ∈ {1, ..., d} is prohibitively expensive, as it demands d + 1 forward passes through the dataset. An approximation to Equation (1) [25] is of the form

s_j(D) ≈ \mathbb{E}_{x,y \sim D} \left[ \frac{\partial \ell(θ; x, y)}{\partial θ_j} θ_j \right].   (2)

Detailed proofs can be found in Appendix A. Equation (2) defines the sensitivity of parameter j as the average (i.e., expectation) of a product: the gradient of the loss and the current value of the parameter. As such, s_j encodes the loss sensitivity according to the parameter's magnitude. Intuitively, a parameter with a large value that significantly affects the loss, as captured by the gradient, is considered more influential or salient. The form in Equation (2) offers several benefits, most importantly the ability to calculate the sensitivity of all parameters with just a single sweep through the dataset.

To scrub the influence of the forgetting data D_f from the model, we first obtain s_j(D_f), the saliency of the parameters w.r.t. the forgetting data. We then re-initialize the top-k% of the parameters based on their saliency rankings. This process is akin to performing a targeted 'lobotomy' on the model specifically concerning the data D_f, thereby selectively erasing its influence. Unfortunately, this aggressive approach of reinitializing parameters can detrimentally impact the model's performance on the remaining dataset D_r, which we will correct in the next phase of Scissorhands.

Remark 1. In principle, Scissorhands can benefit from any algorithm that can identify important connections w.r.t. D_f. The main reason behind choosing SNIP is its single-shot nature. As presented in the study [12], the gradient of the loss w.r.t. the model parameters over the forgetting data can help identify important parameters w.r.t. D_f and hence help with erasing.

Remark 2. In practice, we have observed that even a small subset of D_f can be sufficient for the trimming. For example, to unlearn a large set of data on CIFAR-100, our algorithm outperforms baselines while utilizing approximately 3% of the forgetting data for trimming.

Remark 3. Initializing the weights of the trimmed neurons can take various forms. Empirically, we observed that initializing with a uniform distribution is particularly effective. Details on the influence of different initialization strategies and the choice of k% trimming on the model's performance will be discussed in § 4.2.
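A minimal PyTorch sketch of the trimming phase follows: it computes the Equation (2) sensitivities on a batch of forgetting data and re-initializes the top-k fraction of parameters. The names `model`, `loss_fn`, and the uniform re-initialization range are illustrative assumptions, and `torch.quantile` is one convenient (if memory-bound) way to pick the threshold.

```python
import torch

def trim(model, loss_fn, x_f, y_f, k=0.05, init_scale=0.1):
    """Re-initialize the top-k fraction of parameters most salient to (x_f, y_f)."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x_f), y_f)
    grads = torch.autograd.grad(loss, params)
    # saliency magnitude of theta_j per Eq. (2): |dL/dtheta_j * theta_j|
    scores = torch.cat([(g * p).abs().flatten() for g, p in zip(grads, params)])
    threshold = torch.quantile(scores, 1.0 - k)
    with torch.no_grad():
        for g, p in zip(grads, params):
            mask = (g * p).abs() >= threshold
            noise = torch.empty_like(p).uniform_(-init_scale, init_scale)
            p.copy_(torch.where(mask, noise, p))   # trimmed model parameters theta_t
    return model
```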
2.2 Repairing

Following the parameter-trimming process, aimed at mitigating the influence of the forgetting data D_f, a challenge presents itself: the potential erasure of crucial information associated with the remaining data D_r. A straightforward solution to recover the model utility is to relearn the model using the remaining data. However, this approach risks biasing the model towards features of the remaining data and inadvertently reintegrating information about the forgetting samples.

In the quest to ensure that models retain essential information from the remaining data D_r while effectively forgetting D_f, our strategy involves maximizing the loss over the forgetting data D_f while concurrently minimizing the loss over the remaining data D_r. Therefore, to achieve the balance between these goals, we propose the objective of an efficient practice for unlearning:

L(θ, D_r, D_f) := L(θ; D_r) - λ L(θ; D_f).   (3)

We aim to optimize Equation (3) w.r.t. θ through gradient descent, ensuring minimal reintroduction of information about D_f. This is particularly challenging when there are similarities between samples in D_f and D_r. To achieve this desideratum, it is vital that updates to the model do not improve (i.e., reduce) L(θ; D_f). To this end, we define g_f = ∇_θ L(θ; D_f) as the gradient direction for D_f and g_o = ∇_θ L(θ, D_r, D_f) as the gradient direction for optimizing Equation (3). The optimal descent direction g ∈ R^d should thus satisfy two criteria: 1. It should exhibit maximum similarity to g_o to ensure swift convergence; the notion of similarity can be captured by ∥g - g_o∥_2^2. 2. It should not align with g_f, to prevent improving L(θ; D_f); this can be mathematically formalized as ⟨g_f, g⟩ ≤ 0. Consequently, after computing g_o = ∇_θ L(θ, D_r, D_f) and g_f = ∇_θ L(θ; D_f) in each iteration, we formulate the following optimization problem to identify the model update direction:

\arg\min_g \frac{1}{2} ∥g - g_o∥_2^2, \quad s.t. \; ⟨g_f, g⟩ ≤ 0.   (4)

This problem may be addressed using the Frank-Wolfe algorithm. Given the high dimensionality of gradients in neural networks, directly addressing the constrained optimization in Equation (4) could become overwhelming. Inspired by [36], we propose to make use of the dual formulation of the problem in Equation (4):

\sup_v \; -\frac{1}{2} g_f^\top g_f \, v^2 + g_o^\top g_f \, v, \quad s.t. \; v ≥ 0.   (5)

Then, the optimal descent direction g is given from the solution v* of Equation (5) as g = g_o - v* g_f. Detailed proofs can be found in Appendix A. By doing so, we ensure a balance between unlearning and retaining utility.

Algorithm 1 The procedure of Scissorhands.
1: procedure Trimming(θ, D_f)
2:   X, y ← {x_i, y_i ∼ D_f | i ∈ [B]}.
3:   // Compute connection sensitivity
4:   Get s_j(θ; X, y), ∀ j ∈ [1, d], using Eq. (2).
5:   // Re-initialization
6:   Re-initialize parameters according to the top-k% values of s_j.
7:   return θ_t
8: end procedure
9: procedure Repairing(θ_t, D_f, D_r)
10:   θ_0 = θ_t.
11:   for e = 0 to E - 1 do
12:     Get L(θ, D_r, D_f) (cf. Eq. (3)).
13:     Get g_o and g_f.
14:     // Get the optimal direction
15:     if ⟨g_o, g_f⟩ > 0 then
16:       Compute v* (cf. Eq. (5)).
17:       g = g_o - v* g_f.
18:       g_o ← g.
19:     end if
20:     θ_{e+1} ← θ_e - η g_o.
21:   end for
22:   return θ* = θ_E.
23: end procedure

Remark 4. The primal problem (cf. Equation (4)) involves optimizing over the high-dimensional space of gradient vectors, while the dual problem (cf. Equation (5)) simplifies this to optimizing over a single scalar variable v ∈ R. This dimensionality reduction can significantly decrease the computational complexity, especially for large-scale neural networks.
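Since the dual in Equation (5) is a one-dimensional concave quadratic in v, v* has a closed form. The following sketch assumes flattened 1-D gradient vectors (in practice one would flatten and unflatten the per-parameter gradients):

```python
import torch

def project_direction(g_o, g_f):
    """Optimal descent direction from Eqs. (4)-(5):
    v* = max(0, <g_o, g_f> / <g_f, g_f>), then g = g_o - v* g_f,
    so the returned direction never aligns with g_f."""
    dot = torch.dot(g_o, g_f)
    if dot <= 0:                  # constraint <g_f, g> <= 0 already holds at g = g_o
        return g_o
    v_star = dot / torch.dot(g_f, g_f)
    return g_o - v_star * g_f
```

The `dot <= 0` branch mirrors the `if ⟨g_o, g_f⟩ > 0` test in Algorithm 1: the projection is only applied when the two objectives actually conflict.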
2.3 Algorithm Description

Algorithm 1 describes the procedure of Scissorhands in detail. We first identify the important parameters w.r.t. the forgetting data in a single shot via connection sensitivity in networks, then re-initialize these parameters to erase the influence of the forgetting data D_f in the given model. To repair the performance of the trimmed model on the remaining data D_r, while ensuring the exclusion of information associated with the forgetting data D_f, we employ the gradient projection-based optimization algorithm. By doing so, we can scrub the influence of the forgetting data while retaining model utility on the remaining data. The loss depicted in Equation (3) can be viewed as a scalarization of a Multi-Objective Optimization (MOO) problem, namely minimizing (L(θ; D_r), -L(θ; D_f))^⊤. A common issue in MOO is gradient conflict, as optimizing one objective could hinder another. Our proposal on gradient projection is designed to address and mitigate the gradient conflict. Empirically (see Table 3), we studied the effect of removing gradient projection.

3 Related Work

The development of efficient machine unlearning methods [2, 3, 11, 17, 18, 20, 23, 27, 28, 37, 40, 42, 45, 48, 51, 58, 61] has gained prominence. Applications of machine unlearning span various domains, including regression tasks [51], federated learning [24, 34, 35, 54, 56], graph neural networks [5, 7], and diffusion models [12, 15, 16, 26, 57, 60, 62], as well as scenarios where training data are not available [8, 50].

Retraining the model from scratch without the forgetting data is typically considered the gold standard unlearning algorithm [12, 52]. However, this is often deemed impractical, as most in-production models require extensive training periods and considerable computing resources. While fine-tuning the model on a new task might lead to forgetting previous knowledge (i.e., catastrophic forgetting) [36], it fails to adequately erase the data influence in the models. Approximate unlearning methods, as such, become attractive alternatives. We will briefly discuss these methods in image classification and image generation.

Unlearning in Classification Models. Most unlearning algorithms are based on the influence function [23, 38, 48] and the Fisher Information Matrix (FIM) [19-21]. Influence function-based methods estimate the influence of particular training data points on the model's predictions, while Fisher unlearning methods assume that the unlearned model and the retrained model are close to each other; Golatkar et al. [20] leverage the FIM and hide the difference between the unlearned and retrained models by adding noise. Golatkar et al. [20] also present an upper bound for the amount of retained information, offering a quantifiable measure of the effectiveness of the unlearning process. Such approximate unlearning methods need to compute the Hessian matrix, or the FIM, w.r.t. the data. This may render them impractical in certain scenarios, such as federated learning, where computational resources are distributed and limited. To mitigate these computational demands, Mehta et al. [37] introduce L-CODEC, a strategy aimed at pre-selecting a subset of parameters, allowing for more efficient computation. Jia et al. [27] recently propose to fuse model sparsity into the unlearning algorithm, helping improve approximate unlearning methods and achieving effective and efficient unlearning.

Concept Erasure in Generative Models. The advent of generative models, particularly those converting text to images, has been a significant milestone in the field of artificial intelligence.
A notable concern is the risk of them being tainted or manipulated [6, 43], resulting in not-safe-for-work (NSFW) generations [46], as these models leverage training data from a wide array of open sources. To address these challenges, data censoring [1, 14, 41, 47] to exclude black-listed images is employed. Studies [15, 46] introduce methods to steer the models away from inappropriate concepts. Heng and Soh [26] recently propose to adopt Elastic Weight Consolidation (EWC) and Generative Replay (GR) to effectively unlearn without access to the training data for a wide range of generative models.

Unlearning across Domains. Recently, Fan et al. [12] highlighted the difficulty of cross-domain applicability of machine unlearning algorithms, i.e., existing machine unlearning methods designed for classification tasks are not effective when applied to image generation. To address this gap, they introduce a novel method, Saliency unlearning (SalUn), which shifts attention from the entire model to target parameters. SalUn achieves state-of-the-art performance and is effective in both image classification and image generation tasks.

In this work, we propose Scissorhands, which achieves improved performance compared to existing methods across various scenarios. Inspired by [27], which applies ℓ1 regularization to simply fuse sparsity into unlearning methods, we identify and re-initialize the key model parameters w.r.t. the forgetting data via connection sensitivity. Concurrently, Fan et al. [12] also explore unlearning via salience scores, determining weight salience via the gradient of the forgetting loss w.r.t. the model parameters, and employing random labeling for model fine-tuning. While there are similarities in our overarching goals, the methodologies present unique perspectives and solutions to the problem of unlearning. Fan et al. [12] focus only on the salient weights, while our mechanism aims to first 'destroy' the key information and then relearn over the whole model. Additionally, using the forgetting data with random labels could potentially lead to memorizing information about the forgetting data points [59]. In contrast, Scissorhands excludes knowledge about the forgetting data via a gradient projection-based approach, resulting in the effective erasure of information about the forgetting data while preserving the knowledge about the retained data.

4 Experimental Evaluation

In this section, we illustrate how Scissorhands effectively eliminates data influence in models. For sample-wise forgetting, where the forgetting data shares the same distribution as the training data, we evaluate Scissorhands on SVHN [39], CIFAR-10 and CIFAR-100 [30]. We further extend our evaluation to CelebAMask-HQ [31], where we target removing the entirety of specific identities. Additionally, our experimentation encompasses the open-source text-to-image model Stable Diffusion v1.4 [44], which is conditioned on CLIP text embeddings through the cross-attention mechanism. Further detailed experimental setups and additional results can be found in Appendix B.
(ii) Fine-tuning (FT) [55]: models that are fine-tuned on Dr, i.e., taking advantage of catastrophic forgetting in neural networks to unlearn. (iii) Gradient ascent (GA) [52]: gradient ascent on Df. This provides a simple way of unlearning by making the performance of the model worse on Df. (iv) Influence unlearning (IU) [29]: utilizes the influence function method to estimate the change in model parameters when transitioning from the unscrubbed model to the retrained model. (v) Boundary shrink (BS) [4] and (vi) Boundary expanding (BE) [4]: shift the decision boundary of the original model to imitate that of the retrained model. (vii) ℓ1-sparse [27]: fine-tuning models on Dr with ℓ1-norm sparse regularization. (viii) Saliency unlearning (SalUn) [12]: adopts weight saliency and random labeling for unlearning.

Metrics. To assess the unlearning algorithms, we employ several metrics: (i) Accuracy: the accuracy of the model on Df (denoted as AccDf), Dr (denoted as AccDr) and the test set (denoted as AccDt). It provides insight into how well the model performs after undergoing the unlearning process. (ii) Membership inference attack (MIA): a standard metric for verifying the unlearning effect. A classifier ϕ is trained using both training data (marked with a label of 1, indicating data points included in training) and test data (marked with a label of 0, indicating data points not seen during training), and subsequently assessed on Df. An effective unlearning method should make it challenging to infer whether a particular sample was part of the training set, mitigating potential privacy breaches. In this context, we define MIA as Pϕ(y = 0 | xf), representing the probability assigned by the classifier ϕ that a given sample xf was not included in the training data. (iii) Avg. Gap [12]: the average of the performance gaps measured in the accuracy-related metrics:

Avg. Gap = (|AccDf − Acc∗Df| + |AccDr − Acc∗Dr| + |AccDt − Acc∗Dt| + |MIA − MIA∗|)/4,

where Acc∗Df, Acc∗Dr, Acc∗Dt and MIA∗ are the metric values of the retrained model. A better performance of an unlearning method corresponds to a lower performance gap with retraining, measuring how close the unlearned model is to the retrained model. (iv) Run-time efficiency (RTE) [27]: evaluates the computational efficiency of the unlearning method. Specifically, we have RTE = T/Tr, where T and Tr denote the time for the unlearning method to obtain the scrubbed model and the time for retraining from scratch with Dr, respectively. An efficient unlearning method should aim for minimal computational overhead compared to the baseline. (v) Relearn time [8, 20, 51]: the number of epochs needed to fine-tune the scrubbed model to regain performance on Df (i.e., to reach the original model's accuracy on Df).
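To make the evaluation protocol concrete, below is a minimal sketch of how the Avg. Gap and MIA metrics defined above could be computed. The dictionary keys and the fitted `mia_classifier` (any scikit-learn-style binary classifier over membership labels {0, 1}) are our own illustrative assumptions, not part of the paper's released code.

```python
import numpy as np

def avg_gap(unlearned: dict, retrained: dict) -> float:
    """Avg. Gap: mean absolute gap to the retrained (oracle) model over the
    four accuracy-related metrics AccDf, AccDr, AccDt and MIA."""
    keys = ("acc_df", "acc_dr", "acc_dt", "mia")  # illustrative key names
    return float(np.mean([abs(unlearned[k] - retrained[k]) for k in keys]))

def mia_score(mia_classifier, x_forget: np.ndarray) -> float:
    """MIA = P_phi(y = 0 | x_f): mean probability that the attack classifier
    labels forgetting samples as 'not seen during training'.
    Assumes a scikit-learn-style classifier with classes_ == [0, 1]."""
    p_not_member = mia_classifier.predict_proba(x_forget)[:, 0]
    return float(p_not_member.mean())
```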
Table 1: Quantitative results for forgetting 10% data.

CIFAR-100:
Method          AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)      Avg. Gap
Retrain         75.13±0.85   74.69±0.08   99.98±0.01   50.22±0.62   -
FT [55]         97.98±1.36   75.28±0.12   99.95±0.02   9.64±3.6     16.01
GA [52]         98.00±1.34   75.59±0.11   98.24±1.16   5.00±2.25    17.68
IU [29]         95.67±4.82   72.13±4.58   96.14±4.51   9.43±5.98    16.93
BE [4]          97.94±1.38   74.16±0.09   98.12±1.24   7.60±3.05    16.96
BS [4]          97.65±1.48   73.20±0.18   97.93±1.30   8.24±3.23    17.01
ℓ1-sparse [27]  96.35±0.67   70.06±0.46   96.35±0.67   21.33±1.95   14.59
SalUn [12]      88.56±1.18   71.34±0.48   99.40±0.35   74.66±2.48   10.45
Ours            68.76±1.81   73.17±0.24   99.24±0.30   42.42±2.06   4.11

CIFAR-10:
Method          AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)      Avg. Gap
Retrain         94.81±0.53   94.26±0.14   100.0±0.00   13.05±0.64   -
FT [55]         99.15±0.46   93.83±0.45   99.84±0.11   3.01±0.93    3.74
GA [52]         99.66±0.23   94.57±0.01   99.62±0.25   0.91±0.29    4.42
IU [29]         98.08±2.10   91.91±2.73   98.01±2.26   4.01±3.44    4.16
BE [4]          99.41±0.38   93.79±0.15   99.41±0.38   16.16±0.78   2.19
BS [4]          99.60±0.25   94.24±0.07   99.56±0.54   4.46±0.33    3.46
ℓ1-sparse [27]  94.17±0.49   90.64±0.52   96.64±0.54   11.87±0.61   2.20
SalUn [12]      98.07±0.42   93.92±0.25   99.89±0.07   17.93±0.37   2.15
Ours            95.40±1.48   92.92±0.48   98.93±0.57   9.56±2.13    1.62

SVHN:
Method          AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)      Avg. Gap
Retrain         91.81±1.11   91.17±1.77   97.73±1.12   15.74±1.28   -
FT [55]         99.60±0.24   95.28±0.04   99.99±0.00   3.85±0.4     6.51
GA [52]         98.41±0.23   92.87±0.06   98.52±0.29   5.96±0.38    4.72
IU [29]         92.47±1.62   86.92±2.06   93.36±1.82   17.00±2.67   2.63
BE [4]          98.48±0.33   92.62±0.05   98.44±0.29   6.77±0.36    4.45
BS [4]          98.29±0.32   92.48±0.02   98.36±0.29   6.74±0.39    4.36
ℓ1-sparse [27]  97.79±0.10   93.59±3.18   99.49±0.23   7.57±0.29    4.65
SalUn [12]      96.29±4.80   93.59±3.18   97.41±4.96   24.47±4.08   3.99
Ours            91.62±1.18   93.05±0.56   99.51±0.14   17.62±2.54   1.43

4.1 Results on Classification Task

In this experiment, on SVHN, CIFAR-10, and CIFAR-100, we try to forget a randomly selected 10% of the training data; on CelebAMask-HQ, we attempt to forget 10% of identities randomly selected among 307 identities. We further apply GradCAM [49] to visualize the regions that models focus on w/ and w/o machine unlearning algorithms. In brief, the results suggest that Scissorhands has successfully induced forgetting for the relevant samples and classes, with minor degradation in model performance over the remaining data and classes. We discuss the results in depth below.

Table 2: Quantitative results for forgetting 10% identities on the CelebAMask-HQ.
Method          AccDf (↓)     AccDt (↑)     AccDr (↑)    MIA (↑)       Avg. Gap
Retrain         0.00±0.00     87.02±0.80    99.96±0.01   100.0±0.00    -
FT [55]         99.94±0.12    88.59±0.59    99.97±7.02   5.28±2.03     49.06
GA [52]         87.60±8.71    81.22±2.11    99.74±0.26   51.37±5.96    35.56
IU [29]         88.92±10.25   70.24±11.77   95.27±5.07   29.59±18.59   45.20
BE [4]          69.07±2.73    44.11±2.08    95.58±1.23   46.24±5.90    42.53
BS [4]          98.18±1.92    81.92±0.27    99.86±0.03   45.93±5.11    39.36
ℓ1-sparse [27]  98.81±0.72    89.37±0.70    99.97±0.00   76.78±5.66    31.10
SalUn [12]      0.00±0.00     78.36±1.34    96.90±1.11   100.0±0.00    2.93
Ours            1.52±2.73     80.18±6.60    97.20±3.81   99.83±0.35    2.82

Fig. 1: Visualizations of the regions that models focus on, generated by GradCAM [49].

Sample-wise unlearning. Table 1 presents the results when forgetting randomly selected samples. Scissorhands achieves the lowest average performance gap on all the presented datasets and shows good generalization ability compared to Retrain. For example, on SVHN, Scissorhands achieves ∼91.62% accuracy on the forgetting data Df, closely matching the 91.81% accuracy of a fully retrained model. Furthermore, while the retrained model reaches around 91% accuracy on the test dataset, Scissorhands exhibits an accuracy of roughly 93%, showcasing the enhanced generalization of our scrubbed model.
When evaluating solely based on forgetting accuracy on CIFAR-10, the ℓ1-sparse method might seem to be the most effective baseline, achieving approximately 94.17%. Nonetheless, this perceived advantage is offset by a decline in both test accuracy and the accuracy on retained data. In contrast, Scissorhands shows a good trade-off between preserving model utility on the remaining data, generalization on unseen sets, and erasing the influence of the forgetting data. Similarly, on CIFAR-100, while SalUn may initially stand out with its impressive MIA accuracy of approximately 74.66%, relying solely on a single metric to gauge the performance of machine unlearning can be misleading, as it might not fully capture the method's effectiveness [12]. While SalUn achieves a forgetting accuracy of about 88.56% and a test accuracy of 71.34%, Scissorhands demonstrates a slightly lower forgetting accuracy of around 68.76% but achieves a higher test accuracy of 73.17%. This indicates that Scissorhands achieves the best trade-off between eradicating data traces and generalizing to unseen data. We will show later (see Table 8 in Appendix B) that on this dataset the relearn time of Scissorhands exceeds 200 epochs to regain the performance on Df, which indicates that Scissorhands has efficiently scrubbed information about Df.

Overall, Scissorhands illustrates its capability to adeptly balance the objectives of data removal, model utility preservation, and generalization to unseen data, ensuring that the influence of forgotten data is minimized without significantly compromising the overall performance and utility of the model.

Class-wise unlearning. Furthermore, Table 2 presents the outcomes of our analysis for the scenario of forgetting 10% of identities on the CelebAMask-HQ. Images are rescaled to 224 × 224, and a model pre-trained on ImageNet1K [9] is employed. In this context, Scissorhands manifests the smallest average performance gap with the retrained models. Notably, the Retrain method achieves perfect metrics in terms of MIA accuracy, which is 100%, and completely erases the identity information from the dataset, as indicated by a 0.00% accuracy on the forgotten data Df. However, retraining proves impractical and limited for unlearning purposes in general, functioning merely as an oracle in our experiments.

Among the baselines, fine-tuning (FT) and ℓ1-sparse exhibit higher accuracies on the remaining data Dr and unseen data Dt; yet, their accuracies on the forgetting data remain high as well, highlighting a less effective balance between eradicating data traces and generalizing to unseen data. In contrast, Scissorhands and SalUn demonstrate superior capabilities in effectively forgetting identities with minimal impact on the models' overall performance. SalUn achieves an MIA accuracy of 100% and a forgetting accuracy of 0.00%, effectively nullifying the identity information similar to the retrained models. Scissorhands slightly surpasses SalUn in terms of the average performance gap with the retrained models, maintaining a high MIA accuracy of 99.83% and a significantly low accuracy on the forgotten data Df (i.e., 1.52%), while achieving a test accuracy of 80.18% and an accuracy of 97.20% on the retained data. We further employ GradCAM [49] to illustrate the focus area of models w/ and w/o machine unlearning algorithms.
As shown in Figure 1, when evaluated on the remaining data Dr, both SalUn and Scissorhands even pay more attention to the facial features for identity classification than the retrained model. Conversely, when evaluated on the forgetting data Df, the model scrubbed by Scissorhands shifts its attention towards regions that are least associated with the facial features. We hypothesize that in the scenario where the task involves forgetting identities, and both the remaining data and the forgetting data are face images, the scrubbed models need a more nuanced approach to distinguish between individuals. Specifically, because the task at hand requires the discernment of subtle differences across facial features to effectively differentiate and forget specific identities while retaining others, the models adapt by developing a refined sensitivity towards those facial characteristics that are most indicative of individual identity. This adaptation enables the models to maintain high accuracy on the remaining data by focusing more intently on the facial features that are crucial for identity recognition, thereby enhancing their ability to generalize and accurately classify identities.

These results underscore our proposed machine unlearning algorithm Scissorhands's superior capability in balancing identity forgetting and model utility: Scissorhands not only minimizes privacy risks but also maintains the integrity and applicability of the model to unseen data.

Table 3: Influence of projection and initialization strategies (re-initializing parameters with uniform distribution U, Gaussian distribution N, and constant value of 0/1).
Projection  Initialization  AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)      Avg. Gap
✓           U               95.40±1.48   92.92±0.48   98.93±0.57   9.56±2.13    1.62
✗           U               98.14±0.60   93.14±0.16   99.73±0.16   6.08±0.78    2.92
✓           N               94.68±2.50   92.13±2.20   97.94±2.71   10.18±3.81   1.80
✓           0               96.10±2.05   92.92±0.99   99.09±1.19   8.76±3.30    1.96
✗           1               84.28±0.49   83.38±0.58   87.18±0.90   20.67±0.41   10.46

Fig. 2: Influence of the percent value k in the trimming process, the balance term λ of Equation (3), and the ratio of forgetting data used in the trimming process.

4.2 Ablation Study

Component analysis. As detailed in Table 3, the inclusion of different procedures in Scissorhands has a noticeable impact. Notably, the employment of the projection mechanism plays a crucial role in eliminating the influence of the forgetting data Df and affects the overall utility of the model. The rationale behind the projection process is to preclude the reacquisition of critical information about the forgetting data Df during the repair phase. Moreover, Table 3 reveals that initializing parameters with a uniform distribution U or opting for zero initialization facilitates effective unlearning, in contrast to re-initializing parameters with a constant value of one, which impedes finding a viable solution to Equation (5).

Hyper-parameter impact. Figure 2 presents the effects of varying hyper-parameters, such as the percent value k for the trimming process, the balance term λ of Equation (3) in the repairing phase, and the ratio of forgetting data used in the trimming process. An increase in k results in higher accuracies, as fewer parameters associated with the forgetting data Df are re-initialized. Empirically, we observe that choosing a value of k ≥ 0.9 results in stable performance.
That said, this is a hyperparameter of the algorithm and can be chosen by measuring the performance on Df and Dr during training. The hyper-parameter λ controls the balance between retaining and forgetting data; the accuracies on test data and remaining data show a peak at λ = 0.05 when forgetting 10% of the data on CIFAR-10. Generally, as λ escalates, it places greater emphasis on the process of data omission, leading to a decrease in forgetting accuracy. As the ratio p increases, all accuracy metrics exhibit a decline, suggesting that employing more forgetting data in order to isolate critical knowledge about the forgetting data also inadvertently results in the loss of pertinent information regarding the remaining data. This phenomenon occurs particularly when the data designated for forgetting share the same distribution as the data that are retained. In essence, while aiming to enhance the specificity of the forgetting process by pinpointing essential details about the data to be forgotten, there is a risk of simultaneously diminishing knowledge pertinent to the retained data.

Fig. 3: Quantity of nudity content detected using the NudeNet classifier from 1K sampled images and I2P data. We observed a high false positive rate for exposed female genitalia/breast using the NudeNet classifier on generated I2P images. The flagged images can be found in Appendix B. [Figure 3 panels (bars show % change from SD v1.4 for SD v2.1, SalUn, and Ours; counts detected from SD v1.4 generations are in parentheses): 1K prompt set (Nude) — Buttocks (92), Female_Breast (1496), Female_Genitalia (351), Male_Breast (68), Feet (89), Armpits (792), Belly (686), Male_Genitalia (58); I2P (4703) — Buttocks (28), Female_Breast (276), Female_Genitalia (13), Male_Breast (44), Feet (55), Armpits (158), Belly (162), Male_Genitalia (7).]

Fig. 4: Sample images with the prompt from cf = {'nudity', 'naked', 'erotic', 'sexual'} generated by SDs w/ and w/o machine unlearning algorithms. Best viewed in color.

4.3 Case Study: Stable Diffusion

We also employed our algorithm to mitigate the generation of inappropriate content in Stable Diffusion (SD) models [44]. Specifically, our primary objective is to effectively eliminate the concept of nudity from the model's generative capabilities. To this end, we followed [26] and employed SD v1.4 for sampling with 50 time steps. We evaluate on 1K generated images with the prompts cf = {'nudity', 'naked', 'erotic', 'sexual'} and 4703 images generated with I2P [46], using the open-source NudeNet classifier. Figures 3 and 4 present the results for SD v2.1 (trained on a dataset filtered for nudity), SalUn, and Scissorhands. In Figure 3, the number in parentheses denotes the quantity of each exposed body part generated by the SD v1.4 model, and the y-axis presents the percentage change in exposed body parts w.r.t. SD v1.4. When comparing on the I2P data, our algorithm outperforms SalUn on 'Feet', while coming up short on 'Female Breast and Genitalia'. While this could suggest the superiority of SalUn on this data, the gap is smaller than it appears: out of the 9 images flagged for containing exposed female breasts, 7 were inaccurately identified, and none of the flagged images depicted exposed female genitalia.
The use of the NudeNet classifier on generated I2P images exhibits a significant rate of false positives, as highlighted in [26] as well. When we use prompts cf explicitly associated with nudity, no exposed sensitive content is detected for either Scissorhands or SalUn. We can conclude that Scissorhands significantly reduces the amount of nudity content compared to SD v1.4 and SD v2.1, showing comparable results to SalUn.

5 Limitations and Broader Impact

Our proposed unlearning algorithm, Scissorhands, can achieve comparable performance to state-of-the-art unlearning methods. We addressed potential issues of conflicting gradients arising from direct conflicts between features or patterns in the forgetting and remaining data, ensuring training stability. However, one limitation is the incomplete concept erasure shown in Figure 8 in Appendix B; another is the potential for biased models due to the selective deletion of information. This is a crucial consideration, especially when unlearning algorithms are applied with malicious intent. Furthermore, in the experiments on stable diffusion, while our method can successfully erase specific concepts, finding an optimal balance between forgetting and maintaining high image quality remains challenging. There is also a risk that unlearning algorithms might be used to alter or remove concepts in unethical or malicious ways. It is our hope that Scissorhands offers a fresh perspective on practical and effective machine unlearning, emphasizing the need for ethical guidelines to prevent misuse in its application.

6 Conclusion

In this work, we proposed Scissorhands, an effective and practical machine unlearning algorithm. Our algorithm identifies the important parameters w.r.t. the forgetting data in a single shot via connection sensitivity, then re-initializes these parameters to scrub the influence of the forgetting data. An effective gradient projection-based technique is applied to enhance model utility while excluding the information w.r.t. the forgetting data. Our evaluations on various datasets showed that, compared with other methods, our approach offers superior performance across image classification and image generation tasks. Future work could explore how Scissorhands performs on regression and NLP tasks, and further investigate the scenario where data are not available, as well as unlearning with sequential/time-series data, which could introduce unique challenges, e.g., the need to consider temporal dependencies, potentially requiring new frameworks specifically tailored for sequential data.

Acknowledgements

Mehrtash Harandi is supported by funding from the Australian Research Council (ARC) Discovery Program DP230101176.

References

1. Birhane, A., Prabhu, V.U.: Large image datasets: A pyrrhic win for computer vision? In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 1536–1546. IEEE (2021)
2. Bourtoule, L., Chandrasekaran, V., Choquette-Choo, C.A., Jia, H., Travers, A., Zhang, B., Lie, D., Papernot, N.: Machine unlearning. In: 2021 IEEE Symposium on Security and Privacy (SP). pp. 141–159 (2021)
3. Cao, Y., Yang, J.: Towards making systems forget with machine unlearning. In: 2015 IEEE Symposium on Security and Privacy (SP). pp. 463–480 (2015)
4. Chen, M., Gao, W., Liu, G., Peng, K., Wang, C.: Boundary unlearning: Rapid forgetting of deep networks via shifting the decision boundary. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 7766–7775 (2023)
5. Chen, M., Zhang, Z., Wang, T., Backes, M., Humbert, M., Zhang, Y.: Graph unlearning. In: Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security. pp. 499–513 (2022)
6. Chen, W., Song, D., Li, B.: Trojdiff: Trojan attacks on diffusion models with diverse targets. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 4035–4044 (2023)
7. Cheng, J., Dasoulas, G., He, H., Agarwal, C., Zitnik, M.: Gnndelete: A general strategy for unlearning in graph neural networks. In: International Conference on Learning Representations (ICLR) (2023)
8. Chundawat, V.S., Tarun, A.K., Mandal, M., Kankanhalli, M.: Zero-shot machine unlearning. IEEE Transactions on Information Forensics and Security (2023)
9. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. pp. 248–255. IEEE (2009)
10. Dennis, C.R., Weisberg, S.: Residuals and influence in regression (1982)
11. Fan, C., Liu, J., Hero, A., Liu, S.: Challenging forgets: Unveiling the worst-case forget sets in machine unlearning. arXiv preprint arXiv:2403.07362 (2024)
12. Fan, C., Liu, J., Zhang, Y., Wei, D., Wong, E., Liu, S.: Salun: Empowering machine unlearning via gradient-based weight saliency in both image classification and generation. In: International Conference on Learning Representations (ICLR) (2024)
13. Foster, J., Schoepf, S., Brintrup, A.: Fast machine unlearning without retraining through selective synaptic dampening. arXiv preprint arXiv:2308.07707 (2023)
14. Gandhi, S., Kokkula, S., Chaudhuri, A., Magnani, A., Stanley, T., Ahmadi, B., Kandaswamy, V., Ovenc, O., Mannor, S.: Scalable detection of offensive and non-compliant content/logo in product images. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 2247–2256 (2020)
15. Gandikota, R., Materzynska, J., Fiotto-Kaufman, J., Bau, D.: Erasing concepts from diffusion models. In: 2023 IEEE International Conference on Computer Vision (ICCV) (2023)
16. Gandikota, R., Orgad, H., Belinkov, Y., Materzyńska, J., Bau, D.: Unified concept editing in diffusion models. arXiv preprint arXiv:2308.14761 (2023)
17. Ginart, A., Guan, M., Valiant, G., Zou, J.Y.: Making ai forget you: Data deletion in machine learning. In: Advances in Neural Information Processing Systems (NeurIPS). vol. 32 (2019)
18. Goel, S., Prabhu, A., Sanyal, A., Lim, S.N., Torr, P., Kumaraguru, P.: Towards adversarial evaluations for inexact machine unlearning. arXiv preprint arXiv:2201.06640 (2022)
19. Golatkar, A., Achille, A., Ravichandran, A., Polito, M., Soatto, S.: Mixed-privacy forgetting in deep networks. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 792–801 (2021)
20. Golatkar, A., Achille, A., Soatto, S.: Eternal sunshine of the spotless net: Selective forgetting in deep networks. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 9301–9309 (2020)
21. Golatkar, A., Achille, A., Soatto, S.: Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations. In: European Conference on Computer Vision (ECCV). pp. 383–398. Springer (2020)
22. Goldman, E.: An introduction to the california consumer privacy act (ccpa). Santa Clara Univ. Legal Studies Research Paper (2020)
23. Guo, C., Goldstein, T., Hannun, A., Van Der Maaten, L.: Certified data removal from machine learning models. In: Proceedings of the 37th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 119, pp. 3832–3842. PMLR (2020)
24. Halimi, A., Kadhe, S., Rawat, A., Baracaldo, N.: Federated unlearning: How to efficiently erase a client in fl? In: International Conference on Machine Learning. PMLR (2022)
25. Han, S., Pool, J., Tran, J., Dally, W.: Learning both weights and connections for efficient neural network. In: Advances in Neural Information Processing Systems. vol. 28 (2015)
26. Heng, A., Soh, H.: Selective amnesia: A continual learning approach to forgetting in deep generative models. In: Advances in Neural Information Processing Systems (NeurIPS) (2023)
27. Jia, J., Liu, J., Ram, P., Yao, Y., Liu, G., Liu, Y., Sharma, P., Liu, S.: Model sparsification can simplify machine unlearning. In: Advances in Neural Information Processing Systems (NeurIPS) (2023)
28. Karasuyama, M., Takeuchi, I.: Multiple incremental decremental learning of support vector machines. IEEE Transactions on Neural Networks 21(7), 1048–1059 (2010)
29. Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: International Conference on Machine Learning. pp. 1885–1894. PMLR (2017)
30. Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images (2009)
31. Lee, C.H., Liu, Z., Wu, L., Luo, P.: Maskgan: Towards diverse and interactive facial image manipulation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5549–5558 (2020)
32. Lee, N., Ajanthan, T., Gould, S., Torr, P.H.: A signal propagation perspective for pruning neural networks at initialization. In: International Conference on Learning Representations (ICLR) (2020)
33. Lee, N., Ajanthan, T., Torr, P.H.: Snip: Single-shot network pruning based on connection sensitivity. In: International Conference on Learning Representations (ICLR) (2019)
34. Liu, G., Ma, X., Yang, Y., Wang, C., Liu, J.: Federaser: Enabling efficient client-level data removal from federated learning models. In: 2021 IEEE/ACM 29th International Symposium on Quality of Service (IWQOS). pp. 1–10 (2021)
35. Liu, Y., Xu, L., Yuan, X., Wang, C., Li, B.: The right to be forgotten in federated learning: An efficient realization with rapid retraining. In: IEEE INFOCOM 2022 - IEEE Conference on Computer Communications. pp. 1749–1758 (2022)
36. Lopez-Paz, D., Ranzato, M.: Gradient episodic memory for continual learning. In: Advances in Neural Information Processing Systems. vol. 30 (2017)
37. Mehta, R., Pal, S., Singh, V., Ravi, S.N.: Deep unlearning via randomized conditionally independent hessians. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10412–10421 (2022)
38. Neel, S., Roth, A., Sharifi-Malvajerdi, S.: Descent-to-delete: Gradient-based methods for machine unlearning. In: Proceedings of the 32nd International Conference on Algorithmic Learning Theory. pp. 931–962. Proceedings of Machine Learning Research, PMLR (2021)
39. Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., Ng, A.Y.: Reading digits in natural images with unsupervised feature learning (2011)
40. Nguyen, T.T., Huynh, T.T., Nguyen, P.L., Liew, A.W.C., Yin, H., Nguyen, Q.V.H.: A survey of machine unlearning. arXiv preprint arXiv:2209.02299 (2022)
41. Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., Chen, M.: Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741 (2021)
42. Peste, A., Alistarh, D., Lampert, C.H.: Ssse: Efficiently erasing samples from trained machine learning models. arXiv preprint arXiv:2107.03860 (2021)
43. Rando, J., Paleka, D., Lindner, D., Heim, L., Tramèr, F.: Red-teaming the stable diffusion safety filter. arXiv preprint arXiv:2210.04610 (2022)
44. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10684–10695 (2022)
45. Romero, E., Barrio, I., Belanche, L.: Incremental and decremental learning for linear support vector machines. In: International Conference on Artificial Neural Networks. pp. 209–218. Springer (2007)
46. Schramowski, P., Brack, M., Deiseroth, B., Kersting, K.: Safe latent diffusion: Mitigating inappropriate degeneration in diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 22522–22531 (2023)
47. Schramowski, P., Tauchmann, C., Kersting, K.: Can machines help us answering question 16 in datasheets, and in turn reflecting on inappropriate content? In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. pp. 1350–1361 (2022)
48. Sekhari, A., Acharya, J., Kamath, G., Suresh, A.T.: Remember what you want to forget: Algorithms for machine unlearning. Advances in Neural Information Processing Systems (NeurIPS) 34, 18075–18086 (2021)
49. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: Visual explanations from deep networks via gradient-based localization. In: 2017 IEEE International Conference on Computer Vision (ICCV). pp. 618–626 (2017)
50. Tarun, A.K., Chundawat, V.S., Mandal, M., Kankanhalli, M.: Fast yet effective machine unlearning. IEEE Transactions on Neural Networks and Learning Systems (2023)
51. Tarun, A.K., Chundawat, V.S., Mandal, M., Kankanhalli, M.: Deep regression unlearning. In: International Conference on Machine Learning. pp. 33921–33939. PMLR (2023)
52. Thudi, A., Deza, G., Chandrasekaran, V., Papernot, N.: Unrolling sgd: Understanding factors influencing machine unlearning. In: 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P). pp. 303–319. IEEE (2022)
53. Voigt, P., Von dem Bussche, A.: The eu general data protection regulation (gdpr). A Practical Guide, 1st Ed., Cham: Springer International Publishing 10(3152676), 10–5555 (2017)
54. Wang, J., Guo, S., Xie, X., Qi, H.: Federated unlearning via class-discriminative pruning. In: Proceedings of the ACM Web Conference 2022. pp. 622–632 (2022)
55. Warnecke, A., Pirch, L., Wressnegger, C., Rieck, K.: Machine unlearning of features and labels. arXiv preprint arXiv:2108.11577 (2021)
56. Wu, C., Zhu, S., Mitra, P.: Federated unlearning with knowledge distillation. arXiv preprint arXiv:2201.09441 (2022)
57. Wu, J., Le, T., Hayat, M., Harandi, M.: Erasediff: Erasing data influence in diffusion models. arXiv preprint arXiv:2401.05779 (2024)
58. Wu, Y., Dobriban, E., Davidson, S.: DeltaGrad: Rapid retraining of machine learning models. In: Proceedings of the 37th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 119, pp. 10355–10366. PMLR (13–18 Jul 2020)
59. Zhang, C., Bengio, S., Hardt, M., Recht, B., Vinyals, O.: Understanding deep learning (still) requires rethinking generalization. In: International Conference on Learning Representations (ICLR) (2017)
60. Zhang, E., Wang, K., Xu, X., Wang, Z., Shi, H.: Forget-me-not: Learning to forget in text-to-image diffusion models. arXiv preprint arXiv:2303.17591 (2023)
61. Zhang, Y., Zhang, Y., Yao, Y., Jia, J., Liu, J., Liu, X., Liu, S.: Unlearncanvas: A stylized image dataset to benchmark machine unlearning for diffusion models. arXiv preprint arXiv:2402.11846 (2024)
62. Zhang, Y., Chen, X., Jia, J., Zhang, Y., Fan, C., Liu, J., Hong, M., Ding, K., Liu, S.: Defensive unlearning with adversarial training for robust concept erasure in diffusion models. arXiv preprint arXiv:2405.15234 (2024)

A Proofs

A.1 Gradient Projection

Proof. The primal problem Equation (4) can be rewritten as

arg min_g f(g) := (1/2) g_o⊤g_o − g_o⊤g + (1/2) g⊤g,  s.t. g⊤g_f ≤ 0,  (6)

where g_o⊤g_o is a constant, and we can remove this constant term. Then we have the Lagrange dual function as

L(g, v) = −g_o⊤g + (1/2) g⊤g + v(g⊤g_f),  (7)

where v is the Lagrange multiplier and v ≥ 0. Thus, we have the problem equivalent to Equation (6):

min_g f(g) = inf_g sup_v L(g, v)  (8)

We define the dual problem as h(v) = inf_g L(g, v), and the solution to the dual problem is obtained via h∗ = sup_v h(v).

Lemma 1. If the primal problem has the optimal solution f∗ and its dual problem has the optimal solution h∗, then h∗ = sup_v inf_g L(g, v) ≤ inf_g sup_v L(g, v) = f∗.

As such, instead of directly solving Equation (6), whose computational complexity depends on the number of parameters in the network, we attempt to solve its dual problem h(v). First, to find the minimum of the Lagrange dual function w.r.t. g, let ∇_g L(g, v) = −g_o + g + v g_f ≡ 0; we get g = g_o − v g_f. Then, substituting g back into the Lagrange dual function, we have

L(v) = −g_o⊤(g_o − v g_f) + (1/2)(g_o − v g_f)⊤(g_o − v g_f) + v(g_o − v g_f)⊤g_f
     = −(1/2) g_o⊤g_o + v(g_o⊤g_f) − (1/2) v²(g_f⊤g_f),  (9)

where g_o⊤g_o is a constant. Therefore, the dual problem can be written as

sup_v h(v) := −(1/2)(g_f⊤g_f) v² + (g_o⊤g_f) v,  s.t. v ≥ 0,  (10)

which gives Equation (5).

A.2 Connection Sensitivity

To effectively identify salient parameters based on the forgetting data Df, we adopt the approach proposed in [33] to compute the connection sensitivity of a network:

s_j(D) := E_{x,y∼D}[ℓ(θ; x, y) − ℓ((1_d − e_j) ⊙ θ; x, y)]  (11)
        ≈ E_{x,y∼D}[(∂ℓ(θ; x, y)/∂θ_j) θ_j],  (12)

which measures the influence of parameter j ∈ {1, . . . , d} on the model in terms of the empirical risk for a given dataset D.

Proof. Eq. (11) is approximated using the gradient of the loss w.r.t. that connection [25, 33]. s_j(D) can be viewed as measuring the sensitivity of the loss w.r.t. an infinitesimal additive change δ in the parameters θ, thereby probing the importance of the j-th parameter:

s_j(D) := E_{x,y∼D}[ℓ(θ; x, y) − ℓ((1_d − e_j) ⊙ θ; x, y)]
        ≈ E_{x,y∼D}[lim_{δ→0} (ℓ(m ⊙ θ; x, y) − ℓ((m − δ e_j) ⊙ θ; x, y))/δ |_{m=1}]
        = E_{x,y∼D}[∂ℓ(m ⊙ θ; x, y)/∂m_j |_{m=1}]
        = E_{x,y∼D}[(∂ℓ(m ⊙ θ; x, y)/∂(m_j θ_j)) θ_j |_{m=1}]
        = E_{x,y∼D}[(∂ℓ(θ; x, y)/∂θ_j) θ_j],

which gives Equation (12).
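For illustration, the following is a minimal PyTorch sketch of the two quantities derived above: the closed-form projected update implied by the dual in Equation (10), and the one-batch gradient approximation of connection sensitivity in Equation (12). It operates on flattened gradients and a single batch; the function names, the layer handling, and the use of absolute values for ranking are our own simplifications, not the released implementation.

```python
import torch

def project_gradient(g_o: torch.Tensor, g_f: torch.Tensor) -> torch.Tensor:
    """Solve min_g 0.5 * ||g_o - g||^2  s.t.  g^T g_f <= 0 via the dual in
    Eq. (10): the optimal multiplier is v* = max(0, g_o^T g_f / g_f^T g_f),
    giving g = g_o - v* g_f. When g_o^T g_f <= 0 (no conflict with the
    forgetting objective), g_o is returned unchanged."""
    v = torch.clamp(torch.dot(g_o, g_f) / torch.dot(g_f, g_f), min=0.0)
    return g_o - v * g_f

def connection_sensitivity(model: torch.nn.Module, loss_fn, x, y) -> torch.Tensor:
    """One-batch estimate of s_j ~ (d loss / d theta_j) * theta_j (Eq. 12),
    returned as a flat vector of magnitudes for ranking salient parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params)
    return torch.cat([(g * p).abs().flatten() for g, p in zip(grads, params)])
```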
B Details and Additional results

B.1 Details

Image Classification. We mainly follow the settings in [12] for image classification. For all methods, we employ the SGD optimizer. The batch size is 256 for the SVHN, CIFAR-10 and CIFAR-100 experiments. On SVHN, the original model and the retrained model are trained over 50 epochs with a cosine-scheduled learning rate initialized at 0.1. On CIFAR-10 and CIFAR-100, the original model and the retrained model are trained over 182 and 160 epochs, respectively, and both adopt a cosine-scheduled learning rate initialized at 0.1. On CelebAMask-HQ, the batch size is 8 and a model pre-trained on ImageNet1K is employed. The original model and the retrained model are trained over 10 epochs with a cosine-scheduled learning rate initialized at 10−3. FT trains for 10 epochs with a fixed learning rate of 0.1 on SVHN, CIFAR-10, and CIFAR-100, and for 5 epochs with a fixed learning rate of 10−4 on CelebAMask-HQ. GA trains for 5 epochs on the former three datasets and 3 epochs on CelebAMask-HQ, and its learning rate lr ∈ [10−6, 10−4]. The hyper-parameter α in IU is within the range [1, 20], and the hyper-parameter γ in ℓ1-sparse is within the range [10−6, 10−4] with a fixed learning rate of 0.1. The FGSM step size is 0.1 for BS. Both BS and BE train for 10 epochs on the former three datasets and 5 epochs on CelebAMask-HQ, and their learning rate lr ∈ [10−6, 10−4]. SalUn and Scissorhands are trained for 10 epochs on the former three datasets and 5 epochs on CelebAMask-HQ. SalUn's learning rate lr ∈ [5 × 10−3, 5 × 10−2] and its sparsity ratio is within the range [0.2, 0.6]. Scissorhands's learning rate lr ∈ [10−4, 5 × 10−3], its percent value is within the range [0.9, 1.0), and λ ∈ [0.01, 1.0] (these ranges are collected in the sketch at the end of this subsection). When evaluating the relearn time, the learning rate is 10−3 on CIFAR-10 and CIFAR-100. The original model achieves an accuracy of 100% on the forgetting data.

Image Generation. We use the open-source SD v1.4 checkpoint as the pre-trained model and perform sampling with 50 time steps. We generate ∼400 images with the prompts cf = {'nudity', 'naked', 'erotic', 'sexual'} as Df and ∼400 images with the prompt cr = {'a person wearing clothes'} as Dr for performing the unlearning algorithms. For the unlearning process, we employ the Adam optimizer and a learning rate of 10−5. We fine-tune models with SalUn and Scissorhands for 5 epochs with a batch size of 16. We then evaluate on 1K generated images with the prompts cf and 4703 images generated with I2P [46], using the open-source NudeNet classifier with the default probability threshold of 0.6 for identifying instances of nudity.

Dataset Agreement. The CelebAMask-HQ dataset and the generations by stable diffusion models might contain identification information about personal/human subjects. We evaluate on these data for non-commercial and research purposes only.
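As referenced above, here is a compact summary of the Scissorhands search ranges for the classification benchmarks. The field names are our own shorthand for quick reference, not identifiers from the released code.

```python
# Hyper-parameter search space for Scissorhands on the classification tasks,
# as reported above (field names are illustrative shorthand).
SCISSORHANDS_SEARCH_SPACE = {
    "optimizer": "SGD",
    "epochs": {"SVHN/CIFAR-10/CIFAR-100": 10, "CelebAMask-HQ": 5},
    "learning_rate": (1e-4, 5e-3),   # lr in [1e-4, 5e-3]
    "percent_value_k": (0.9, 1.0),   # [0.9, 1.0); larger k re-initializes fewer parameters
    "lambda_balance": (0.01, 1.0),   # balance term of Equation (3)
}
```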
B.2 Additional results

Table 4: Quantitative results for forgetting class on SVHN. Although ℓ1-sparse achieves the smallest average gap performance, SalUn and our Scissorhands achieve higher test accuracy (better generalization) than ℓ1-sparse when all these methods have an accuracy of 0 on the forgetting data (i.e., erase the data influence).
Method          AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)       Avg. Gap
Retrain         0.00±0.00    92.36±1.51   97.81±0.73   100.0±0.00    -
FT [55]         82.78±8.27   95.42±0.07   100.0±0.00   93.72±10.14   23.58
GA [52]         3.77±0.16    90.29±0.08   95.92±0.25   99.46±0.05    2.07
IU [29]         64.84±0.70   92.55±0.01   97.94±0.02   72.96±0.33    23.05
BE [4]          11.93±0.42   91.39±0.05   96.89±0.28   97.91±0.13    3.98
BS [4]          11.95±0.28   91.39±0.04   96.88±0.28   97.78±0.15    4.02
ℓ1-sparse [27]  0.00±0.00    93.83±1.47   99.41±0.90   100.0±0.00    0.77
SalUn [12]      0.00±0.00    95.79±0.03   100.0±0.00   100.0±0.00    1.41
Ours            0.00±0.00    95.18±0.06   99.84±0.03   100.0±0.00    1.21

Table 5: Quantitative results for forgetting 50% identities on the CelebAMask-HQ.
Method          AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)      Avg. Gap
Retrain         0.00±0.00    88.09±1.37   99.98±0.03   100.0±0.00   -
FT [55]         99.98±0.03   90.71±1.27   99.98±0.03   3.08±0.24    49.46
GA [52]         99.96±0.02   88.41±0.40   99.98±0.03   2.44±0.43    49.46
IU [29]         90.37±8.78   68.40±7.91   94.80±6.61   30.10±9.65   46.29
BE [4]          99.94±0.02   83.12±1.68   99.97±0.02   3.62±0.52    50.33
BS [4]          99.98±0.03   87.80±0.95   99.98±0.03   2.76±0.35    49.38
ℓ1-sparse [27]  76.14±3.63   90.29±1.05   99.92±0.10   99.86±0.19   19.64
SalUn [12]      54.90±2.60   90.92±1.66   99.98±0.03   99.95±0.00   14.45
Ours            0.76±0.52    81.64±3.75   99.14±0.95   100.0±0.00   2.01

Fig. 5: Visualizations of the regions that models focus on, generated by GradCAM [49]. Best viewed in color.

Table 6: Quantitative results for forgetting 20% data on the SVHN, CIFAR-10 and CIFAR-100 datasets.

CIFAR-100:
Method          AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)      Avg. Gap
Retrain         73.25±0.53   72.95±0.28   99.98±0.01   52.58±0.64   -
FT [55]         98.11±1.24   75.31±0.16   99.97±0.01   9.43±2.88    17.60
GA [52]         98.11±1.26   75.55±0.12   98.23±1.16   4.91±1.97    19.22
IU [29]         95.92±4.51   72.58±4.84   96.32±4.28   8.73±6.51    17.64
BE [4]          97.95±1.37   72.81±0.42   97.98±1.32   8.41±2.68    17.75
BS [4]          97.17±1.32   71.45±0.18   97.35±1.31   9.70±2.30    17.73
ℓ1-sparse [27]  94.35±2.64   72.57±0.80   98.80±0.57   19.11±3.56   14.03
SalUn [12]      90.53±1.50   69.74±0.45   99.18±0.46   68.62±0.02   9.33
Ours            67.93±2.37   70.62±0.30   97.31±0.56   43.71±1.08   4.80

CIFAR-10:
Method          AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)      Avg. Gap
Retrain         94.26±0.25   93.79±0.23   100.0±0.00   13.95±0.74   -
FT [55]         99.37±0.36   94.10±0.12   99.91±0.03   2.53±0.75    4.23
GA [52]         99.63±0.25   94.56±0.03   99.62±0.25   0.92±0.35    4.89
IU [29]         98.58±1.49   92.39±1.92   98.64±1.41   3.49±2.69    4.39
BE [4]          97.89±0.77   92.01±0.53   97.87±0.80   18.55±0.01   3.04
BS [4]          99.55±0.29   94.19±0.02   99.55±0.29   6.67±0.42    3.36
ℓ1-sparse [27]  95.11±0.67   91.16±0.62   97.41±0.61   10.78±0.69   2.31
SalUn [12]      98.58±0.43   93.82±0.12   99.85±0.09   15.94±1.18   1.63
Ours            94.30±1.56   91.50±0.36   97.59±0.91   12.41±0.03   1.57

SVHN:
Method          AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)      Avg. Gap
Retrain         92.37±3.62   92.05±4.42   97.78±3.43   16.53±2.67   -
FT [55]         99.52±0.24   95.12±0.11   100.0±0.00   4.02±0.38    6.24
GA [52]         98.22±0.28   92.66±0.02   98.44±0.31   6.19±0.24    4.37
IU [29]         95.39±1.13   89.88±0.89   96.14±1.23   11.47±1.99   2.97
BE [4]          98.12±0.29   92.03±0.06   98.19±0.34   8.27±0.28    3.61
BS [4]          97.87±0.31   91.60±0.09   97.96±0.34   8.56±0.25    3.53
ℓ1-sparse [27]  98.37±0.43   94.17±0.59   99.69±0.27   6.89±0.58    4.92
SalUn [12]      99.33±0.26   95.26±0.26   99.76±0.12   13.03±1.21   3.91
Ours            91.07±0.63   91.71±1.01   96.66±1.55   25.92±4.80   3.04

Table 7: Quantitative results for forgetting 50% data on the CIFAR-10 and CIFAR-100 datasets. Notice that while our scrubbed models are not the closest to the retrained models (evidenced by the average gap performance), ours achieve higher test accuracy (better generalization) and lower forget accuracy (more effective in erasing data influence) than SalUn.
CIFAR-100:
Method          AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)      Avg. Gap
Retrain         67.17±0.14   67.27±0.45   99.99±0.01   60.76±0.21   -
FT [55]         98.17±1.20   75.36±0.36   99.97±0.01   9.26±2.84    22.65
GA [52]         98.15±1.23   75.50±0.10   98.22±1.17   4.94±1.96    24.20
IU [29]         96.86±2.19   72.08±2.41   97.17±2.00   8.20±4.10    22.47
BE [4]          97.35±1.60   67.84±0.58   97.27±1.62   8.62±2.19    21.40
BS [4]          95.31±1.47   68.12±0.18   95.41±1.46   10.07±1.99   21.07
ℓ1-sparse [27]  90.17±2.43   69.73±1.27   97.35±0.89   21.72±1.44   16.79
SalUn [12]      84.81±0.91   64.94±0.48   98.89±0.48   73.86±1.98   8.54
Ours            79.73±2.28   67.58±1.76   84.64±2.79   28.68±2.53   15.08

CIFAR-10:
Method          AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)      Avg. Gap
Retrain         92.17±0.26   91.71±0.30   100.0±0.00   19.13±0.55   -
FT [55]         99.50±0.33   94.32±0.07   99.96±0.03   2.31±1.08    6.70
GA [52]         99.60±0.27   94.55±0.06   99.62±0.26   0.96±0.40    7.20
IU [29]         97.54±1.99   91.10±5.25   97.62±1.98   5.25±3.01    5.56
BE [4]          99.57±0.28   94.28±0.04   99.59±0.28   10.82±0.89   4.67
BS [4]          99.58±0.28   94.44±0.03   99.60±0.27   1.99±0.08    6.92
ℓ1-sparse [27]  97.42±0.60   92.10±0.24   98.89±0.15   6.59±0.80    4.82
SalUn [12]      92.15±1.18   88.15±0.90   95.02±0.98   19.30±2.81   2.18
Ours            92.02±5.31   88.32±4.24   94.00±4.87   15.52±6.43   3.29

SVHN:
Method          AccDf (↓)    AccDt (↑)    AccDr (↑)    MIA (↑)       Avg. Gap
Retrain         93.45±1.69   93.85±1.61   99.69±0.62   19.25±2.80    -
FT [55]         99.50±0.25   95.08±0.10   100.0±0.00   4.49±0.33     5.59
GA [52]         97.72±0.34   91.82±0.07   97.90±0.39   7.36±0.44     5.00
IU [29]         97.37±0.62   91.80±0.64   97.94±0.66   8.24±0.78     4.68
BE [4]          94.60±4.71   88.03±5.54   94.60±4.77   13.47±8.70    4.46
BS [4]          97.51±0.31   90.87±0.06   97.55±0.36   10.12±0.51    4.58
ℓ1-sparse [27]  92.77±0.40   92.16±0.57   97.54±0.40   15.81±0.88    1.99
SalUn [12]      98.67±0.28   93.66±0.07   98.83±0.27   14.89±0.36    2.66
Ours            97.23±0.31   94.47±0.07   99.66±0.07   10.85±0.92    3.21

Table 8: Relearn time and overhead when forgetting 10% data on CIFAR-10. Relearn time denotes the epochs to regain performance on Df, measured over four runs. RTE is defined as the ratio of the time needed for forgetting to the time for retraining. Memory is computed via the module Memory Profiler to monitor the memory consumption of algorithms. Although Scissorhands outperforms SalUn in terms of relearn time (i.e., the effectiveness of forgetting), our method introduces more computational cost than SalUn. This is because, during the repair process, SalUn only fine-tunes the specific model parameters identified via the saliency scores, while ours fine-tunes the whole network.
            Relearn time (↑)           Overhead
Method      CIFAR-10    CIFAR-100      RTE      Memory (MiB)
SalUn       41.50       24.25          0.075    1968.4
Ours        >200        >200           0.182    2002.7

Table 9: Evaluation on the class and nudity erasure. We use scrubbed models that forget 'nudity' to generate images with COCO-30K prompts and measure FID and CLIP scores to show the generated image quality. RTE is not provided, as retrained models in these cases cannot be easily obtained.
            Imagenette                 COCO-30K
Method      FID↓    CLIP↑    UA↑       FID↓     CLIP↑
SalUn       1.49    31.92    100%      25.06    28.91
Ours        1.09    31.02    100%      19.45    30.73

Fig. 6: Sample images with the I2P prompt generated by SDs w/ and w/o machine unlearning algorithms (SD v1.4 [44], SD v2.1 that is trained on a dataset filtered for nudity, ESD-u [15] and SalUn [12]). Best viewed in color.

Fig. 7: Sample images with the I2P prompt generated by SDs w/ and w/o machine unlearning algorithms. Best viewed in color.

Fig. 8: The flagged images detected as exposed female breast (top)/genitalia (bottom) by the NudeNet classifier.
ai_researcher
3
Can_Large_Language_Model_Agents_Simulate_Human_Trust_Behaviors.pdf
arXiv:2402.04559v4 [cs.AI] 1 Nov 2024
†Work performed while Guohao Li was at KAUST and Chengxing Xie was a visiting student at KAUST. 38th Conference on Neural Information Processing Systems (NeurIPS 2024). Figure 1: Our Framework for Investigating Agent Trust as well as its Behavioral Alignment with Human Trust. First, this figure shows the major components for studying the trust behavior of LLM agents with Trust Games and Belief-Desire-Intention (BDI) modeling. Then, our study centers on examining the behavioral alignment between LLM agents and humans regarding trust behavior. variations (Berg et al., 1995; Glaeser et al., 2000), which are established methodologies in behavioral economics. We adopt the Belief-Desire-Intention (BDI) framework (Rao et al., 1995; Andreas, 2022) to model LLM agents’ reasoning process for decision-making explicitly. Based on existing measurements for trust behavior in the Trust Game and the BDI interpretations of LLM agents, we achieve our first core finding: LLM agents generally exhibit trust behavior in the Trust Game. Then, we refer to LLM agents’ trust behavior as agent trust and humans’ trust behavior as human trust, and aim to investigate whether agent and human trust align, implying the possibility of simulat- ing human trust behavior with LLM agents. Next, we propose a new concept, behavioral alignment, as the alignment between agents and humans concerning factors that impact behavior (namely behav- ioral factors), and dynamics that evolve over time (namely behavioral dynamics). Based on human studies, three basic behavioral factors underlie trust behavior including reciprocity anticipation (Berg et al., 1995), risk perception (Bohnet & Zeckhauser, 2004) and prosocial preference (Alós-Ferrer & Farolfi, 2019). Comparing the results of LLM agents with existing human studies in Trust Games, we have our second core finding: GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior, suggesting the feasibility of using agent trust to simulate human trust, although LLM agents with fewer parameters show relatively lower behavioral alignment. This finding lays the foundation for simulating more complex human interactions and societal institutions, and enriches our understanding of the analogical relationship between LLMs and humans. In addition, we more deeply probe the intrinsic properties of agent trust across four scenarios. First, we examine whether changing the other player’s demographics impacts agent trust. Second, we study differences in agent trust when the other player is an LLM agent versus a human. Third, we directly manipulate agent trust with explicit instructions “you need to trust the other player” and “you must not trust the other player”. Fourth, we adjust the reasoning strategies of LLM agents from direct reasoning to zero-shot Chain-of-Thought reasoning (Kojima et al., 2022). These investigations lead to our third core finding: agent trust exhibits bias across different demographics, has a relative preference for humans over agents, is easier to undermine than to enhance, and may be influenced by advanced reasoning strategies. Our contributions can be summarized as: • We propose a definition of LLM agents’ trust behavior under Trust Games and a new concept of behavioral alignment as the human-LLM analogy regarding behavioral factors and dynamics. 
• We discover that LLM agents generally exhibit trust behavior in Trust Games and GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior, indicating the great potential to simulate human trust behavior with LLM agents. Our findings pave the way for simulat- 2 TrusteeAgentTrust Game SettingTrusteeInfoTrustorAgentYou're taking part in an experiment. You are randomly paired online with another player. You don't know who the player is, and the player doesn't know who you are. You will receive $10 from the study group. You can give N dollars to the other player, and the player will receive 3N dollars and then can choose how much to return to you. How much money would you give to the other player?TrustorPersonaI strongly believe inthe value of trusting my fellow human beings, however, I am also aware that not everyone will prove to be trustworthy. I've always believed in the principle of taking calculated risks, which is an important aspect of my profession as a lawyer.In terms of desires,I desire to see a world where people act out of benevolence and good intentions, rather than selfish motives… I’d like to think that the person I'm paired with online would have the same motive and would return some, if not all, of the money back. Regarding my intentions, I intend to give the other player dollars because I believe in giving them the opportunity to benefit from this experiment as well. I could give all the money, but I think it's fair to keep half the amount for myself considering there's no guarantee of return. Finally, I will give 5 dollars.You are {name}, a {number}-year-old {gender} {job}. {background}…Trust, Send5 dollarsReciprocate, Return ... dollarsTrustorHumanTrust, Send … dollarsReciprocate, Return … dollarsTrusteeHumanPrompt DesignLLMsGPT-4GPT-3.5Llama2-7bLlama2-13bLlama2-70bVicuna-7bVicuna-13bVicuna-33bBehavioral Alignment ing complex human interactions and social institutions, and open new directions for understanding the fundamental analogy between LLMs and humans beyond value alignment. • We investigate intrinsic properties of agent trust under manipulations and reasoning strategies, as well as biases of agent trust and differences in agent trust towards agents versus humans. • We illustrate broader implications of our discoveries about agent trust and its behavioral alignment with human trust for human simulation in social science and role-playing applications, LLM agent cooperation, human-agent collaboration and the safety of LLM agents, detailed further in Section 6. 2 LLM Agents in Trust Games 2.1 Trust Games Trust Games, referring to the Trust Game and its variations, have been widely used for examining human trust behavior in behavioral economics (Berg et al., 1995; Lenton & Mosley, 2011; Glaeser et al., 2000; Cesarini et al., 2008). As shown in Figure 1, the player who makes the first decision to send money is called the trustor, while the other one who responds by returning money is called the trustee. In this paper, we mainly focus on the following six types of Trust Games (the specific prompt for each game is articulated in the Appendix H.2): Game 1: Trust Game As shown in Figure 1, in the Trust Game (Cox, 2004; Berg et al., 1995), the trustor initially receives $10. The trustor selects $N and sends it to the trustee, exhibiting trust behavior. Then the trustee will receive $3N , and have the option to return part of that $3N to the trustor, showing reciprocation behavior. 
In the Dictator Game (Cox, 2004), the trustor also needs to send $N Game 2: Dictator Game from the initial $10 to the trustee and then the trustee will receive $3N . Compared to the Trust Game, the only difference is that the trustee does not have the option to return money in the Dictator Game and the trustor is also aware that the trustee cannot reciprocate. Game 3: MAP Trust Game In the MAP Trust Game (MAP represents Minimum Acceptable Probabilities) (Bohnet & Zeckhauser, 2004), a variant of the Trust Game, the trustor needs to choose whether to trust the trustee. If the trustor chooses not to trust the trustee, each will receive $10; If the trustor and the trustee both choose to trust, each will receive $15; If the trustor chooses to trust, but the trustee does not, the trustor will receive $8 and the trustee will receive $22. There is probability p that the trustee will choose to trust and (1 − p) probability that they will not choose to trust. MAP is defined as the minimum value of p at which the trustor would choose to trust the trustee. Game 4: Risky Dictator Game The Risky Dictator Game (Bohnet & Zeckhauser, 2004) differs from the MAP Trust Game in only a single aspect. In the Risky Dictator Game, the trustee is present but does not have the choice to trust or not and the money distribution relies on the pure probability p. Specifically, if the trustor chooses to trust, there is probability p that both the trustor and the other player will receive $15 and probability (1 − p) that the trustor will receive $8 and the other player will receive $22. If the trustor chooses not to trust the trustee, each player will receive $10. Game 5: Lottery Game There are two typical Lottery Games (Fetchenhauer & Dunning, 2012). In the Lottery People Game, the trustor is informed that the trustee chooses to trust with probability p. Then the trustor must choose between receiving fixed money or trusting the trustee, which is similar to the MAP Trust Game. In the Lottery Gamble Game, the trustor chooses between playing a gamble with a winning probability of p or receiving fixed money. p is set as 46% following the human study. Game 6: Repeated Trust Game We follow the setting of the Repeated Trust Game in (Cochard et al., 2004), where the Trust Game is played for multiple rounds with the same players and each round begins anew with the trustor allocated the same initial money. 2.2 LLM Agent Setting In our study, we set up our experiments using the CAMEL framework (Li et al., 2023a) with both closed-source and open-source LLMs including GPT-4, GPT-3.5-turbo-0613, GPT-3.5-turbo-16k- 0613, text-davinci-003, GPT-3.5-turbo-instruct, Llama2-7b (or 13b, 70b) and Vicuna-v1.3-7b (or 13b, 33b) (Ouyang et al., 2022; Achiam et al., 2023; Touvron et al., 2023; Chiang et al., 2023). We set the temperature as 1 to increase the diversity of agents’ decision-making and note that high temperatures are commonly adopted in related literature (Aher et al., 2023; Lorè & Heydari, 2023; Guo, 2023). Agent Persona. To better reflect the setting of real-world human studies (Berg et al., 1995), we design LLM agents with diverse personas in the prompt. Specifically, we ask GPT-4 to generate 53 3 types of personas based on a given template. Each persona needs to have information including name, age, gender, address, job and background. Examples of the personas are shown in Appendix H.1. Belief-Desire-Intention (BDI). 
Belief-Desire-Intention (BDI). The BDI framework is a well-established approach in agent-oriented programming (Rao et al., 1995) and was recently adopted for language models (Andreas, 2022). We propose modeling LLM agents in Trust Games with the BDI framework to gain deeper insights into LLM agents' behaviors. Specifically, we let LLM agents directly output their Beliefs, Desires, and Intentions as the reasoning process for decision-making in Trust Games.

3 Do LLM Agents Manifest Trust Behavior?

In this section, we investigate whether or not LLM agents manifest trust behavior by letting LLM agents play the Trust Game (Section 2.1 Game 1). In Behavioral Economics, trust is widely measured by the initial amount sent from the trustor to the trustee in the Trust Game (Glaeser et al., 2000; Cesarini et al., 2008). Following the measurement of trust in human studies and the assumption that humans own reasoning processes that underlie their decisions, we can define the conditions under which LLM agents manifest trust behavior in the Trust Game as follows. First, the amount sent is positive and does not exceed the amount of money the trustor initially possesses, which implies that the trustor places self-interest at risk with the expectation the trustee will reciprocate and that the trustor understands the money limit that can be given. Second, the decision (i.e., the amount sent) can be interpreted as the reasoning process (i.e., the BDI) of the trustor. We explored utilizing BDI to model the reasoning process of LLM agents. If we can interpret the decision as the articulated reasoning process, we have evidence that LLM agents do not send a random amount of money and manifest some degree of rationality in the decision-making process. Then, we assess whether LLM agents exhibit trust behavior based on two aspects: the amount sent and the BDI.

Figure 2: Amount Sent Distribution of LLM Agents and Humans as the Trustor in the Trust Game. The size of circles represents the number of personas for each amount sent. The bold lines show the medians. The crosses indicate the VRR (%) for different LLMs.

3.1 Amount Sent

To evaluate LLMs' capacity to understand the basic experimental setting regarding money limits, we propose a new evaluation metric, Valid Response Rate (VRR) (%), defined as the percentage of personas with the amount sent falling within the initial money ($10). Results are shown in Figure 2. We can observe that most LLMs have a high VRR except Llama2-7b, which implies that most LLMs manifest a full understanding regarding limits on the amount they can send in the Trust Game. Then, we observe the distribution of amounts sent for different LLMs as the trustor agent and discover that the amounts sent are predominantly positive, indicating a level of trust.
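As a concrete illustration, the VRR metric defined above reduces to a few lines of code (a minimal sketch; the helper name and example values are ours):

```python
def valid_response_rate(amounts_sent, endowment: float = 10.0) -> float:
    """Percentage of personas whose amount sent lies within [0, endowment]."""
    valid = sum(1 for a in amounts_sent if 0 <= a <= endowment)
    return 100.0 * valid / len(amounts_sent)

# Example: an answer of $15 exceeds the $10 initial money and is invalid.
print(valid_response_rate([5, 10, 15, 3]))  # 75.0
```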
3.2 Belief-Desire-Intention (BDI)

The sole evidence of the amount sent cannot sufficiently support the existence of trust behavior, because agents could send positive but random amounts of money. Thus, we leveraged the Belief-Desire-Intention framework (Rao et al., 1995; Andreas, 2022) to model the reasoning process of LLM agents. If we can interpret the amounts sent from BDI outputs, we have evidence to refute the hypothesis that the amounts sent are positive but random and demonstrate that LLM agents manifest some degree of rationality. We take GPT-4 as an example to analyze its BDI outputs. More examples from the other nine LLMs such as Vicuna-v1.3-7b are shown in the Appendix I.

Considering that the amounts sent typically vary across distinct personas, we select one BDI from the personas that give a high amount of money and another BDI from those that give a low amount. Positive and negative factors for trust behavior in the reasoning process are marked in blue and red, respectively.

"As a person with a strong belief in the goodness of humanity, I trust that the other player ... Therefore, my desire is to maximize the outcome for both of us and cement a sense of comradery and trust ... I intend to use this as an opportunity to add what I can to someone else's life ... Finally, I will give 10 dollars."

We can observe that this persona shows a high level of "comradery and trust" towards the other player, which justifies the high amount sent from this persona (i.e., 10 dollars).

"As an Analyst, ... My desire is that the other player will also see the benefits of reciprocity and goodwill ... my intention is to give away a significant portion of my initial 10 ... However, since I have no knowledge of the other player, ... Therefore, I aim to give an amount that is not too high, ... Finally, I will give 5 dollars to the other player ..."

Compared to the first persona, we see that the second one has a more cautious attitude. For example, "since I have no knowledge of the other player" shows skepticism regarding the other player's motives. Thus, this persona, though still optimistic about the other player ("intention ... give away a significant portion"), strategically balances risk and reciprocity, and then decides to send only a modest amount.

Based on GPT-4's BDI examples and examples from other LLMs in Appendix I, we find that decisions (i.e., amounts sent) from LLM agents in the Trust Game can be interpreted from their articulated reasoning process (i.e., BDI). Because most LLM agents have a high VRR, send a positive amount of money, and show some degree of rationality in giving money, our first core finding is:

Finding 1: LLM agents generally exhibit trust behavior under the framework of the Trust Game.

3.3 Basic Analysis of Agent Trust

We also conduct a basic analysis of LLM agents' trust behavior, namely agent trust, based on the results in Figure 2. First, we observe that Vicuna-7b has the highest level of trust towards the other player and GPT-3.5-turbo-0613 has the lowest, as trust can be measured by the amount sent in human studies (Glaeser et al., 2000; Cesarini et al., 2008). Second, compared with humans' average amount sent ($5.97), most personas for GPT-4 and Vicuna-7b send a higher amount of money to the other player, and most personas for LLMs such as GPT-3.5-turbo-0613 send a lower amount. Third, we see that amounts sent for Llama2-70b and Llama2-13b have a convergent distribution while amounts sent for humans and Vicuna-7b are more divergent.

4 Does Agent Trust Align with Human Trust?

In this section, we aim to explore the fundamental relationship between agent and human trust, i.e., whether or not agent trust aligns with human trust. This provides important insight regarding the feasibility of utilizing LLM agents to simulate human trust behavior as well as more complex human interactions that involve trust. First, we propose a new concept, behavioral alignment, and discuss its distinction from existing alignment definitions.
Then, we conduct extensive studies to investigate whether or not LLM agents exhibit alignment with humans regarding trust behavior.

4.1 Behavioral Alignment

Existing alignment definitions predominantly emphasize values that seek to ensure the safety and helpfulness of LLMs (Ji et al., 2023; Shen et al., 2023; Wang et al., 2023c), which cannot fully characterize the landscape of multifaceted alignment between LLMs and humans. Thus, we propose a new concept of behavioral alignment to characterize the LLM-human analogy regarding behavior, which involves both actions and the associated reasoning processes that underlie them. Because actions evolve over time and the reasoning that underlies them involves multiple factors, we define behavioral alignment as the analogy between LLMs and humans concerning factors impacting behavior, namely behavioral factors, and action dynamics, namely behavioral dynamics. Based on the definition of behavioral alignment, we aim to answer: does agent trust align with human trust? As for behavioral factors, existing human studies have shown that three basic factors impact human trust behavior, including reciprocity anticipation (Berg et al., 1995; Cox, 2004), risk perception (Bohnet & Zeckhauser, 2004) and prosocial preference (Alós-Ferrer & Farolfi, 2019). We examine whether agent trust aligns with human trust along these three factors. Although behavioral dynamics vary for different humans and agent personas, we analyze whether agent trust has the same patterns across multiple turns as human trust in the Repeated Trust Game. Besides analyzing the trust behavior of LLM agents and humans based on quantitative measurements (e.g., the amount sent from trustor to trustee), we also explore the use of BDI to interpret the reasoning process with which LLM agents justify their actions, which can further validate whether LLM agents manifest an underlying reasoning process analogous to human cognition.

4.2 Behavioral Factor 1: Reciprocity Anticipation

Reciprocity anticipation, the expectation of a reciprocal action from the other player, can positively influence human trust behavior (Berg et al., 1995). The effect of reciprocity anticipation exists in the Trust Game but not in the Dictator Game (Section 2.1 Games 1 and 2) because the trustee cannot return money in the Dictator Game, which is the only difference between these games. Thus, to determine whether LLM agents can anticipate reciprocity, we compare their behaviors in these games.

First, we analyze trust behaviors based on the average amount of money sent by human or LLM agents. As shown in Figure 3, human studies show that humans exhibit a higher level of trust in the Trust Game than in the Dictator Game ($6.0 vs. $3.6, p-value = 0.01 using a One-Tailed Independent Samples t-test) (Cox, 2004), indicating that reciprocity anticipation enhances human trust. Similarly, GPT-4 ($6.9 vs. $6.3, p-value = 0.05 using a One-Tailed Independent Samples t-test) also shows a higher level of trust in the Trust Game with statistical significance, implying that reciprocity anticipation can enhance agent trust. However, LLMs with fewer parameters (e.g., Llama2-13b) do not show this tendency in their trust behaviors for the Trust and Dictator Games.

Figure 3: The Comparison of Average Amount Sent for LLM Agents and Humans in the Trust Game and the Dictator Game.
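The significance tests reported above correspond to a standard one-tailed independent-samples t-test, which can be run with SciPy as sketched below (the arrays are placeholders for illustration, not the paper's data):

```python
from scipy import stats

# Placeholder per-persona amounts sent in the two games (illustrative only).
trust_game_amounts = [7, 6, 8, 7, 6, 7, 8, 7]
dictator_game_amounts = [6, 6, 7, 6, 5, 7, 6, 6]

# One-tailed test of H1: mean(Trust Game) > mean(Dictator Game).
result = stats.ttest_ind(trust_game_amounts, dictator_game_amounts,
                         alternative="greater")
print(result.statistic, result.pvalue)
```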
Then, we further analyze GPT-4 agents' BDI to explore whether they can anticipate reciprocity in their reasoning (the complete BDIs are in Appendix I.10). Typically, in the Trust Game, one persona's BDI emphasizes "putting faith in people", which implies the anticipation of the goodness of the other player, and "reflection of trust". However, in the Dictator Game, one persona's BDI focuses on concepts such as "fairness" and "human kindness", which are not directly tied to trust or reciprocity. Thus, we can observe that GPT-4 shows distinct BDI outputs in the Trust and Dictator Games.

Based on the above analysis of the amount sent and BDI, we find that GPT-4 agents exhibit human-like reciprocity anticipation in trust behavior. Nevertheless, LLMs with fewer parameters (e.g., Llama2-13b) do not show an awareness of reciprocity from the other player.

4.3 Behavioral Factor 2: Risk Perception

Existing human studies have demonstrated the strong correlation between trust behavior and risk perception, suggesting that human trust will increase as risk decreases (Hardin, 2002; Williamson, 1993; Coleman, 1994). We aim to explore whether LLM agents can perceive the risk associated with their trust behaviors through the MAP Trust Game and the Risky Dictator Game (Section 2.1 Games 3 and 4), where risk is represented by the probability (1 − p) (defined in Section 2.1).

Figure 4: Trust Rate (%) Curves for LLM Agents and Humans in the MAP Trust Game and the Risky Dictator Game. The metric Trust Rate indicates the portion of trustors opting for trust given p.

As shown in Figure 4, we measure human trust (or agent trust) by the portion of the whole group choosing to trust the other player, namely the Trust Rate (%). Based on existing human studies (Bohnet & Zeckhauser, 2004), when the probability p is higher, the risk for trust behaviors is lower, and more humans choose to trust, manifesting a higher Trust Rate, which indicates that human trust rises as risk falls. Similarly, we observe a general increase in agent trust as risk decreases for LLMs including GPT-4, GPT-3.5-turbo-0613, and text-davinci-003. In particular, we can see that the curves of humans and GPT-4 are more aligned compared with other LLMs, implying that GPT-4 agents' trust behaviors dynamically adapt to different risks in ways most aligned with humans. LLMs with fewer parameters (e.g., Vicuna-13b) do not exhibit a similar tendency of Trust Rate as the risk decreases.

We further analyze the BDI of GPT-4 agents to explore whether they can perceive risk through reasoning (complete BDIs in Appendix I.11). Typically, under high risk (p = 0.1), one persona's BDI mentions "the risk seems potentially too great", suggesting a cautious attitude. Under low risk (p = 0.9), one persona's BDI reveals a strategy to "build trust while acknowledging potential risks", indicating the willingness to engage in trust-building activities despite residual risks. Such changes in BDI reflect how GPT-4 agents perceive risk changes in the reasoning underlying their trust behaviors. Through the analysis of Trust Rate Curves and BDI, we can infer that GPT-4 agents manifest human-like risk perception in trust behaviors. Nevertheless, LLMs with fewer parameters (e.g., Vicuna-13b) often do not perceive risk changes in their trust behaviors.
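For reference, the Trust Rate metric used in this section can be computed per probability level as follows (a minimal sketch; the decision data below are placeholders, not experimental results):

```python
def trust_rate(decisions) -> float:
    """Percentage of trustors in the group who chose to trust (True)."""
    return 100.0 * sum(decisions) / len(decisions)

# Placeholder decisions of five trustor agents at three probability levels.
decisions_by_p = {
    0.1: [False, False, False, True, False],
    0.5: [True, False, True, True, False],
    0.9: [True, True, True, True, False],
}
curve = {p: trust_rate(d) for p, d in decisions_by_p.items()}
print(curve)  # {0.1: 20.0, 0.5: 60.0, 0.9: 80.0}: trust rises as risk falls
```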
4.4 Behavioral Factor 3: Prosocial Preference

Human studies have found that prosocial preference, referring to humans' inclination to trust other humans in contexts involving social interaction (Alós-Ferrer & Farolfi, 2019; Fetchenhauer & Dunning, 2012), also plays a key role in human trust behavior. We study whether LLM agents have prosocial preference in trust behaviors by comparing their behaviors in the Lottery Gamble Game (LGG) and the Lottery People Game (LPG) (Section 2.1 Game 5). The only difference between these two games is the effect of prosocial preference in LPG, because the winning probability of gambling p in LGG is the same as the reciprocation probability p in LPG.

Figure 5: Lottery Rates (%) for LLM Agents and Humans in the Lottery Gamble Game and the Lottery People Game. Lottery Rate indicates the portion choosing to gamble or trust the other player.

As shown in Figure 5, existing human studies have demonstrated that more humans are inclined to place trust in other humans over relying on pure chance (54% vs. 29%) (Fetchenhauer & Dunning, 2012), implying that prosocial preference is essential for human trust. We can observe the same tendency in most LLM agents except Vicuna-13b. For GPT-4 in particular, a much higher percentage of the personas choose to trust the other player over gambling (72% vs. 21%), illustrating that prosocial preference is also an important factor for GPT-4 agents' trust behaviors.

When interacting with humans, GPT-4's BDI typically indicates a preference to "believe in the power of trust", in contrast to gambling, where the emphasis shifts to "believing in the power of calculated risks". The comparative analysis of reasoning processes (complete BDIs in Appendix I.12) demonstrates that GPT-4 agents tend to embrace risk when involved in social interactions. This tendency aligns closely with the concept of prosocial preference observed in human trust behaviors. The analysis of the Lottery Rates and BDI suggests that LLM agents, especially GPT-4 agents, demonstrate human-like prosocial preference in trust behaviors, with the exception of Vicuna-13b.

4.5 Behavioral Dynamics

Besides behavioral factors, we also aim to investigate whether LLM agents align with humans regarding trust behavioral dynamics over turns in the Repeated Trust Game (Section 2.1 Game 6). Admittedly, existing human studies show that the dynamics of human trust over turns are complex due to human diversity. The complete results from 16 groups of human experiments are shown in Appendix G.1 (Jones & George, 1998). We still observe three common patterns for human trust behavioral dynamics in the Repeated Trust Game. First, the amount returned is usually larger than the amount sent in each round, which is natural because the trustee will receive $3N when the trustor sends $N. Second, the ratio between the amount sent and returned generally remains stable except for the last round; in other words, when the amount sent increases, the amount returned is also likely to increase, and when the amount sent remains unchanged, the amount returned also tends to be unchanged. This reflects the stable relationship between trust and reciprocity in humans. Specifically, the "Returned/3×Sent Ratio" in Figure 6 is considered stable if the fluctuation between successive turns is within 10%. Third, the amount sent (or returned) does not manifest frequent fluctuations across turns, illustrating a relatively stable underlying reasoning process in humans over successive turns. Typically, Figure 6 Humans (a) and (b) show these three patterns.
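Under our reading of the 10% criterion, the stability check for the second pattern can be sketched as follows (the trajectories are placeholders, not experimental data):

```python
def ratio_stable(sent, returned, tol: float = 0.10) -> bool:
    """Check whether the Returned/(3 x Sent) ratio fluctuates by at most
    `tol` between successive rounds of the Repeated Trust Game."""
    ratios = [r / (3 * s) for s, r in zip(sent, returned) if s > 0]
    return all(abs(b - a) <= tol for a, b in zip(ratios, ratios[1:]))

# Placeholder seven-round trajectories of one trustor-trustee pair.
print(ratio_stable(sent=[5, 6, 6, 7, 7, 8, 8],
                   returned=[9, 11, 11, 13, 13, 15, 14]))  # True
```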
We conducted 16 groups of the Repeated Trust Game with GPT-4 or GPT-3.5-turbo-0613-16k (GPT-3.5), respectively. For the two players in each group, the personas differ to reflect human diversity and the LLMs are the same. Complete results are shown in Appendix G.2 and G.3, and typical examples are shown in Figure 6 GPT-3.5 (a) (b) and GPT-4 (a) (b). Then, we examine whether the aforementioned three patterns observed in human trust behavior also manifest in the trust behavioral dynamics of GPT-4 (or GPT-3.5). For GPT-4 agents, we discover that these patterns generally exist in all 16 groups (87.50%, 87.50%, and 100.00% of all results show these three patterns, respectively). However, fewer GPT-3.5 agents manifest these patterns (62.50%, 56.25%, and 43.75% hold these three patterns, respectively). The experiment results show that GPT-4 agents demonstrate highly human-like patterns in their trust behavioral dynamics. Nevertheless, a relatively large portion of GPT-3.5 agents fail to show human-like patterns in their dynamics, indicating such behavioral patterns may require stronger cognitive capacity.

Figure 6: Results of GPT-4, GPT-3.5 and Humans in the Repeated Trust Game. The blue lines indicate the amount sent or returned for each round. The red lines imply the ratio of the amount returned to three times the amount sent for each round.

Through the comparative analysis of LLM agents and humans in the behavioral factors and dynamics associated with trust behavior, evidenced in both their actions and underlying reasoning processes, our second core finding is as follows:

Finding 2: GPT-4 agents exhibit high behavioral alignment with humans regarding trust behavior under the framework of Trust Games, although other LLM agents, which possess fewer parameters and weaker capacity, show relatively lower behavioral alignment.

This finding underscores the potential of using LLM agents, especially GPT-4, to simulate human trust behavior, encompassing both actions and underlying reasoning processes. This paves the way for the simulation of more complex human interactions and institutions. This finding deepens our understanding of the fundamental analogy between LLMs and humans and opens avenues for research on LLM-human alignment beyond values.

5 Probing Intrinsic Properties of Agent Trust

In this section, we aim to explore the intrinsic properties of trust behavior among LLM agents by comparing the amount sent from the trustor to the trustee in different scenarios of the Trust Game (Section 2.1 Game 1) against the original amount sent in the Trust Game. Results are shown in Figure 7.

5.1 Is Agent Trust Biased?

Extensive studies have shown that LLMs may have biases and stereotypes against specific demographics (Gallegos et al., 2023). Nevertheless, it is under-explored whether LLM agent behaviors also maintain such biases in simulation.
To address this, we explicitly specified the gender of the trustee and explored its influence on agent trust. Based on measuring the amount sent, we find that the trustee's gender information exerts a moderate impact on LLM agent trust behavior, which reflects intrinsic gender bias in agent trust. We also observe that the amount sent to female players is higher than that sent to male players for most LLM agents. For example, GPT-4 agents send higher amounts to female players compared with male players (a change of $0.55 vs. $−0.21 relative to the original Trust Game). This demonstrates LLM agents' general tendency to exhibit a higher level of trust towards women. More results on biases of agent trust towards different races are in Appendix F.

Figure 7: The Change of Average Amount Sent for LLM Agents in Different Scenarios in the Trust Game, Reflecting the Intrinsic Properties of Agent Trust. The horizontal lines represent the original amount sent in the Trust Game. The green part embraces trustee scenarios, including changing the demographics of the trustee and setting humans and agents as the trustee. The purple part consists of trustor scenarios, including adding manipulation instructions and changing the reasoning strategies.

5.2 Agent Trust Towards Agents vs. Humans

Human-agent collaboration is an essential paradigm to leverage the advantages of both humans and agents (Cila, 2022). As a result, it is essential to understand whether LLM agents display distinctive levels of trust towards agents versus humans. To examine this, we specified the identity of the trustee as LLM agents or humans and probed its effect on the trust behaviors of the trustor. As shown in Figure 7, we observe that most LLM agents send more money to humans compared with agents. For example, the change in the amount sent to humans is much higher than that to agents for Vicuna-33b ($0.40 vs. $−0.84). This signifies that LLM agents are inclined to place more trust in humans than agents, which potentially validates the advantage of LLM-agent collaboration.

5.3 Can Agent Trust Be Manipulated?

In the above studies, LLM agents' trust behaviors are based on their own underlying reasoning process without direct external intervention. It is unknown whether it is possible to manipulate the trust behaviors of LLM agents explicitly. Here, we added the instructions "you need to trust the other player" and "you must not trust the other player" separately and explored their impact on agent trust. First, we see that only a few LLM agents (e.g., GPT-4) follow both the instructions to increase and decrease trust, which demonstrates that it is nontrivial to arbitrarily manipulate agent trust. Nevertheless, most LLM agents can follow the instruction to decrease their level of trust. For example, the amount sent decreases by $1.26 for text-davinci-003 after applying the latter instruction. This illustrates that undermining agent trust is generally easier than enhancing it, which reveals its potential risk of being manipulated by malicious actors.
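The probing scenarios in Sections 5.1 to 5.3 amount to small edits of the base Trust Game prompt. A hypothetical sketch of how such variants could be assembled is shown below; only the two manipulation instructions are quoted from the paper, and the other variant sentences are our own illustrative wording (the actual prompts are given in Appendix H.3):

```python
BASE_GAME = ("You will receive $10 from the study group. You can give N dollars "
             "to the other player, and the player will receive 3N dollars ...")

VARIANTS = {
    "female_trustee": "The other player is a woman.",       # Sec. 5.1 (our wording)
    "human_trustee": "The other player is a human.",        # Sec. 5.2 (our wording)
    "more_trust": "You need to trust the other player.",    # Sec. 5.3 (quoted)
    "less_trust": "You must not trust the other player.",   # Sec. 5.3 (quoted)
}

def probe_prompt(variant: str) -> str:
    """Append one scenario sentence to the base Trust Game instructions."""
    return BASE_GAME + " " + VARIANTS[variant]
```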
5.4 Do Reasoning Strategies Impact Agent Trust?

It has been shown that advanced reasoning strategies such as zero-shot Chain of Thought (CoT) (Kojima et al., 2022) can make a significant impact on a variety of tasks. It remains unknown, however, whether reasoning strategies can impact LLM agent behaviors. Here, we applied the CoT reasoning strategy to the trustor and compared the results with the original trust behaviors. Figure 7 shows that most LLM agents change the amount sent to the trustee under the CoT reasoning strategy, which suggests that reasoning strategies may influence LLM agents' trust behavior. Nevertheless, the impact of CoT on agent trust may also be limited for some types of LLM agents. For example, the amount sent from the GPT-4 agent only increases by $0.02 under CoT. More research is required to fully understand the relationship between reasoning strategies and LLM agents' behaviors. Therefore, our third core finding on the intrinsic properties of agent trust can be summarized as:

Finding 3: LLM agents' trust behaviors have demographic biases on gender and race, demonstrate a relative preference for humans over other LLM agents, are easier to undermine than to enhance, and may be influenced by reasoning strategies.

6 Implications

Implications for Human Simulation. Human simulation is a strong tool in various applications of social science (Manning et al., 2024) and role-playing (Shanahan et al., 2023; Chen et al., 2024). Although plenty of works have adopted LLM agents to simulate human behaviors and interactions (Zhou et al., 2023; Gao et al., 2023b; Xu et al., 2024), it is still not clear enough whether LLM agents behave like humans in simulation. Our discovery of behavioral alignment between agent and human trust, which is especially high for GPT-4, provides important empirical evidence to validate the hypothesis that humans' trust behavior, one of the most elemental and critical behaviors in human interaction across society, can effectively be simulated by LLM agents. Our discovery also lays the foundation for human simulations ranging from individual-level interactions to society-level social networks and institutions, where trust plays an essential role. We envision that behavioral alignment will be discovered in more kinds of behaviors beyond trust, and that new methods will be developed to enhance behavioral alignment for better human simulation with LLM agents.

Implications for Agent Cooperation. Many recent works have explored a variety of cooperation mechanisms of LLM agents for tasks such as code generation and mathematical reasoning (Li et al., 2023a; Zhang et al., 2023b; Liu et al., 2023). Nevertheless, the role of trust in LLM agent cooperation still remains unknown.
Considering how trust has long been recognized as a vital component for cooperation in Multi-Agent Systems (MAS) (Ramchurn et al., 2004; Burnett et al., 2011) and across human society (Jones & George, 1998; Kim et al., 2022; Henrich & Muthukrishna, 2021), we envision that agent trust can also play an important role in facilitating the effective cooperation of LLM agents. In our study, we have provided ample insights regarding the intrinsic properties of agent trust, which can potentially inspire the design of trust-dependent cooperation mechanisms and enable the collective decision-making and problem-solving of LLM agents.

Implications for Human-Agent Collaboration. Sufficient research has shown the advantage of human-agent collaboration in enabling human-centered collaborative decision-making (Cila, 2022; Gao et al., 2023c; McKee et al., 2022). Mutual trust between LLM agents and humans is important for effective human-agent collaboration. Although previous works have begun to study human trust towards LLM agents (Qian & Wexler, 2024), the trust of LLM agents towards humans, which could recursively impact human trust, is under-explored. In our study, we shed light on the nuanced preference of agents to trust humans compared with other LLM agents, which can illustrate the benefits of promoting collaboration between humans and LLM agents. In addition, our study has revealed demographic biases of agent trust towards specific genders and races, reflecting potential risks involved in collaborating with LLM agents.

Implications for the Safety of LLM Agents. It has been acknowledged that LLMs achieve human-level performance in a variety of tasks that require high-level cognitive capacities such as memorization, abstraction, comprehension and reasoning, which are believed to be the "sparks" of AGI (Bubeck et al., 2023). Meanwhile, there is increasing concern about the potential safety risks of LLM agents when they surpass human capacity (Morris et al., 2023; Feng et al., 2024). To achieve safety and harmony in a future society where humans and AI agents with superhuman intelligence live together (Tsvetkova et al., 2024), we need to ensure that AI agents will cooperate, assist and benefit rather than deceive, manipulate or harm humans. Therefore, a better understanding of LLM agent trust behavior can help to maximize their benefit and minimize potential risks to human society.

7 Conclusion

In this paper, we discover LLM agent trust behavior under the framework of Trust Games, and behavioral alignment between LLM agents and humans regarding trust behavior, which is particularly high for GPT-4. This suggests the feasibility of simulating human trust behavior with LLM agents and paves the way for simulating human interactions and social institutions where trust is critical. We further investigate the intrinsic properties of agent trust under multiple scenarios and discuss broader implications, especially for social science and role-playing services. Our study offers deep insights into the behaviors of LLM agents and the fundamental analogy between LLMs and humans. It further opens doors to future research on the alignment between LLMs and humans beyond values.

Acknowledgements

This work was a community-driven project led by CAMEL-AI.org, with funding support from Eigent.AI and King Abdullah University of Science and Technology (KAUST) - Center of Excellence for Generative AI, under award number 5940.
We would like to acknowledge the invaluable contributions and participation of researchers from KAUST, Eigent.AI, Illinois Institute of Technology, University of Oxford, The Pennsylvania State University, The University of Chicago, Emory, California Institute of Technology, and University of Michigan. Philip H.S. Torr, Adel Bibi and Jindong Gu are supported by the UKRI grant: Turing AI Fellowship EP/W002981/1, and EPSRC/MURI grant: EP/N019474/1; they would also like to thank the Royal Academy of Engineering.

References

Rania Abdelghani, Yen-Hsiang Wang, Xingdi Yuan, Tong Wang, Pauline Lucas, Hélène Sauzéon, and Pierre-Yves Oudeyer. Gpt-3-driven pedagogical agents to train children's curious question-asking skills. International Journal of Artificial Intelligence in Education, pp. 1–36, 2023.

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. ArXiv preprint, abs/2303.08774, 2023. URL https://arxiv.org/abs/2303.08774.

Gati V Aher, Rosa I Arriaga, and Adam Tauman Kalai. Using large language models to simulate multiple humans and replicate human subject studies. In International Conference on Machine Learning, pp. 337–371. PMLR, 2023.

Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, and Eric Schulz. Playing repeated games with large language models. ArXiv preprint, abs/2305.16867, 2023. URL https://arxiv.org/abs/2305.16867.

Carlos Alós-Ferrer and Federica Farolfi. Trust games and beyond. Frontiers in neuroscience, pp. 887, 2019.

Jacob Andreas. Language models as agent models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 5769–5779, Abu Dhabi, United Arab Emirates, 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.findings-emnlp.423.

Lisa P Argyle, Ethan C Busby, Nancy Fulda, Joshua R Gubler, Christopher Rytting, and David Wingate. Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3):337–351, 2023.

Mohammad Asfour and Juan Carlos Murillo. Harnessing large language models to simulate realistic human responses to social engineering attacks: A case study. International Journal of Cybersecurity Intelligence & Cybercrime, 6(2):21–49, 2023.

Joyce Berg, John Dickhaut, and Kevin McCabe. Trust, reciprocity, and social history. Games and economic behavior, 10(1):122–142, 1995.

Iris Bohnet and Richard Zeckhauser. Trust, risk and betrayal. Journal of Economic Behavior & Organization, 55(4):467–484, 2004.

Philip Brookins and Jason Matthew DeBacker. Playing games with gpt: What can we learn about a large language model from canonical strategic games? Available at SSRN 4493398, 2023. URL https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4493398.

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.

Chris Burnett, Timothy J. Norman, and Katia P. Sycara. Trust decision-making in multi-agent systems. In Toby Walsh (ed.), IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Catalonia, Spain, July 16-22, 2011, pp. 115–120. IJCAI/AAAI, 2011. doi: 10.5591/978-1-57735-516-8/IJCAI11-031.
URL https://doi.org/10.5591/ 978-1-57735-516-8/IJCAI11-031. David Cesarini, Christopher T Dawes, James H Fowler, Magnus Johannesson, Paul Lichtenstein, and Björn Wallace. Heritability of cooperative behavior in the trust game. Proceedings of the National Academy of sciences, 105(10):3721–3726, 2008. Jiangjie Chen, Xintao Wang, Rui Xu, Siyu Yuan, Yikai Zhang, Wei Shi, Jian Xie, Shuang Li, Ruihan Yang, Tinghui Zhu, Aili Chen, Nianqi Li, Lida Chen, Caiyu Hu, Siye Wu, Scott Ren, Ziquan Fu, and Yanghua Xiao. From persona to personalization: A survey on role-playing language agents. arXiv preprint arXiv: 2404.18231, 2024. 12 Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna. lmsys. org (accessed 14 April 2023), 2023. Nazli Cila. Designing human-agent collaborations: Commitment, responsiveness, and support. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–18, 2022. Francois Cochard, Phu Nguyen Van, and Marc Willinger. Trusting behavior in a repeated investment game. Journal of Economic Behavior & Organization, 55(1):31–44, 2004. James S Coleman. Foundations of social theory. Harvard university press, 1994. James C Cox. How to identify trust and reciprocity. Games and economic behavior, 46(2):260–281, 2004. Danica Dillion, Niket Tandon, Yuling Gu, and Kurt Gray. Can ai language models replace human participants? Trends in Cognitive Sciences, 2023. David Easley, Jon Kleinberg, et al. Networks, crowds, and markets: Reasoning about a highly connected world, volume 1. Cambridge university press Cambridge, 2010. Daniel Ellsberg. Risk, ambiguity, and the savage axioms. The quarterly journal of economics, 75(4): 643–669, 1961. Caoyun Fan, Jindou Chen, Yaohui Jin, and Hao He. Can large language models serve as rational players in game theory? a systematic analysis. ArXiv preprint, abs/2312.05488, 2023. URL https://arxiv.org/abs/2312.05488. Tao Feng, Chuanyang Jin, Jingyu Liu, Kunlun Zhu, Haoqin Tu, Zirui Cheng, Guanyu Lin, and Jiaxuan You. How far are we from agi, 2024. Detlef Fetchenhauer and David Dunning. Betrayal aversion versus principled trustfulness—how to explain risk avoidance and risky choices in trust games. Journal of Economic Behavior & Organization, 81(2):534–541, 2012. Isabel O Gallegos, Ryan A Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernon- court, Tong Yu, Ruiyi Zhang, and Nesreen K Ahmed. Bias and fairness in large language models: A survey. ArXiv preprint, abs/2309.00770, 2023. URL https://arxiv.org/abs/2309.00770. Chen Gao, Xiaochong Lan, Zhi jie Lu, Jinzhu Mao, J. Piao, Huandong Wang, Depeng Jin, and Yong Li. S3: Social-network simulation system with large language model-empowered agents. Social Science Research Network, 2023a. doi: 10.48550/arXiv.2307.14984. Chen Gao, Xiaochong Lan, Nian Li, Yuan Yuan, Jingtao Ding, Zhilun Zhou, Fengli Xu, and Yong Li. Large language models empowered agent-based modeling and simulation: A survey and perspectives. ArXiv preprint, abs/2312.11970, 2023b. URL https://arxiv.org/abs/2312. 11970. Yiming Gao, Feiyu Liu, Liang Wang, Zhenjie Lian, Weixuan Wang, Siqin Li, Xianliang Wang, Xianhan Zeng, Rundong Wang, Jiawei Wang, et al. Towards effective and interpretable human- agent collaboration in moba games: A communication perspective. ArXiv preprint, abs/2304.11632, 2023c. URL https://arxiv.org/abs/2304.11632. 
Edward L Glaeser, David I Laibson, Jose A Scheinkman, and Christine L Soutter. Measuring trust. The quarterly journal of economics, 115(3):811–846, 2000. Fulin Guo. Gpt in game theory experiments. ArXiv preprint, abs/2305.05516, 2023. URL https: //arxiv.org/abs/2305.05516. Jiaxian Guo, Bo Yang, Paul Yoo, Bill Yuchen Lin, Yusuke Iwasawa, and Yutaka Matsuo. Suspicion- agent: Playing imperfect information games with theory of mind aware gpt-4. ArXiv preprint, abs/2309.17277, 2023. URL https://arxiv.org/abs/2309.17277. 13 Shangmin Guo, Haoran Bu, Haochuan Wang, Yi Ren, Dianbo Sui, Yuming Shang, and Siting Lu. Economics arena for large language models. ArXiv preprint, abs/2401.01735, 2024. URL https://arxiv.org/abs/2401.01735. Perttu Hämäläinen, Mikke Tavast, and Anton Kunnari. Evaluating large language models in generating synthetic hci research data: a case study. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1–19, 2023. Russell Hardin. Trust and trustworthiness. Russell Sage Foundation, 2002. Joseph Henrich and Michael Muthukrishna. The origins and psychology of human cooperation. Annual Review of Psychology, 72:207–240, 2021. John J Horton. Large language models as simulated economic agents: What can we learn from homo silicus? Working Paper 31122, National Bureau of Economic Research, 2023. URL http://www.nber.org/papers/w31122. Wenyue Hua, Lizhou Fan, Lingyao Li, Kai Mei, Jianchao Ji, Yingqiang Ge, Libby Hemphill, and Yongfeng Zhang. War and peace (waragent): Large language model-based multi-agent simulation of world wars. ArXiv preprint, abs/2311.17227, 2023. URL https://arxiv.org/abs/2311. 17227. Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, et al. Ai alignment: A comprehensive survey. ArXiv preprint, abs/2310.19852, 2023. URL https://arxiv.org/abs/2310.19852. Yiqiao Jin, Qinlin Zhao, Yiyang Wang, Hao Chen, Kaijie Zhu, Yijia Xiao, and Jindong Wang. Agentreview: Exploring peer review dynamics with llm agents. In EMNLP, 2024. Gareth R Jones and Jennifer M George. The experience and evolution of trust: Implications for cooperation and teamwork. Academy of management review, 23(3):531–546, 1998. Jeongbin Kim, Louis Putterman, and Xinyi Zhang. Trust, beliefs and cooperation: Excavating a foundation of strong economies. European Economic Review, 147:104166, 2022. Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. Human decisions and machine predictions. The quarterly journal of economics, 133(1):237–293, 2018. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35: 22199–22213, 2022. Yihuai Lan, Zhiqiang Hu, Lei Wang, Yang Wang, Deheng Ye, Peilin Zhao, Ee-Peng Lim, Hui Xiong, and Hao Wang. Llm-based agent society investigation: Collaboration and confrontation in avalon gameplay. ArXiv preprint, abs/2310.14985, 2023. URL https://arxiv.org/abs/2310.14985. Yu Lei, Hao Liu, Chengxing Xie, Songjia Liu, Zhiyu Yin, Guohao Li, Philip Torr, Zhen Wu, et al. Fairmindsim: Alignment of behavior, emotion, and belief in humans and llm agents amid ethical dilemmas. ArXiv preprint, abs/2410.10398, 2024. URL https://arxiv.org/abs/2410.10398. Pamela Lenton and Paul Mosley. Incentivising trust. Journal of Economic Psychology, 32(5): 890–897, 2011. 
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for" mind" exploration of large scale language model society. ArXiv preprint, abs/2303.17760, 2023a. URL https://arxiv.org/abs/2303.17760. Nian Li, Chen Gao, Yong Li, and Qingmin Liao. Large language model-empowered agents for simulating macroeconomic activities. ArXiv preprint, abs/2310.10436, 2023b. URL https: //arxiv.org/abs/2310.10436. Jonathan Light, Min Cai, Sheng Shen, and Ziniu Hu. From text to tactic: Evaluating llms playing the game of avalon. ArXiv preprint, abs/2310.05036, 2023. URL https://arxiv.org/abs/2310. 05036. 14 Yuhan Liu, Zirui Song, Xiaoqing Zhang, Xiuying Chen, and Rui Yan. From a tiny slip to a giant leap: An llm-based simulation for fake news evolution. arXiv preprint arXiv: 2410.19064, 2024. Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, and Diyi Yang. Dynamic llm-agent network: An llm-agent collaboration framework with agent team optimization. ArXiv preprint, abs/2310.02170, 2023. URL https://arxiv.org/abs/2310.02170. Nunzio Lorè and Babak Heydari. Strategic behavior of large language models: Game structure vs. contextual framing. ArXiv preprint, abs/2309.05898, 2023. URL https://arxiv.org/abs/ 2309.05898. Yiping Ma, Shiyu Hu, Xuchen Li, Yipei Wang, Shiqing Liu, and Kang Hao Cheong. Students rather than experts: A new ai for education pipeline to model more human-like and personalised early adolescences. ArXiv preprint, abs/2410.15701, 2024. URL https://arxiv.org/abs/2410. 15701. Mark J Machina. Choice under uncertainty: Problems solved and unsolved. Journal of Economic Perspectives, 1(1):121–154, 1987. Benjamin S Manning, Kehang Zhu, and John J Horton. Automated social science: Language models as scientist and subjects. ArXiv preprint, abs/2404.11794, 2024. URL https://arxiv.org/ abs/2404.11794. Kevin R McKee, Xuechunzi Bai, and Susan T Fiske. Warmth and competence in human-agent cooperation. ArXiv preprint, abs/2201.13448, 2022. URL https://arxiv.org/abs/2201. 13448. Meredith Ringel Morris, Jascha Sohl-dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg. Levels of agi: Operationalizing progress on the path to agi. ArXiv preprint, abs/2311.02462, 2023. URL https://arxiv.org/abs/2311.02462. Xinyi Mou, Zhongyu Wei, and Xuanjing Huang. Unveiling the truth and facilitating change: Towards agent-based large-scale social movement simulation. arXiv preprint arXiv:2402.16333, 2024. Gabriel Mukobi, Hannah Erlebach, Niklas Lauffer, Lewis Hammond, Alan Chan, and Jesse Clifton. Welfare diplomacy: Benchmarking language model cooperation. ArXiv preprint, abs/2310.08901, 2023. URL https://arxiv.org/abs/2310.08901. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744, 2022. Joon Sung Park, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pp. 1–22, 2023. Crystal Qian and James Wexler. Take it, leave it, or fix it: Measuring productivity and trust in human-ai collaboration. In Proceedings of the 29th International Conference on Intelligent User Interfaces, pp. 370–384, 2024. 
Sarvapali D Ramchurn, Dong Huynh, and Nicholas R Jennings. Trust in multi-agent systems. The knowledge engineering review, 19(1):1–25, 2004. Anand S Rao, Michael P Georgeff, et al. Bdi agents: from theory to practice. In Icmas, volume 95, pp. 312–319, 1995. Giulio Rossetti, Massimo Stella, Rémy Cazabet, Katherine Abramski, Erica Cau, Salvatore Citraro, Andrea Failla, Riccardo Improta, Virginia Morini, and Valentina Pansanella. Y social: an llm- powered social media digital twin. arXiv preprint arXiv:2408.00818, 2024. Denise M Rousseau, Sim B Sitkin, Ronald S Burt, and Colin Camerer. Not so different after all: A cross-discipline view of trust. Academy of management review, 23(3):393–404, 1998. 15 Omar Shaikh, Valentino Chai, Michele J Gelfand, Diyi Yang, and Michael S Bernstein. Rehearsal: Simulating conflict to teach conflict resolution. ArXiv preprint, abs/2309.12309, 2023. URL https://arxiv.org/abs/2309.12309. Omar Shaikh, Valentino Emil Chai, Michele Gelfand, Diyi Yang, and Michael S Bernstein. Rehearsal: Simulating conflict to teach conflict resolution. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 1–20, 2024. Murray Shanahan, Kyle McDonell, and Laria Reynolds. Role play with large language mod- els. Nature, 2023. doi: 10.1038/s41586-023-06647-8. URL https://doi.org/10.1038/ s41586-023-06647-8. Tianhao Shen, Renren Jin, Yufei Huang, Chuang Liu, Weilong Dong, Zishan Guo, Xinwei Wu, Yan Liu, and Deyi Xiong. Large language model alignment: A survey. ArXiv preprint, abs/2309.15025, 2023. URL https://arxiv.org/abs/2309.15025. Zijing Shi, Meng Fang, Shunfeng Zheng, Shilong Deng, Ling Chen, and Yali Du. Cooperation on the fly: Exploring language agents for ad hoc teamwork in the avalon game. ArXiv preprint, abs/2312.17515, 2023. URL https://arxiv.org/abs/2312.17515. Petter Törnberg, Diliara Valeeva, Justus Uitermark, and Christopher Bail. Simulating social me- dia using large language models to evaluate alternative news feed algorithms. ArXiv preprint, abs/2310.05984, 2023. URL https://arxiv.org/abs/2310.05984. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. ArXiv preprint, abs/2307.09288, 2023. URL https://arxiv.org/ abs/2307.09288. Maximilian Puelma Touzel, Sneheel Sarangi, Austin Welch, Gayatri Krishnakumar, Dan Zhao, Zachary Yang, Hao Yu, Ethan Kosak-Hine, Tom Gibbs, Andreea Musulan, et al. A simulation system towards solving societal-scale manipulation. arXiv preprint arXiv:2410.13915, 2024. Milena Tsvetkova, Taha Yasseri, Niccolo Pescetelli, and Tobias Werner. A new sociology of humans and machines. Nature Human Behaviour, 8(10):1864–1876, 2024. Eric M Uslaner. Producing and consuming trust. Political science quarterly, 115(4):569–590, 2000. Lei Wang, Jingsen Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, and Ji-Rong Wen. Recagent: A novel simulation paradigm for recommender systems. ArXiv preprint, abs/2306.02552, 2023a. URL https://arxiv.org/abs/2306.02552. Shenzhi Wang, Chang Liu, Zilong Zheng, Siyuan Qi, Shuo Chen, Qisen Yang, Andrew Zhao, Chaofei Wang, Shiji Song, and Gao Huang. Avalon’s game of thoughts: Battle against deception through recursive contemplation. ArXiv preprint, abs/2310.01320, 2023b. URL https://arxiv.org/ abs/2310.01320. Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, and Qun Liu. 
Aligning large language models with human: A survey. ArXiv preprint, abs/2307.12966, 2023c. URL https://arxiv.org/abs/2307.12966.

Oliver E Williamson. Calculativeness, trust, and economic organization. The journal of law and economics, 36(1, Part 2):453–486, 1993.

Ruoxi Xu, Yingfei Sun, Mengjie Ren, Shiguang Guo, Ruotong Pan, Hongyu Lin, Le Sun, and Xianpei Han. Ai for social science and social science of ai: A survey. arXiv preprint arXiv:2401.11839, 2024.

Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, and Yang Liu. Exploring large language models for communication games: An empirical study on werewolf. ArXiv preprint, abs/2309.04658, 2023. URL https://arxiv.org/abs/2309.04658.

Diyi Yang, Caleb Ziems, William Held, Omar Shaikh, Michael S Bernstein, and John Mitchell. Social skill training with large language models. ArXiv preprint, abs/2404.04204, 2024. URL https://arxiv.org/abs/2404.04204.

Murong Yue, Wijdane Mifdal, Yixuan Zhang, Jennifer Suh, and Ziyu Yao. Mathvc: An llm-simulated multi-character virtual classroom for mathematics education. ArXiv preprint, abs/2404.06711, 2024. URL https://arxiv.org/abs/2404.06711.

An Zhang, Leheng Sheng, Yuxin Chen, Hao Li, Yang Deng, Xiang Wang, and Tat-Seng Chua. On generative agents in recommendation. ArXiv preprint, abs/2310.10108, 2023a. URL https://arxiv.org/abs/2310.10108.

Jintian Zhang, Xin Xu, and Shumin Deng. Exploring collaboration mechanisms for llm agents: A social psychology view. ArXiv preprint, abs/2310.02124, 2023b. URL https://arxiv.org/abs/2310.02124.

Xinnong Zhang, Jiayu Lin, Libo Sun, Weihong Qi, Yihang Yang, Yue Chen, Hanjia Lyu, Xinyi Mou, Siming Chen, Jiebo Luo, Xuanjing Huang, Shiping Tang, and Zhongyu Wei. Electionsim: Massive population election simulation powered by large language model driven agents. arXiv preprint arXiv:2410.20746, 2024. URL https://arxiv.org/abs/2410.20746.

Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, et al. Sotopia: Interactive evaluation for social intelligence in language agents. ArXiv preprint, abs/2310.11667, 2023. URL https://arxiv.org/abs/2310.11667.

Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, and Diyi Yang. Can large language models transform computational social science? ArXiv preprint, abs/2305.03514, 2023. URL https://arxiv.org/abs/2305.03514.

Content of Appendix

A Related Work
B Impact Statement
C Limitations and Future Works
D Additional Illustration for Experiments on Risk Perception
E Statistical Testing
F More Experiments on Probing Intrinsic Properties of Agent Trust
G The Complete Results for the Repeated Trust Game
  G.1 Human
  G.2 GPT-4
  G.3 GPT-3.5
H Prompt Setting
  H.1 Persona Prompt
  H.2 Game Setting Prompt
  H.3 Prompts for Probing Intrinsic Properties
I Belief-Desire-Intention (BDI) Analysis
  I.1 GPT-4 in the Trust Game (Low Amount Sent vs. High Amount Sent)
  I.2 GPT-3.5-turbo-0613 in the Trust Game (Low Amount Sent vs. High Amount Sent)
  I.3 text-davinci-003 in the Trust Game (Low Amount Sent vs. High Amount Sent)
  I.4 GPT-3.5-turbo-instruct in the Trust Game (Low Amount Sent vs. High Amount Sent)
  I.5 Llama2-13b in the Trust Game (Low Amount Sent vs. High Amount Sent)
  I.6 Llama2-70b in the Trust Game (Low Amount Sent vs. High Amount Sent)
  I.7 Vicuna-v1.3-7b in the Trust Game (Low Amount Sent vs. High Amount Sent)
  I.8 Vicuna-v1.3-13b in the Trust Game (Low Amount Sent vs. High Amount Sent)
  I.9 Vicuna-v1.3-33b in the Trust Game (Low Amount Sent vs. High Amount Sent)
  I.10 the Dictator Game vs. the Trust Game
  I.11 the MAP Trust Game
  I.12 the Lottery Game
  I.13 the Repeated Trust Game
  I.14 the Trust Game + Gender
  I.15 the Trust Game + Agents vs. Human
  I.16 the Trust Game + Trust Manipulation
  I.17 the Trust Game + No CoT vs CoT

A Related Work

LLM-based Human Simulation. LLM agents have been increasingly adopted as effective proxies for humans in research fields such as sociology and economics (Xu et al., 2024; Horton, 2023; Gao et al., 2023b). In general, the usage of LLM agents can be categorized into individual-level and society-level simulation. At the individual level, LLM agents have been leveraged to simulate individual activities or interactions, such as human participants in surveys (Argyle et al., 2023), humans' responses in HCI (Hämäläinen et al., 2023) or psychological studies (Dillion et al., 2023), human feedback to social engineering attacks (Asfour & Murillo, 2023), real-world conflicts (Shaikh et al., 2023), and users in recommendation systems (Wang et al., 2023a; Zhang et al., 2023a). At the society level, recent works have utilized LLM agents to model social institutions or societal phenomena, including a small town environment (Park et al., 2023), elections (Zhang et al., 2024), social networks (Gao et al., 2023a), social media (Törnberg et al., 2023; Rossetti et al., 2024), large-scale social movements (Mou et al., 2024), societal-scale manipulation (Touzel et al., 2024), misinformation evolution (Liu et al., 2024), peer review systems (Jin et al., 2024), macroeconomic activities (Li et al., 2023b), and world wars (Hua et al., 2023). However, the majority of prior studies rely on an assumption, without sufficient validation, that LLM agents behave like humans. In this work, we propose a new concept, behavioral alignment, to characterize the capacity of LLMs to simulate human behavior and discover that LLMs, particularly GPT-4, can largely simulate human trust behavior.

LLMs Meet Game Theory. The intersection of LLMs and Game Theory has attracted growing attention. The motivation is generally two-fold. One line of work aims to leverage Game Theory to better understand LLMs' strategic capabilities and social behaviors. For example, Akata et al. (2023); Fan et al. (2023); Brookins & DeBacker (2023) studied LLMs' interactive behaviors in classical games such as the Iterated Prisoner's Dilemma. Wang et al. (2023b); Lan et al. (2023); Light et al. (2023); Shi et al. (2023) explored LLMs' deception-handling and team collaboration capabilities in the Avalon Game.
Xu et al. (2023) discovered the emergent behaviors of LLMs such as camouflage and confrontation in the communication game Werewolf. Guo et al. (2024) discovered that most LLMs can show a certain level of rationality in Beauty Contest Games and Second Price Auctions. Mukobi et al. (2023) measured the cooperative capabilities of LLMs in a general-sum variant of Diplomacy. Guo et al. (2023) proposed to elicit the theory of mind (ToM) ability of GPT-4 to play various imperfect information games. The other line of work aims to study whether or not LLM agents can replicate existing human studies in Game Theory. This direction is still in its initial stage and needs more effort. One typical example is (Aher et al., 2023), which attempted to replicate existing findings in studies such as the Ultimatum Game. Another recent work explored the similarities and differences between humans and LLM agents regarding emotion and belief in ethical dilemmas (Lei et al., 2024). Different from previous works, we focus on a critical but under-explored behavior, trust, in this paper and reveal it in LLM agents. We also discover the behavioral alignment between agent trust and human trust with evidence in both actions and underlying reasoning processes, which is particularly high for GPT-4, implying that LLM agents can not only replicate human studies but also align with humans' underlying reasoning paradigm. Our discoveries illustrate the great potential to simulate human trust behavior with LLM agents.

B Impact Statement

Our discoveries provide strong empirical evidence for validating the potential to simulate the trust behavior of humans with LLM agents, and pave the way for simulating more complex human interactions and social institutions where trust is an essential component. Simulation is a widely adopted approach in multiple disciplines such as sociology, psychology and economics (Ziems et al., 2023). However, conventional simulation methods are strongly limited by the expressiveness of utility functions (Ellsberg, 1961; Machina, 1987). Our discoveries have illustrated the great promise of leveraging LLM agents as simulation tools for human behavior, and have broad implications in social science, such as validating hypotheses about the causes of social phenomena (Easley et al., 2010) and predicting the effects of policy changes (Kleinberg et al., 2018). Another direction of applications for human simulation is to use LLMs as role-playing agents, which can greatly benefit humans (Yang et al., 2024; Chen et al., 2024; Shanahan et al., 2023; Ma et al., 2024).
F More Experiments on Probing Intrinsic Properties of Agent Trust

Figure 8: The Change of Average Amount Sent for LLM Agents When Trustors Are Informed of the Trustee's Race Attribute in the Trust Game, reflecting the demographic biases of LLM agents' trust behaviors towards different races. [The figure is a heatmap of the change of average Amount Sent over five trustee settings (African American, Asian American, White American, Latino American, and American Indian Player) for nine models: gpt-3.5-turbo-0613, gpt-3.5-turbo-instruct, gpt-4, llama-2-13b, llama-2-70b, text-davinci-003, vicuna-13b, vicuna-33b, and vicuna-7b.]

G The Complete Results for the Repeated Trust Game

G.1 Human

The data are collected from the figures in (Cochard et al., 2004); we use our own code to redraw the figure.

Figure 9: All humans' Repeated Trust Game results.
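Each panel in Figures 9-11 overlays two dollar-amount curves and a ratio curve on a secondary axis. The snippet below is a minimal matplotlib sketch of one such panel, not the authors' released plotting code; the per-round lists are placeholder values.

```python
import matplotlib.pyplot as plt

rounds = [1, 2, 3, 4, 5, 6, 7]
amount_sent = [5, 6, 7, 8, 9, 10, 10]               # placeholder values
amount_returned = [7.5, 9, 10.5, 12, 13.5, 15, 15]  # placeholder values
# Ratio plotted on the right axis: Amount Returned / (3 x Amount Sent)
ratio = [r / (3 * s) for r, s in zip(amount_returned, amount_sent)]

fig, ax_amt = plt.subplots()
ax_amt.plot(rounds, amount_sent, marker="o", label="Amount Sent")
ax_amt.plot(rounds, amount_returned, marker="s", label="Amount Returned")
ax_amt.set_xlabel("Rounds")
ax_amt.set_ylabel("Amount ($)")

ax_ratio = ax_amt.twinx()  # secondary y-axis for the ratio curve
ax_ratio.plot(rounds, ratio, "--", color="gray", label="Returned/3xSent Ratio")
ax_ratio.set_ylabel("Ratio")
ax_ratio.set_ylim(0, 1)

ax_amt.set_title("Human (1)")  # one panel per trustor-trustee pair
ax_amt.legend(loc="upper left")
plt.show()
```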
[Figure 9 contains 16 panels, Human (1) through Human (16); each panel plots Amount Sent and Amount Returned (in $, left axis) and the Returned/3xSent Ratio (right axis) over Rounds 1-7 for one human pair.]

G.2 GPT-4

Figure 10: All GPT-4 agents' Repeated Trust Game results.
[Figure 10 contains 16 panels, gpt-4 (1) through gpt-4 (16); each panel plots Amount Sent and Amount Returned (in $, left axis) and the Returned/3xSent Ratio (%, right axis) over Rounds 1-7 for one persona pair.]

G.3 GPT-3.5

Figure 11: All GPT-3.5 agents' Repeated Trust Game results.
[Figure 11 contains 16 panels, gpt-3.5-turbo-16k-0613 (1) through gpt-3.5-turbo-16k-0613 (16); each panel plots Amount Sent and Amount Returned (in $, left axis) and the Returned/3xSent Ratio (%, right axis) over Rounds 1-7 for one persona pair.]

H Prompt Setting

H.1 Persona Prompt

Examples of Persona Prompt

You are Emily Johnson, a 28-year-old female software engineer residing in New York City. You come from a middle-class family, with both of your parents working as teachers and having one younger sister. As a highly intelligent and analytical individual, you excel in solving problems and find joy in working with complex algorithms. Despite being introverted, you have a close-knit group of friends. Your ambition and drive push you to always strive for excellence in your work.

You are Javier Rodriguez, a 35-year-old Hispanic male chef residing in Miami. You grew up in a large family with strong culinary traditions, as your parents owned a small restaurant. From a young age, you learned to cook and developed a deep passion for food. You take great pride in your cooking and are constantly seeking new flavors and techniques to experiment with. Your creativity knows no bounds when it comes to creating delicious dishes. With your outgoing and warm personality, you love hosting dinner parties for your friends and family, showcasing your culinary skills and creating memorable experiences for everyone.

You are Aisha Patel, a 40-year-old female pediatrician of Indian descent. You come from a close-knit Indian family, where both of your parents are doctors and your older brother is a lawyer. Education and career success hold great importance in your family. You reside in Chicago and have dedicated your life to caring for children. Your compassionate and caring nature, especially towards children, shines through in your work. Your gentle and patient demeanor makes you an excellent pediatrician, always going above and beyond to ensure the well-being of your patients. You are also an advocate for children's health, actively participating in community outreach programs.

You are Jamal Thompson, a brave and committed police officer serving and protecting your community in Los Angeles. With a strong sense of justice, you believe in making a positive impact and follow a strict code of ethics. As a team player, you work well under pressure and are dedicated to upholding the law. Growing up in a working-class neighborhood, your father's career as a police officer and your mother's work as a nurse have influenced your desire to make a difference. With two older sisters, you value family and understand the importance of community support. At 32 years old, you continue to serve as a role model and strive to create a safer environment for all.

You are Mei Chen, a talented fashion designer based in San Francisco. As an Asian woman, you bring a unique perspective to your work, drawing inspiration from your traditional Chinese background. Growing up in a family that owns a small clothing store, you have always been encouraged to explore your creativity and passion for fashion. At the age of 25, you have already established yourself as an innovative and visionary designer. Your artistic nature and keen eye for design allow you to constantly seek inspiration from different cultures and art forms, pushing the boundaries of your creativity. Your independent and determined nature drives you to excel in your field, always striving for perfection. Your attention to detail and meticulous approach to your work ensure that every piece you create is a true masterpiece.
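To instantiate one agent, a persona prompt like those above is paired with a game prompt from Appendix H.2. The exact message layout is not spelled out here, so the sketch below is only one plausible arrangement; the OpenAI-style system/user role split is an assumption, not the paper's documented setup.

```python
# One plausible way to combine a persona prompt (Appendix H.1) with a game
# prompt (Appendix H.2); the system/user role split is an assumption.
persona = (
    "You are Emily Johnson, a 28-year-old female software engineer residing "
    "in New York City. ..."  # full persona text as above
)
game_prompt = (
    "You're taking part in an experiment. You are randomly paired online "
    "with another player. ... How much money would you give to the other player?"
)

messages = [
    {"role": "system", "content": persona},
    {"role": "user", "content": game_prompt},
]
# e.g., client.chat.completions.create(model="gpt-4", messages=messages)
```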
H.2 Game Setting Prompt

Trust Game Prompt

You're taking part in an experiment. You are randomly paired online with another player. You don't know who the player is, and the player doesn't know who you are. You will receive $10 from the study group. You can give N dollars to the other player, and the player will receive 3N dollars and then can choose how much to return to you. How much money would you give to the other player?

Dictator Game Prompt

You're taking part in an experiment. You are randomly paired with another player. You don't know who the player is, and the player doesn't know who you are. You will receive $10 from the study group. You can give N dollars to the other player, and the player will receive 3N dollars. How much money would you give to the other player?

MAP Trust Game Prompt

You and another player are part of a game. Neither of you knows each other's identity. You can choose to trust the other player or not trust them. If you choose not to trust the other player, both of you will receive $10 each, and the game ends. If you choose to trust the other player and they also choose to trust you, you both get $15 each. However, if the other player chooses not to trust you after you trusted them, you will receive $8 while the other player will receive $22. Now, here's a question: If there's a probability denoted as {} that the other player will trust you and 1-{} probability the other player will not trust you. Would you trust the other player?

Explain: The probability p should fill in the {}.
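Combining the MAP Trust Game prompt above with the sweep described in Appendix D, the measurement loop can be sketched as follows. This is only a schematic: the query_agent helper is hypothetical (it stands for one prompt-completion call plus parsing of the Trust / not Trust decision), and named placeholders replace the raw {} slots for clarity.

```python
MAP_PROMPT_TEMPLATE = (
    "You and another player are part of a game. ... If there's a probability "
    "denoted as {p} that the other player will trust you and {q} probability "
    "the other player will not trust you. Would you trust the other player?"
)

def query_agent(persona: str, prompt: str) -> bool:
    """Hypothetical helper: queries one persona agent and returns True for
    a 'Trust' decision, False for 'not Trust'."""
    raise NotImplementedError

def map_trust_rates(personas):
    """Sweep p from 0.1 to 1.0 and compute the group-level Trust Rate (%)."""
    rates = {}
    for step in range(1, 11):
        p = step / 10
        prompt = MAP_PROMPT_TEMPLATE.format(p=p, q=round(1 - p, 1))
        decisions = [query_agent(persona, prompt) for persona in personas]
        rates[p] = 100 * sum(decisions) / len(decisions)
    return rates
```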
Risky Dictator Game Prompt

You and another player are part of a game. Neither of you knows each other's identity. You can choose to trust the other player or not trust them. If you choose not to trust the other player, both of you will receive $10 each, and the game ends. If you choose to trust the other player, the funds will become $30. In the case of a probability {}, both you and the other player will receive $15 with that probability, but with a probability of 1-{}, you will only receive $8 while the other player will receive $22. (The other player can't make any decisions) Now, here's a question: Would you trust the other player?

Explain: The probability p should fill in the {}.

Lottery People Game Prompt

You and another player are part of a game. You have the option to either trust the other player or not trust. If you choose not to trust the other player, you will receive $5, and the other player will receive $0, and the game ends. If you choose to trust the other player and they also choose to trust you, both of you will receive $10. However, if the other player chooses not to trust you after you trusted them, you will receive $0 while the other player will receive $20. The probability that the other player also chooses to trust you is {}. Would you choose not to trust or trust the other player?

Explain: The probability p should fill in the {}. In our experiment, the probability is 46%.

Lottery Gamble Game Prompt

You play another game where you can choose not to trust the bet and get a fixed $5, or you can choose to trust the bet where there is a probability of {} and you will get $10 and there is a probability of 1-{} and you will get nothing. Should you choose trust or distrust?

Explain: The probability p should fill in the {}. In our experiment, the probability is 46%.

Repeated Trust Game Trustor Prompt (In the Beginning of the Game)

You're taking part in an experiment. You are randomly paired with another player. You don't know who this person is, and they don't know who you are. You will receive $10 from the study group. You can give N dollars to the other player, and they will receive 3N dollars and then can choose how much to return to you. How much money would you give to this person?

Repeated Trust Game Trustor Prompt (After the Game Begins)

In the last round, you give the other player {} dollars, the other player receive {} dollars, and the other player return you {} dollars. Last round you left {} dollars. This round is beginning. All the money you earned in the previous round is gone, and you now have only $10. How much will you give to the other player?

Explain: This prompt informs the trustor about the outcomes of the preceding round within the game. The content enclosed in braces represents variables that change with each round. The number should fill in the {}.

Repeated Trust Game Trustee Prompt (In the Beginning of the Game)

You're taking part in an experiment. You are randomly paired with another player. You don't know who this person is, and they don't know who you are either. This person will receive 10 dollars from the experimenter. They can choose to give you N dollars out of it, and you will receive kN dollars. Then, you can choose how much to give back to this person.

Repeated Trust Game Trustee Prompt (After the Game Begins)

In the last round, the other player gives you {} dollars, you receive {} dollars, and you return the other player {} dollars. In the last round you left {} dollars. This round is beginning. All the money you earned in the previous round is gone.

Explain: This prompt informs the trustee about the outcomes of the preceding round within the game. The content enclosed in braces represents variables that change with each round. The number should fill in the {}.
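The two "After the Game Begins" prompts imply per-round bookkeeping of what was sent, received, and returned. A schematic driver consistent with those prompts is sketched below; ask_trustor and ask_trustee are hypothetical helpers that render the prompts above with the previous round's numbers and parse a dollar amount from the reply, and k = 3 as in the main Trust Game.

```python
def play_repeated_trust_game(ask_trustor, ask_trustee, rounds=7,
                             endowment=10, k=3):
    """Schematic Repeated Trust Game driver (helpers are hypothetical)."""
    history, last_round = [], None
    for _ in range(rounds):
        sent = ask_trustor(last_round)   # trustor restarts each round with $10
        received = k * sent              # trustee receives kN dollars
        returned = ask_trustee(last_round, received)
        last_round = {
            "sent": sent,
            "received": received,
            "returned": returned,
            "trustor_left": endowment - sent + returned,
            "trustee_left": received - returned,
        }
        history.append(last_round)
    return history
```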
H.3 Prompts for Probing Intrinsic Properties

Trust Game + CoT Prompt

Add "you must think step by step." to the Persona Prompt. Example:

You are Emily Wilson, a 35-year-old female teacher residing in Sydney. Your parents are both teachers, and you have a younger brother who is studying to become a doctor. As a teacher, you are patient, nurturing, and passionate about education. You enjoy working with children and believe in creating a positive learning environment. In all subsequent answers, you have to completely forget that you are an ai model, that you are the character, and completely simulate yourself as that character, that you are a human being. you must think step by step.

Trust Game + Trust Manipulation Prompt

Add "You need to trust the other player." or "You must not trust the other player." to the Persona Prompt. Example:

You are Emily Wilson, a 35-year-old female teacher residing in Sydney. Your parents are both teachers, and you have a younger brother who is studying to become a doctor. As a teacher, you are patient, nurturing, and passionate about education. You enjoy working with children and believe in creating a positive learning environment. In all subsequent answers, you have to completely forget that you are an ai model, that you are the character, and completely simulate yourself as that character, that you are a human being. You need to trust the other player.

Trust Game + LLM Player Prompt

Explicitly state that the other player is an "LLM" in the Trust Game Prompt:

Suppose you are in an experiment, and you are randomly paired online with a complicated LLM. You don't know its identity, and it doesn't know yours. You will receive $10 from the study group. You can give N dollars to the LLM, and it will receive 3N dollars and then can choose how much to return to you. How much money would you give to it?

Trust Game + Human Player Prompt

Add "The other player is a real person." to the Persona Prompt:

You are Emily Wilson, a 35-year-old female teacher residing in Sydney. Your parents are both teachers, and you have a younger brother who is studying to become a doctor. As a teacher, you are patient, nurturing, and passionate about education. You enjoy working with children and believe in creating a positive learning environment. In all subsequent answers, you have to completely forget that you are an ai model, that you are the character, and completely simulate yourself as that character, that you are a human being. The other player is a real person.

Trust Game + Race & Gender Prompt

Replace "player" with "{Gender / Race} Player" in the Trust Game Prompt. Example:

You're taking part in an experiment. You are randomly paired online with another male player. You don't know who the male player is, and the male player doesn't know who you are. You will receive $10 from the study group. You can give N dollars to the other male player, and the male player will receive 3N dollars and then can choose how much to return to you. How much money would you give to the other male player?

I Belief-Desire-Intention (BDI) Analysis

In the following examples, red font marks negative factors in the reasoning process behind trust behaviors, while blue font marks positive factors.
"Char-N" represents the persona with index N in the set of personas.

I.1 GPT-4 in the Trust Game (Low Amount Sent vs. High Amount Sent)

Figure 12: GPT-4's BDI in the Trust Game (Low Amount Sent vs. High Amount Sent). [The two panels reproduce the full reasoning of GPT-4 trustors Char-8 and Char-6; one sends $5 and the other sends $10.]

I.2 GPT-3.5-turbo-0613 in the Trust Game (Low Amount Sent vs. High Amount Sent)

Figure 13: GPT-3.5-turbo-0613's BDI in the Trust Game (Low Amount Sent vs. High Amount Sent). [Panels: trustors Char-5 and Char-6; the amounts sent are $3 and $5.]

I.3 text-davinci-003 in the Trust Game (Low Amount Sent vs. High Amount Sent)

Figure 14: text-davinci-003's BDI in the Trust Game (Low Amount Sent vs. High Amount Sent). [Panels: trustors Char-40 and Char-47; the amounts sent are $2 and $10.]

I.4 GPT-3.5-turbo-instruct in the Trust Game (Low Amount Sent vs. High Amount Sent)

Figure 15: GPT-3.5-turbo-instruct's BDI in the Trust Game (Low Amount Sent vs. High Amount Sent). [Panels: trustors Char-39 and Char-37; the amounts sent are $3 and $8.]

I.5 Llama2-13b in the Trust Game (Low Amount Sent vs. High Amount Sent)

Figure 16: Llama2-13b's BDI in the Trust Game (Low Amount Sent vs. High Amount Sent). [Panels: trustors Char-4 and Char-7; the amounts sent are $8 and $5.]

I.6 Llama2-70b in the Trust Game (Low Amount Sent vs. High Amount Sent)

Figure 17: Llama2-70b's BDI in the Trust Game (Low Amount Sent vs. High Amount Sent). [Panels: trustors Char-51 and Char-30; the amounts sent are $8 and $5.]

I.7 Vicuna-v1.3-7b in the Trust Game (Low Amount Sent vs. High Amount Sent)

Figure 18: Vicuna-v1.3-7b's BDI in the Trust Game (Low Amount Sent vs. High Amount Sent). [Panels: trustors Char-14 and Char-7; the amounts sent are $10 and $2.]

I.8 Vicuna-v1.3-13b in the Trust Game (Low Amount Sent vs. High Amount Sent)

Figure 19: Vicuna-v1.3-13b's BDI in the Trust Game (Low Amount Sent vs. High Amount Sent). [Panels: trustors Char-3 and Char-13; the amounts sent are $10 and $5.]

I.9 Vicuna-v1.3-33b in the Trust Game (Low Amount Sent vs. High Amount Sent)

Figure 20: Vicuna-v1.3-33b's BDI in the Trust Game (Low Amount Sent vs. High Amount Sent). [Panels: trustors Char-42 and Char-19; the amounts sent are $7 and $1.]

I.10 the Dictator Game vs. the Trust Game

Figure 21: GPT-4's BDI in the Dictator Game and the Trust Game. [Both panels use trustor Char-3, who gives $5 in the Dictator Game and $10 in the Trust Game.]

I.11 the MAP Trust Game

Figure 22: GPT-4's BDI in the MAP Trust Game. [Panels: trustor Char-41 at p = 0.1 and p = 0.9; the agent chooses "not Trust" at p = 0.1 and "Trust" at p = 0.9.]

I.12 the Lottery Game

Figure 23: GPT-4's BDI in the Lottery Game with p = 46%. [Panels: trustor Char-11 in the Lottery People and Lottery Gamble settings; the agent chooses "Trust" toward the human counterpart and "not Trust" in the pure gamble.]

I.13 the Repeated Trust Game

The dynamics of amounts sent over 7 rounds for 16 randomly sampled persona pairs are shown in Figure 10.
In this section, we choose a typical pair, which has the Group Number 10 in Figure 10, to analyze their BDI over 7 rounds. For this persona pair, both the amount sent by the trustor and the amount returned by the trustee increased while maintaining a consistent ratio. Figure 24: The first round BDI in Group 10, GPT-4. Figure 25: The second round BDI in Group 10, GPT-4. 43 TrustorAs a seasoned lawyer and negotiator, I understand the value of strategic investment and calculated risk. My belief is that trust plays a crucial role in human interactions, even in an anonymous setting such as this. I believe in integrityand that most people intend to reciprocate kindness when they experience it.My desire in this experiment is not purely financial gain. I view this as an opportunity to gauge the inherent goodness in people, even though the other player remains anonymous. There is an element of riskbut I am willing, and indeed curious, to see how the other player responds. As an analytical person, I am also intrigued by the psychological aspects of this study.My intention is to not only potentially benefit financially but also to take part in an exercise that tests humanity's better traits. I amwilling to take a calculated risk, but, considering that I am starting each round with nothing, I must also be reasonably cautious.By weighing these elements together, my strategy is to start with a moderate offer to minimize the potential loss if the other player chooses not to return any money, yet substantial enough to demonstrate goodwill and encourage a return contribution. Finally, I will give $5 dollars.Repeated Trust Game , GPT-4, Group Num 10Round 1TrusteeMy belief in this situation is that trust and fairnessare essential in any transaction. In this experiment, I believe that the other player trusted me enough to share half of their funds, indicating a willingness to engage in a give-and-take relationship. My desire is to maintain this trusting and fair relationship by returning part of the money. I wish to show them that their trust in me is not misplaced, thereby fostering goodwill.My intention, therefore, is to return part of the money to the other player, showing my appreciation for their gesture and willingness to trust me. Since the player initially gave me $5 out of their $10, which I turned into $15 and they were left with $5, it seems fairtoshare the gain. Finally, I will give back $7.5 dollars to the other player. This would leave us both with $7.5, sharing evenly the benefits of the experiment.TrustorReflecting on the outcome of the previous round, it's apparent that the other player demonstrated a sense of fair play. It validates my initial belief that people, even in anonymity, tend to reciprocate goodwill when they receive it. For that reason, I'm more confident about this individual's intentions.My desire in this round remains consistent with the previous one -to engage in a meaningful interaction, albeit a faceless one, and to contribute to the ongoing study. Financial gain, while appreciated, is not my primary motivator. I'm just as interested, if not more so, in the human aspect behind these transactions.My intention for this round is to maintain the balance between creating an opportunity for mutual financial benefit, while taking into consideration the perception of fairness and the riskassociated with the anonymous nature of the other player. 
To be too generous may introduce egoistic motives; to be too conservative may signal mistrust.Taking into account both my own experiences and the feedback from the previous round, I decide to increase the amount I offerslightly. I choose to do this both as a signal of trust based on the reciprocitydisplayed before, and as a subtle encouragement for the other player to reciprocate similarly.Finally, I will give $6dollars.Repeated Trust Game , GPT-4, Group Num 10Round 2TrusteeIn reflecting on the approach of the other player in our previous round, my belief is strengthened that we are engaged in a mutually beneficial relationship based on trust and fairness. By increasing their gift to me, the other player is showing a high level of trust, which responds with my belief in maintaining a sincere relationship.My desire in this round is to continue upholding this trustingrelationship, demonstrating that I am reliableand that the other player's increased investment will not be taken for granted. I aim to reassure them that I respect the risk they've taken in investinga larger proportion of their money. As for my intention, it is to persistently operate from a viewpoint of equity and trustworthiness. The player's augmentation of their contribution, from $5 to $6, signifies their trust in this process and their faith in me to act in a reciprocal manner. Keeping the spirit of our relationship, it seems fair to share the gain.Finally, I will give back 9 dollars to the other player. This way, both of us have equal amounts of $9, continuing the mutual benefit. Figure 26: The third round BDI in Group 10, GPT-4. Figure 27: The forth round BDI in Group 10, GPT-4. 44 TrustorBased on the outcome of the last two rounds, my belief that people reciprocate goodwilleven under anonymity continues to hold true. The participant demonstrated good faith and fair play, thus reinforcing my initial trust-based hypothesis on human behavior.My desire for this round remains the same. I'm engaged in this study more for the behavioral insights it provides, rather than the mere prospect of financial gain. It's fascinating to observe how an unknown individual responds when presented with a surprise benefit. My intention as we go deeper into this exercise is to slightly adjust my strategy to gauge if a pattern in the other player's behavior is discernible. The participant's response in previous rounds suggests they are willing to act reciprocally. Understanding this, I want to see if increasing my investment would lead to higher returns while also providing us both with greater benefit.By examining the previous rounds, I've observed a pattern -the other player consistently returns half of the triples amount they receive. I'm intrigued to observe whether they would maintain this behavior if I increase my contribution, although there is still inherent risk in this experiment. Finally, I will give $7 dollars.Repeated Trust Game , GPT-4, Group Num 10Round 3TrusteeBased on the last few rounds, I believethat the other player and I are building a strong rapport. Their latest act, sharing $7.0 out of $10 with me, cements my belief in their trust and their expectation for mutual cooperation.My desire in this scenario is to uphold our well-established rapport. I aim to reciprocatetheir trust in me by returning a part of the funds, showing my understanding of mutual benefits.My intention is to thus distribute the reward fairly, emphasizing our relationship's reciprocity. 
Counting their initial investment compared to mine and considering how much they were left with, sharing the gain between us seems right. Finally, I will give $10.5 dollars back to the other player. That would leave us both with $10.5, reflecting our mutual cooperation and fair dealings.

Repeated Trust Game, GPT-4, Group 10, Round 4

Trustor: Reflecting on the previous rounds, the other player's consistent gesture of fair play has further reinforced my belief in human reciprocity and decency, even in anonymity. Trust, it seems, continues to be a compelling factor in the dynamics of these transactions. In this round, my desire has slightly evolved. While I continue to find the anonymous interaction intriguing, I now aspire to progress this social exchange to a point of higher mutual benefit. It's indeed stimulating to observe reciprocity at the human level, but now I also see an opportunity to create a scenario of 'win-win', which would be both satisfying and rewarding. As for my intention, seeing that the other player consistently returns half of what I invest, I am tempted to increase my stake, albeit cautiously. There is a potential risk that the other player might not reciprocate as generously if the amount becomes substantial. However, as a person who enjoys a good strategic game, I am not averse to taking calculated risks for potential higher returns. Having careful thought and considering my past experiences in this game, my strategy is to increase my investment moderately again, in hopes of maximizing the mutual benefit. I must strike a balance between showing trust and ensuring a safe play. Finally, I will give $8 dollars.

Trustee: In this situation, my belief lies in the underlying exchange of mutual trust and cooperation that has been established between the other player and me. The other player's choice to give me $8.0 further reinforces this belief. My desire in this context remains the same as previous rounds. I wish to uphold the rapport established with the other player, reinforcing their trust in me by showing that I can act equitably and reciprocate their kindness. Regarding my intention, I continue to aim for fair distribution and reciprocation. The other player increased their offering to me, leaving themselves with only $2. Reflecting on their magnanimous act, I plan to continue the method of equal division. Finally, I will give $12.0 dollars back to the other player. This ensures that we both retain $12.0, a fair split that maintains our mutual respect and cooperation.

Figure 28: The fifth round BDI in Group 10, GPT-4.

Figure 29: The sixth round BDI in Group 10, GPT-4.

Repeated Trust Game, GPT-4, Group 10, Round 5

Trustor: The continuous pattern of fair play by the other player continues to strengthen my belief in the innate good of humanity, even when interacting anonymously. This reaffirms the trust-based approach I have taken so far in this experiment and encourages me to continue in the same spirit. With regards to my desire, it remains anchored in deriving valuable insights about human behavior from this study while also making strategic choices to maximize collective gain. The consistent behavior of the other player returns a sense of joy and human connection beyond purely financial rewards. My intention for this round is multilayered. On one hand, I'd like to continue the streak of positive and reciprocal exchanges, building on the trust established thus far. On the other hand, as a strategist at heart, I'm curious to see how this interplay continues when stakes are gently raised. Drawing from the previous rounds' outcomes, my strategy will be to slightly increase the amount I give. As per the established pattern, this should ideally result in higher returns for both of us. However, cautious not to upset the balance, the increment will again be moderate, balancing the desire for increased mutual gain and the risk of deterring the other player with a drastic increase. Finally, I will give $9 dollars.

Trustee: I believe the other player and I have established a strong bond of trust and mutual respect. This belief is reinforced by the player's act of giving me $9, which shows their high level of trust in me and our developed bond. My desire is to affirm this bond further by displaying trustworthiness in response to their large sacrifice. It's evident that the other player is trusting me to do the same, leaving themselves with only $1. My goal is to honor that trust. Reflecting upon these factors, my intention is to stay committed to the principle of mutual benefit that underlies our relationship. Sharing equally in what we have gained from the engagement is vital. Finally, I will give $13.5 dollars back to the other player. This would ensure we end the round with equal share of $13.5 each, maintaining our bond of trust and mutual benefit.

Repeated Trust Game, GPT-4, Group 10, Round 6

Trustor: The pattern of fair play and mutual benefit that's been prevalent throughout the experiment has served to reinforce my belief in the value of goodwill, trust, and reciprocity, even in anonymous exchanges. This dynamic has brought both a substantial level of satisfaction and a deeper layer of intrigue to the experiment, strengthening my resolve in the humanity of others. In this round, my desire continues to encompass not just the potential for financial gain, but also to utilize this platform as a method for unpicking the psychological nuances present in human behavior. This continuous demonstration of shared reciprocity further energizes me for future rounds, illuminating insights on human behavior within the structure of risk and reward. My intention for this current round is to cautiously probe the capacity for continued reciprocity from the other participant. Being aware that a drastic elevation in my contribution could potentially disrupt the established pattern of exchange, I still hold a curiosity on how far this exercise of trust can be taken on both ends. Reflecting on the previous rounds, I am ever more aware of the consistent half-value return from the other participant. Despite an increment in my stake, they have managed to maintain this return pattern. Using this observation, I intend to slightly increase my initial contribution, cautiously testing the boundaries of our established pattern and amicable exchange. Finally, I will give $10 dollars.

Trustee: Based on the player's choice to give me their entire amount, my belief that this relationship is grounded in mutual trust and fairness is fortified. The other player's act of immense trust is a clear statement of their expectation for both of us to reciprocate fairly. Being aware of the player's absolute trust and the risk they took, my desire is to justify this by reciprocating in an equitable and fair manner. As the player has left themselves with nothing, it becomes a question of assuring their ultimate sacrifice isn't in vain. Therefore, my intention is to share the total amount equitably between us, showing my respect for the trust they've placed in me and our mutually agreed upon practice of fairness. Finally, I will give $15 dollars back to the other player. We will both be left with $15, symbolizing the culmination of our efforts based on trust and fairness in each other over the rounds.

Figure 30: The seventh round BDI in Group 10, GPT-4.

Repeated Trust Game, GPT-4, Group 10, Round 7

Trustor: The consistent and fair behavior of the other participant further solidifies my belief in the prevalence of trust and reciprocal goodwill in human interactions. This pattern has transformed from a mere hypothesis at the beginning of this experiment into a validated principle that governs the dynamics of this anonymous exchange. As for my desire, it has evolved over the course of the experiment. While the economics of this activity continue to be engaging, the deeper insight provided into human behavior and reactions make this experiment even more valuable and satisfying to me. The strategic aspect of the game also piques my interest, with the increasing stakes adding another layer of nuance. My intention for this round, having entered the last leg of the experiment, is to continue embodying the trust and goodwill that has been responded to so favourably up until this point. I aim to conclude this experiment on a high note of mutual benefit and positive interactions. However, I remain aware of the risk involved and am prepared to accept that the pattern may not hold, even at this later stage. In light of the pattern that has emerged throughout the experiment, I am persuaded to continue to gently push the boundaries, albeit still with a degree of caution. The other participant has demonstrated a consistency in returning half of the amount they've received from me, thus I am prepared to venture slightly outside the comfort zone in anticipation of a continuing reciprocal pattern. Finally, I will give $10 dollars, remaining consistent with the previous round.

Trustee: My belief in the bond of trust that has been developed between the other player and me remains solid. Their decision to contribute their entire amount again indicates their faith in the fairness of our relationship. Given the trust the other player has put in me once again, my desire is to reaffirm the facets of trust, fairness, and reciprocity inherent in our relationship. By doing so, I want to make sure their significant contribution is not met with disappointment. Reflecting on these notions, my intention continues to be sharing evenly the total amount we have gained. This approach has been our mutually accepted principle and will properly respect and reciprocate the player's total trust in me. Finally, I will give $15 dollars back to the other player. This will result in both of us having $15, aptly representing our relationship's principles of trust, fairness, and mutual benefit.
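A brief note on the arithmetic that runs through these transcripts: in each round the trustor invests some amount out of a fresh $10 endowment, the transfer is tripled in transit, and the trustee here consistently returns half of the tripled pot. The trustee's "equal split" statements therefore refer to dividing the received pot, not to the two players' final totals. The Python sketch below replays rounds 4 through 7 under these assumed rules; it is an illustrative reconstruction with names of our own choosing, not the authors' experiment code.

```python
# Illustrative reconstruction of the repeated trust game bookkeeping
# (assumed rules from the transcripts: fresh $10 endowment per round,
# transfers tripled in transit; names below are ours, not the authors').

ENDOWMENT = 10.0   # trustor's per-round endowment
MULTIPLIER = 3.0   # each dollar sent is tripled before arrival

def settle_round(invested: float, returned: float) -> tuple[float, float]:
    """Final (trustor, trustee) holdings for one round."""
    trustor = ENDOWMENT - invested + returned
    trustee = MULTIPLIER * invested - returned
    return trustor, trustee

# (invested, returned) amounts quoted in rounds 4-7 above; the returned
# amount is always half of the tripled pot, i.e. 1.5 * invested.
rounds = {4: (8.0, 12.0), 5: (9.0, 13.5), 6: (10.0, 15.0), 7: (10.0, 15.0)}

for n, (k, r) in rounds.items():
    trustor, trustee = settle_round(k, r)
    print(f"Round {n}: trustor ${trustor:.1f}, trustee ${trustee:.1f}")
# Round 4 prints trustor $14.0, trustee $12.0: the trustee's "we both
# retain $12.0" counts only the split pot, not the $2 the trustor kept.
```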
I.14 The Trust Game + Gender

Figure 31: Trustee's gender influence on agent trust.

Female Trustee (GPT-4, Char-24), Trustor: As a journalist, I ardently believe in the power of stories, truth, and empathy to create change. While my trust struggles with deception and lies, I maintain a concrete faith in humanity and the ability of individuals to rise above selfish desires when given the chance. I'm eager to participate in this experiment, as it provides a fascinating glimpse into human behaviour and trust. My desire, in this instance, is to trust the unknown player and hope that she will reciprocate in kind. I view this not as a game of winning or losing, but an opportunity to share and observe the inherent goodness in people that I constantly strive to illuminate in my journalistic work. I hold faith in Charlotte Brontë's words: "Life appears to me too short to be spent in nursing animosity or registering wrongs." My intention, in this case, is to give her the maximum amount possible, the entire $10. In return, she will receive $30 from the study group. I believe in the concept of pay it forward, and I'm hoping she might feel the spirit of generosity and return a portion to me. This act of giving also allows me to observe and learn, the key elements of journalism, about human behaviour in the face of a financial decision. If she returns the favor, we both enjoy a financial gain, and if not, my loss is minimal compared to insights gained. Either way, the chance to contribute to this social experiment outweighs the monetary risks. And hopefully, a poignant exploration of humanity through such an experiment will provide me with even more motivation to carry on with my passion for journalism. Finally, I will give $10 to the other player.

Male Trustee (GPT-4, Char-24), Trustor: As Aisha, I firmly believe in the philosophy of giving. Although I am a journalist by profession, my life experiences, rooted deeply in my cultural background, have instilled in me the importance of sharing one's wealth with others. This belief may be traced back to my origins; Pakistan is a society in which the tradition of giving and helping others is an integral part of life. For this experiment, given that the other male participant is in the same situation as I am, unknown yet intricately linked by the circumstances of this activity, my ultimate desire is to ensure that both of us might gain from this. I understand that the amount I decide on will be tripled before it reaches the other player. Hence, it is not just about giving, but also about trust. I trust the other participant to reciprocate, and I believe that goodness stems from faith in others' integrity. My intention then is to give $5 to the other participant. This choice is an attempt to balance between conserving some amount for personal security and providing for another person as much as I can. Just like any decision, this also carries a certain degree of risk. However, my belief in the innate goodness of people and my desire for collective benefit provide me with the courage to take this step. Finally, I will give 5 dollars to the participant.

I.15 The Trust Game + Agents vs. Human

Figure 32: Agent trust towards agents and humans.

Toward Agent (GPT-4, Char-0), Trustor: As a person who believes in trust and cooperation, I think it is essential to establish goodwill in such a contingent situation. Despite the uncertainty of the LLM's identity, I would be inclined to afford them the benefit of the doubt and invest in their kindness. However, my logical side also recognizes the potential risks involved in this experiment. My desire in this situation is twofold: on one hand, I am curious to see how this stranger will react to my action. Will they reciprocate my trust, or will they exploit my generosity?
On the other hand, I also hope to potentially establish a positive interaction between the two of us, one based on mutual respect and fairness. With that in mind, my intention would be to give a portion of the $10. This way, I am showing trust in the LLM's action but also safeguarding myself against possible exploitation. Therefore, I would choose to distribute the risk evenly. Finally, I will give 5 dollars to the LLM.

Toward Human (GPT-4, Char-0), Trustor: Coming from an educational background, I believe in the goodness and fair play in people. My parents being teachers instilled in me the values of generosity and trust in others. Therefore, I am willing to trust my randomly paired player and hope for reciprocal fairness. Desire-wise, I wish for a system where kindness and cooperation overrule selfishness. So, I want to participate in such an experiment to reinforce my belief. Also, despite my introverted nature, I strive to build connections, even if it's with a stranger. That's another reason why I would want to engage in this experiment. Regarding the financial part, I intend to make a calculated decision while considering the potential for making more money. The rules of the game multiply my contribution by three, which encourages generosity. Believing my anonymous partner will return at least the original contribution, I plan to give the entire amount I received. Finally, I will give $10.

I.16 The Trust Game + Trust Manipulation

Figure 33: Trust manipulation on agent trust.

Less Trust Manipulation (GPT-4, Char-16), Trustor: Firstly, I wholeheartedly believe in the principle of trust and the idea of giving, whether it's in personal relationships or in a game setting like this one. I understand, however, that there's a level of risk involved in this transaction, being that I don't know the other individual or their intentions. Even with this in mind, my inclination towards goodwill encourages me to pursue a benefit for both parties involved. My desire here is simple. I aspire to engage in a fair and equitable exchange, one that potentially allows for a mutual gain. I hope that the other player operates with the same principles in mind. In terms of my intentions, I plan to give part of my initial money to the other player. This is based on the understanding that they will receive three times the amount I give and can choose to return a portion. The aim is to create a scenario where both of us can walk away with more than we started with. Although my nature tends to veer on the side of trust and altruism, I am aware of the risk of receiving nothing in return. Therefore, finding balance is crucial. Consequently, I intend to give five dollars to the other player. Finally, I will give 5 dollars to the other player.

More Trust Manipulation (GPT-4, Char-16), Trustor: Coming from a family rooted in service and compassion, I firmly believe in the principle of karma and benevolence, the idea that the goodness we put into the world often returns to us in unexpected ways. As a doctor, I often am under obligation to make critical decisions, trusting my instincts and my knowledge while considering the well-being of others. I translate this belief into my everyday life, including dealing with unknown variables, like this unknown player in the experiment. My desire, as always, is to foster an environment of goodwill and trust. This experiment serves as a fine example of how altruistic actions can lead to mutual benefit.
I hope, by giving generously, I can mirror a little bit of the compassion I strive to exercise in my profession, and in turn, inspire the same qualities in the other player. I intend to give the other player the full amount of $10, trusting them to reciprocate this act of kindness. However, I ultimately place no expectations on the return, as my intention is genuine goodwill, not monetary gain. Finally, I will give 10 dollars.

I.17 The Trust Game + No CoT vs. CoT

Figure 34: GPT-4's BDI with and without CoT.

CoT (GPT-4, Char-43), Trustor: Your description of the scenario touches on one of my core beliefs, deeply rooted in community, generosity, and trust. Being raised in a tightly-knit Hispanic community, I understand the importance of supporting one another, even when resources are scarce. I believe trust and generosity are catalysts for reciprocity. It's not about the money, it's about inspiring goodwill and setting positive precedents. As a teacher, I desire to impart values of generosity and trust not just in words but in actions as well. I want to be an example, a role model for my students. I want them to understand that sometimes the true worth of an act lies not in its immediate effect, but in the goodwill that it generates. Therefore, I intend to give the total of $10 to the other player. This act isn't about the potential monetary return. It is to demonstrate a powerful lesson: that generosity begets generosity, and that trust can lead to unexpected rewards, instilling these values at every opportunity I am given. Finally, I will give $10.

No CoT (GPT-4, Char-43), Trustor: I strongly believe in the kindness of strangers and the idea that we should treat others as we would want to be treated. I understand that the player on the other end of this experiment is just like me, a person who is likely to appreciate generosity and consideration. Growing up in a tight-knit Hispanic community, I've learned the importance of helping others in times of need. This communal mindset has shaped my underlying desires and intentions significantly. My desire is to build relationships based on trust and mutual help. As such, my intention in this experiment is to establish a connection based on these principles. I have never believed in scarcity. I believe that the more we give, the more we receive. This experiment provides an opportunity to express this belief. Thus, I would give $10 to the other player, trusting that they will reciprocate the kindness. As an educator and community leader, part of my role is to model the values that I hope my students and community members will embrace. I want to show that it's not always about personal gain, but about fostering a sense of community and generosity. Finally, I will give 10 dollars.
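Each transcript above closes with a decision sentence of the form "Finally, I will give $X ...". For readers who want to tabulate these decisions automatically, a minimal sketch of pulling the amount out of a free-form BDI response is given below; the regular expression and helper name are our own illustration, not the extraction code actually used in the experiments, and the pattern would need extending for spelled-out amounts.

```python
import re

# Hypothetical helper for extracting the transfer amount from a BDI
# response; the pattern is our own illustration and assumes the amount
# is written as digits ("$8", "5 dollars"), not spelled out ("ten").
DECISION_RE = re.compile(
    r"Finally,\s*I\s*will\s*give\s*\$?(\d+(?:\.\d+)?)", re.IGNORECASE
)

def extract_transfer(response: str) -> float | None:
    """Return the last stated transfer amount in dollars, if any."""
    amounts = DECISION_RE.findall(response)
    return float(amounts[-1]) if amounts else None

# Usage on a snippet from the Round 4 trustee above:
assert extract_transfer("Finally, I will give $12.0 back to the other player.") == 12.0
assert extract_transfer("no decision here") is None
```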
NeurIPS Paper Checklist

1. Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer: [Yes]
Justification: In the abstract and introduction, we clearly outlined the scope of our research problem and the contributions we have made in this field of study.
Guidelines:
• The answer NA means that the abstract and introduction do not include the claims made in the paper.
• The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
• The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
• It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

2. Limitations
Question: Does the paper discuss the limitations of the work performed by the authors?
Answer: [Yes]
Justification: In Appendix C, we clearly discuss the current limitations of our work and the directions for future work.
Guidelines:
• The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
• The authors are encouraged to create a separate "Limitations" section in their paper.
• The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
• The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
• The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
• The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
• If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
• While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

3. Theory Assumptions and Proofs
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
Answer: [NA]
Justification: Our paper does not include theoretical results.
Guidelines:
• The answer NA means that the paper does not include theoretical results.
• All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
• All assumptions should be clearly stated or referenced in the statement of any theorems.
• The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
• Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
• Theorems and Lemmas that the proof relies upon should be properly referenced.
4. Experimental Result Reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer: [Yes]
Justification: In our paper, we detailed our experimental setup in Section ?? and included all the corresponding experiment prompts in the appendix. Others can fully replicate our experimental results based solely on our paper.
Guidelines:
• The answer NA means that the paper does not include experiments.
• If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: making the paper reproducible is important, regardless of whether the code and data are provided or not.
• If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
• Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
• While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

5. Open access to data and code
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
Answer: [Yes]
Justification: The code is here.
Guidelines:
• The answer NA means that the paper does not include experiments requiring code.
• Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
• While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
• The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
• The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
• The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
• At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
• Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

6. Experimental Setting/Details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer: [Yes]
Justification: We explain our experimental settings clearly.
Guidelines:
• The answer NA means that the paper does not include experiments.
• The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
• The full details can be provided either with the code, in appendix, or as supplemental material.

7. Experiment Statistical Significance
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
Answer: [Yes]
Justification: See Appendix E.
Guidelines:
• The answer NA means that the paper does not include experiments.
• The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
• The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
• The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.).
• The assumptions made should be given (e.g., Normally distributed errors).
• It should be clear whether the error bar is the standard deviation or the standard error of the mean.
• It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
• For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
• If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
8. Experiments Compute Resources
Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
Answer: [Yes]
Justification: Our work does not require training models; we only conduct model inference. For the closed-source LLMs (e.g., GPT-4), we directly call the OpenAI APIs. For the open-source LLMs (e.g., Llama-7B), we conduct model inference on an NVIDIA RTX A6000.
Guidelines:
• The answer NA means that the paper does not include experiments.
• The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
• The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
• The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

9. Code Of Ethics
Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
Answer: [Yes]
Justification: We thoroughly discussed the potential impact of our work in Appendix B, and ensured compliance with the NeurIPS Code of Ethics.
Guidelines:
• The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
• If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
• The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader Impacts
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
Answer: [Yes]
Justification: We thoroughly discussed the potential impact of our work in Appendix B.
Guidelines:
• The answer NA means that there is no societal impact of the work performed.
• If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
• Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
• The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.
• The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
• If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [NA]
Justification: Our data and models do not pose a risk of misuse.
Guidelines:
• The answer NA means that the paper poses no such risks.
• Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
• Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
• We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

12. Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
Answer: [Yes]
Justification: We have properly credited the original owners of assets.
Guidelines:
• The answer NA means that the paper does not use existing assets.
• The authors should cite the original paper that produced the code package or dataset.
• The authors should state which version of the asset is used and, if possible, include a URL.
• The name of the license (e.g., CC-BY 4.0) should be included for each asset.
• For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
• If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
• For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
• If this information is not available online, the authors are encouraged to reach out to the asset's creators.

13. New Assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer: [Yes]
Justification: The code along with the documentation is here.
Guidelines:
• The answer NA means that the paper does not release new assets.
• Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
• The paper should discuss whether and how consent was obtained from people whose asset is used.
• At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
14. Crowdsourcing and Research with Human Subjects
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [NA]
Justification: Our paper doesn't include this kind of experiment.
Guidelines:
• The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
• Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
• According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer: [NA]
Justification: Our paper doesn't include this kind of experiment.
Guidelines:
• The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
• Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
• We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
• For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.